Category Archives: Advice

Funders sometimes force grantees to provide services they’d rather not: FQHCs and Medication Assisted Treatment (MAT)

We often remind clients that those with the gold make the rules. Accepting a government grant means the applicant must sign a grant agreement, in which the applicant agrees not only to provide whatever services were specified in the proposal, but also to abide by a myriad of regulations and laws. While many applicants will tussle with a funder over the budget, there’s rarely any point in trying to modify the boilerplate agreement—just like one can’t modify Apple or Facebook’s Terms of Service.

In addition to the specific terms of the grant agreement, grantees quickly become subject to other influences from the funder—when the Godfather makes you an offer you can’t refuse, you know that eventually you’ll be told to do something you’d otherwise not much want to do. While a federal agency is unlikely to place a horse’s head in a nonprofit Executive Director’s bed, the grantee might end up having to provide an unpalatable service.

A case in point is HRSA’s relatively recent (and divisive) endorsement of Medication Assisted Treatment (MAT) for treating opioid use disorder (OUD). Since HRSA is the primary FQHC funder, it is essentially their Godfather and has great influence over FQHCs. In the past few years, HRSA has strongly encouraged FQHCs to provide MAT. The CEOs of our FQHC clients have told us about HRSA pressure to start offering MAT. It seems that, even after several years of cajoling, only about half of our FQHC clients provide MAT, and, for many of these, MAT is only nominally offered. Other clients see offering MAT as a moral imperative, and we’ll sometimes get off the phone with one client who hates MAT and then on the phone with another client who sees not providing MAT as cruel.

“MAT” generically refers to the use of medications, usually in combination with counseling and behavioral therapies, to treat substance use disorders (SUDs). For OUD, this usually means prescribing and monitoring a medication like Suboxone, whose active ingredients are buprenorphine and naloxone. While Suboxone typically reduces cravings for prescribed and street opioids (e.g., OxyContin, heroin, etc.) in people with OUD, it is itself an opioid. MAT replaces a “bad opioid” with a “good opioid,” but the patient remains addicted. Many FQHC managers and clinicians object to offering MAT for OUD, for a variety of medical, ethical, and practical reasons:

  • Like its older cousin methadone, Suboxone is an opioid that can produce euphoria and induce dependency, although its effects are milder. It’s also possible to overdose on Suboxone, particularly when it’s combined with alcohol or street drugs, so it can still be deadly.
  • While MAT is supposed to be combined with some form of talking or other therapy, few FQHCs have the resources to actually provide extensive individual or group therapy, so the reality is that FQHC MAT patients will likely need Suboxone prescribed over the long term, leaving them effectively addicted. We’re aware that there’s often a wide gap here between the real world and the proposal world.
  • Unless it’s combined with some kind of talking therapy that proves effective, MAT is not a short-term approach, meaning that, once an FQHC physician starts a patient on Suboxone, the patient is likely to need the prescription for a very long time—perhaps for the rest of their life. This makes the patient not only dependent on Suboxone, but also dependent on the prescriber and the FQHC, since few other local providers are likely to accept the patient and have clinicians who have obtained the necessary waiver to prescribe it. Suboxone users must be regularly monitored and seen by their prescriber, making for frequent health center visits.
  • Prescribed Suboxone can be, and often is, resold by patients on the street.
  • Lastly, but perhaps most importantly, most FQHC health centers prefer to look like a standard group practice facility with a single waiting room/reception area. Unlike at a specialized methadone or other addiction clinic, FQHC patients of all kinds are jumbled together. That means a mom bringing her five-year-old in for a school physical could end up sitting between a couple of MAT users, who may look a little wild-eyed and ragged, making her and her kid uncomfortable. Since FQHCs usually lack the resources for anything beyond minor paint-up/fix-up repairs, there is simply no way around this potential conflict.

Given the above, many FQHC CEOs remain resistant to adding the challenges of MAT to the many struggles they already face. Still, the ongoing pressure from HRSA means that most FQHCs will eventually be forced to provide at least a nominal MAT program to keep their HRSA Program Officer at bay. The tension between serving a typical mom and her five-year-old and running a full-fledged behavioral and mental health program is likely to remain, however. Before you leave scorching comments, remember that we’re trying to describe some of the real-world trade-offs here, not prescribe a course of action. What people really want in the physical space they occupy and what they say they want in the abstract are often quite different. You can see this in the relentless noise around issues like homeless service centers: everyone is in favor of them in someone else’s neighborhood and against them in their own. Always pay attention to what a person actually does over their rhetoric.

Don’t split target areas, but some programs, like HRSA’s Rural Health Network Development (RHND) Program, encourage cherry picking

In developing a grant proposal, one of the first issues is choosing the target area (or area of focus); the needs assessment is a key component of most grant proposals—but you can’t write the needs assessment without defining the target area. Without a target area, it’s not possible to craft data into the logical argument that is at the center of all needs assessments.

To make the needs assessment as tight and compelling as possible, we recommend that the target area be contiguous, if at all possible. Still, there are times when it is a good idea to split target areas—or it’s even required by the RFP.

Some federal programs, like YouthBuild, have highly structured, specific data requirements for items such as poverty level, high school graduation rate, and youth unemployment rate, with minimum thresholds that must be met to earn a certain number of points. For programs like YouthBuild, cherry picking zip codes or Census tracts can push the target area’s data over more scoring thresholds, as sketched below.
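To make the cherry-picking mechanics concrete, here is a minimal Python sketch of how a grant writer might screen candidate Census tracts against YouthBuild-style thresholds. Every tract ID, rate, population figure, and point value below is invented for illustration; real figures come from the RFP’s scoring rubric and Census/ACS tables.

```python
# Hypothetical illustration only: tract IDs, rates, and point thresholds
# are invented, not taken from an actual YouthBuild NOFO.
CANDIDATE_TRACTS = {
    "Tract 101": {"poverty": 0.34, "grad": 0.68, "youth_unemp": 0.22, "pop": 4100},
    "Tract 102": {"poverty": 0.12, "grad": 0.93, "youth_unemp": 0.06, "pop": 5200},
    "Tract 103": {"poverty": 0.41, "grad": 0.61, "youth_unemp": 0.27, "pop": 3800},
    "Tract 104": {"poverty": 0.19, "grad": 0.85, "youth_unemp": 0.11, "pop": 4700},
}

def need_score(tract):
    """Toy per-tract need score: higher means worse conditions."""
    return tract["poverty"] + (1 - tract["grad"]) + tract["youth_unemp"]

def aggregate(tracts):
    """Population-weighted rates for a candidate target area."""
    pop = sum(t["pop"] for t in tracts)
    return {k: sum(t[k] * t["pop"] for t in tracts) / pop
            for k in ("poverty", "grad", "youth_unemp")}

def points(agg):
    """Invented scoring rubric: each threshold met earns points."""
    score = 0
    if agg["poverty"] >= 0.30:      # hypothetical high-poverty threshold
        score += 10
    if agg["grad"] <= 0.70:         # hypothetical low-graduation threshold
        score += 10
    if agg["youth_unemp"] >= 0.20:  # hypothetical youth-unemployment threshold
        score += 5
    return score

ranked = sorted(CANDIDATE_TRACTS.values(), key=need_score, reverse=True)
print("All four tracts:", points(aggregate(ranked)))             # 0 points: data diluted
print("Two highest-need tracts:", points(aggregate(ranked[:2])))  # 25 points
```

The point of the sketch is the dilution effect: include every tract and the aggregated rates miss every threshold; keep only the highest-need tracts and the same underlying data clear them all.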

Many federal grant programs are aimed at “rural” target areas, although different federal agencies may use different definitions of what constitutes “rural”—or they provide little guidance as to what “rural” means. For example, HRSA just issued the FY ’20 NOFOs (Notices of Funding Opportunity—HRSA-speak for RFPs) for the Rural Health Network Development Planning Program and the Rural Health Network Development Program.

Applicants for RHNDP and RHND must be a “Rural Health Network Development Program.” But, “If the applicant organization’s headquarters are located in a metropolitan or urban county, that also serves or has branches in a non-metropolitan or rural county, the applicant organization is not eligible solely because of the rural areas they serve, and must meet all other eligibility requirements.” Say what? And, applicants must also use the HRSA Tool to determine rural eligibility, based on “county or street address.” This being an HRSA tool, what HRSA thinks is rural may not match what anybody living there thinks. Residents of what has historically been a farm-trade small town might be surprised to learn that HRSA thinks they’re city folks, because the county seat population is slightly above a certain threshold, or because expanding exurban development has been close enough to skew the datasets from rural to nominally suburban or even urban.

Thus, while a contiguous target area is preferred, for RHNDP and RHND, you may find yourself in the data orchard picking cherries.

In most other cases, always try to avoid describing a target area composed of the Towering Oaks neighborhood on the west side of Owatonna and the Scrubby Pines neighborhood on the east side, separated by the newly gentrified downtown in between. If you have a split target area, the needs assessment is going to be unnecessarily complex and may confuse the grant reviewers. You’ll find yourself writing something like, “the 2017 flood devastated the west side, which is a very low-income community of color, while the Twinkie factory has brought new jobs to the east side, which is a white, working-class neighborhood.” The data tables will be hard to structure and even harder to summarize in a way that makes it seem like the end of the world (always the goal in writing needs assessments).

Try to choose target area boundaries that conform to Census designations (e.g., Census tracts, Zip Codes, cities, etc.). Avoid target area boundaries like a school district enrollment area or a health district, which generally don’t conform to Census and other common data sets.

Foundation and government grant applicants: It’s “Hell yes” or “No.”

Derek Sivers has a rule for many things:

“No ‘yes.’ Either ‘HELL YEAH!’ or ‘no.’” He says, “When deciding whether to do something, if you feel anything less than ‘Wow! That would be amazing! Absolutely! Hell yeah!’ — then say ‘no.’”

That principle applies to other fields: are you going to get the job? If the employer really wants you, they are going to be very “hell yes,” and they are going to start courting you. With any reply other than “hell yes,” keep looking. Don’t stop looking till the contract is signed—and don’t be surprised when the employer is a whole lot more excited about you the day after you sign up with another outfit. Same is true in dating: don’t stop lining up leads unless and until that special person says HELL YES! This is also true in applying for most grant funding: assume it’s a “no” until proven otherwise.

We’ve had lots of clients over the years who have been encouraged by foundations that are eager to cultivate applications but seem decidedly less eager to actually cut the check (CTC). Talk is cheap, but the CTC moment has real costs—in pro hoops and in grant seeking. Foundations are prone to delaying that magic moment if possible. Foundations, like many of us, like the flattery and attention that come with dangling cash in front of people who desire said cash. Note that I’m not arguing this behavior is fair or appropriate—just that it’s common. Foundation officers seemingly enjoy the flattery that comes with nonprofits’ seduction attempts.

To a lesser extent, some government funders at the federal, state, and local levels also engage in the dangling-CTC approach, but government rules often discourage excess promises from government officers to applicants. If your agency has applied for a government grant, you’re unlikely to hear anything until you get the hell-yes email (notice of grant award) or the “thanks for a lovely evening” email (thanks, but no grant this time around). Still, if a funder, government or foundation, requests more information about your proposed budget or asks if you’ll accept a smaller grant, you’ll almost always eventually get the desired award. Few funders will bother with information requests unless they are likely to fund you.

As a rule, though, your default assumption should be that the funder is not going to fund you until the moment they actually do. This is a special case of the Golden Rule. Your assumption should be “no deal”: don’t waste time anticipating a promised deal that may not happen. Spend that energy improving your services and pursuing other funding opportunities. Many foundations also like giving out the last check needed to make the project happen, rather than the first one, so keep chasing early grants—even small ones.

Another piece of the evaluation puzzle: Why do experiments make people unhappy?

The more time you spend around grants, grant writing, nonprofits, public agencies, and funders, the more apparent it becomes that the “evaluation” section of most proposals is only barely separate in genre from mythology and folktales. Yet most grant RFPs include requests for evaluations that are, if not outright bogus, then at least improbable—they’re not going to happen in the real world. We’ve written quite a bit on this subject, for two reasons: the first is my own intellectual curiosity, and the second is reassuring clients who worry that funders want a real-deal, full-on, intellectually and epistemologically rigorous evaluation (hint: they don’t).

That’s the wind-up to “Why Do Experiments Make People Uneasy?”, Alex Tabarrok’s post on a paper about how “Meyer et al. show in a series of 16 tests that unease with experiments is replicable and general.” Tabarrok calls the paper “important and sad,” and I agree, but the paper also reveals an important (and previously implicit) point about evaluation proposal sections for nonprofit and public agencies: funders don’t care about real evaluations, because a real evaluation will probably make the applicant, the funder, and the general public uneasy. Not only do real evaluations make people uneasy, but most people don’t even understand how a real evaluation works in a human-services organization, how to collect data, what a randomized controlled trial (RCT) is, and so on.

There’s an analogous situation in medicine; I’ve spent a lot of time around doctors who are friends, and I’d love to tell some specific stories,* but I’ll say that while everyone is nominally in favor of “evidence-based medicine” as an abstract idea, most of those who superficially favor it don’t really understand what it means, how to do it, or how to make major changes based on evidence. It’s often an empty buzzword, like “best practices” or “patient-centered care.”

In many nonprofit and public agencies, evaluations and effectiveness get the same treatment: everyone putatively believes in them, but almost no one understands them or wants real evaluations conducted. Plus, beyond that epistemic problem, even if an evaluation shows effectiveness in a given circumstance (it usually doesn’t), the results don’t necessarily transfer. If you’re curious about why, Experimental Conversations: Perspectives on Randomized Trials in Development Economics is a good place to start—and this is the book least likely to be read, out of all the books I’ve ever recommended here. Normal people like reading 50 Shades of Grey and The Name of the Rose, not Experimental Conversations.

In the meantime, some funders have gotten word about RCTs. For example, the Department of Justice’s (DOJ) Bureau of Justice Assistance (BJA) Second Chance Act RFPs include bonus points for RCTs. I’ll be astounded if more than a handful of applicants even attempt a real RCT—for one thing, there’s not enough money available to conduct a rigorous RCT, which typically requires paying the control group to participate in long-term follow-up tracking. Whoever put the RCT bonus in this RFP probably wasn’t thinking about that real-world issue.

It’s easy to imagine a world in which donors and funders demand real, true, and rigorous evaluations. But they don’t. Donors mostly want to feel warm fuzzies and the status that comes from being fawned over—and I approve of those things too, by the way, as they make the world go round. Government funders mostly want to make Congress feel good, while cultivating an aura of sanctity and kindness. The number of funders who will make nonprofit funding contingent on true evaluations is small, and the number willing to pay for true evaluations is smaller still. And that’s why we get the system we get. The mistake some nonprofits make is thinking that the evaluation sections of proposals are for real. They’re not. They’re almost pure proposal world.


* The stories are juicy and also not flattering to some of the residency and department heads involved.

“How Jeff Bezos Turned Narrative into Amazon’s Competitive Advantage”

“How Jeff Bezos Turned Narrative into Amazon’s Competitive Advantage” should be mandatory reading for anyone in nonprofit and public agencies, because narrative is probably more important for nonprofits than for conventional businesses; conventional businesses can succeed by pointing to product-market fit, but nonprofits typically don’t have that metric. Nonprofits have to get their stories out in ways other than profit-and-loss statements or sales.

Bezos is Amazon’s chief writing evangelist, and his advocacy for the art of long-form writing as a motivational tool and idea-generation technique has been ordering how people think and work at Amazon for the last two decades—most importantly, in how the company creates new ideas, how it shares them, and how it gets support for them from the wider world.

New ideas often emerge from writing—virtually everyone who has ever written anything substantive understands this, yet it remains misunderstood among non-writers. Want to generate new ideas? Require writing. And no, “PowerPoint” does not count:

“The reason writing a good 4 page memo is harder than ‘writing’ a 20 page powerpoint is because the narrative structure of a good memo forces better thought and better understanding of what’s more important than what, and how things are related,” he writes. “Powerpoint-style presentations somehow give permission to gloss over ideas, flatten out any sense of relative importance, and ignore the interconnectedness of ideas.”

I’m not totally anti-PowerPoint—I have seen books about how to do it well—but PowerPoint does not substitute for narrative (in most cases). Most people using PowerPoint have not read Edward Tufte or adequately thought through their rationale for choosing PowerPoint over some other communications genre, like the memo.

The other day I did an online grant-writing training session for the State of California with 400 participants, and the guy organizing it expected me to do a PowerPoint. I said that using a PowerPoint presentation to teach writing is largely useless (he seemed surprised). Instead, I did a screencast, using a text editor as my main window, in which I solicited project ideas and RFPs germane to the viewers. I picked a couple and began working through the major parts of a typical proposal, showing how I would construct an abstract using the 5Ws and H, and then how I would use those answers to begin fleshing out typical narrative sections in the proposal. Because it was a screencast, participants can re-watch sections they find useful. I think having a text document and working with actual sentences is much closer to the real writing process than babbling over a prepared set of slides with bullet points. The talk was less polished than it would have been if I’d prepared it in advance, but writing is inherently messy and I wanted to show that messiness deliberately. There is no way to avoid it; messiness is part of the writing process at a perceptual level. It seems linked to speech and to consciousness itself.

To return to the written narrative point, written narrative also allows the correct tension between individual creativity and group feedback, in a way that brainstorming sessions don’t, as the article explains. Most human endeavors involving group activity require some tension between the individual acting and thinking alone versus being part of a pair or larger group acting in concert. If you are always alone, you lose the advantage of another mind at work. If you are always in a group, you lack the solitude necessary for thinking and never get other people out of your head. Ideal environments typically include some “closed door” space and some “open door” serendipitous interaction. Written narrative usually allows for both.

Nonprofit and public agencies that can’t or won’t produce coherent written documents are not going to be as successful as those that do. They aren’t going to ensure key stakeholders understand their purpose, and they’re not going to be able to execute as effectively. That’s not just true in grant writing terms; it’s true in organizational terms. Reading remains, at present, the fastest way to transmit information. If you’re not hiring people who can produce good stuff for reading, you’re not effectively generating and using information within your organization.

Coding school is becoming everyone’s favorite form of job training

For many years, construction skills training (often but not always in the form of YouthBuild) was every funder’s and every nonprofit’s favorite form of job training, often supplemented by entry-level healthcare work, but today the skill du jour has switched to software, programming, and/or coding. Case in point: this NYT article with the seductive headline, “Income Before: $18,000. After: $85,000. Does Tiny Nonprofit Pursuit Hold a Key to the Middle Class?” While the article is overwhelmingly positive, it’s not clear how many people are going to make it through Pursuit-like programs: “Max Rosado heard about the Pursuit program from a friend. Intrigued, he filled out an online form, and made it through a written test in math and logic…” (emphasis added). In addition, “Pursuit, by design, seeks people with the ‘highest need’ and potential, but it is selective, accepting only 10 percent of its applicants.” So the organization is cherry-picking its participants.

There’s nothing wrong with cherry-picking participants, and most social and human service programs do just that in the real world. As grant writers who live in the proposal world, we always state in job training proposals that the applicant (our client) will never cherry-pick trainees, even though they do. In the article, important details about cherry-picking are stuck in the middle, below the tantalizing lead, so most people will miss them. I’m highlighting them because they bring to the fore an important fact about many social and human service programs: there is a tension between access and success. Truly open-access programs tend to have much lower success rates; if everyone can enter, many of those who do will not have the skills or conscientiousness necessary to succeed. If an organization cherry-picks applicants, as Pursuit does, it will generally get better success metrics, but at the cost of access.

Most well-marketed schools succeed in “improving” their students primarily through selection effects. That’s why the college-bribery scandal is so comedic: no one involved is worried about their kid flunking out of school. Schools are extremely selective in admissions and not so selective in curriculum or grading. Studies have consistently suggested that where you go to school matters much less than who you are and what you learn. Such studies don’t stop people from treating degrees as status markers and consumption goods, but they do imply that highly priced schools are often not worth it. Thorstein Veblen tells us a lot more about the current market for “competitive” education than anyone else.

My digs at well-marketed schools are not gratuitous asides to the main point: I favor Pursuit and Pursuit-like organizations, and we have worked for some of them. In addition, it’s clear to pretty much anyone who has spent time teaching in non-elite schools that the way the current post-secondary education system is set up makes little sense; we need a wider array of ways for people to learn the skills they need to thrive. If Pursuit and Pursuit-like programs are going to yield those skills, we should work towards supporting more of them.

It is almost certainly not existing schools that are going to boost more people into the middle class, as they’ve become overly bureaucratic, complacent, and sclerotic; see also Bryan Caplan’s book The Case Against Education on this subject. While many individuals within those systems may want change, they cannot align all the stakeholders to create change from within. Some schools, especially in the community-college sector, are re-making themselves, but many are not. In the face of this slowness, however, nimble nonprofits and businesses should move to where this grant wave is going.

The movement towards a $15 minimum hourly wage and the Pre-K For All program in NYC


Over the last few years, the highly marketed $15/hour minimum wage has had remarkable success: it, along with the recent economic boom and historically low unemployment rates, has increased wages for some unskilled/low-skill workers in some areas. Last week, though, I was developing a budget for a federal grant proposal on behalf of a large nonprofit in NYC. The federal program requires the use of “Parent Mentors,” which is another way of saying “Peer Outreach Workers.” So two full-time equivalent (FTE) Parent Mentors went into the budget.

“Peer” staff are not professionals—college degrees or formal work experience aren’t typically required. Instead, the peer is supposed to have life experience similar to the target population’s (e.g., African American people in recovery working on a substance use disorder treatment project in an African American neighborhood) or street credentials (“street cred”) that help them relate to the target population (e.g., ex-gang-bangers to engage current gang-bangers). In most human services programs, the peer staff are supervised by a professional staff person with a BA, MSW, LCSW, or similar degree. While the peer staff are at the bottom of the org chart, in many cases they’re much more important to getting funded and operating a successful program than the 24-year-old recent Columbia grad with a degree in urban studies or psychology, as the “supervisor” is often afraid to go out into the community without a peer staff person riding shotgun. The situation is analogous to a first-year military officer who technically outranks a 15-year enlisted veteran sergeant.

There are 2,080 person-hours in a person-year, so, at $15/hour, one FTE peer worker is budgeted at $31,200/year. If a nonprofit operates in an area with a $15/hour minimum wage, that’s the lowest salary that can legally be proposed. For many nonprofits, actual salaries for entry-level professional staff are about $30,000 to $35,000 per year. One might say, “No problem, just raise the professional salaries to $40,000.” This is, however, not easily done, as the maximum grants for most federal and state programs have not been adjusted to reflect minimum wages in places like New York or Seattle. If the nonprofit has been running a grant-funded program for five years, they’ve probably been paying the peer workers around $10/hour, and the new RFP very likely has the same maximum grant—say, $200,000—as the one from five years ago. That means roughly one-third fewer peer workers.
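The arithmetic behind “one-third fewer” is simple enough to sketch out. The figures below just restate the numbers above; the $62,400 peer-worker slice of the budget is a hypothetical amount chosen so the division comes out evenly.

```python
HOURS_PER_FTE = 2_080            # person-hours in a person-year

def annual_fte_cost(hourly_wage):
    """Annual salary cost of one full-time (1.0 FTE) peer worker."""
    return HOURS_PER_FTE * hourly_wage

old_cost = annual_fte_cost(10)   # $20,800/year at the old ~$10/hour wage
new_cost = annual_fte_cost(15)   # $31,200/year at a $15/hour minimum wage

# With the maximum grant unchanged, the same peer-worker dollars buy only
# old_cost / new_cost = 2/3 as many FTEs -- i.e., one-third fewer workers.
print(old_cost, new_cost, round(old_cost / new_cost, 3))   # 20800 31200 0.667

# Concretely: a hypothetical $62,400/year peer-worker slice of the budget
# paid for three workers at $10/hour but covers only two at $15/hour.
print(62_400 // old_cost, 62_400 // new_cost)               # 3 2
```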

If a Dairy Queen (I’m quite fond of DQ, like Warren Buffett) is suddenly confronted by a much higher minimum wage, it can try making the Blizzards one ounce smaller, skipping the pickles on the DQ Burgers, or buying a Flippy burger robot and laying off a couple of 17-year-olds. Nonprofits generally can’t deploy any of these strategies, as the service targets in the RFP are the same as they ever were. For “capitated” programs like foster care, the nonprofit has to absorb rising costs, because it has a fixed reimbursement from the funder (e.g., $1,000/month/foster kid to cover all program expenses); we’re also unlikely to see robot outreach workers any time soon.

Most nonprofits also depend to some extent on fundraisers and donations. It’s hard enough to extract coin from your board and volunteers, so holding a “New Minimum Wage Gala” is not likely to be a winning approach. Some higher-end restaurants in LA have added surcharges for higher minimum wages and employee health insurance, a practice I find annoying (just raise the damn pasta price from $20 to $22 and stop trying to virtue signal—or make me feel guilty). That avenue is typically closed to nonprofits, because the whole point is to provide no-cost services or, in cases like Boys & Girls Clubs, to charge very low fees ($20 to play in the basketball league). Some organizations charge nominal membership fees, which are often waived anyway.

The nonprofit and grant worlds move much more slowly than the business world, and I guess we’ll just have to wait for the funders to catch up with rising minimum wages. In the meantime, some nonprofits are going to go under, much like the restaurants in this US News and World Report article, which reports that “76.5 percent of full-service restaurant respondents said they had to reduce employee hours and 36 percent said they eliminated jobs in 2018 in response to the mandated wage increase” in New York City. More grants will also likely end up going to lower-cost cities and states, where it’s possible to hire three outreach workers instead of two.

We write lots of Universal Pre-K (UPK) and Pre-K For All proposals in NYC, and few, if any, of our early childhood education clients over the years have paid their “teachers” or “assistant teachers”—who are mostly peer workers with at most a 12-week certificate—$15/hour. There’s a new NYC Pre-K For All RFP on the street, and, if we’re hired to write any of these proposals this year, the budgeting process will be interesting: because the City sets minimum staffing levels for these classrooms, staff cannot be cut.

Some organizations will get around the rules. Many religious communities are already “familiar,” you might say, with ways of getting around conventional taxation and regulatory rules. Their unusual social bonds enable them to do things other organizations can’t. Many religious communities also vote as blocs and consequently get special dispensation in local and state grants and contracts. We’ll also likely end up seeing strategies like offering “stipends” to “parent volunteers” to get around the “wage” problem. For most nonprofits in high-minimum-wage areas, however, the simple reality is that fewer services will be provided per dollar spent.

More on developing federal grant budgets: Stay in the proposal world, not the operations world

This is an update to our popular post “Seliger’s Quick Guide to Developing Federal Grant Budgets.” While that post provides a step-by-step description of how to develop a federal grant proposal budget, it assumes that the budget preparer understands the difference between the real world and the proposal world. Experts in real-world budgets are often too sophisticated for the proposal world.

When we’re hired to complete a federal proposal, we send our client an Excel template that models the SF-424A budget form found in all grants.gov Workspace applications. Recently, we’ve been working for a series of large nonprofits and public agencies that have skilled Chief Financial Officers (CFOs). Most of these CFOs, however, have little or no understanding of proposal budgeting, as they’re accustomed to detailed operational budgets. Yet they’re often charged with filling out a proposal budget.

Even if we discuss the proposal world with the CFO first, the completed template we receive back is usually way too detailed, because it reflects actual program operations, not the idealized proposal world. This not only makes preparing the associated budget narrative/justification far too complex, but also means the budget presentation won’t display well when saved as the required .pdf for attachment to the kit file. The budget will also confuse proposal reviewers (which is never a good idea while being very easy to do), as most of them are not accountants, CFOs, etc.

So how do you keep your budget anchored in the proposal world?

  • Keep the number of line items small—around, say, 20. If you use 40 line items, the spreadsheet bloat will be very difficult to format in a way that is readable and meets RFP formatting requirements (unless you’re a whiz at Excel, which almost no one—including us and the CFOs we encounter—is).
  • Only include staff and line items that will be charged to the grant (and match, if required).
  • Personnel line items must match the staffing plan in the narrative. Resist the urge to load up the budget with small FTEs (2% to 20%) of lots of existing administrators/managers, as this will make your agency look bureaucratic (not a good idea, even if it is) and clog the budget narrative. Large numbers of small FTEs are what a federally approved Indirect Cost Rate is for. If your agency has at least one existing federal grant, get an approved Indirect Cost Rate, which is not that difficult, and many of your proposal budgeting woes will be solved.
  • Unless the RFP requires it, don’t line-item fringe benefits. These can usually be lumped together as the percentage of salaries that your fringe benefit package equates to. For most nonprofits, this will be in the 18% to 30% range. Anything above 30% will probably generate unwanted attention from grant reviewers, even if that is what you pay. If the fringe benefit rate is relatively high, explain it in the budget narrative (e.g., lower salaries, high local costs, the need to retain staff, etc.).
  • For multi-year budgets, don’t include expected yearly salary increases or annual inflators; this is too detailed and will, again, result in a very complicated budget justification. Inflation in the current environment is low. In a high-inflation environment like the ’70s, this advice would be different.
  • Regarding the “Other” Object Class Category on the SF-424A, it’s unnecessary to break down line items too far. For example, lump facility costs (e.g., rent, utilities, security, janitorial, maintenance, etc.) or communications (e.g., landline and cell phones, mailings, etc.) into single line items. Try to consolidate.
  • If feasible, try to make the total annual budget level for each project year. This can be a bit challenging if, for example, the project involves start-up costs (e.g., buying staff furniture, hiring a web designer/social media consultant, etc.) in year one. The way to do this is to increase some other line item(s) in the out years to keep the budget level. Level annual budgets will make the budget easier to write and understand.
  • Make one line item your plug number to enable reconciliation to the maximum allowed grant and/or level annual amounts in multi-year grants (see the sketch after this list). The plug number should be in the “Other” Object Class Category and could be advertising, communications, or a similar line item that looks OK with an odd number in different years. Reviewers are aware of plug numbers and won’t hold reasonable plug numbers against you.
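Here’s a minimal sketch of what a proposal-world budget can look like when it follows the advice above: a short list of line items, fringe lumped as a flat percentage of salaries, and a single plug line item computed so the year reconciles exactly to a level annual amount. Every position, rate, and dollar figure below is invented for illustration; a real budget follows the RFP and the SF-424A object class categories.

```python
# Minimal proposal-world budget sketch. All positions, rates, and dollar
# figures are hypothetical.
ANNUAL_MAX = 200_000        # hypothetical level annual budget target
FRINGE_RATE = 0.25          # fringe lumped as a flat 25% of salaries

personnel = {               # line item: (annual salary, FTE charged to grant)
    "Project Director":     (70_000, 0.25),
    "Case Manager":         (45_000, 1.00),
    "Peer Outreach Worker": (31_200, 2.00),
}

other = {                   # consolidated "Other" object class line items
    "Facility costs (rent, utilities, janitorial)": 18_000,
    "Communications (phones, mailings)":             6_000,
    "Local travel":                                  4_000,
    "Supplies":                                      5_000,
}

salaries = sum(salary * fte for salary, fte in personnel.values())
fringe = salaries * FRINGE_RATE
subtotal = salaries + fringe + sum(other.values())

# One "plug" line item absorbs the difference so the year reconciles exactly
# to the level annual amount. (If the subtotal already exceeds the maximum,
# trim other line items instead of using a negative plug.)
other["Outreach advertising (plug)"] = ANNUAL_MAX - subtotal

total = salaries + fringe + sum(other.values())
assert total == ANNUAL_MAX

print(f"Salaries: {salaries:,.0f}  Fringe: {fringe:,.0f}  Total: {total:,.0f}")
```

The same pattern repeats for each project year, with the plug absorbing whatever start-up costs or other line items change between years so the annual totals stay level.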

The proposal budget is just a financial plan that supports the proposed project activities, not a detailed expression of an operational situation. Following the notice of grant award, your agency will have to negotiate the actual budget in the contract anyway.

In most cases, the grantee can move up to 10% of the total grant among line items simply by notifying the federal program officer, and can request approval for larger budget changes to reflect real-world operations as the project is implemented. Unless you ask to swap an Outreach Worker for a lease on a Tesla for the Executive Director, the program officer will likely go along with your plan, as most simply don’t care what you do so long as the grant doesn’t end up in BuzzFeed, Politico, or the New York Times. Program officers want to make sure you are reasonably implementing the proposed project, but they don’t care about relatively small changes in operations-level detail. Fighting over small details in a proposal budget is foolish, as is including small line items. Get the big picture right and the details will shake out during implementation.

When you hire consultants, you’re hiring them for all the mistakes they’ve ever seen (and made)

When you hire a lawyer, part of who you’re hiring is someone who has made thousands of mistakes in law school and as a young lawyer. Lawyers, like doctors and other professionals, learn in an apprentice-style system that incorporates the mistakes made by their mentors. Proto-lawyers also make some mistakes of their own—and, ideally, have those mistakes corrected by senior lawyers, and learn to not make those mistakes in the future. Most people don’t think about hiring a person or team specifically for their mistakes, yet this is a useful way to think about most professional services, including our personal favorite: grant writing consultants.

When you’re hiring a grant writer, you’re really hiring that grant writer’s experience. It isn’t impossible to hire a college intern or recent journalism grad and get funded; we’ve seen it happen and heard stories from clients. But interns and inexperienced writers will make mistakes that more experienced people won’t. We’ve written numerous posts about subtle mistakes that are easy to make in all aspects of the grant pipeline, from the needs assessment to the program design to the submission process. It’s also possible to get a competent junior person to write a couple of proposals, but grant writing is very hard, and over time they tend to demand more money—or leave. That’s why you have trouble hiring grant writers. Many interns will write a proposal or two, but when they learn how hard and under-appreciated the job is, they often want money commensurate with its difficulty. The inexperienced tend to make mistakes; the experienced tend to charge accordingly.

We are still not perfect (no one is; if anyone thinks they are, refer them to “the perils of perfectionism”). But we have learned, through trial and error, how to make many fewer mistakes than novice or somewhat experienced grant writers do. It’s not possible to eliminate all errors, but it is possible to avoid many of the errors that scupper most would-be grant writers.

If your organization can get a recent English major to write successful proposals for little or nothing, you should do that. But we’ve also heard from a lot of organizations that have “whoever is around” write, or attempt to write, their proposals, only to fail. Experience matters. You might get the magic intern, but more often you get someone who is overwhelmed by the complexity of a given writing assignment, who doesn’t understand human services or technical projects, or who is simply terrified by absolute deadlines.

Let’s take as an example a common error that we’ve seen in a spate of old proposals recently provided by clients. Most include some variation of it, made by inexperienced writers who want to write that “we are wonderful,” “we really care,” and the like. This is a violation of the writing principle “Show, don’t tell.” Most of the time, you don’t want to tell people you’re wonderful—you want to show them that you are. “We are wonderful” statements are empty. “We served 500 youth with ten hours of service per week, and those services included x, y, and z” statements have objective content. It’s also harder to accurately describe the specific services an organization provides than it is to say, subjectively, “We are wonderful and we care.” Whatever is rare is more valuable than that which is common.

The above paragraph is just one example of the kind of error novices make that experts tend not to. An attempt to enumerate all such errors would run to book length, if not longer. Experienced grant writers avoid errors and offer quality almost instinctively, without always being able to articulate every aspect of error versus optimality.

A description of scientific and technical grant writing, found in an unexpected place

I have rarely, if ever, seen an explicit, reasonably accurate description of the grant application process in a general nonfiction book—until this weekend, when I was reading Steve LeVine’s book The Powerhouse: America, China, and the Great Battery War. It’s an excellent and highly recommended book in general, but it also has a passage detailing Argonne National Laboratory’s efforts to become a major battery research and development hub. Becoming an R&D hub required a response to a complex federal Funding Opportunity Announcement (FOA, another term for an RFP). LeVine characterizes the process this way:

The assessment [of Argonne’s proposal] lasted two days. Madia was harsh. Argonne’s vision—its “story”—did not shine through. The narrative was “buried deep in the science.” The scientific sections were adequate as far as they went, but the team’s priority was to craft the story so that a nonexpert—like members of the judging team who were not battery guys—could understand it. As the proposal stood, it failed to meet this standard.

More problematically, the proposal seemed actually to ignore some provisions in the FOA. The FOA had stipulated a serious commitment to applied science. Madia judged the appropriate balance at about 60 percent research and the rest development and deployment. The Argonne team had proposed an 80 percent emphasis on basic research—clearly too much.

He raised a couple of other points—there were too many “whats” and not enough “hows”; each time the proposal said the team intended to do something, it should provide an example of how it would be done. Madia was troubled. The previous summer, he had seen a preliminary draft and said much the same.

In the version of the story LeVine tells, however, Madia doesn’t fix the proposal himself, which isn’t very helpful. Nonetheless, Grant Writing Confidential readers should notice many points we regularly make in posts like “Proposals need to tell stories.” Scientific and technical proposals are not exempt from this rule. In addition, he who has the gold makes the rules. You must follow the FOA guidelines, or follow them as best you can.

The “Follow the FOA guidelines” rule has been much on our minds in the last several weeks because we’ve been working on Early Head Start (EHS) applications for the Department of Health and Human Services (HHS), and the EHS RFP contains this instruction: “Applicants must prepare the project description statement in accordance with the following instructions while being aware of the specified evaluation criteria in Section V.1. Criteria.” In other words, HHS put the mandatory headers in one section, starting on page 33 of the RFP, but also included other required material under the “Application Review Information” section on page 54. The two sections don’t match, either. The evaluation criteria say, for example, “Evidence of community engagement in the proposed geographic locations that is designed to improve service delivery, increase access to services, and prevent duplication,” but the instructions on page 33 omit that. A would-be applicant who attends only to page 33 will miss the vital material on page 54.

The Argonne grant-writing team evidently faced a similar problem, as shown by the misplaced balance between basic and applied research. The cited difference between the “what” and the “how” is more interesting, though. Some technical proposals don’t have a lot of “how” in them because the proposer doesn’t know how the task will be achieved. If the proposer already understands how the task can be achieved, he sometimes doesn’t need the money—because the problem has already been solved. If we already know how to do something, it’s not research, it’s implementation.

Sometimes, though, the “how” is fairly well-known. A few months ago, we finished working on a technical proposal for an alternative energy technology R&D project, and the Department of Energy has funded the application. The “how” on that application appeared to be fairly clear. I want to explain what made it clear, but I can’t really do so without giving away too much of the client’s domain expertise.

Overall, LeVine’s Argonne story demonstrates why many people choose to specialize in specific fields, then hire experts in fields not their own. The scientific luminaries and visionaries at Argonne haven’t specialized in the grant writing process. They may be working on battery breakthroughs essential to the future of the world, but knowing how to conduct research and knowing how to explain the research to the rest of the world are different skills.