Tag Archives: RFPs

Confusing NIH and other Small Business Innovation Research (SBIR) application guidance

In theory, an “application guide” for a Small Business Innovation Research (SBIR) grant from a federal agency is meant to make the application process easier: the applicant should presumably be able to read the application guide and follow it, right? Wrong, as it turns out. The difficulties start with finding the application guide and associated RFP (or “FOA,” Funding Opportunity Announcement, in NIH-land). If you go to grants.gov today, Sept. 9, dear reader, and search for “SBIR,” you’ll get 74 matching results—most for National Institutes of Health (NIH) programs, which we’ll use as an example for the sake of this exercise, and because I worked on one recently. I’m going to use the “PA-18-705 SBIR Technology Transfer (R43/R44 Clinical Trial Not Allowed)” program, which has download instructions at Grants.gov. When you download and review the “instructions,” however, you’ll find this complication:

It is critical that applicants follow the SBIR/STTR (B) Instructions in the SF424 (R&R) SBIR/STTR Application Guide (//grants.nih.gov/grants/guide/url_redirect.htm?id=32000) except where instructed to do otherwise (in this FOA or in a Notice from the NIH Guide for Grants and Contracts (//grants.nih.gov/grants/guide/)). Conformance to all requirements (both in the Application Guide and the FOA) is required and strictly enforced.

Notice that the URLs in the quoted section are incomplete: it’s up to the applicant to track down the true SBIR application guide and correct FOA. I did that, but the tricky phrase is “follow the SBIR/STTR (B) Instructions […] except where instructed to do otherwise.” For the particular NIH application we were working on, the FOA and the Application Guide disagreed with each other concerning how the narrative should be structured and what an applicant needed to include in their proposal. So what’s an applicant, or, in this case, a hired-gun grant writer, to do? With some SBIRs, there is no canonical set of questions and responses: there’s the “general” set of questions and the FOA-specific set, with no instructions about how to reconcile them.

To solve this conundrum, I decided to develop a hybridized version for the proposal structure: I used the general narrative structuring questions from the application guide, and I tacked on any extra questions that I could discern in the program-specific FOA. The only plausible alternative to this hybridized approach would have been to contact the NIH program officer listed in the FOA. As an experienced grant writer, however, I didn’t reach out, because I know that program officers confronted with issues like this will respond with a version of “That’s an interesting question. Read the FOA.”

The challenge of multiple, conflicting SBIR guidance documents isn’t exclusive to the NIH: we’ve worked on Dept. of Energy (DOE) SBIRs that feature contradictory guides, FOAs/RFPs, and related documents. It takes a lot of double checking and cross checking to try to make sure nothing’s been missed. The real question is why inherently science-based agencies like NIH and DOE are seemingly incapable of producing the single, unified RFP document typically used by DHHS, DOL, and the like. It’s also very odd that we’ve never worked on an SBIR proposal for which the federal agency provided a budget template in Excel, which has been the standard for spreadsheets and budgets since the ’80s. In the NIH example discussed above, the budget form was in Acrobat, which meant I had to model it in Excel.

We (obviously) work on grant applications all the time, and yet the SBIR reconciliation process is confusing and difficult even for us professional grant writers. The SBIR narratives, once we understand how to structure them, usually aren’t very challenging for us to write, but getting to the right structure sure is. For someone not used to reading complicated grant documents, and looking at SBIR guidance documents for the first time, the process would be a nightmare. Making SBIRs “easier” with extra, generic application guides that can be unpredictably superseded actually makes the process harder. This is good for our business but bad for science and innovation.

Why we like writing SAMHSA proposals: the RFP structure is clear and never changes

We wrote our first funded Substance Abuse and Mental Health Services Administration (SAMHSA) grant about 25 years ago, and there’s something notable about SAMHSA: unlike virtually all of its sister federal agencies, SAMHSA issues well-structured RFPs. Even better, the RFP structure seemingly never changes—or at least it hasn’t for the past quarter century. This makes drafting a SAMHSA proposal refreshingly straightforward and enables us, and other competent writers, to (relatively) easily and coherently spin our grant writing “Tales of Brave Ulysses.” The word “coherently” in the preceding sentence is important: RFPs that destroy narrative flow by asking dozens of unrelated sub-questions also destroy the coherence of the story the writer is trying to tell and the program the writer is trying to describe. SAMHSA RFPs typically allow the applicant to answer the 5Ws and H.

A SAMHSA RFP almost always uses a variation on a basic five-element structure:

  • Section A: Population of Focus and Statement of Need
  • Section B: Proposed Implementation Approach
  • Section C: Proposed Evidence-Based Service/Practice
  • Section D: Staff and Organizational Experience
  • Section E: Data Collection and Performance Measurement

While SAMHSA RFPs, of course, include many required sub-headers that demand corresponding details, this structure lends itself to the standard outline format that we prefer (e.g., I.A.1.a). We like using outlines, because it makes it easy for us to organize our presentation and for reviewers to find responses to specific items requested in the RFP—as long as the outlines make sense and, as noted above, don’t interrupt narrative flow. In this respect, SAMHSA RFPs are easy for us to work with.

In recent years, SAMHSA has also reduced the maximum proposal length (exclusive of many required attachments) from 25 single-spaced pages to, in many cases, 10 single-spaced pages. Although it’s generally harder to write about complex subjects with a severe page limit than with a much longer one, we’re good at packing a lot into a small space.* A novice grant writer, however, is likely to be intimidated by a SAMHSA RFP, due to the forbidding nature of the typical project concept and the brief page limit. In our experience, very long proposals are rarely better and are often worse than shorter ones.

We haven’t talked in this post about what SAMHSA does, because the nature of the organization’s mission doesn’t necessarily affect the kinds of RFPs the organization produces. Still, and not surprisingly, given its name, SAMHSA is the primary direct federal funder of grants for substance abuse and persistent mental illness prevention and treatment. With the recent and continuing tsunami of the twin correlated scourges of opioid use disorder (OUD) and homelessness, Congress has appropriated greater funding for SAMHSA, and the agency is going through one of its cyclical rises in prominence in the grant firmament. Until we as a society get a handle on the opioid crisis, SAMHSA is going to get a lot of funding and attention.


* When writing a short proposal in response to a complex RFP, keep Rufo’s small luggage in Robert Heinlein’s Glory Road in mind: “Rufo’s baggage turned out to be a little black box about the size and shape of a portable typewriter. He opened it. And opened it again. And kept on opening it–And kept right on unfolding its sides and letting them down until the durn thing was the size of a small moving van and even more packed.” The bag was bigger on the inside than the outside, like a well-written SAMHSA proposal.

Another piece of the evaluation puzzle: Why do experiments make people unhappy?

The more time you spend around grants, grant writing, nonprofits, public agencies, and funders, the more apparent it becomes that the “evaluation” section of most proposals is only barely separate in genre from mythology and folktales. Yet most grant RFPs include requests for evaluations that are, if not outright bogus, then at least improbable—they’re not going to happen in the real world. We’ve written quite a bit on this subject, for two reasons: one is my own intellectual curiosity; the second is for clients who worry that funders want a real-deal, full-on, intellectually and epistemologically rigorous evaluation (hint: they don’t).

That’s the wind-up to “Why Do Experiments Make People Uneasy?”, Alex Tabarrok’s post on a paper about how “Meyer et al. show in a series of 16 tests that unease with experiments is replicable and general.” Tabarrok calls the paper “important and sad,” and I agree, but the paper also reveals an important (and previously implicit) point about evaluation proposal sections for nonprofit and public agencies: funders don’t care about real evaluations, because a real evaluation will probably make the applicant, the funder, and the general public uneasy. Not only do real evaluations make people uneasy, but most people don’t even understand how a real evaluation works in a human-services organization, how to collect data, what a randomized controlled trial is, and so on.

There’s an analogous situation in medicine; I’ve spent a lot of time around doctors who are friends, and I’d love to tell some specific stories,* but I’ll say that while everyone is nominally in favor of “evidence-based medicine” as an abstract idea, most of those who superficially favor it don’t really understand what it means, how to do it, or how to make major changes based on evidence. It’s often an empty buzzword, like “best practices” or “patient-centered care.”

In many nonprofit and public agencies, the situation with evaluations and effectiveness is the same: everyone putatively believes in them, but almost no one understands them or wants real evaluations conducted. Plus, beyond that epistemic problem, even when evaluations demonstrate effectiveness in a given circumstance (they usually don’t), the results don’t necessarily transfer. If you’re curious about why, Experimental Conversations: Perspectives on Randomized Trials in Development Economics is a good place to start—and this is the book least likely to be read, out of all the books I’ve ever recommended here. Normal people like reading 50 Shades of Grey and The Name of the Rose, not Experimental Conversations.

In the meantime, some funders have gotten word about RCTs. For example, the Department of Justice’s (DOJ) Bureau of Justice Assistance’s (BJA) Second Chance Act RFPs include bonus points for RCTs. I’ll be astounded if more than a handful of applicants even attempt a real RCT—for one thing, there’s not enough money available to conduct a rigorous one, which typically requires paying control-group members to participate in long-term follow-up tracking. Whoever put the RCT in this RFP probably wasn’t thinking about that real-world issue.

It’s easy to imagine a world in which donors and funders demand real, true, and rigorous evaluations. But they don’t. Donors mostly want to feel warm fuzzies and the status that comes from being fawned over—and I approve of those things too, by the way, as they make the world go round. Government funders mostly want to make Congress feel good, while cultivating an aura of sanctity and kindness. The number of funders who will make nonprofit funding contingent on true evaluations is small, and the number willing to pay for true evaluations is smaller still. And that’s why we get the system we get. The mistake some nonprofits make is thinking that the evaluation sections of proposals are for real. They’re not. They’re almost pure proposal world.


* The stories are juicy and also not flattering to some of the residency and department heads involved.

Maybe reading is harder than I thought: On “The Comprehensive Family Planning and Reproductive Health Program”

We very occasionally pay attention to bidders conferences; usually, however, we avoid them for the reasons last discussed in “My first bidders conference, or, how I learned what I already knew.” Despite knowing that bidders conferences are mostly a waste of time, we’re sufficiently masochistic (or, let’s say, careful) that we’ll occasionally look into one anyway.

New York State’s “Comprehensive Family Planning and Reproductive Health Program” bidders conference was a special example of silliness, because it consisted of the presenter reading aloud from slides that regurgitated the RFP. As the “conference” went on, it became steadily more apparent that nothing else was coming: just . . . a repetition of what’s in the RFP. This is as informative as it sounds.

After 20 minutes of listening to the presenter read, I gave up: I could read the RFP myself. Still, as I shook my head at the seemingly pointless waste of time, my mind drifted back to some of my experiences teaching college students, and I have to wonder whether the presenter read the RFP as a defensive strategy against inane questions that could easily be answered by the RFP itself. Something similar happens to me in class at times.

One recent example comes to mind. I had a student who seemed not to like to read much (note: this is a problem in English classes), and one day I handed out an essay assignment sheet with specific instructions on it. I told students to read it and let me know if they had questions. This student raised her hand and I had a conversation that went like this:

Student: “Can you just go over it in general?”
Me: “What’s confusing?”
Student: “I mean, can you just say in general what the assignment is about?”
Me: “That’s what the assignment sheet is for.”
Student: “I don’t understand. Can you go over it?”
Me: “What part confuses you?”
Student: “The entire thing.”
Me: “Which sentence is confusing to you?”
Student: “Can you just go over it in general?”

This was not a surrealist play, and by the end of the exchange—I have not reproduced the whole thing—I was somewhat confused, so I began reading each individual sentence and then checking in with the student. This was somewhat embarrassing for everyone in the class, but I didn’t really know what else to do.

When I got to the end of the assignment sheet, the student agreed that it was in fact clear. I know enough about teaching not to ask the obvious question—”What was all this about?”—and yet I’ve had enough of those experiences to identify, just a little, with the people running the world’s boringest* bidders conferences.


* Not an actual word, but I think it fits here.

Why Do the Feds Keep RFP Issuance Dates a Secret? The Upcoming FY ’14 GEAR UP and YouthBuild RFPs Illustrate the Obvious

An oddity of the Federal grant making process is that projected RFP issuance dates are usually kept secret.* Two cases in point illustrate how this works: the FY ’14 Department of Education GEAR UP and Department of Labor YouthBuild competitions.

Last week, former clients contacted us about both programs. Both clients are well-connected with the respective funders and strongly believe that the RFPs will soon be issued, likely by the end of the month. We believe them, as both were seeking fee quotes to write their GEAR UP or YouthBuild proposals. The challenge both face, however, is that the Department of Labor and Department of Education typically provide only about a 30-day period between RFP publication and the deadline. So, if you’re an average nonprofit not connected to the funding source, you can easily be blindsided by a sudden RFP announcement.

I’ve never understood why the Feds do this. Hollywood studios announce film premieres weeks and sometimes months in advance to build buzz. You know that when Apple holds an event at the Moscone Center, new products will be launched. Unlike most humans, though, the Feds think it’s a good idea to keep the exact timing of new funding opportunities a secret. This is beyond stupid, but they have been this way since I looked at my first Federal Register about 40 years ago. I don’t expect anything to change soon.

When we learn about likely upcoming RFPs, we usually note them in our free weekly Email Grant Alerts and, for particularly interesting announcements, at this blog. The best advice I can give you comes from that intrepid reporter Ned “Scotty” Scott at the end of Howard Hawks’s great 1951 SF film, The Thing from Another World:** “Watch the skies, everywhere! Keep looking. Keep watching the skies!”


* There are many oddities; this is just one.

** This movie has it all: monster loving scientist who spouts lots of stentorian Dr. Frankenstein bon mots about the importance of science, a rakish and fearless hero, a hot babe in a pointy bra, weird SF music, a claustrophobic setting that’s a precursor to “Alien” and many other movies, and James Arness (yes, that James Arness) as “The Thing.”

The unsolvable standardized data problem and the needs assessment monster

Needs assessments tend to come in two flavors: one basically instructs the applicant to “Describe the target area and its needs,” and the applicant chooses whatever data it can come up with. For most applicants that’ll be some combination of Census data, the local Consolidated Plan, data gathered by the applicant in the course of providing services, news stories and articles, and whatever else they can scavenge. Some areas have well-known local data sources; Los Angeles County, for example, is divided into eight Service Planning Areas (SPAs), and the County and United Way provide most data relevant to grant writers by SPA.

The upside to this system is that applicants can use whatever data makes the service area look worse (looking worse is better because it indicates greater need). The downside is that funders will get a heterogeneous mix of data that frequently can’t be compared from proposal to proposal. And since no one has the time or energy to audit or check the data, applicants can easily fudge the numbers.

High school dropout rates are a great example of the vagaries of data work: definitions of what constitutes a high school dropout vary from district to district, and many districts have strong financial incentives to avoid calling any particular student a “dropout.” The GED situation in the U.S. makes dropout statistics even harder to understand and compare: if a student drops out at age 16 and gets a GED at 18, is he a dropout or a high school graduate? The mobility of many high-school-age students makes it harder still, as does the advent of charter schools, online instruction, and the decline of the neighborhood school in favor of open enrollment policies. There is no universal way to measure this seemingly simple number.*

The alternative to the “do whatever” system is for the funder to say: You must use System X in manner Y. The funder gives the applicant a specific source and says, “Use this source to calculate the relevant information.” For example, the last round of YouthBuild funding required the precise Census topic and table name for employment statistics. Every applicant had to use “S2301 EMPLOYMENT STATUS” and “S1701 POVERTY STATUS IN THE PAST 12 MONTHS,” per page 38 of the SGA.

The SGA writers forgot, however, that not every piece of Census data is available (or accurate) for every jurisdiction. Since I’ve done too much data work for too many places, I’ve become very familiar with the “(X)” in American FactFinder 2 tables—which indicates that the requested data is not available.
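Anyone who processes these table extracts programmatically ends up writing the same defensive check. A minimal sketch, assuming a downloaded extract in which unavailable cells are flagged with “(X)” (the column layout, place names, and rates below are hypothetical, not actual Census output):

```python
# Defensive parsing of a Census-style table extract, where "(X)"
# marks data that is not available for a given jurisdiction.
# Place names and rates here are made up for illustration.

MISSING_MARKERS = {"(X)", "-", "N", ""}

def parse_rate(raw):
    """Return a percentage as a float, or None if the cell is a missing-data marker."""
    cell = raw.strip()
    if cell in MISSING_MARKERS:
        return None
    return float(cell.rstrip("%"))

def split_available(rows):
    """Separate jurisdictions with usable data from those without."""
    available, unavailable = {}, []
    for place, raw in rows:
        rate = parse_rate(raw)
        if rate is None:
            unavailable.append(place)
        else:
            available[place] = rate
    return available, unavailable

# Hypothetical rows from an employment-status table like S2301.
rows = [
    ("Springfield city", "12.4%"),
    ("Shelbyville town", "(X)"),  # data not published for this place
    ("Ogdenville CDP", "9.1%"),
]
available, unavailable = split_available(rows)
```

The point of the `unavailable` list is that the gap gets surfaced and explained in the proposal, rather than silently treated as zero need.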

In the case of YouthBuild, the SGA also specifies that dropout data must be gathered using a site called EdWeek. But dropout data can’t really be standardized, for the reasons that I only began to describe in the third paragraph of this post (I stopped there to keep you from dying of boredom, which would leave a gory mess for someone else to clean up). As local jurisdictions experiment with charter schools and online education, the data in sources like EdWeek is only going to become more confusing—and less accurate.

If a YouthBuild proposal loses a few need points because of unavailable or unreliable data sources, or data sources that miss particular jurisdictions (as EdWeek does), it probably won’t be funded, since an applicant needs an almost perfect score to get a YouthBuild grant. We should know, as we’ve written at least two dozen funded YouthBuild proposals over the years.

Standardized metrics from funders aren’t always good, and some people will get screwed if their projects don’t fit into a simple jurisdiction or if their jurisdiction doesn’t collect data in the same way as another jurisdiction.

As often happens at the juncture between the grant world and the real world, there isn’t an ideal way around this problem. From the perspective of funders, uniform data requirements give an illusion of fairness and equality. From the perspective of applicants trapped by particular reporting requirements, there may not be a good way to resolve the problem.

Applicants can try contacting the program officer, but that’s usually a waste of time: the program officer will just repeat the language of the RFP back to the applicant and tell the applicant to use its best judgment.

The optimal way to deal with the problem is probably to explain the situation in the proposal and offer alternative data. That might not work. Sometimes applicants just get screwed, and not in the way most people like to get screwed, and there’s little to be done about it.


* About 15 years ago, Isaac actually talked to the demographer who worked on dropout data at the Department of Education. This was in the pre-Internet days, and, after multiple phone transfers, he just happened to get the guy who worked on this stuff. The demographer explained why true, comprehensive dropout data is impossible to gather nationally, and some of his explanations have made it into this blog post.

People who do stuff like this rarely hear from outsiders, so when they find an interested party, they’re often eager to chat about the details of their work.

FEMA’s Assistance to Firefighters Grants (AFG) RFP Appears On Time

Years ago we had a series of spats with the Assistance to Firefighters Grants (AFG) program contact person, for reasons detailed in “Blast Bureaucrats for Inept Interpretations of Federal Regulations” and “FEMA and Grants.gov Together at Last,” both of which have a lot of complaining but also a deeper lesson: it pays to make noise when federal and other bureaucrats aren’t doing their jobs. If nothing else, the noise makes it more likely that those bureaucrats will do their jobs right in the future.*

For us, that future is now. A new AFG RFP was just issued. While it has a short 30-day deadline, it appeared in the Grants.gov database in a timely manner. Now fire departments that want to apply will have a fair shot. And pretty much every fire department should apply: there are 2,500 grants available. I don’t know how many fire departments there are in the U.S., but I do know that 2,500 is an appreciable portion of them and that 2,500 isn’t a typo—at least on our part.


* Plus, complaining is sometimes satisfying.

“Estimate” Means “Make It Up” In the Proposal and Grant Writing Worlds

Many RFPs ask for data that simply doesn’t exist—presumably because the people writing the RFPs don’t realize how hard it is to find phantom data. But other RFP writers realize that data can be hard to find and thus offer a way out through a magic word: “estimate.”

If you see the word “estimate” in an RFP, you can mentally substitute the term “make it up.” Chances are good that no one has the numbers being sought, and, consequently, you can shoot for a reasonable guess.

Instead of the word “estimate,” you’ll sometimes find RFPs that request very specific data and particular data sources. In the most recent YouthBuild funding round, for example, the RFP says:

Using data found at http://www.edweek.org/apps/gmap/, the applicant must compare the average graduation rate across all of the cities or towns to be served with the national graduation rate of 73.4% (based on Ed Week’s latest data from the class of 2009).

Unfortunately, that mapper, while suitably whizz-bang and high-tech in appearance, didn’t work for some of the jurisdictions we tried to use it on, and, as if that weren’t enough, it doesn’t drill down to the high school level. It’s quite possible, and often likely, that a given high school in a severely economically distressed area embedded in a larger, more prosperous community will have a substantially lower graduation rate than the community at large. This problem left us with a conundrum: we could report the data as best we could and lose a lot of points, or we could report the mapper’s data and then say, “By the way, it’s not accurate, and here’s an alternative estimate based on the following data.” The latter at least has the potential to get some points.
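Even when the mapper does return town-level numbers, the required “average graduation rate across all of the cities or towns” is itself ambiguous: a simple average of town rates and an enrollment-weighted average can land on opposite sides of the 73.4% national benchmark. A sketch with made-up numbers (the SGA doesn’t specify which average it wants):

```python
# Two defensible ways to compute a multi-town "average graduation rate."
# Town names, rates, and enrollments are invented for illustration.

NATIONAL_RATE = 73.4  # percent, per the SGA's Ed Week class-of-2009 figure

towns = [
    # (name, graduation rate %, cohort enrollment)
    ("Big Suburb", 85.0, 4000),
    ("Distressed City", 55.0, 1000),
]

# Simple average: each town counts equally, regardless of size.
simple_avg = sum(rate for _, rate, _ in towns) / len(towns)

# Weighted average: each student counts equally.
weighted_avg = (
    sum(rate * enrolled for _, rate, enrolled in towns)
    / sum(enrolled for _, _, enrolled in towns)
)
```

Here the simple average (70.0%) falls below the national rate while the weighted average (79.0%) sits above it, so the choice of method can determine whether the service area appears needy at all: one more reason to state your method and data sources explicitly in the narrative.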

We’ve found this general problem in RFPs other than YouthBuild, but I can’t find another good example off the top of my head, although HRSA New Access Point (NAP) FOAs and Carol M. White Physical Education Program (PEP) RFPs are also notorious for requesting difficult or impossible to find data.

If you don’t have raw numbers but you need to turn a proposal in, then you should estimate as best you can. This isn’t optimal, and we don’t condone making stuff up. But realize that if other people are making stuff up and you’re not, they’re going to get the grant and you’re not. Plus, if you’re having trouble finding data, there’s a decent chance everyone else is too.

When It’s Good For At-Risk Youth to Hang Out At McDonald’s: Searching for Connectivity in All the Wrong Places

“The Web-Deprived Study at McDonald’s” describes a role reversal: in the usual proposal universe, McDonald’s is the enemy—a purveyor of simple sugars and nutritionally bankrupt edible food-like substances that help drive obesity and disease. Internet service providers (ISPs), however, are supposed purveyors of knowledge and connections vital to linking the modern world. But many American ISPs have effectively no competition, and they charge accordingly—which means that many low-income families can’t afford Internet access.*

As a result, “Access to the Web has expanded [in recent years . . . ] but roughly a third of households with income of less than $30,000 a year and teens living at home still don’t have broadband access there, according to the Pew Research Center.” So McDonald’s, which offers free WiFi in most of its restaurants (using the term loosely), is the unexpected corporate hero in this article. Astute grant writers should pay attention, because future projects dealing with food and nutrition also need to address the digital divide.

In the next Carol M. White Physical Education Program proposals we write, for example, we’ll stress WiFi access points at the project delivery sites and at libraries, which offer an alternative to McDonald’s, with its allure of Big Macs and McNuggets.

But in proposals that deal with connectivity—like the 21st Century Community Learning Centers program—we’ll mention that McDonald’s, Starbucks, and other large corporations offer free Internet access, and, as a result, participants may be tempted to buy their non-nutritious food to get the WiFi. So those participants need the alternatives that the project will provide—along with nutritional counseling. Our hypothetical proposal will point out that the nutritional counseling might seem counter-intuitive, but well-supported counter-intuitive arguments sometimes read as stronger.


* A brief example: when I lived in Tucson, Arizona, Comcast offered 12 Mbps down for $60 a month. Qwest, the only “competition,” offered . . . 1.5 Mbps down. Max. In 1998, that would have been incredible. Today, it’s a joke. Comcast was (and is) a de facto monopoly and charged accordingly.

By contrast, I now live in Manhattan, where Verizon, RCN, Time Warner, and others offer Internet connections; I’m buying 25 Mbps down for $30 a month from RCN. Of course, I can still wander over to Starbucks for their “free” WiFi, but at $4 a cup for a wet, no-fat cap, the free access gets pretty pricey pretty fast. I can’t bring myself to enter the McDonald’s den of food inequity.

The Feds dumped RFPs right before Christmas

I have the misfortune of reading the Federal Register each day, which is every bit as scintillating as it sounds.* This week I noticed that at least two dozen RFPs were released on December 23—the Friday before Christmas.

The timing probably isn’t a coincidence—much of the nonprofit world effectively shuts down between Christmas and New Year’s every year (although we usually get a couple of calls from unusual nonprofits that are thinking about their next moves as the year winds down). The various agencies that issued RFPs know as much. They might have some internal reason for issuing RFPs when they did. But I’m skeptical and suspect that they wanted to bury some of these notices.

Which means you should take a closer-than-usual look. If you do, you’ll find some very interesting RFPs, like the Community Development Financial Institutions Program (CDFI; it has $123,000,000 available, with more than 100 grants to be made) or the Assistance to Firefighters Grant Program Fire Prevention and Safety Grants (AFG; it has $35,000,000 available and more than 200 grants to be made).**

Government entities, corporations, and other organizations routinely release bad news on Friday afternoons, when they know it will get less press. We’re likely seeing a similar dynamic here, especially because you can just about guarantee that program officer contacts won’t be available until after the New Year. Incidentally, if you’re looking for politically unpalatable regulations, this is also an excellent time of year to be studying the Federal Register.

The Federal Register offers useful insights into the federal process at work, and the minds behind this process. This aspect of our business helps us build weird bits of insight that we’d otherwise lack and that we’re happy to share with you.


* If that weren’t enough, I’m also on all sorts of foundation and state e-mail lists. The result is usually tedium, intermixed with the occasional major grant opportunity.

** AFG is a program of special interest to me for reasons explained in “FEMA Tardiness, Grants.gov, and Dealing with Recalcitrant Bureaucrats” and its sequels. As those posts describe, Seliger + Associates is doing its part to bring YOU better government.