
What counts as an eligible service area for SAMHSA’s “Resiliency in Communities After Stress and Trauma” (ReCAST) program?

Long ago, we wrote about what grant writers and applicants should do when confronted by a poorly organized RFP; because little external pressure pushes federal agencies to write RFPs that make sense, one finds too many RFPs that leave a lot of questions unanswered. SAMHSA’s “Resiliency in Communities After Stress and Trauma” (ReCAST) Notice of Funding Opportunity (NOFO) is a case in point: eligible applicants are those “communities that have recently faced civil unrest, community violence, and/or collective trauma within the past 24 months.” Okay: the NOFO will surely get more specific, right? But the ReCAST NOFO says that “Community violence is defined as the exposure to intentional acts of interpersonal violence committed in public spaces by individuals who are not related to the victim.” Okay: but how much violence? Do two murders count? Do two instances of battery count? Almost every city of any size has likely experienced at least two “intentional acts of interpersonal violence” committed by strangers in the prior 24 months. So how much is enough? Is more better, for purposes of being funded by this program? How are applicants to judge the feasibility of being funded? Being able to have some sense of eligibility is key, because preparing and submitting a SAMHSA application isn’t a minor endeavor.

Then there is the issue of “collective trauma.” Do natural disasters count? I’ve read the definitions of “collective trauma” on pages 8 – 9 of the ReCAST NOFO, and I’ve gone through all 41 uses of the word “trauma,” but I don’t see an answer to that specific question. Natural disasters are violent and often cause injury and death, which makes me lean towards “yes,” but the emphasis on “civil unrest” seems to point to a very specific set of issues that SAMHSA has in mind.

So I sent an email to the SAMHSA contact person, Jennifer Treger, asking her a version of the above. She wrote back: “Thank you for your inquiry. Please refer back to the definition that you have pointed out on pages 8-9 of the funding opportunity. If you determine your community meets the eligibility based on the definitions, please feel free to submit an application.” But how am I, or anyone else, supposed to judge whether a specific community is eligible based on that vague definition? I tried asking the question another way, and she reiterated, unhelpfully, that “We can only respond to what is in the NOFO.”

She also wrote that: “You can determine if you feel your community meets the definition for Collective Trauma as stated in the NOFO.” But the problem is that how I “feel” doesn’t matter at all to SAMHSA in determining eligibility; only SAMHSA’s judgments matter (SAMHSA has the money). It’d be useful for SAMHSA to list, in its view, which communities have had sufficient “civil unrest, community violence, and/or collective trauma within the past 24 months” to qualify for ReCAST. Or, alternatively, what metrics they’d use. An FBI Uniform Crime Reporting (UCR) violent crime rate of x per 1,000 people, for example, would be a specific metric.
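To make that concrete, here is a minimal sketch of what a quantitative eligibility screen could look like. The incident counts, population, and threshold below are invented purely for illustration; SAMHSA has published no such numbers.

```python
# Hypothetical eligibility screen: SAMHSA has published no such threshold;
# every number below is invented to show what a concrete metric would look like.
def violent_crime_rate_per_1000(incidents: int, population: int) -> float:
    """UCR-style rate: reported violent crimes per 1,000 residents."""
    return incidents / population * 1000

# Example community: 1,900 reported violent crimes, 240,000 residents
rate = violent_crime_rate_per_1000(1_900, 240_000)
THRESHOLD = 7.0  # invented cutoff, for illustration only

print(f"Violent crime rate: {rate:.1f} per 1,000 residents")
print("Plausibly eligible" if rate >= THRESHOLD else "Probably not competitive")
```

A funder could publish a table like this in an appendix; instead, applicants are left to guess.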

Too many federal agencies love the latitude that vagueness affords. It’s hard to advise our clients on whether they should apply to ReCAST without more specifics, but those specifics evidently aren’t going to be forthcoming. I guess we’ll have to try to look at our feelings and our clients’ feelings, and hope SAMHSA feels what we feel.

For more on similar matters, see “RFP Lunacy” and “Answering Repetitive or Impossible Questions: HRSA and Dental Health Edition.”


Grant writers and climate change: The Department of Energy’s Direct Air Capture program

The Department of Energy (DOE) just issued an unusual RFP, for a subject I can’t recall seeing the DOE previously wanting to fund: direct air capture (DAC) of CO2, the whole name of which will exhaust even patient readers: “Direct Air Capture Combined With Dedicated Long-Term Carbon Storage, Coupled to Existing Low-Carbon Energy.” Right now, DAC is in its infancy; this 2019 article summarizes the DAC situation, and Stripe Climate covers the overall need for DAC; Seliger + Associates hasn’t yet worked on a DAC project, although we have worked on projects related to geothermal energy, lithium metal, lithium batteries, flow batteries, resource recovery, and probably a few more I’m leaving out.* In these assignments, we utilize the approach described in “How we write scientific and technical grant proposals,” and we’re eager to work on DAC projects—if you’ve found your way here and are looking for a DOE grant writer, by all means give us a call at 800.540.8906 ext. 1, or email us at seliger@seliger.com. DAC’s immaturity makes it a particularly striking area for work, and, while the DOE program only has five awards available, it does have $15 million for grants “to better understand system costs, performance, as well as business case options for existing DAC technologies co-located with low-carbon thermal energy sources or industrial facilities.”

Specific activities are listed, too: “The objective of this FOA is to execute and complete front-end engineering design (FEED) studies of advanced DAC systems capable of removing a minimum of 5,000 tonne/yr. net CO2 from air based on a life cycle analysis (LCA), suitable for long duration carbon storage (i.e., geological storage or subsurface mineralization) or CO2 conversion/utilization (e.g., including, but not limited to, synthetic aggregates production, concrete production, and low carbon synthetic fuels and chemicals production).” It’s likely that the firms specializing in FEED don’t specialize in grant writing or storytelling, and that’s where we come into play.

DAC is still extremely expensive and infeasible on a scale that would affect global climate change, but it’s also getting cheaper fast—the same pattern of falling costs we’ve seen with batteries, solar, and wind, all of which have gotten cheaper far faster than even their most ardent advocates hoped. In many parts of the world, solar, wind, and batteries now appear to have a lower levelized cost of energy (LCOE) than natural gas (methane) plants, and, as long as another power source handles baseload, they can provide around 50% of total energy.
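For readers who haven’t run into the term, LCOE is just lifetime costs divided by lifetime energy output, with both streams discounted to present value. Here is a minimal sketch; the plant figures are invented for illustration, not taken from any real study.

```python
# Minimal LCOE sketch: lifetime discounted costs divided by lifetime
# discounted energy output. All plant figures are invented for illustration;
# real LCOE studies use far more detailed inputs.
def lcoe(capex, annual_opex, annual_mwh, years, discount_rate):
    """Levelized cost of energy in $/MWh."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t for t in range(1, years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t for t in range(1, years + 1))
    return costs / energy

# Hypothetical solar farm: $60M upfront, $1M/yr O&M, 200,000 MWh/yr,
# 25-year life, 6% discount rate
print(f"LCOE: ${lcoe(60e6, 1e6, 200_000, 25, 0.06):.0f}/MWh")
```

The same arithmetic, applied to DAC, is what the FEED studies this FOA funds are meant to pin down: cost per tonne of CO2 removed rather than cost per MWh.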

Ideally, forty years ago, humans would’ve collectively acted on the need for carbon emission reductions by building out nuclear power, introducing carbon taxes, and taking similar measures. We collectively did the opposite, and global CO2 levels are now in the 420 parts per million (ppm) range, and they’re almost certainly going to rise above 500 ppm in the coming decades. Pre-industrial global CO2 levels were around 280 ppm, and, the last time carbon dioxide concentrations were this high, the world was several degrees Celsius warmer than it is now—or has been through human history. In the scheme of the world economy, $15 million isn’t a lot—but it’s a start.

*Several times over the years, we’ve gotten calls from inventors pitching the elusive perpetual motion machine. While it’s fun to talk to these guys (and they’re always guys), we’ve so far declined to accept one of these jobs!


Confusing NIH and other Small Business Innovation Research (SBIR) application guidance

In theory, an “application guide” for a Small Business Innovation Research (SBIR) grant from a federal agency is meant to make the application process easier: the applicant should presumably be able to read the application guide and follow it, right? Wrong, as it turns out. The difficulties start with finding the application guide and associated RFP (or “FOA,” Funding Opportunity Announcement in NIH-land). If you go to grants.gov today, Sept. 9, dear reader, and search for “SBIR,” you’ll get 74 matching results—most for National Institutes of Health (NIH) programs, which we’ll use as an example for the sake of this exercise, and because I worked on one recently. I’m going to use the “PA-18-705 SBIR Technology Transfer (R43/R44 Clinical Trial Not Allowed)” program, which has download instructions at Grants.gov. When you download and review the “instructions,” however, you’ll find this complication:

It is critical that applicants follow the SBIR/STTR (B) Instructions in the SF424 (R&R) SBIR/STTR Application Guide (//grants.nih.gov/grants/guide/url_redirect.htm?id=32000) except where instructed to do otherwise (in this FOA or in a Notice from the NIH Guide for Grants and Contracts (//grants.nih.gov/grants/guide/)). Conformance to all requirements (both in the Application Guide and the FOA) is required and strictly enforced.

Notice that the URLs in the quoted section are incomplete: it’s up to the applicant to track down the true SBIR application guide and correct FOA. I did that, but the tricky phrase is “follow the SBIR/STTR (B) Instructions […] except where instructed to do otherwise.” For the particular NIH application we were working on, the FOA and the Application Guide disagreed with each other concerning how the narrative should be structured and what an applicant needed to include in their proposal. So what’s an applicant, or, in this case, a hired-gun grant writer, to do? With some SBIRs, there is no canonical set of questions and responses: there’s the “general” set of questions and the FOA-specific set, with no instructions about how to reconcile them.

To solve this conundrum, I decided to develop a hybridized version for the proposal structure: I used the general narrative structuring questions from the application guide, and I tacked on any extra questions that I could discern in the program-specific FOA. The only plausible alternative to this hybridized approach would have been to contact the NIH program officer listed in the FOA. As an experienced grant writer, however, I didn’t reach out, because I know that program officers confronted with issues like this will respond with a version of “That’s an interesting question. Read the FOA.”
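If it helps to picture the hybridization mechanically, it amounts to starting from the application guide’s general questions and appending any FOA-specific items not already covered. Here’s a rough sketch; the headings are placeholders, not the actual NIH language.

```python
# Sketch of the "hybridized" outline: the application guide's general questions
# come first, then any FOA-specific questions not already covered are appended.
# The strings below are invented placeholders, not actual NIH headings.
guide_questions = [
    "Significance",
    "Innovation",
    "Approach",
    "Commercialization Plan",
]
foa_specific_questions = [
    "Approach",                      # duplicate: already covered by the guide
    "Technology Transfer Strategy",  # FOA-only item: gets appended
]

hybrid_outline = list(guide_questions)
for q in foa_specific_questions:
    if q not in hybrid_outline:
        hybrid_outline.append(q)

for i, heading in enumerate(hybrid_outline, 1):
    print(f"{i}. {heading}")
```

The ordering choice matters: leading with the guide’s structure keeps reviewers on familiar ground, while the appended FOA items show you actually read the FOA.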

The challenge of multiple, conflicting SBIR guidance documents isn’t exclusive to the NIH: we’ve worked on Dept. of Energy (DOE) SBIRs that feature contradictory guides, FOAs/RFPs, and related documents. It takes a lot of double checking and cross checking to try to make sure nothing’s been missed. The real question is why inherently science-based agencies like NIH and DOE are seemingly incapable of producing the same kind of single RFP document typically used by DHHS, DOL, etc. Also, it’s very odd that we’ve never worked on an SBIR proposal for which the federal agency has provided a budget template in Excel. In the NIH example discussed above, the budget form was in Acrobat, which meant I had to model it in Excel. Excel has been the standard for spreadsheets/budgets since the ’80s.

We (obviously) work on grant applications all the time, and yet the SBIR reconciliation process is confusing and difficult even for us professional grant writers. The SBIR narratives, once we understand how to structure them, usually aren’t very challenging for us to write, but getting to the right structure sure is. For someone not used to reading complicated grant documents, and looking at SBIR guidance documents for the first time, the process would be a nightmare. Making SBIRs “easier” with extra, generic application guides that can be unpredictably superseded actually makes the process harder. This is good for our business but bad for science and innovation.


Why we like writing SAMHSA proposals: the RFP structure is clear and never changes

We wrote our first funded Substance Abuse and Mental Health Services Administration (SAMHSA) grant about 25 years ago, and there’s something notable about SAMHSA: unlike those of virtually all of its federal agency sisters, SAMHSA RFPs are well structured. Even better, the RFP structure seemingly never changes—or at least not for the past quarter century. This makes drafting a SAMHSA proposal refreshingly straightforward and enables us, and other competent writers, to (relatively) easily and coherently spin our grant writing “Tales of Brave Ulysses.” The word “coherently” in the preceding sentence is important: RFPs that destroy narrative flow by asking dozens of unrelated sub-questions also destroy the coherence of the story the writer is trying to tell and the program the writer is trying to describe. SAMHSA RFPs typically allow the applicant to answer the 5Ws and H.

A SAMHSA RFP almost always uses a variation on a basic, five element structure:

  • Section A: Population of Focus and Statement of Need
  • Section B: Proposed Implementation Approach
  • Section C: Proposed Evidence-Based Service/Practice
  • Section D: Staff and Organizational Experience
  • Section E: Data Collection and Performance Measurement

While SAMHSA RFPs, of course, include many required sub-headers that demand corresponding details, this structure lends itself to the standard outline format that we prefer (e.g., I.A.1.a). We like using outlines because they make it easy for us to organize our presentation and for reviewers to find responses to specific items requested in the RFP—as long as the outlines make sense and, as noted above, don’t interrupt narrative flow. In this respect, SAMHSA RFPs are easy for us to work with.

In recent years, SAMHSA has also reduced the maximum proposal length (exclusive of many required attachments) from 25 single-spaced pages to, in many cases, 10 single-spaced pages. Although it’s generally harder to write about complex subjects with a severe page limit than with a generous one, we’re good at packing a lot into a small space.* A novice grant writer, however, is likely to be intimidated by a SAMHSA RFP, due to the forbidding nature of the typical project concept and the brief page limit. In our experience, very long proposals are rarely better and are often worse than shorter ones.

We haven’t talked in this post about what SAMHSA does, because the nature of the organization’s mission doesn’t necessarily affect the kinds of RFPs the organization produces. Still, and not surprisingly, given its name, SAMHSA is the primary direct federal funder of grants for substance abuse and persistent mental illness prevention and treatment. With the recent and continuing tsunami of the twin correlated scourges of opioid use disorder (OUD) and homelessness, Congress has appropriated greater funding for SAMHSA and the agency is going through one of its cyclical rises in prominence in the grant firmament. Until we as a society get a handle on the opioid crisis, SAMHSA is going to get a lot of funding and attention.


* When writing a short proposal in response to a complex RFP, keep Rufo’s small luggage in Robert Heinlein’s Glory Road in mind: “Rufo’s baggage turned out to be a little black box about the size and shape of a portable typewriter. He opened it. And opened it again. And kept on opening it–And kept right on unfolding its sides and letting them down until the durn thing was the size of a small moving van and even more packed.” The bag was bigger on the inside than the outside, like a well-written SAMHSA proposal.


Another piece of the evaluation puzzle: Why do experiments make people unhappy?

The more time you spend around grants, grant writing, nonprofits, public agencies, and funders, the more apparent it becomes that the “evaluation” section of most proposals is only barely separate in genre from mythology and folktales, and most grant RFPs include requests for evaluations that are, if not outright bogus, then at least improbable—they’re not going to happen in the real world. We’ve written quite a bit on this subject, for two reasons: one is my own intellectual curiosity; the other is reassuring clients who worry that funders want a real-deal, full-on, intellectually and epistemologically rigorous evaluation (hint: they don’t).

That’s the wind-up to “Why Do Experiments Make People Uneasy?”, Alex Tabarrok’s post on a paper about how “Meyer et al. show in a series of 16 tests that unease with experiments is replicable and general.” Tabarrok calls the paper “important and sad,” and I agree, but the paper also reveals an important (and previously implicit) point about evaluation proposal sections for nonprofit and public agencies: funders don’t care about real evaluations because a real evaluation will probably make the applicant, the funder, and the general public uneasy. Not only do real evaluations make people uneasy, but most people don’t even understand how a real evaluation works in a human-services organization, how to collect data, what a randomized controlled trial is, and so on.

There’s an analogous situation in medicine; I’ve spent a lot of time around doctors who are friends, and I’d love to tell some specific stories,* but I’ll say that while everyone is nominally in favor of “evidence-based medicine” as an abstract idea, most of those who superficially favor it don’t really understand what it means, how to do it, or how to make major changes based on evidence. It’s often an empty buzzword, like “best practices” or “patient-centered care.”

In many nonprofit and public agencies, the same is true of evaluations and effectiveness: everyone putatively believes in them, but almost no one understands them or wants real evaluations conducted. Plus, beyond that epistemic problem, even when an evaluation finds a program effective in a given circumstance (it usually doesn’t), the results don’t necessarily transfer. If you’re curious about why, Experimental Conversations: Perspectives on Randomized Trials in Development Economics is a good place to start—and this is the book least likely to be read, out of all the books I’ve ever recommended here. Normal people like reading 50 Shades of Grey and The Name of the Rose, not Experimental Conversations.

In the meantime, some funders have gotten word about RCTs. For example, the Department of Justice’s (DOJ) Bureau of Justice Assistance’s (BJA) Second Chance Act RFPs have bonus points in them for RCTs. I’ll be astounded if more than a handful of applicants even attempt a real RCT—for one thing, there’s not enough money available to conduct a rigorous RCT, which typically requires paying the control group to follow up for long-term tracking. Whoever put the RCT in this RFP probably wasn’t thinking about that real-world issue.
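For readers who have never seen one up close, the core mechanic of an RCT is nothing more exotic than random assignment; the hard (and expensive) parts are consent, retention, and long-term follow-up. A minimal sketch, with invented participant IDs:

```python
# Minimal sketch of the core of an RCT: random assignment to treatment and
# control groups. Participant IDs are invented for illustration; a real
# evaluation also needs consent, long-term tracking, and (usually) money to
# keep the control group engaged in follow-up.
import random

participants = [f"participant_{i:03d}" for i in range(1, 101)]
random.seed(42)          # fixed seed so the assignment is reproducible
random.shuffle(participants)

midpoint = len(participants) // 2
treatment = participants[:midpoint]   # receive program services
control = participants[midpoint:]     # receive usual services only

print(f"Treatment group: {len(treatment)}, Control group: {len(control)}")
```

The assignment step costs nothing; it’s everything after the assignment that the typical grant budget can’t cover.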

It’s easy to imagine a world in which donors and funders demand real, true, and rigorous evaluations. But they don’t. Donors mostly want to feel warm fuzzies and the status that comes from being fawned over—and I approve of those things too, by the way, as they make the world go round. Government funders mostly want to make Congress feel good, while cultivating an aura of sanctity and kindness. The number of funders who will make nonprofit funding contingent on true evaluations is small, and the number willing to pay for true evaluations is smaller still. And that’s why we get the system we get. The mistake some nonprofits make is thinking that the evaluation sections of proposals are for real. They’re not. They’re almost pure proposal world.


* The stories are juicy and also not flattering to some of the residency and department heads involved.


Maybe reading is harder than I thought: On “The Comprehensive Family Planning and Reproductive Health Program”

We very occasionally pay attention to bidders conferences; usually, however, we avoid them for the reasons last discussed in “My first bidders conference, or, how I learned what I already knew.” Despite knowing that bidders conferences are mostly a waste of time, we’re sufficiently masochistic (read: careful) that we’ll occasionally look into one anyway.

New York State’s “Comprehensive Family Planning and Reproductive Health Program” bidders conference was a special example of silly because it literally consisted of the presenter reading from slides that regurgitated the RFP. As the “conference” went on, it became steadily more apparent that it would consist of nothing but . . . repeating what’s in the RFP. This is as informative as it sounds.

After 20 minutes of listening to the presenter read, I gave up. I can read it myself. Still, as I shook my head at the seemingly pointless waste of time, my mind drifted back to some of my experiences teaching college students, and I have to wonder if the presenter read the RFP as a defensive strategy against inane questions that could easily be answered by the RFP. Something similar happens to me in class at times.

One recent example comes to mind. I had a student who seemed not to like to read much (note: this is a problem in English classes), and one day I handed out an essay assignment sheet with specific instructions on it. I told students to read it and let me know if they had questions. This student raised her hand and I had a conversation that went like this:

Student: “Can you just go over it in general?”
Me: “What’s confusing?”
Student: “I mean, can you just say in general what the assignment is about?”
Me: “That’s what the assignment sheet is for.”
Student: “I don’t understand. Can you go over it?”
Me: “What part confuses you?”
Student: “The entire thing.”
Me: “Which sentence is confusing to you?”
Student: “Can you just go over it in general?”

This was not a surrealist play, and by the end of the exchange (which I haven’t reproduced in full here) I was somewhat confused, so I began reading each individual sentence and then checking in with the student. This was embarrassing for everyone in the class, but I didn’t really know what else to do.

When I got to the end of the assignment sheet, the student agreed that it was in fact clear. I know enough about teaching not to ask the obvious question—”What was all this about?”—and yet I’ve had enough of those experiences to identify, just a little, with the people running the world’s boringest* bidders conferences.


* Not an actual word, but I think it fits here.


Why Do the Feds Keep RFP Issuance Dates a Secret? The Upcoming FY ’14 GEAR UP and YouthBuild RFPs Illustrate the Obvious

An oddity of the Federal grant making process is that projected RFP issuance dates are usually kept secret.* Two cases in point illustrate how this works: the FY ’14 Department of Education GEAR-UP and Department of Labor YouthBuild competitions.

Last week, former clients contacted us about both programs. Both clients are well-connected with the respective funders and strongly believe that the RFPs will be soon issued, likely by the end of the month. We believe them, as both were seeking fee quotes to write their GEAR-UP or YouthBuild proposal. The challenge both face, however, is that the Department of Labor and Department of Education typically only provide about a 30-day period between RFP publication and the deadline. So, if you’re an average nonprofit not connected to the funding source, you can easily be blindsided by a sudden RFP announcement.

I’ve never understood why the Feds do this. Hollywood studios announce film premieres weeks and sometimes months in advance to build buzz. You know that when Apple holds an event at the Moscone Center, new products will be launched. Unlike most humans, though, the Feds think it’s a good idea to keep the exact timing of new funding opportunities a secret. This is beyond stupid, but they have been this way since I looked at my first Federal Register about 40 years ago. I don’t expect anything to change soon.

When we learn about likely upcoming RFPs, we usually note them in our free weekly Email Grant Alerts and, for particularly interesting announcements, at this blog. The best advice I can give you comes from that intrepid reporter Ned “Scotty” Scott at the end of Howard Hawks’s great 1951 SF film, The Thing from Another World:** “Watch the skies, everywhere! Keep looking. Keep watching the skies!”


* There are many oddities; this is just one.

** This movie has it all: monster loving scientist who spouts lots of stentorian Dr. Frankenstein bon mots about the importance of science, a rakish and fearless hero, a hot babe in a pointy bra, weird SF music, a claustrophobic setting that’s a precursor to “Alien” and many other movies, and James Arness (yes, that James Arness) as “The Thing.”


The unsolvable standardized data problem and the needs assessment monster

Needs assessments tend to come in two flavors. The first basically instructs the applicant to “Describe the target area and its needs,” and the applicant chooses whatever data it can come up with. For most applicants that’ll be some combination of Census data, local Consolidated Plan, data gathered by the applicant in the course of providing services, news stories and articles, and whatever else they can scavenge. Some areas have well-known local data sources; Los Angeles County, for example, is divided into eight Service Planning Areas (SPAs), and the County and United Way provide most data relevant to grant writers by SPA.

The upside to this system is that applicants can use whatever data makes the service area look worse (looking worse is better because it indicates greater need). The downside is that funders will get a heterogeneous mix of data that frequently can’t be compared from proposal to proposal. And since no one has the time or energy to audit or check the data, applicants can easily fudge the numbers.

High school dropout rates are a great example of the vagaries in data work: definitions of what constitutes a high school dropout vary from district to district, and many districts have strong financial incentives to avoid calling any particular student a “dropout.” The GED situation in the U.S. makes dropout statistics even harder to understand and compare; if a student drops out at age 16 and gets a GED at 18 is he a dropout or a high school graduate? The mobility of many high-school age students makes it harder still, as does the advent of charter schools, on-line instruction and the decline of the neighborhood school in favor of open enrollment policies. There is no universal way to measure this seemingly simple number.*

The alternative to the “do whatever” system is for the funder to say: You must use System X in manner Y. The funder gives the applicant a specific source and says, “Use this source to calculate the relevant information.” For example, the last round of YouthBuild funding required the precise Census topic and table name for employment statistics. Every applicant had to use “S2301 EMPLOYMENT STATUS” and “S1701 POVERTY STATUS IN THE PAST 12 MONTHS,” per page 38 of the SGA.

The SGA writers forgot, however, that not every piece of Census data is available (or accurate) for every jurisdiction. Since I’ve done too much data work for too many places, I’ve become very familiar with the “(X)” in American Factfinder2 tables—which indicates that the requested data is not available.
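These days the same tables can be pulled programmatically from the Census Bureau’s data API, which at least makes the missing-data problem visible up front. Here’s a sketch; the variable codes are our best reading of the S2301 and S1701 tables (unemployment rate and percent below poverty), the 2021 ACS 5-year vintage is used only as an example, and both should be verified against the API’s published variable list before anyone relies on the numbers.

```python
# Sketch of pulling the two YouthBuild-required subject tables from the Census
# Bureau's data API. Variable codes are our best reading of S2301/S1701
# (unemployment rate %, poverty rate %); verify against
# https://api.census.gov/data/2021/acs/acs5/subject/variables.html before use.
# Missing or suppressed data comes back as None or as negative sentinel values.
import requests

URL = "https://api.census.gov/data/2021/acs/acs5/subject"
params = {
    "get": "NAME,S2301_C04_001E,S1701_C03_001E",  # unemployment %, poverty %
    "for": "place:44000",   # Los Angeles city (state 06, place 44000), as an example
    "in": "state:06",
}

header, *rows = requests.get(URL, params=params, timeout=30).json()
for row in rows:
    record = dict(zip(header, row))
    for var in ("S2301_C04_001E", "S1701_C03_001E"):
        value = record[var]
        if value is None or float(value) < 0:   # sentinel for unavailable data
            print(f"{record['NAME']}: {var} not available")
        else:
            print(f"{record['NAME']}: {var} = {value}%")
```

Checking for the sentinel values before a deadline is the programmatic equivalent of spotting the “(X)” in a FactFinder table, except you find out before you’ve built the narrative around the number.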

In the case of YouthBuild, the SGA also specifies that dropout data must be gathered using a site called Edweek. But dropout data can’t really be standardized for the reasons that I only began to describe in the third paragraph of this post (I stopped to make sure that you don’t kill yourself from boredom, which would leave a gory mess for someone else to clean up). As local jurisdictions experiment with charter schools and online education, the data in sources like Edweek is only going to become more confusing—and less accurate.

If a YouthBuild proposal loses a few need points because of unavailable or unreliable data sources, or data sources that miss particular jurisdictions (as Edweek does) it probably won’t be funded, since an applicant needs almost a perfect score to get a YouthBuild grant. We should know, as we’ve written at least two dozen funded YouthBuild proposals over the years.

Standardized metrics from funders aren’t always good, and some people will get screwed if their projects don’t fit into a simple jurisdiction or if their jurisdiction doesn’t collect data in the same way as another jurisdiction.

As often happens at the juncture between the grant world and the real world, there isn’t an ideal way around this problem. From the perspective of funders, uniform data requirements give an illusion of fairness and equality. From the perspective of applicants trapped by particular reporting requirements, there may not be a good way to resolve the problem.

Applicants can try contacting the program officer, but that’s usually a waste of time: the program officer will just repeat the language of the RFP back to the applicant and tell the applicant to use its best judgment.

The optimal way to deal with the problem is probably to explain the situation in the proposal and offer alternative data. That might not work. Sometimes applicants just get screwed, and not in the way most people like to get screwed, and there’s little to be done about it.


* About 15 years ago, Isaac actually talked to the demographer who worked on dropout data at the Department of Education. This was in the pre-Internet days, and he just happened to reach the right person after multiple phone transfers. The demographer explained why true, comprehensive dropout data is impossible to gather nationally, and some of his explanations have made it into this blog post.

No one ever talks to people who do stuff like this, so when they find an interested party, they’re often eager to chat about the details of their work.


FEMA’s Assistance to Firefighters Grants (AFG) RFP Appears On Time

Years ago we had a series of spats with the Assistance to Firefighters Grants (AFG) program contact person, for reasons detailed in “Blast Bureaucrats for Inept Interpretations of Federal Regulations” and “FEMA and Grants.gov Together at Last,” both of which have a lot of complaining but also have a deeper lesson: it pays to make noise when federal and other bureaucrats aren’t doing their jobs. If nothing else, the noise makes it more likely that those bureaucrats will do their jobs right in the future.*

For us, that future is now. A new AFG RFP was just issued. While it has a short 30-day deadline, it appeared in the Grants.gov database in a timely manner. Now fire departments that want to apply will have a fair shot. And pretty much every fire department should apply: there are 2,500 grants available. I don’t know how many fire departments there are in the U.S., but I do know that 2,500 is an appreciable portion of them and that 2,500 isn’t a typo—at least on our part.


* Plus, complaining is sometimes satisfying.


“Estimate” Means “Make It Up” In the Proposal and Grant Writing Worlds

Many RFPs ask for data that simply doesn’t exist—presumably because the people writing the RFPs don’t realize how hard it is to find phantom data. But other RFP writers realize that data can be hard to find and thus offer a way out through a magic word: “estimate.”

If you see the word “estimate” in an RFP, you can mentally substitute the term “make it up.” Chances are good that no one has the numbers being sought, and, consequently, you can shoot for a reasonable guess.

Instead of the word “estimate,” you’ll sometimes find RFPs that request very specific data and particular data sources. In the most recent YouthBuild funding round, for example, the RFP says:

Using data found at http://www.edweek.org/apps/gmap/, the applicant must compare the average graduation rate across all of the cities or towns to be served with the national graduation rate of 73.4% (based on Ed Week’s latest data from the class of 2009).

Unfortunately, that mapper, while suitably whiz-bang and high-tech appearing, didn’t work for some of the jurisdictions we tried to use it on, and, as if that weren’t enough, it doesn’t drill down to the high school level. It’s quite possible and often likely that a given high school in a severely economically distressed area embedded in a larger, more prosperous community is going to have a substantially lower graduation rate than the community at large. This problem left us with a conundrum: we could report the data as best we could and lose a lot of points, or we could report the mapper’s data and then say, “By the way, it’s not accurate, and here’s an alternative estimate based on the following data.” That at least has the potential to get some points.
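When the mapper does cough up city-level rates, the comparison the RFP asks for is simple arithmetic. Here’s a sketch with invented city figures (the 73.4% national rate is the one the RFP cites); note that a straight average and an enrollment-weighted average can differ, and the RFP doesn’t say which it wants.

```python
# Sketch of the comparison the RFP requires: average the graduation rates of
# the cities/towns to be served and compare to the national rate. City names,
# rates, and cohort sizes are invented for illustration; 73.4% is the national
# figure the RFP cites (Ed Week, class of 2009).
NATIONAL_RATE = 73.4  # percent, per the RFP

served_cities = {            # city: (graduation rate %, students in cohort)
    "City A": (61.2, 4_800),
    "City B": (68.5, 1_200),
    "City C": (55.9, 2_300),
}

simple_avg = sum(rate for rate, _ in served_cities.values()) / len(served_cities)
weighted_avg = (
    sum(rate * n for rate, n in served_cities.values())
    / sum(n for _, n in served_cities.values())
)

print(f"Simple average: {simple_avg:.1f}% vs national {NATIONAL_RATE}%")
print(f"Enrollment-weighted average: {weighted_avg:.1f}% vs national {NATIONAL_RATE}%")
```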

We’ve found this general problem in RFPs other than YouthBuild, though I can’t recall another specific example off the top of my head; HRSA New Access Point (NAP) FOAs and Carol M. White Physical Education Program (PEP) RFPs are also notorious for requesting difficult- or impossible-to-find data.

If you don’t have raw numbers but you need to turn a proposal in, then you should estimate as best you can. This isn’t optimal, and we don’t condone making stuff up. But realize that if other people are making stuff up and you’re not, they’re going to get the grant and you’re not. Plus, if you’re having trouble finding data, there’s a decent chance everyone else is too.