
Yours is not the only organization that isn’t worried about long-term grant evaluations

Ten years ago, in “Studying Programs is Hard to Do: Why It’s Difficult to Write a Compelling Evaluation,” we explained why real program evaluations are hard and why the overwhelming majority of grant programs don’t demand them; instead, they want cargo cult evaluations. Sometimes honest evaluations or follow-up data for programs like YouthBuild are actively punished:

As long as we’re talking about data, I can also surmise that the Dept. of Labor is implicitly encouraging applicants to massage data. For example, existing applicants have to report on the reports they’ve previously submitted to the DOL, and they get points for hitting various kinds of targets. In the “Placement in Education or Employment” target, “Applicants with placement rates of 89.51% or higher will receive 8 points for this subsection,” and for “Retention in Education or Employment,” “Applicants with retention rates of 89.51% or higher will receive 8 points for this subsection.” Attaining these rates with a very difficult-to-reach population is, well, highly improbable.

That means a lot of previously funded applicants have also been… rather optimistic with their self-reported data.

To be blunt, no one working with the hard-to-serve YouthBuild population is going to get 90% of their graduates in training or employment. That’s just not possible. But DOL wants it to be possible, which means applicants need to find a way to make it seem possible / true.

So. That brings us to a much more serious topic, in the form of “The Engineer vs. the Border Patrol: One man’s quest to outlaw Customs and Border Protection’s internal, possibly unconstitutional immigration checkpoints,” which is a compelling, beautiful, and totally outrageous read. It is almost impossible to read that story and not come away fuming at the predations of the Border Patrol. Leaving that aspect aside, however, this stood out to me:

Regarding Operation Stonegarden, the DHS IG issued a report in late 2017 that was blunt in its assessment: “FEMA and CBP have not collected reliable program data or developed measures to demonstrate program performance resulting from the use of more than $531.5 million awarded under Stonegarden since FY 2008.”

Even in parts of government where outcomes really matter, it’s possible to have half a billion dollars disappear, and, basically, no one cares. If FEMA can lose all that money and not even attempt to measure whether the money is being spent semi-effectively, what does that communicate to average grant-funded organizations that get a couple of hundred thousand dollars per year?

We’re not telling you to lie in evaluation sections of your proposal. But we are reminding you, as we often do, about the difference between the real world and the proposal world. What you do with that information is up to you.


The HRSA Uniform Data System (UDS) Mapper: A complement to Census data

By now you’re familiar with writing needs assessments and with using Census data in them. While Census data is useful for economic, language, and many other socioeconomic indicators, it’s not very useful for most health surveillance data—and most health-related data is hard to get. This is because it’s collected in weird ways, by county or state entities, and often compiled into reports for health districts and other non-standard sub-geographies that don’t match up with census tracts or even municipal boundaries. The collection and reporting mess often makes it hard to compare areas. Enter HRSA’s Uniform Data System (UDS) Mapper tool.

I don’t know the specifics of the UDS Mapper’s genesis, but I’ll guess that HRSA got tired of receiving proposals that used a hodgepodge of non-comparable data derived from a byzantine collection of sources, some likely reliable and some less so. To take one example we’re intimately familiar with, consider the eight Service Planning Areas (SPAs) by which LA County aggregates most data. If you’ve written proposals to LA City or LA County, you’ve likely encountered SPA data. While SPA data is very useful, it doesn’t contain much, if any, health care data. Health care data is largely maintained by the LA County Health Department and doesn’t correspond to SPAs, leaving applicants frustrated.

(As an aside, school data is yet another wrinkle in this, since it’s usually collected by school or by district, and those sources usually don’t match up with census tracts or political subdivisions. There’s also Kids Count data, but that is usually sorted by state or county—not that helpful for a target area within the huge LA County, which has a population of 10 million.)

The UDS Mapper combines Census data with reports from Section 330 providers, then sorts that information by zip code, city, county, and state levels. It’s not perfect and should probably not be your only data source. But it’s a surprisingly rich and robust data source that most non-FQHCs don’t yet know about.
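
If you export ZIP-level figures from the UDS Mapper, remember that a service area usually spans several ZIP codes, so the rates need to be rolled up rather than naively averaged. Below is a minimal sketch of a population-weighted rollup; the ZIP codes, populations, and the uninsured-rate indicator are hypothetical placeholders, not real UDS or Census figures.

```python
# Minimal sketch: roll ZIP-level rates (as you might export from the UDS Mapper)
# up to a single service-area figure by population weighting. The ZIP codes,
# populations, and rates below are hypothetical placeholders, not real data.

zip_data = {
    # zip: (population, uninsured_rate_percent)
    "90011": (103_000, 24.1),
    "90037": (64_000, 21.8),
    "90044": (89_000, 19.5),
}

total_pop = sum(pop for pop, _ in zip_data.values())
weighted_rate = sum(pop * rate for pop, rate in zip_data.values()) / total_pop

print(f"Service-area population: {total_pop:,}")
print(f"Population-weighted uninsured rate: {weighted_rate:.1f}%")
```

A straight average would give each ZIP code equal weight regardless of how many people live there; weighting by population keeps the service-area figure honest.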

Everyone knows about Census data. Most know about Google Scholar, which can be used to improve the citations and scholarly framework of your proposal (and since this is a grant proposal, no one checks the cites, but reviewers do notice whether they’re there). HRSA hasn’t done much to promote UDS data outside the cloistered confines of FQHCs, so we’re doing our part to make sure you know about this data goldmine.


Sometimes a call will get you the data you need

This weekend I was working on a proposal that requires California education data. The California Department of Education has a decent data engine at the aptly-named DataQuest, so I was able to look the data up—but the data didn’t really make sense. One school in the target area, for example, had 30,700 students listed as attending. As anyone who has attended or seen an American high school knows, that number is absurd. Other data seemed off too, but I wasn’t sure what to do, so I included it as listed by the website and moved on with the rest of the proposal.

This morning, Isaac was editing the draft and noticed the dubious data, so he decided to call LAUSD’s data department. A “Data Specialist” picked up the phone and lived up to his title as he explained what’s up. The school with 30,700 students is a “continuation” school and the state data is a catch-all for all LAUSD continuation students. Moreover, the Data Specialist explained that California has odd dropout rate rules, such that it’s hard to actually, really, officially drop out; instead, the school of last attendance reports that a student has stopped attending, but that student can stay on the books until age 21.

Some California districts also have a complex patchwork of rules and regulations regarding which kids go to which schools. Charters and magnets further complicate calculating accurate dropout rate information.

The Data Specialist ultimately directed us to better, more accurate data, which we included in the proposal. And now we know the details of California’s system, thanks to the call Isaac made. Without that call, we wouldn’t have had quite the right data for the schools. What I originally found would’ve worked okay, but it wouldn’t have been as detailed or accurate.

In short, online data systems are not as good as many people (and RFPs) assume. If you get data that doesn’t seem to make sense, you need to run a sanity check on that data, just like you should with Waze. Don’t die by GPS.
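
If you’re pulling education data for more than a handful of schools, you can even rough out that sanity check in a few lines before anything reaches a draft: flag any value that’s implausible on its face, then make the phone call. A minimal sketch, with made-up school names and numbers and an arbitrary plausibility ceiling:

```python
# Minimal sketch: flag implausible enrollment figures before they reach a draft.
# The school names and numbers are made up, and the 4,000-student ceiling is an
# arbitrary rule of thumb, not an official threshold.

schools = [
    {"name": "Example High School", "enrollment": 1_850},
    {"name": "Example Continuation School", "enrollment": 30_700},  # suspicious
    {"name": "Example Charter Academy", "enrollment": 420},
]

MAX_PLAUSIBLE_ENROLLMENT = 4_000

for school in schools:
    if school["enrollment"] > MAX_PLAUSIBLE_ENROLLMENT:
        print(f"Check by phone: {school['name']} lists "
              f"{school['enrollment']:,} students")
```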

By the way: When you get helpful bureaucrats, be nice to them. We’ve written about the many bad bureaucrats you’ll encounter as a grant writer (“FEMA Tardiness, Grants.gov, and Dealing with Recalcitrant Bureaucrats” is one example). But the bureaucrats who do the right thing are too rare, and, when you find them, thank them. Many actually know a lot but almost never find anyone who wants to know what they know, and they can be grateful just to find an audience.

The right phone call can also reveal information beyond the purpose of the call itself. In this case, we learned that no one has a clue as to what’s really going on with dropout rates in California. Finding charter school graduation rate data is hard. The guy Isaac talked to said that there’s some data on charters somewhere on the state’s education website, but he didn’t know where. If an LAUSD Data Specialist who works on this stuff all day doesn’t know, we’re not likely to find it either. Charter schools aren’t important for the assignment we’re working on, but they may be important for the next one, so that bit of inside information is useful.

EDIT: Jennifer Bergeron adds, “Be prepared when you call. The Data Specialist in our district strikes back with a barrage of questions that I hadn’t even considered each time I call. He’s helpful because his questions often make me think more specifically than I would have on my own.”


Is Violent Crime Going Up or Down in America? Nobody Actually Knows, But the Debate Illustrates How Grant Proposal Needs Assessments are Written

One of our past posts described how to write proposal needs assessments. A spate of recent articles on the so-called Ferguson Effect provides a good example of how proficient grant writers can use selected data and modifying words to shape a needs assessment to support whatever the project concept is.

Last week Heather Mac Donald’s Wall Street Journal editorial “Trying to Hide the Rise of Violent Crime” claimed that violent crime is rising, due to “the Ferguson Effect,” but that “progressives and media allies” have launched a campaign to deny this reality. Right on cue, the New York Times ran a front page “news” story telling grumpy New Yorkers that “Anxiety Aside, New York Sees Drop in Crime.” Both articles cite the same Brennan Center for Justice study, Crime in 2015: A Preliminary Analysis, to support their arguments.

This reminds me of the old joke about how different newspapers would report that the end of the world will happen tomorrow: the New York Times, “World Ends Tomorrow, Women and Minorities Hurt Most;” the Wall Street Journal, “World Ends Tomorrow, Markets Close Early;” and Sports Illustrated, “Series Cancelled, No World.” One can frame a set of “facts” differently, depending on one’s point of view and the argument being made.

Neither the NYT nor the WSJ writer actually knows if violent crime is going up or down in the short term. Over the past few decades, it is clear that crime has declined enormously, but it isn’t clear what causal mechanisms might be behind that decline.

Perhaps, like Schrödinger’s cat (alive and dead at the same time in the famous quantum mechanics thought experiment), crime is up and down at the same time, depending on who’s doing the observing and how they’re observing.

One of the challenges is that national crime data, as aggregated in the FBI Uniform Crime Reporting (UCR) system, is inherently questionable. First, police departments report these data voluntarily, and many crimes are subject to intentional or unintentional miscategorization (was it an assault or aggravated assault?) or under- or over-reporting, depending on how local political winds are blowing (to see one public example of this in action, consider “NYPD wants to fix stats on stolen Citi Bikes,” which describes how stealing a Citi Bike counts as a felony because each one costs more than $1,000). A less-than-honorable police chief, usually in cahoots with local pols, can make “crime rates” go up or down. Then there is the problem of using averages for data, which leads to another old joke, about the guy with his head in the oven and his feet in the freezer: on average, he felt fine.

But from your perspective as a grant writer, the important question isn’t whether crime rates are rising or falling, or whether “the Ferguson Effect” is real. If residents of a given city or neighborhood feel vulnerable to perceived crime increases, the increases are “real to them” and can form the basis for a project concept for grant seeking. Plus, when data to prove the need is hard to come by, we sometimes ask our clients for anecdotes about the problem and add a little vignette to the needs assessment. A call to the local police department’s gang unit will always produce a great “end of the world” quote about the local gang problem from the sergeant in charge, while a call to the local hospital will usually yield a quote about an uptick in gunshot victims being treated, and so on. Sometimes anecdotes can substitute for data in proposals, although this is not optimal.

Within reason and the rather vague ethical boundaries of grant seeking and writing, a good grant writer can and should pick and choose among available data to construct the needs assessment argument for funding anything the agency/community sees a need for.

For example, if we were writing a proposal for an urban police department to get more funds for community policing, we would use up or down crime rate data to demonstrate the need for a new grant. If crime is trending down, we’d use the data to argue that the police department is doing a good job with community policing but needs more money to do an even better job, while being able to provide technical assistance to other departments. If crime is trending upward, we’d argue that there’s a crisis and the grant must be made to save life and limb. If we were working for a nonprofit in the same city that wants grants for after school enrichment for at-risk youth, we’d cherry-pick the crime data to argue that a nurturing after-school setting is necessary to keep those youth protected from the false allure of gangs, early risky sexual experimentation, and/or drugs.

Most grant needs assessments are written backwards. One starts with the premise for the project concept and structures the data and analysis to support the stated need. It may be hard for true believers and novice grant writers to accept, but grant writing is rarely a blue sky/visioning exercise. The funder really sets the parameters of the program. The client knows what they want the grant for. It’s the job of the grant writer to build the needs assessment by including, excluding, and/or obfuscating data. This approach works well, because most funders only know what the applicant tells them in the proposal. Some grant programs, like our old pals DOL’s YouthBuild and ED’s Talent Search, try to routinize needs assessments and confound rascally grant writers by mandating certain data sets. We’re too crafty, however, and can usually overcome such data requirements through the kind of word and data selections that Mac Donald cites in her article.


Data-Based Tracking of Client Services and Outcomes is a Real Challenge for Many Nonprofits

Jake recently wrote a post on the huge challenges faced by primary care provider organizations in meeting EMR Meaningful Use regulations. This got me thinking about other data collection challenges facing nonprofits. Apart from computers and the Internet,* one of the few aspects of grant writing that has changed since I started writing proposals when dinosaurs walked the earth is an ever-increasing RFP/funder emphasis on data tracking to demonstrate services delivered and improved “outcomes.”

The scare quotes around “outcomes” express how we feel about many of them. While we’re adept at creating plausible data collection strategies in proposals, regardless of what our clients are actually doing in the real world, we know that demonstrating service delivery levels and outcomes is a major issue for certain types of human services providers. These include many faith-based organizations (FBOs)** and ethnic-specific providers, some of which have been operating since the days of Hull House. We’ve worked for several nonprofits that have been providing services for well over 100 years.

It’s not unusual for smaller FBOs and organizations serving immigrant/refugee populations to provide services in what seems, from the outside, to be a chaotic manner. But the service delivery practices are actually well-suited to their mission. A range of services might be provided to a particular individual, like help with an immigration problem, but the agency will end up helping the person’s extended family members with all manner of issues. In many ethnic communities, the concept of “family” is malleable. A nominal “uncle” or “cousin” is actually not related but hails from the same village or clan in their country of origin.

Such services are usually provided on the fly, and the harried case worker, who is typically a co-religionist or from the same ethnic group, hops from client problem to problem without time for or interest in database entry. Like pulling a thread on a sweater, helping one person in a 30-member extended family can result in dozens of “cases” that may not be separated and documented. The family often does not want the problem documented because of cultural/religious taboos and (often justified) fear of government officials. Thus, much service delivery is provided on the down-low.

Everyone knows that New York City has dramatically changed from the bad old Death Wish days of the 1970s to a glittering metropolis of 70-story apartment buildings for the one-percenters and a well-scrubbed, tourist-focused Times Square. What isn’t generally known is that an amazing 37% of NYC’s population is foreign-born, and this percentage is increasing. NYC has more foreign-born residents than the entire City of Chicago has residents! Rapidly growing NYC immigrant groups include Orthodox Jews from the former Soviet Union, Dominicans, Asians, Central Americans, and so on. We work for many nonprofits that serve these immigrant populations; this client type usually serves only its brethren. These nonprofits have great difficulty documenting the often extraordinary services they provide—one of the main reasons they hire us is our ability to weave their stories into the complicated responses required by RFPs, including service and outcome metrics. Like the proverbial centipede, these nonprofits walk perfectly, as long as no one asks them how they do it.

The data capture challenge is compounded because few prospective social workers enter grad school with the idea of becoming bean counters. Like the best doctors and teachers/professors, social workers start off with the idealistic notion that they will spend most of their time helping people, not doing data entry and accounting for every minute of their day. When not extruding proposals or writing novels, Jake is a college English professor. He can attest that much of his best teaching doesn’t show up in metrics.

Many of us have had a “hero teacher” at some point: a conversation or a book recommendation might have changed your life, but it will not be reflected in grades or academic honors. Similarly, a case worker who gets a taqueria to hire the “nephew” of one of her clients as a busboy to keep him out of juvenile hall might set the young man on a positive life path, even though “job placement” is not part of her official duties and will not appear in the agency’s reports.


* Which have also made the world worse, at least in some respects.

** This does not refer to industrial-sized FBOs like Catholic Charities or the Salvation Army, which operate with bureaucratic precision.


Good needs assessments tell stories: Data is cheap and everyone has it

If you only include data in your needs assessment, you won’t stand out from the dozens or hundreds of other needs assessments funders read for any given RFP competition. Good needs assessments tell stories: data is cheap and everyone has it, and almost any data can be massaged to make a given target area look bad. Most people also don’t understand statistics, which makes it pretty easy to manipulate data. Even grant reviewers who do understand statistics rarely have the time to deeply evaluate the claims made in a given proposal.*

Man is The Storytelling Animal, to borrow the title of Jonathan Gottschall’s book. Few people dislike stories and many of those who dislike stories are not neurologically normal (Oliver Sacks writes movingly of such people in his memoir On the Move). The number of people who think primarily statistically and in data terms is small, and chances are they don’t read social and human service proposals. Your reviewer is likely among the vast majority of people who like stories, whether they want to like stories or not. You should cater in your proposal to the human taste for stories.

We’re grant writers, and we tell stories in proposals for the reasons articulated here and other posts. Nonetheless, a small number of clients—probably under 5%—don’t like this method (or don’t like our stories) and tell us to take out the binding narrative and just recite data. We advise against this, but we’re like lawyers in that we tell our clients what we think is best and then do what our clients tell us to do.

RFPs sometimes ask for specific data, and, if they do, you should obviously include that data. But if you have any room to tell a story, you should tell a story about the project area and target population. Each project area is different from any other project area in ways that “20% of the project area is under 200% of the Federal Poverty Line (FPL)” does not capture. A story about urban poverty is different from a story about recent immigration or a story about the plant closing in a rural area.

In addition, think about the reviewers’ job: they read proposal after proposal. Every proposal is likely to cite similar data indicating the proposed service area has problems. How is the reviewer supposed to decide that one area with a 25% poverty rate is more deserving than some other area with a 23% poverty rate?

Good writers know how to weave data into a story, but bad writers often don’t know they’re bad writers. A good writer will also make the needs assessment internally consistent with the rest of the proposal (we’ve written before “On the Importance of Internal Consistency in Grant Proposals”). Most people think taste is entirely subjective, for bad reasons that Paul Graham knocks down in this excellent essay. Knowing whether you’re a good writer is tough because you have to know good writing to recognize bad writing—which means that, paradoxically, bad writers are incapable of knowing they’re bad writers (as noted in the first sentence of this paragraph).

In everyday life, people generally counter stories with other stories, rather than data, and one way to lose friends and alienate people is to tell stories that move against the narrative that someone wants to present. That’s how powerful stories are. For example, “you” could point out that Americans commonly spend more money on pets than people in the bottom billion spend on themselves. If you hear someone contemplating or executing a four- or five-figure expenditure on a surgery for their dog or cat, ruminate on how many people across the world can’t afford any surgery. The number of people who will calmly think, “Gee, it’s telling that I value the life of an animal close at hand more than a human at some remove” is quite small relative to the people who say or think, “the person saying this to me is a jerk.”

As you might imagine, I have some firsthand investigative experience in matters from the preceding paragraph. Many people acquire pets for emotional closeness and to signal their kindness and caring to others. The latter motive is drastically undercut when people are consciously reminded that many humans don’t have the resources Americans pour into animals (consider a heartrending line from “The Long Road From Sudan to America:” “Tell me, what is the work of dogs in this country?”).

Perhaps comparing expenditures on dogs versus expenditures on humans is not precisely “thinking statistically,” but it is illustrative about the importance of stories and the danger of counter-stories that disrupt the stories we desperately want to tell about ourselves. Reviewers want stories. They read plenty of data, much of it dubiously sourced and contextualized, and you should give them data too. But data without context is like bread instead of a sandwich. Make the reviewer a sandwich. She’ll appreciate it, especially given the stale diet of bread that is most grant proposals.


* Some science and technical proposals are different, but this general point is true of social and human services.


The unsolvable standardized data problem and the needs assessment monster

Needs assessments tend to come in two flavors: one basically instructs the applicant to “Describe the target area and its needs,” and the applicant chooses whatever data it can come up with. For most applicants that’ll be some combination of Census data, the local Consolidated Plan, data gathered by the applicant in the course of providing services, news stories and articles, and whatever else they can scavenge. Some areas have well-known local data sources; Los Angeles County, for example, is divided into eight Service Planning Areas (SPAs), and the County and United Way provide most data relevant to grant writers by SPA.

The upside to this system is that applicants can use whatever data makes the service area look worse (looking worse is better because it indicates greater need). The downside is that funders will get a heterogeneous mix of data that frequently can’t be compared from proposal to proposal. And since no one has the time or energy to audit or check the data, applicants can easily fudge the numbers.

High school dropout rates are a great example of the vagaries in data work: definitions of what constitutes a high school dropout vary from district to district, and many districts have strong financial incentives to avoid calling any particular student a “dropout.” The GED situation in the U.S. makes dropout statistics even harder to understand and compare; if a student drops out at age 16 and gets a GED at 18, is he a dropout or a high school graduate? The mobility of many high-school-age students makes it harder still, as does the advent of charter schools, online instruction, and the decline of the neighborhood school in favor of open enrollment policies. There is no universal way to measure this seemingly simple number.*
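
To see how much the definition alone moves the number, run a toy cohort through two common conventions: counting GED completers as graduates versus counting them as dropouts. A minimal sketch with made-up figures:

```python
# Minimal sketch: the same made-up cohort produces different "dropout rates"
# depending purely on whether GED completers count as graduates.

cohort = 1_000          # students who started 9th grade together (hypothetical)
diploma_grads = 720     # earned a regular diploma in four years
ged_completers = 90     # left school but later earned a GED
still_enrolled = 60     # fifth-year seniors, adult school transfers, etc.
no_credential = 130     # no diploma, no GED, not enrolled anywhere

# Convention A: a GED counts as completing high school
dropout_rate_a = no_credential / cohort * 100

# Convention B: only a regular diploma counts; GED completers are dropouts
dropout_rate_b = (no_credential + ged_completers) / cohort * 100

print(f"Dropout rate, GED counted as completion: {dropout_rate_a:.1f}%")
print(f"Dropout rate, GED counted as dropout:    {dropout_rate_b:.1f}%")
```

Same students, same records, and the “dropout rate” shifts by nine percentage points before anyone touches the underlying data.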

The alternative to the “do whatever” system is for the funder to say: you must use System X in manner Y. The funder gives the applicant a specific source and says, “Use this source to calculate the relevant information.” For example, the last round of YouthBuild funding required precise Census topic and table names for employment and poverty statistics: every applicant had to use “S2301 EMPLOYMENT STATUS” and “S1701 POVERTY STATUS IN THE PAST 12 MONTHS,” per page 38 of the SGA.

The SGA writers forgot, however, that not every piece of Census data is available (or accurate) for every jurisdiction. Since I’ve done too much data work for too many places, I’ve become very familiar with the “(X)” in American Factfinder2 tables—which indicates that the requested data is not available.
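
For what it’s worth, those two tables can also be pulled programmatically from the Census Bureau’s API, and a script makes the missing-data problem visible immediately. A minimal sketch, assuming the ACS 5-year subject-table endpoint and these variable codes (S2301_C04_001E for the unemployment rate and S1701_C03_001E for the percent below poverty) are still current; verify them against api.census.gov before relying on the output:

```python
# Minimal sketch: pull YouthBuild-style employment and poverty figures from the
# Census ACS 5-year subject tables and flag anything that comes back missing.
# The endpoint, variable codes, and geography below are assumptions to verify
# against api.census.gov; small jurisdictions may simply return nulls.
import requests

VARS = {
    "S2301_C04_001E": "Unemployment rate, population 16+ (%)",
    "S1701_C03_001E": "Percent below poverty level (%)",
}
url = "https://api.census.gov/data/2022/acs/acs5/subject"
params = {
    "get": "NAME," + ",".join(VARS),
    "for": "place:44000",  # Los Angeles city -- a hypothetical example geography
    "in": "state:06",
}

rows = requests.get(url, params=params, timeout=30).json()
record = dict(zip(rows[0], rows[1]))

print(record["NAME"])
for code, label in VARS.items():
    value = record.get(code)
    if value in (None, "", "-", "(X)", "N"):
        print(f"  {label}: NOT AVAILABLE for this geography")
    else:
        print(f"  {label}: {value}")
```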

In the case of YouthBuild, the SGA also specifies that dropout data must be gathered using a site called Edweek. But dropout data can’t really be standardized for the reasons that I only began to describe in the third paragraph of this post (I stopped to make sure that you don’t kill yourself from boredom, which would leave a gory mess for someone else to clean up). As local jurisdictions experiment with charter schools and online education, the data in sources like Edweek is only going to become more confusing—and less accurate.

If a YouthBuild proposal loses a few need points because of unavailable or unreliable data sources, or data sources that miss particular jurisdictions (as Edweek does), it probably won’t be funded, since an applicant needs almost a perfect score to get a YouthBuild grant. We should know, as we’ve written at least two dozen funded YouthBuild proposals over the years.

Standardized metrics from funders aren’t always good, and some people will get screwed if their projects don’t fit into a simple jurisdiction or if their jurisdiction doesn’t collect data in the same way as another jurisdiction.

As often happens at the juncture between the grant world and the real world, there isn’t an ideal way around this problem. From the perspective of funders, uniform data requirements give an illusion of fairness and equality. From the perspective of applicants trapped by particular reporting requirements, there may not be a good way to resolve the problem.

Applicants can try contacting the program officer, but that’s usually a waste of time: the program officer will just repeat the language of the RFP back to the applicant and tell the applicant to use its best judgment.

The optimal way to deal with the problem is probably to explain the situation in the proposal and offer alternative data. That might not work. Sometimes applicants just get screwed, and not in the way most people like to get screwed, and there’s little to be done about it.


* About 15 years ago, Isaac actually talked to the demographer who worked on dropout data at the Department of Education. This was in the pre-Internet days, and after multiple phone transfers he just happened to get the guy who worked on this stuff. He explained why true, comprehensive dropout data is impossible to gather nationally, and some of his explanations have made it into this blog post.

No one ever talks to people who do stuff like this, and when they find an interested party they’re often eager to chat about the details of their work.


“Estimate” Means “Make It Up” In the Proposal and Grant Writing Worlds

Many RFPs ask for data that simply doesn’t exist—presumably because the people writing the RFPs don’t realize how hard it is to find phantom data. But other RFP writers realize that data can be hard to find and thus offer a way out through a magic word: “estimate.”

If you see the word “estimate” in an RFP, you can mentally substitute the term “make it up.” Chances are good that no one has the numbers being sought, and, consequently, you can shoot for a reasonable guess.

Instead of the word “estimate,” you’ll sometimes find RFPs that request very specific data and particular data sources. In the most recent YouthBuild funding round, for example, the RFP says:

Using data found at http://www.edweek.org/apps/gmap/, the applicant must compare the average graduation rate across all of the cities or towns to be served with the national graduation rate of 73.4% (based on Ed Week’s latest data from the class of 2009).

Unfortunately, that mapper, while suitably whiz-bang and high-tech in appearance, didn’t work for some of the jurisdictions we tried to use it on, and, as if that weren’t enough, it doesn’t drill down to the high school level. It’s quite possible and often likely that a given high school in a severely economically distressed area embedded in a larger, more prosperous community will have a substantially lower graduation rate than the community at large. This problem left us with a conundrum: we could report the data as best we could and lose a lot of points, or we could report the mapper’s data and then say, “By the way, it’s not accurate, and here’s an alternative estimate based on the following data.” The latter at least has the potential to get some points.
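
Note too that the SGA’s instruction to compare “the average graduation rate across all of the cities or towns to be served” doesn’t say whether that’s a simple average or an enrollment-weighted one, and the two can differ. A minimal sketch of both calculations, with made-up town names, enrollments, and rates standing in for whatever the mapper (or an alternative source) gives you:

```python
# Minimal sketch: compare a service area's graduation rate to the national
# figure cited in the SGA. Town names, enrollments, and rates are made up;
# 73.4% is the national rate the SGA quotes.

NATIONAL_RATE = 73.4

towns = {
    # town: (high_school_enrollment, graduation_rate_percent)
    "Town A": (2_400, 58.2),
    "Town B": (1_100, 64.7),
    "Town C": (3_300, 61.9),
}

total_enrollment = sum(enr for enr, _ in towns.values())
weighted = sum(enr * rate for enr, rate in towns.values()) / total_enrollment
simple = sum(rate for _, rate in towns.values()) / len(towns)

print(f"Enrollment-weighted average graduation rate: {weighted:.1f}%")
print(f"Simple (unweighted) average:                 {simple:.1f}%")
print(f"Gap below the national rate of {NATIONAL_RATE}%: {NATIONAL_RATE - weighted:.1f} points")
```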

We’ve found this general problem in RFPs other than YouthBuild’s, though I can’t think of another good example off the top of my head; HRSA New Access Point (NAP) FOAs and Carol M. White Physical Education Program (PEP) RFPs are also notorious for requesting difficult- or impossible-to-find data.

If you don’t have raw numbers but you need to turn a proposal in, then you should estimate as best you can. This isn’t optimal, and we don’t condone making stuff up. But realize that if other people are making stuff up and you’re not, they’re going to get the grant and you’re not. Plus, if you’re having trouble finding data, there’s a decent chance everyone else is too.


The Art of the Grant Proposal Abstract is Like the Art of the Newspaper Story Lead

Proposal abstracts are funny beasts: they’re supposed to summarize an entire proposal, presumably before the reader reads the proposal, and they’re often written before the writer writes the proposal. Good abstracts raise the question of whether one really needs to read the rest of the document. While RFPs sometimes provide specific abstract content—in which case you should follow the guidance—an abstract should answer the 5Ws and H: Who is going to run the program? Who is going to benefit and why? What will the program do? Where will it occur? When will it run, both in terms of services and length of the project? Why do you need to run it, as opposed to someone else? How will you run it?

Whenever I write an abstract, I ask myself the questions listed above. If I miss one, I go back and answer it. If you can answer those questions, you’ll at least have the skeleton of a complete project. If you find that you’re missing substantial chunks, you need to take time to better conceptualize the program and what you’re doing (which might itself make a useful future blog post).

As you write, start with the most relevant information. A good opening sentence should identify the name of the organization, the type of organization if it’s not obvious (most of our clients are 501(c)(3)s, for example, and we always state this to make sure funders know our clients are eligible), the name of the project, what the project will do (“provide after school supportive services” is always popular), and who the project will serve. The next sentence should probably speak to why the project is needed, what it will accomplish, and its goals. The next should probably list objectives. And so on. By the time you’ve answered all the questions above, you’ll have about a single-spaced page, which is usually as much space as most RFPs allow for abstracts. At the end, since this is probably the least important part, you should mention that your organization is overseen by an independent board of directors, note its size, and add a sentence about the experience of the Executive Director and the Project Director (if known).

If you’ve taken a journalism class, you’ve been told that the lead of news articles should be the most important part of the story. When someone important has died, don’t wait until the fourth paragraph to tell your busy reader what their name was and what they accomplished in life. Treat proposals the same way. For that matter, treat blog posts the same way, which we try to do.

You’ll have to find an appropriate level of detail. The easiest way to find that level is to make sure you’ve answered each of the questions above and haven’t gone any longer than one page. If you have, remove words until you’re on a single page. You don’t need to go into the level of specificity described in “Finding and Using Phantom Data in the Service Expansion in Mental Health/Substance Services, Oral Health and Comprehensive Pharmacy Services Under the Health Center Program,” but a mention of Census or local data won’t hurt, if it can be shoehorned in. Think balance.

Here’s one open secret about reading large numbers of documents at once: after you’ve read enough of them, you begin to make very fast assessments of each document within a couple of sentences. I don’t think I learned this fully until I started grad school in English lit at the University of Arizona. Now I’m on the other side of the desk and read student papers. Good papers usually make themselves apparent within the first page. Not every time, but often enough that it’s really unusual to experience quality whiplash.

To be sure, I read student essays closely because I care about accurate grading, and there is the occasional essay that starts out meandering and finds its point halfway through, with a strong finish. But most of the federal GS 10s, 11s, and 12s reading proposals aren’t going to care as much as they should. So first impressions count for a lot, and your abstract is your first impression. Like drawing a perfect circle, writing a perfect abstract is one of those things that seems like it should be easy but is actually quite hard. We’ve given you an outline, but it’s up to you to draw the circle.


On the Subject of Crystal Balls and Magic Beans in Writing FIP, SGIG, BTOP and Other Fun-Filled Proposals

I’ve noticed a not-too-subtle change in RFPs lately—largely, I think, due to the Stimulus Bill—that requires us to drag out our trusty Crystal Ball, an essential tool of grant writing. Like Bullwinkle J. Moose, we gaze into our Crystal Ball and say, “Eenie meenie chili beanie, the spirits are about to speak,” as we try to answer imponderable questions. For example, our old friend the HUD Neighborhood Stabilization Program 2 (NSP2) wants:

A reasonable projection of the extent to which the market(s) in your target geography is likely to absorb abandoned and foreclosed properties through increased housing demand during the next three years, if you do not receive this funding.

How many houses will be foreclosed upon, but also absorbed, in our little slice-of-heaven target area in 2012? If I were smart enough to figure this out, I’d be buying just the right foreclosed houses in just the right places, instead of grant writing. People much smarter than us who were predicting in 2005 how many houses they’d need to absorb in 2009 were tremendously, catastrophically wrong, which is why we’re in this financial mess in the first place: you fundamentally can’t predict what will happen to any market, including real estate markets. Consequently, HUD’s question is so silly as to demand the Crystal Ball approach, so we nailed together available data, plastered it over with academic-sounding metric mumbo jumbo, and voila! we had the precise numbers we needed. In other words, we used the S.W.A.G. method (“silly” or “scientific wild assed guess,” depending on your point of view). I have no idea why HUD would ask applicants a question that Warren Buffett (or Jimmy Buffett, for that matter, who may or may not be a cousin of Warren) could not answer, but answer we did.
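
To be concrete about what “nailing together available data” can look like: a S.W.A.G. is usually nothing fancier than extending a recent trend forward and dressing it up in metric-sounding language. Here’s a minimal sketch with invented numbers (not what we actually submitted) of the arithmetic that ends up behind a confident-sounding three-year absorption projection:

```python
# Minimal sketch of a S.W.A.G.-style projection: take a few years of known
# (here, invented) figures, compute a crude trend, and extend it three years
# forward. It is exactly as reliable as it looks, which is the point.

# Hypothetical: foreclosed homes absorbed (resold or re-occupied) per year
history = {2006: 310, 2007: 270, 2008: 240, 2009: 205}

years = sorted(history)
changes = [history[b] - history[a] for a, b in zip(years, years[1:])]
avg_change = sum(changes) / len(changes)  # average year-over-year change

last_year, last_value = years[-1], history[years[-1]]
for offset in range(1, 4):
    projected = last_value + avg_change * offset
    print(f"{last_year + offset}: projected absorption of about {projected:.0f} homes")
```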

You can find another example of Crystal Ball grant writing in the brand new and charmingly named Facility Investment Program (FIP), brought to us by HRSA, which is for Section 330 providers (e.g., nonprofit Community Health Centers (CHCs)). We’re writing a couple of these, which requires us to drag out the ol’ Crystal Ball again, since the applicant is supposed to keep track of the “number of construction jobs” and “projected number of health center jobs created or retained.”

I just lean back, imagine some numbers, and start typing, since there is neither a way to accurately predict any of this nor a way to verify it after project completion. HRSA is new to the game of estimating and tracking jobs, so they make it easy for us overworked grant writers and applicants by not requiring job creation certifications. Other agencies, like the Economic Development Administration (EDA), which has been in the business of handing out construction bucks for 40 years, are much craftier. For instance, the ever-popular Public Works and Economic Development Program requires applicants to produce iron-clad letters from private sector partners confirming that at least one permanent job will be created for every $5,000 of assistance. We’ve written lots of funded EDA grants over the years, and the inevitable job generation issue is always the most challenging part of the application. HRSA will eventually wise up when it can’t prove that the ephemeral construction and created/retained jobs ever existed. Alternately, they might wise up when they realize the futility of the endeavor in which they’re engaged, but I’m not betting on it.

This tendency to ask for impossible metrics is a constant in grant writing, as Jake discussed in “Finding and Using Phantom Data,” but sometimes it’s more pronounced than at other times. I ascribe the recent flurry to the Stimulus Bill: more RFPs than usual are being extruded faster than usual, resulting in even less thought going into them than usual and forcing grant writers to spend even more time pondering what our Crystal Balls might be telling us.

Since the term “Crystal Ball” began popping up whenever I scoped a new proposal with a client, I got to thinking of other shorthand ways of explaining some of the more curious aspects of the federal grant making process to the uninitiated, and came up with “Magic Beans,” as in Jack and the Beanstalk. We’re writing many proposals these days for businesses that have never before applied for federal funds, for programs like the Department of Energy’s Smart Grid Investment Grant (SGIG) Program and the Broadband Technology Opportunities Program (BTOP) of the National Telecommunications and Information Administration.

When scoping such projects, I am invariably on a conference call with a combination of marketing and engineering types. The marketing folks speak in marketing-speak platitudes (“We make the best stuff,” even if they don’t know what the stuff is) and the engineers don’t speak at all. So, to move the process along and to get answers to the essential “what” and “how” of the project concept, I’ve taken to asking them to describe, in 20 words or less, the “Magic Beans” they will be using and what will happen when those beans are germinated by the long golden stream of Stimulus Bucks arcing out of Washington onto their project. This elicits a succinct reply, I can conclude the scoping call, and we can fire up the proposal extruding machine.

So use your Magic Beans to climb the federal beanstalk and reach the ultimate Golden Goose, keeping your Crystal Ball close at hand.