
The worse it is, the better it is: Your grant story needs to get the money

A client recently said that she was moved by a description Isaac wrote of her target area in the needs assessment of her proposal, but she asked if we could make it more hopeful. Isaac strongly discouraged this—not to disparage her neighborhood, but because describing an area as terribly depressed makes an application more likely to be funded. A grant writer therefore has a strong incentive to paint as bleak a word picture as possible. Paradoxically, the worse a target area is, the better it is for grant writers, and the smart strategy is to tell a story about doom and gloom that can only be alleviated by the creation of the program being proposed in the application.*

Some programs will even come out and say that the worse things are, the more points you’ll get. For example, when HUD ran Youthbuild, the RFPs did this explicitly; the 2005 RFP had a subsection of the need section entitled “Poverty” and gave this point breakdown:

(1) Less than the national average—0 points.
(2) Equal to but less than twice the national average—1 point.
(3) Twice but less than three times the national average—3 points.
(4) Three or more times the national average—5 points.
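Rubrics like this are mechanical enough to compute yourself before submitting, so you know going in how many points your target area earns. A minimal Python sketch of the breakdown above; the function name and the ~12% national figure in the example are illustrative, not from the RFP:

```python
def youthbuild_poverty_points(area_rate: float, national_rate: float) -> int:
    """Score a target area's poverty rate against the national average,
    following the point breakdown in HUD's 2005 Youthbuild RFP."""
    ratio = area_rate / national_rate
    if ratio < 1:
        return 0   # below the national average
    elif ratio < 2:
        return 1   # at least the national average, less than twice
    elif ratio < 3:
        return 3   # at least twice, less than three times
    else:
        return 5   # three or more times the national average

# A target area with 33% poverty against a ~12% national average is
# about 2.75 times the average, earning 3 of the 5 possible points.
print(youthbuild_poverty_points(33.0, 12.0))  # prints 3
```

Running the numbers in advance tells you whether the need section can stand on the raw statistics or must lean harder on narrative.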

Although most RFPs won’t assign particular point values to particular numbers, reviewers tend to do so anyway: if they read an application from an agency proposing to serve an area where the poverty rate is 33%, and they know the national average is around 12%, they’re more likely to want to fund it than an application serving an area with a poverty rate of, say, 15%.

It’s important not to lie, but you can and should be selective in how you present data; there’s even a funny book on the subject, How to Lie With Statistics, which Amazon says shows how the “terror [of numbers] translate[s] to blind acceptance of authority.” If you master statistics, you can make the reviewer think that visiting your target area would be terrifying and that he or she should therefore shower it with money.

In any event, the main point is that you shouldn’t lie about a target area’s characteristics: if the official unemployment rate for Dubuque is eight percent—two percent higher than the national average—it’s unethical to say it’s twelve percent, and even if it weren’t unethical, it would be a fantastically bad idea to make up easily verifiable statistics like unemployment rates. But if the unemployment rate is eight percent for Dubuque as a city, you could argue that it’s probably far higher for the target population, which is less educated, lives in a worse part of town, and has members whose criminal records will likely prevent them from obtaining living-wage jobs. Then you’re telling a story and extrapolating from the data, and the worse you can make that story sound, the better off you’ll be when funding decisions are made. If you make an educated guess that, although the Dubuque unemployment rate is 8%, the rate among the target population might be closer to 25%, you’re being a smart grant writer.

You can go further: unemployment rates count only people who are actively looking for work; those who have given up and no longer seek employment aren’t counted as unemployed. Although this is well known among economists and others who study unemployment, it’s the kind of fact you can use to argue, as a lawyer might, that official figures understate the Dubuque unemployment rate more than they do the national rate. Suddenly things look even grimmer than official statistics indicate, and you haven’t lied. Think of yourself as a lawyer: a defense attorney isn’t trying to decide whether his or her client is guilty. The attorney’s job is to make the best case for his or her client. Legal ethics prevent the lawyer from lying outright—insert lawyer joke here—but not from arguing that the facts should be seen in the light most favorable to the client.

The grant writer should make the most compelling case for funding the application in question, and part of that is writing a needs assessment that makes it seem the world is ending, as Isaac wrote about in the linked post. If Census data, for example, show that median household income is relatively high but the percentage of high school graduates is relatively low, you should leave out household income and write about education and its link to income. If median household income is high relative to the state but low relative to the nation, construct a table comparing median household income in the target area to the national average, which will make the situation look worse than it actually is.
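The baseline-shopping tactic above can be made concrete: given several legitimate benchmarks, pick the one against which the target area looks worst. A minimal Python sketch, using made-up income figures rather than real Census data:

```python
# Hypothetical median household incomes; real figures would come from Census data.
target_income = 52_000
benchmarks = {"state": 48_000, "national": 62_000}

# The target area looks worst against the highest benchmark, so pick it for the table.
worst = max(benchmarks, key=benchmarks.get)
shortfall_pct = round(100 * (1 - target_income / benchmarks[worst]), 1)
print(f"Compared to the {worst} average, the target area earns {shortfall_pct}% less.")
```

Here the target area out-earns the state average, so the table would compare it to the national figure instead—selective, but not false.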

We’ve discussed this issue before: for example, in Surfing the Grant Waves: How to Deal with Social and Funding Wind Shifts, we note that “In some ways, the worse things are, the better they are for nonprofits, because funding is likely to follow the broad contours of social issues.” That’s true at the level of an individual agency too. In our November Links post, we write:

* The New Republic has an article based on a Brookings Institution piece that deconstructs the small-town-USA mythology regularly propagated in proposals:

But the idea that we are a nation of small towns is fundamentally incorrect. The real America isn’t found in cities or suburbs or small towns, but in the metropolitan areas or “metros” that bring all these places into economic and social union.

Think of this as a prelude to an eventual post on the grant writer as mythmaker. And if you’re interested in myth as a broader subject, see Joseph Campbell’s Myths to Live By. He’s the same guy who wrote The Hero With a Thousand Faces, the book that most famously provided the outline for Star Wars.

A few caveats on the above are, however, in order. This post helps explain how the myth of a place is created, and I’ve given one version of the myth, in which the target area is as bad as anywhere—usually, but not always, a good strategy. Some places that seem statistically and culturally average in many ways can come to represent a problem taken as a whole, and an ordinary client can become a representative sample whose problems reflect vast swaths of America: whatever problems they have, everyone has. This style of argument became prominent when suburban public schools applied for grants post-Columbine, as we describe in Surfing the Grant Waves.

Consequently, one can construct an argument for urban, rural, or suburban school districts: for the first, one argues about bad test scores, low family incomes, low educational attainment, and the like. For the second, one argues that long distances, hidden drug abuse, and the paucity of resources combine to create educational failure. For suburban districts, one argues that the malaise of contemporary society exists beneath the veneer of happy teenagers, and that when one lifts the rock and peers beneath it, a whole angry ecology seethes. Think of the various books about suburban disappointment and disillusionment: Richard Yates’ Revolutionary Road, Tom Perrotta’s Little Children, John O’Hara’s Appointment in Samarra, and much of Sinclair Lewis. Zenith City in Lewis’ Babbitt is a particularly good example. One can portray even fairly tony suburbs as cauldrons of discontent.

Another factor might discourage using the blasted-wastelands argument, and the objection is most often raised by city managers, mayors, and school superintendents, who have often spent their careers promoting their version of Babbitt’s Zenith City, claiming that the sewage plant between the schoolhouse and police station is actually quite aesthetically pleasing and lends character to Zenith, despite the smell. If they sign an application saying that Zenith City is hell’s half acre, that its citizens are illiterate, and that meth production and distribution are the primary industry, and that application’s content hits the local paper or a bigwig blogger, then the Zenith city manager, mayor, or superintendent is going to be very uncomfortable in the resulting squall. On the other hand, if the city manager, mayor, or superintendent doesn’t note the citizens’ illiteracy and the meth problem, the application might not be funded. Smart city managers, mayors, and superintendents seize the money.

Sometimes these other considerations outweigh the grant. When I told Isaac about this post, he responded with a story** about his time working for the City of Lynwood in California, where he wrote a funded proposal to exterminate rats that the city manager declined because he’d rather have the rats than the publicity about getting rid of the rats. In addition, a few years ago a client wanted money for a giant mammogram machine because the women in the target area were a bit on the large side from eating the local cheeses and dairy products. We wrote about that in the needs section, but the client demanded we take it out because it might offend local sensibilities, so we couldn’t use the best argument in favor of the expensive machine. The proposal wasn’t funded—maybe not for that reason, but omitting the argument couldn’t have helped. Your job as a grant writer is most frequently to portray blasted wastelands that the proposed program will turn into a harmonious Dionysian garden.


* All this also helps explain why a “batting average” or “track record” figure is useless for general-purpose grant writers, as we describe in this FAQ question. We don’t know if our clients are going to come from Beverly Hills or from places where most residents haven’t graduated from high school. From a grant writing perspective, a client from the latter place might be more likely to be funded than someone from Beverly Hills.

** Virtually anytime I mention something grant-related, Isaac has a story about it.


Finding and Using Phantom Data in the Service Expansion in Mental Health/Substance Services, Oral Health and Comprehensive Pharmacy Services Under the Health Center Program

RFP needs assessments will sometimes request data that aren’t readily available or simply don’t exist. The question for you, the grant writer, then becomes what to do when caught between an RFP’s instructions and the reality of phantom data. When you can’t find the data, you’ll have to get creative.

The Service Expansion in Mental Health/Substance Services, Oral Health and Comprehensive Pharmacy Services Under the Health Center Program (see the RFP in a Word file here) presents a good example of this problem. The narrative section for “B. Oral Health Review Criteria” begins on page 44. Under “Review Criterion 1: Need,” “Subsection 2,” the RFP says, “Applicant clearly describes the target population for the proposed oral health service, including […]” which leads to “Subsection c”: “The oral health status and treatment needs of the target population (e.g., caries rate, edentulism, periodontal disease, fluoridation in community water, oral cancer).” Such data are not tracked nationally. To the extent anyone keeps them, they do so on a county-by-county or state-by-state basis, which can make finding the data hard—particularly for a service area that may not match up with a county or other jurisdictional boundary. But I had to answer the RFP, so I looked for information about oral health status online and found little if anything through state, county, or city websites.

If you can’t find important data online, your next step is to contact whichever public officials might have them. The organization we worked for was in a state where dental health responsibility is rolled into the Department of Health, and contact information was listed on the Department’s website. So I called and e-mailed both the person at the state office and the person responsible for the organization’s county. Neither answered. I skipped the rest of 1.2.c until one of the state representatives replied—promptly, too!—with a Word file containing what data they had. While the information helped, and I cited what they offered, the statistics didn’t cover all of the named examples of health status and treatment needs. I still had an unfilled data gap.

This left two choices: say the data aren’t available, or write in generalities about the problems, extrapolating from what data are available. If you say the data aren’t there, you’ll likely score lower on the section than applicants who found data or wrote around the gap. If you extrapolate and explain, there’s a chance you’ll receive some points. The latter is therefore almost always the better choice: discuss what data you have in generalities, tell anecdotes, appeal to the organization’s experience, and allude to known local health problems that don’t have specific studies backing them up. This usually means saying something to the effect of, “While specific data are not available for the target area, it can be assumed that…” and then continuing. I used a combination of these strategies, citing what data I had from the state and filling the gaps with generalities.

When you find requests for unusual data, do everything you can to seek them, and don’t be afraid to contact public officials. If you still can’t find the data, construct as artful an explanation as you can and then move on. Chances are that if the RFP wants data so unusual that you can’t find them after a concerted effort, many other applicants won’t be able to find them either. You should also remember The Perils of Perfectionism: every hour you spend searching for data is an hour you’re not spending on other parts of the proposal. Don’t invest hours and hours in finding trivial data; after you’ve made a reasonably strong effort to search the Internet and contact whomever you can, stop and move on. This is especially important because you might be searching for data that simply don’t exist, in which case they will never be found and you’re wasting time trying. While it is fun to search for the last unicorn, you are more likely to find a horse with a cardboard horn strapped to her forehead than a mythical and elusive creature.


Self-Esteem—What is it good for? Absolutely Nothing

Roberta Stevens commented on “Writing Needs Assessments: How to Make it Seem Like the End of the World,” saying she was “having trouble finding statistics on low self esteem in girls ages 12-19.” This got me thinking about the pointlessness of “self-esteem” as a metric in grant proposals. A simple Google search for ‘“self-esteem” girls studies reports’ yields a boatload of studies, but if you look closely at them, it’s apparent that most are based on “self-reports,” which is another way of saying that researchers asked the little darlings how they feel.

When my youngest son was in middle school, he was subjected to endless navel gazing surveys and routinely reported confidentially that he had carried machine guns to school, smoked crack regularly and started having sex at age seven. In short, he thought it was fun to tweak the authority figures and my guess is that many other young people do too when confronted by earnest researchers asking probing questions.

Although such studies often reveal somewhat dubious alleged gender differences in self-esteem, I have yet to see any self-esteem data that correlate with meaningful outcomes for young people. Perhaps this is obvious, since self-esteem is such a poor indicator of anything in the real world: Stalin appears to have had plenty of self-esteem, even if his moral compass was off target. Arguably our best President, Abraham Lincoln, was by most accounts wracked with self-doubt and low self-esteem, while more recent Presidents Lyndon Johnson and Richard Nixon, both with questionable presidencies, did not seem short in the self-esteem department.

If I were to use self-esteem in a needs assessment for a supportive service program for teenage girls, I would find appropriately disturbing statistics (e.g., the pregnancy rate is twice the state rate, the dropout rate among teenage girls has increased by 20%, etc.) and “expert” quotes (“we’ve seen a rise in suicide ideation among our young women clients,” says Carmella Kumquat, MSW, Mental Health Services Director) to paint a suitably depressing picture, then top it off with an ever-popular statement such as, “Given these disappointing indicators, the organization knows anecdotally from its 200 years of experience in delivering youth services that targeted young women exhibit extremely low self-esteem, which contributes to their challenges in achieving long-term self-sufficiency.” I know this is a nauseating sentence, but it is fairly typical of grant proposals and is why proposals should never be read just after lunch.

So, to paraphrase Edwin Starr: “Self-esteem, what is it good for? / Absolutely nothing.”

(In the context of gangs, Jake has also commented on suspect or twisted needs indicators.)


EDIT: A more recent post, Self-Efficacy—Oops, There Goes Another Rubber Tree Plant, takes up the issue of finding a metric more valuable than self-esteem for both grant writers and program participants.