Tag Archives: Needs Assessment

Don’t split target areas, but some programs, like HRSA’s Rural Health Network Development (RHND) Program, encourage cherry picking

In developing a grant proposal, one of the first issues is choosing the target area (or area of focus); the needs assessment is a key component of most grant proposals—but you can’t write the needs assessment without defining the target area. Without a target area, it’s not possible to craft data into the logic argument that is at the center of all needs assessments.

To make the needs assessment as tight and compelling as possible, we recommend that the target area be contiguous, if at all possible. Still, there are times when it is a good idea to split target areas—or it’s even required by the RFP.

Some federal programs, like YouthBuild, have highly structured, specific data requirements for items such as poverty level, high school graduation rate, and youth unemployment rate, with minimum thresholds for earning a certain number of points. In programs like YouthBuild, cherry picking zip codes or Census tracts can lead to a higher score.
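A minimal sketch of why tract selection matters, with made-up figures (the tract IDs and numbers are hypothetical for illustration, not YouthBuild’s actual scoring data): aggregating only the poorest tracts pushes the combined poverty rate well above the figure for the whole area.

```python
# Illustrative only: hypothetical Census-tract figures, not real program data.
# Each tract: (population, residents below the poverty line)
tracts = {
    "101.01": (4000, 600),   # 15% poverty
    "101.02": (3500, 1400),  # 40% poverty
    "102.00": (5000, 800),   # 16% poverty
    "103.00": (2500, 1100),  # 44% poverty
}

def poverty_rate(tract_ids):
    """Aggregate poverty rate across the chosen tracts."""
    pop = sum(tracts[t][0] for t in tract_ids)
    poor = sum(tracts[t][1] for t in tract_ids)
    return poor / pop

all_rate = poverty_rate(tracts)                     # every tract: 26.0%
picked_rate = poverty_rate(["101.02", "103.00"])    # cherry-picked: 41.7%

print(f"All tracts:    {all_rate:.1%}")
print(f"Cherry-picked: {picked_rate:.1%}")
```

If an RFP awards full points above, say, a 40% poverty threshold, the cherry-picked target area clears it while the contiguous one doesn’t—which is exactly the incentive described above.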

Many federal grant programs are aimed at “rural” target areas, although different federal agencies may use different definitions of what constitutes “rural”—or they provide little guidance as to what “rural” means. For example, HRSA just issued the FY ’20 NOFOs (Notices of Funding Opportunity—HRSA-speak for RFPs) for the Rural Health Network Development Planning Program and the Rural Health Network Development Program.

Applicants for RHNDP and RHND must be a “Rural Health Network Development Program.” But, “If the applicant organization’s headquarters are located in a metropolitan or urban county, that also serves or has branches in a non-metropolitan or rural county, the applicant organization is not eligible solely because of the rural areas they serve, and must meet all other eligibility requirements.” Say what? And, applicants must also use the HRSA Tool to determine rural eligibility, based on “county or street address.” This being a HRSA tool, what HRSA thinks is rural may not match what anybody living there thinks. Residents of what has historically been a farm-trade small town might be surprised to learn that HRSA thinks they’re city folks, because the county seat population is slightly above a certain threshold, or expanding ex-urban development has been close enough to skew datasets from rural to nominally suburban or even urban.

Thus, while a contiguous target area is preferred, for RHNDP and RHND you may find yourself in the data orchard picking cherries.

In most other cases, try to avoid describing a target area composed of the Towering Oaks neighborhood on the west side of Owatonna and the Scrubby Pines neighborhood on the east side, separated by the newly gentrified downtown in between. If you have a split target area, the needs assessment is going to be unnecessarily complex and may confuse the grant reviewers. You’ll find yourself writing something like, “the 2017 flood devastated the west side, which is a very low-income community of color, while the Twinkie factory has brought new jobs to the east side, which is a white, working-class neighborhood.” The data tables will be hard to structure and even harder to summarize in a way that makes it seem like the end of the world (always the goal in writing needs assessments).

Try to choose target area boundaries that conform to Census designations (e.g., Census tracts, Zip Codes, cities, etc.). Avoid target area boundaries like a school district enrollment area or a health district, which generally don’t conform to Census and other common data sets.

Grant writing and cooking: Too many details or ingredients are never a good idea

As a grant writer who also likes to cook, I understand the importance of simplicity and clarity in both my vocation and avocation: too much detail can ruin a proposal, just as too many ingredients can produce a dull dish. Ten years ago, I wrote a post on the importance of using the KISS method (keep it simple, stupid—or Sally, if you don’t like the word “stupid”) in grant writing. We recently wrote an exceedingly complex state health care proposal for a large nonprofit in a Southern state; even complex proposals should be as simple as possible, but no simpler.

As is often the case with state programs, the RFP was convoluted and required a complex needs assessment. Still, the project concept and target area were fairly straightforward. We wrote the first draft of the needs assessment in narrative form, rather than using a bunch of tables. There’s nothing intrinsically better or worse about narrative vs. tables; when the RFP is complex, we tend toward narrative form, and when the project concept and/or target area are complex, we often use more tables. For example, if the target population includes both African American and Latino substance abusers in an otherwise largely white community, we might use tables, labeling columns by ethnicity, then compare to the state. That’s hard to do in narrative form. Similarly, if the target area includes lots of counties, some of which are much more affluent than others, we might use tables to contrast the socioeconomic characteristics of the counties to the state.

Many grant reviewers have trouble reading tables because they don’t really understand statistics, so any table should be followed by a narrative paragraph explaining it anyway.

So: our client didn’t like the first draft and berated me for not using tables in the needs assessment. The customer is not always right, but, as ghostwriters, we accommodate our clients’ feedback, and I added some tables in the second draft. Our client requested more tables and lots of relatively unimportant details about their current programming, much of which wasn’t germane to the RFP questions. Including exhaustive details about current programming takes the proposal focus away from the project you’re trying to get funded, which is seldom a good idea. It’s best to provide sufficient detail to answer the 5 Ws and the H, while telling a compelling story that is responsive to the RFP.

Then, stop.

The client’s second draft edit requested yet more tables and a blizzard of additional, disconnected details. Our client disliked the third draft. We ended up writing five drafts, instead of the usual three, and the proposal got steadily worse, not better. As chef-to-the-stars Wolfgang Puck* is said to have said, “Cooking is like painting or writing a song. Just as there are only so many notes or colors, there are only so many flavors – it’s how you combine them that sets you apart.” Attempting to use all the flavors at once usually results in a kitchen disaster.

A given section of a proposal should be as short as possible without being underdeveloped. Changes from draft to draft should also be as minimal and specific as possible.


* Jake sort-of-met Wolfgang, albeit before he was born. His mom was eight months pregnant with him when we went to Spago for dinner. Wolfgang was there in his Pillsbury Doughboy getup, and, despite not being celebrities, he couldn’t have been nicer and made a big deal out of a very pregnant woman dining at his place. I think he wanted his food to induce labor, but that didn’t happen for a couple of weeks; instead, Nate ‘n’ Al’s Deli (another celebrity hangout in Beverly Hills), was the culprit. A story for another day.

Sometimes a call will get you the data you need

This weekend I was working on a proposal that requires California education data. The California Department of Education has a decent data engine at the aptly-named DataQuest, so I was able to look the data up—but the data didn’t really make sense. One school in the target area, for example, had 30,700 students listed as attending. As anyone who has attended or seen an American high school knows, that number is absurd. Other data seemed off too, but I wasn’t sure what to do, so I included it as listed by the website and moved on with the rest of the proposal.

This morning, Isaac was editing the draft and noticed the dubious data, so he decided to call LAUSD’s data department. A “Data Specialist” picked up the phone and lived up to his title as he explained what’s up. The school with 30,700 students is a “continuation” school and the state data is a catch-all for all LAUSD continuation students. Moreover, the Data Specialist explained that California has odd dropout rate rules, such that it’s hard to actually, really, officially drop out; instead, the school of last attendance reports that a student has stopped attending, but that student can stay on the books until the student is as old as 21.

Some California districts also have a complex patchwork of rules and regulations regarding which kids go to which schools. Charters and magnets further complicate calculating accurate dropout rate information.

The Data Specialist ultimately directed us to better, more accurate data, which we included in the proposal. And now we know the details of California’s system, thanks to the call Isaac made. Without that call, we wouldn’t have had quite the right data for the schools. What I originally found would’ve worked okay, but it wouldn’t have been as detailed or accurate.

In short, online data systems are not as good as many people (and RFPs) assume. If you get data that doesn’t seem to make sense, you need to run a sanity check on that data, just like you should with Waze. Don’t die by GPS.
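The sanity check described here can be as trivial as flagging figures that are implausible on their face before they go into a proposal. A minimal sketch, assuming an arbitrary 6,000-student plausibility cutoff (the threshold and school names are illustrative, not official figures):

```python
# A minimal data sanity check: flag enrollment figures that are
# implausible for a single school, so they trigger a phone call
# before they reach the proposal. The 6,000 cutoff is an assumption
# chosen for illustration, not an official rule.
def flag_implausible(enrollments, max_plausible=6000):
    """Return school names whose listed enrollment deserves a phone call."""
    return [name for name, n in enrollments.items() if n > max_plausible or n <= 0]

listed = {
    "Jefferson High": 1850,
    "Continuation School": 30700,  # the catch-all figure from the story above
}
print(flag_implausible(listed))  # ['Continuation School']
```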

By the way: When you get helpful bureaucrats, be nice to them. We’ve written about the many bad bureaucrats you’ll encounter as a grant writer (“FEMA Tardiness, Grants.gov, and Dealing with Recalcitrant Bureaucrats” is one example). But the bureaucrats who do the right thing are too rare, and, when you find them, thank them. Many actually know a lot but almost never find anyone who wants to know what they know, and they can be grateful just to find an audience.

The right phone call can also reveal information beyond the purpose of the call itself. In this case, we learned that no one has a clue as to what’s really going on with dropout rates in California. Finding charter school graduation rate data is hard. The guy Isaac talked to said that there’s some data on charters somewhere on the state’s education website, but he didn’t know where. If he, as a LAUSD Data Specialist, doesn’t know and he works on this stuff all day, we’re not likely to. Charter schools aren’t important for the assignment we’re working on, but they may be important for the next one, so that bit of inside information is useful.

EDIT: Jennifer Bergeron adds, “Be prepared when you call. The Data Specialist in our district strikes back with a barrage of questions that I hadn’t even considered each time I call. He’s helpful because his questions often make me think more specifically than I would have on my own.”

Is Violent Crime Going Up or Down in America? Nobody Actually Knows, But the Debate Illustrates How Grant Proposal Needs Assessments are Written

One of our past posts described how to write proposal needs assessments. A spate of recent articles on the so-called Ferguson Effect provides a good example of how proficient grant writers can use selected data and modifying words to shape a needs assessment to support whatever the project concept is.

Last week Heather Mac Donald’s Wall Street Journal editorial “Trying to Hide the Rise of Violent Crime” claimed that violent crime is rising, due to “the Ferguson Effect,” but that “progressives and media allies” have launched a campaign to deny this reality. Right on cue, the New York Times ran a front page “news” story telling grumpy New Yorkers that “Anxiety Aside, New York Sees Drop in Crime.” Both articles cite the same Brennan Center for Justice study, Crime in 2015: A Preliminary Analysis, to support their arguments.

This reminds me of the old joke about how different newspapers would report that the end of the world will happen tomorrow: the New York Times, “World Ends Tomorrow, Women and Minorities Hurt Most;” the Wall Street Journal, “World Ends Tomorrow, Markets Close Early;” and Sports Illustrated, “Series Cancelled, No World.” One can frame a set of “facts” differently, depending on one’s point of view and the argument being made.

Neither the NYT nor the WSJ writers actually know if violent crime is going up or down in the short term. Over the past few decades, it is clear that crime has declined enormously, but it isn’t clear what causal mechanisms might be behind that decline.

Perhaps, like Schrödinger’s cat, which is alive and dead at the same time, crime is both up and down simultaneously, depending on who’s doing the observing and how they’re observing.

One of the challenges is that national crime data, as aggregated in the FBI Uniform Crime Reporting (UCR) system, is inherently questionable. First, police departments report these data voluntarily, and many crimes are subject to intentional or unintentional miscategorization (was it an assault or aggravated assault?) or under- or over-reporting, depending on how local political winds are blowing (to see one public example of this in action, consider “NYPD wants to fix stats on stolen Citi Bikes,” which describes how stealing a Citi Bike counts as a felony because each one costs more than $1,000). A less-than-honorable police chief, usually in cahoots with local pols, can make “crime rates” go up or down. Then there is the problem of using averages for data, which leads to another old joke about the guy with his head in the oven and his feet in the freezer. On average, he felt fine.

But from your perspective as a grant writer, the important question isn’t whether crime rates are falling or whether “the Ferguson Effect” makes them rise. If residents of a given city/neighborhood feel vulnerable to perceived crime increases, the increases are “real to them” and can form the basis for a project concept for grant seeking. Plus, when data to prove the need is hard to come by, we sometimes ask our clients for anecdotes about the problem and add a little vignette to the needs assessment. A call to the local police department’s gang unit will always produce a great “end of the world” gang issue quote from the Sergeant in charge, while a call to the local hospital will usually yield a quote about an uptick in gunshot victims being treated, and so on. Sometimes in proposals anecdotes can substitute for data, although this is not optimal.

Within reason and the rather vague ethical boundaries of grant seeking and writing, a good grant writer can and should pick and choose among available data to construct the needs assessment argument for funding anything the agency/community sees a need for.

For example, if we were writing a proposal for an urban police department to get more funds for community policing, we would use up or down crime rate data to demonstrate the need for a new grant. If crime is trending down, we’d use the data to argue that the police department is doing a good job with community policing but needs some more money to do an even better job, while being able to provide technical assistance to other departments. If the crime data is trending upward, we’d argue that there’s a crisis and the grant must be made to save life and limb. If we were working for a nonprofit in the same city that wants grants for after school enrichment for at-risk youth, we’d cherry-pick the crime data to argue that a nurturing after-school setting is necessary to keep those youth protected from the false allures of gangs, early risky sexual experimentation, and/or drugs.

Most grant needs assessments are written backwards. One starts with the premise for the project concept and structures the data and analysis to support the stated need. It may be hard for true believers and novice grant writers to accept, but grant writing is rarely a blue sky/visioning exercise. The funder really sets the parameters of the program. The client knows what they want the grant for. It’s the job of the grant writer to build the needs assessment by including, excluding, and/or obfuscating data. This approach works well, because most funders only know what the applicant tells them in the proposal. Some grant programs, like our old pals DOL’s YouthBuild and ED’s Talent Search, try to routinize needs assessments and confound rascally grant writers by mandating certain data sets. We’re too crafty, however, and can usually overcome such data requirements through the kind of word and data selections that Mac Donald cites in her article.

Good needs assessments tell stories: Data is cheap and everyone has it

If you only include data in your needs assessment, you don’t stand out from the dozens or hundreds of other needs assessments funders read for any given RFP competition. Good needs assessments tell stories: data is cheap and everyone has it, and almost any data can be massaged to make a given target area look bad. Most people also don’t understand statistics, which makes it pretty easy to manipulate data. Even grant reviewers who do understand statistics rarely have the time to deeply evaluate the claims made in a given proposal.*

Man is The Storytelling Animal, to borrow the title of Jonathan Gottschall’s book. Few people dislike stories and many of those who dislike stories are not neurologically normal (Oliver Sacks writes movingly of such people in his memoir On the Move). The number of people who think primarily statistically and in data terms is small, and chances are they don’t read social and human service proposals. Your reviewer is likely among the vast majority of people who like stories, whether they want to like stories or not. You should cater in your proposal to the human taste for stories.

We’re grant writers, and we tell stories in proposals for the reasons articulated here and other posts. Nonetheless, a small number of clients—probably under 5%—don’t like this method (or don’t like our stories) and tell us to take out the binding narrative and just recite data. We advise against this, but we’re like lawyers in that we tell our clients what we think is best and then do what our clients tell us to do.

RFPs sometimes ask for specific data, and, if they do, you should obviously include that data. But if you have any room to tell a story, you should tell a story about the project area and target population. Each project area is different from any other project area in ways that “20% of the project area is under 200% of the Federal Poverty Line (FPL)” does not capture. A story about urban poverty is different from a story about recent immigration or a story about the plant closing in a rural area.

In addition, think about the reviewers’ job: they read proposal after proposal. Every proposal is likely to cite similar data indicating the proposed service area has problems. How is the reviewer supposed to decide that one area with a 25% poverty rate is more deserving than some other area with a 23% poverty rate?

Good writers will know how to weave data into a story, but bad writers often don’t know they’re bad writers. A good writer will also make the needs assessment internally consistent with the rest of the proposal (we’ve written before “On the Importance of Internal Consistency in Grant Proposals“). Most people think taste is entirely subjective, for bad reasons that Paul Graham knocks down in this excellent essay. Knowing whether you’re a good writer is tough because you have to know good writing to know you’re a bad writer—which means that, paradoxically, bad writers are incapable of knowing they’re bad writers (as noted in the first sentence of this paragraph).

In everyday life, people generally counter stories with other stories, rather than data, and one way to lose friends and alienate people is to tell stories that move against the narrative that someone wants to present. That’s how powerful stories are. For example, “you” could point out that Americans commonly spend more money on pets than people in the bottom billion spend on themselves. If you hear someone contemplating or executing a four- or five-figure expenditure on a surgery for their dog or cat, ruminate on how many people across the world can’t afford any surgery. The number of people who will calmly think, “Gee, it’s telling that I value the life of an animal close at hand more than a human at some remove” is quite small relative to the people who say or think, “the person saying this to me is a jerk.”

As you might imagine, I have some firsthand investigative experience in matters from the preceding paragraph. Many people acquire pets for emotional closeness and to signal their kindness and caring to others. The latter motive is drastically undercut when people are consciously reminded that many humans don’t have the resources Americans pour into animals (consider a heartrending line from “The Long Road From Sudan to America:” “Tell me, what is the work of dogs in this country?”).

Perhaps comparing expenditures on dogs versus expenditures on humans is not precisely “thinking statistically,” but it is illustrative about the importance of stories and the danger of counter-stories that disrupt the stories we desperately want to tell about ourselves. Reviewers want stories. They read plenty of data, much of it dubiously sourced and contextualized, and you should give them data too. But data without context is like bread instead of a sandwich. Make the reviewer a sandwich. She’ll appreciate it, especially given the stale diet of bread that is most grant proposals.


* Some science and technical proposals are different, but this general point is true of social and human services.

How I track needs assessments and other grant proposal research

Bear with me. I’m about to discuss a topic that might recall horrific memories from high school history or college English, but I promise that, this time, I’m discussing research methods that are a) simple and b) relevant to your life as a grant writer in a nonprofit or other setting.

The single best way I’ve found to track grant research is described in Steven Berlin Johnson’s essay “Tool for Thought.” You can safely go read Johnson’s essay and skip the rest of this post, because it’s that good. I’m going to describe the way Johnson uses Devonthink Pro (DTP) and give some examples that show how useful this innovative program is in a grant writing context.

The problem is this: you’re a grant writer. If you’re any good, you’re probably writing/producing at least one proposal every three months, and there’s a solid chance you’re doing even more than that—especially if you have support staff to help with the production side of proposals. Every proposal is subtly different, yet each has certain commonalities. Many also require research. In the process of completing a proposal, you do the research, find a bunch of articles and maybe some books, write the needs assessment, and cite research in it, and perhaps in the evaluation section or elsewhere, depending on the RFP.

You finish the proposal and you turn it in.

You also know “One of the Open Secrets of Grant Writing and Grant Writers: Reading.” You see something about your area’s economy in the local newspaper. You read something about the jobs situation in The Atlantic. That book about drug prohibition—what was it called again? Right, Daniel Okrent’s Last Call—has a couple of passages you should write down because they might be useful later.

But it’s very hard to synthesize any of this material in a coherent, accessible manner. You can keep a bunch of Word documents scattered in a folder. You can develop elaborate keyword systems. Such efforts will work for a short period of time; they’ll work when you have four or five or six proposals and a couple dozen key quotes. They won’t work when you’ve been working for years and have accumulated thousands of research articles, proposals, and quotes. They won’t work when you know you need to read about prisoner re-entry but you aren’t sure if you tagged everything related to that subject with prisoner re-entry.

That’s where DTP comes in. The program’s great, powerful feature is its “See Also” function, which performs associative searches on large blocks of text to find how things might be related in subtle ways. Maybe you use the words “jail” and “drugs” without using the word “prisons” in a paragraph. If you search for “prisons,” you might not find that other material, but DTP might. This is a contrived example, but it helps show the program’s power.
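DTP’s actual matching algorithm isn’t public, so treat this as a rough stand-in for the idea rather than how DTP works: a toy associative search that ranks notes by cosine similarity of word counts, so a note can surface through shared context words even when it never contains the query term itself.

```python
# Toy associative matching, loosely in the spirit of "See Also"
# (DTP's real algorithm is proprietary; this is a crude illustration).
# Notes are ranked by cosine similarity of word-count vectors, so the
# jail/drugs note surfaces for a "prisons" query via shared context
# words like "sentencing" and "inmates".
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words count vector for a block of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

notes = {
    "reentry": "jail drugs sentencing inmates recidivism services",
    "housing": "rents vouchers landlords eviction subsidies",
}

query = vectorize("prisons sentencing inmates recidivism")
ranked = sorted(notes, key=lambda k: cosine(query, vectorize(notes[k])), reverse=True)
print(ranked[0])  # the jail/drugs note ranks first despite never saying "prisons"
```

Real tools do far more than count shared words, but the principle is the same: related material clusters by vocabulary context, not exact keyword tags.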

Plus, chances are that if you read an article six years ago—or, hell, six months ago—you’re probably not going to remember it. Unless you’re uncommonly organized, you’re not going to find the material you might really need. DTP lets you drop the information in the program to let the program do the heavy lifting by remembering it. I don’t mean to sound like an advertisement, but DTP works surprisingly well.

Let’s keep using the example I started above and imagine that your nonprofit provides re-entry services to ex-offenders. You’ll probably end up writing the same basic explanation of how your program conducts intake, assessment, planning, service delivery, and follow-up in myriad ways, depending on the funder, the page limit, and the specific questions being asked. You want a way to store that kind of information. DTP does this very well. The trick is keeping text chunks between about 50 words and 500 words, as Johnson advises. If your chunks are longer, you won’t be able to read through what you have and find material quickly.

Consequently, a 3,000-word project services section would probably overwhelm you next time you’re looking for something similar. But a 500-word description of your agency’s intake procedure would be very manageable.

The system isn’t perfect. The most obvious flaw is in the person doing the research: you need a certain amount of discipline to copy/paste and otherwise annotate material. This might be slow at first, because DTP libraries actually get more useful when they have more material. You also need to learn how to exploit DTP to the maximum feasible extent (free proposal phrase here). But once you’ve done that, you’ll have a very fast, very accurate way of finding things that can make your grant writing life much, much easier. (Incidentally, this is also how I organize blog posts, and DTP often refers me back to earlier blog posts I would otherwise have forgotten about).

Right now, DTP is only available on OS X, but there is similar functionality in programs like Evernote or Zoho Notebook, which are cross-platform. I can’t vouch for these programs because I’ve never used them, but others online have discussed them. Used correctly, however, DTP is a powerful argument for research-based writers using OS X.

President Obama Would Likely Make a Good Grant Writer, as He Recognizes the Value of Telling a Compelling Story

In a recent Charlie Rose interview, President Obama said this about his first term:

The mistake of my first couple of years was thinking that this job was just about getting the policy right, and that’s important [. . . .] But, you know, the nature of this office is also to tell a story to the American people that gives them a sense of unity and purpose and optimism, especially during tough times.

President Obama correctly points out that presidents often serve as the nation’s Story-Teller-in-Chief. For example, FDR’s Fireside Chats calmed a frantic nation caught up in the uncertainties of the Great Depression (he was the first president to really have access to and understand mass media) and President Reagan’s weekly radio addresses gave him a regular story telling platform that has been used by every succeeding president.

Like the presidency, grant writing at its most basic level is nothing more than story telling. Successful presidents and successful grant writers are good story tellers, telling their audiences stories the listeners/readers want to believe in.

The latter point is critical in grant writing, as grant reviewers come to the process with preconceived notions of what they expect to read. The grant writer’s job is to craft a compelling story that meets readers’ expectations within the constraints of the often convoluted RFP. For example, when writing a childhood obesity prevention proposal for a poor and minority target area, it is a good idea to suggest in the needs assessment that part of the problem is the lack of available fresh and nutritious food.

In other words, readers will expect a reference to food deserts, whether or not there are few grocery stores in the area (we also wrote about this process in “Two for One: Where Grants Come From, Fast Food, and the Contradictory Nature of Government Programs“). And a food desert conjures up images of want and neglect that are key elements in a “grant story.”

It’s possible that we shouldn’t trust stories nearly as much as we do; in Tyler Cowen’s TED talk on why stories make him nervous, he says that “narratives tend to be too simple,” that they tend to focus too much on good versus evil, that they tend to focus on intent instead of accident, and that they play on our cognitive biases. (A somewhat skeptical New Yorker article about TED talks even said that we might want to “feel manipulated by one more product attempting to play on our emotions,” which is what proposals should basically do.) But most grant reviewers are still looking for stories, even if the stories are simplistic.

The better a grant writer is at telling the story, the more likely she will be to write funded grants. While it is possible to get a grant without a cohesive narrative story, the odds of success increase with the quality of the tale being told. One can get lucky, but it is better to get skilled, because one can always count on skill, while luck is elusive.*

When reviewers consider a stack of proposals, they will gravitate toward those that are readable and interesting while fitting within the framework of their expectations, much like you’ll gravitate towards readable and interesting novels more than those that are the opposite. Even if the need in a community is great, a disjointed proposal will generally score lower than one that captures the reader’s imagination.

In composing your narrative, make sure you weave a consistent story throughout all sections. This is easier said than done, in large part due to the chaotic and repetitive nature of most RFPs, which are written by committee and resemble a camel more than a thoroughbred horse. We’ve written extensively about this in many contexts. Your task as a grant writer is to feed back the information requested in even the most confusing RFP, and you should do so in a way that makes all sections of the proposal hang together. You don’t want to be like President Obama in the quote above, realizing that you’ve failed to fit your policies and your community’s needs into a cohesive story.


* My favorite quote on “luck” is from Dylan’s “Idiot Wind,” “I can’t help it if I’m lucky.” Peter Thiel’s essay on luck and life is also good.

A Day in the Life of a Participant is Overrated: Focus on Data in the Neighborhood

I’ve seen a lot of proposals from clients and amateur grant writers that include something like, “A day in the life of Anthony” in their needs assessments. This is almost always a mistake, because almost anyone can include a hard-knocks anecdote, and they convey virtually no information about why your hard-knock area is different from Joe’s hard-knock area down the street, or on the other side of the tracks, or across the country. These stories are staples of newspaper accounts of hardship, but newspapers know most of their readers aren’t thinking critically about what they read and aren’t reading 100 similar stories over eight hours. Grant reviewers do.

Off the top of my head, I can’t think of any RFPs that requested day-in-the-life stories in the needs assessment. If funders wanted such stories, they’d ask for them. Since they don’t, or at best very rarely do, you should keep your needs assessment to business. And if you’re curious about how to get started, read “Writing Needs Assessments: How to Make It Seem Like the End of the World.” If you’re applying to any grant program, your application is one of many that could be funded, so you want to focus on the core purpose of the program you want to run. Giving a day in Anthony’s life isn’t going to accomplish this purpose.

Creativity is useful in many fields, but those involving government are seldom among them (as we wrote in “Never Think Outside the Box: Grant Writing is About Following the Recipe, not Creativity“). As a result, unless you see specific instructions to do otherwise, you should stick to something closer to the Project Nutria model, in which you describe the who, what, where, when, why, and how. Anything extraneous to answering those questions is wasting pages, and perhaps more importantly, wasting patience, and the precious attention that patience requires.

There’s only one plausible exception I can think of: sometimes writing about a day-in-the-life of a person receiving project services can be helpful, but again, you should probably leave those stories out unless the RFP specifically requests them. Some RFPs want a sample daily or weekly schedule, and that should suffice without a heroic story about Anthony overcoming life obstacles when he finally receives the wraparound supportive services he’s always wanted.

Grant writing is long-form, not fragmentary

In The Millions, Guy Patrick Cunningham* says:

More and more, I read in pieces. So do you. Digital media, in all its forms, is fragmentary. Even the longest stretches of text online are broken up with hyperlinks or other interactive elements (or even ads).

More and more, people also write in pieces. This isn’t intrinsically bad—it’d be funny if we argued that piecemeal writing is bad on a blog—but it is the kind of shift you should be cognizant of, because grant writing embodies long-form, deliberate writing and cohesion. Grant writing rewards people, financially and otherwise, who can sit down, focus on a long block of text, and emerge hours later with a coherent set of pages that string similar themes together, almost novelistically. Grant writing is closer to War and Peace than to, say, blogging.

I, like a lot of people, have become aware of the dangers posed by Internet distractions. I’m especially aware of them when I’m working on a proposal, since the temptation to open Firefox for non-research purposes is always there. It can be done in a second. And then I’m out of the zone for fifteen minutes or more. Furthermore, because of the need to write needs assessments, I can’t simply turn off the Internet altogether (as I can when I’m writing other long-form material).

Still: lots of us are being pulled in too many intellectual directions. We’re reading in “pieces,” or in fragments. But if we’re going to write effective proposals, we have to do the opposite: read in large wholes, and write that way too. The best proposals often have an almost novelistic sense of interwoven themes.

The rest of Cunningham’s essay discusses literature, but the point about the fragmentation of writing—and, by extension, attention—is one that grant writers and would-be grant writers should heed. Governments and foundations aren’t known for being in the vanguard of progress. They aren’t demanding written material in fragments. No RFP has asked that applicants respond via Twitter.

Be ready to write long and coherently.


* Which would be a great name for a detective or fantasy hero.

A Question for Talmudists and Lawyers Regarding HUD’s Healthy Homes NOFA

I’m working on a HUD Healthy Homes proposal, and sections b and c of “Rating Factor 1: Capacity of the Applicant and Relevant Organizational Experience” require responses to these sentences:

Relevant Organization Experience (6 points). Describe your recent, relevant, and successfully demonstrated experience in undertaking eligible program activities.

and:

Past Performance of the Organization (6 points). Applicants will be rated on documenting previous experience in successfully operating similar grant programs.

The exam question: What is the difference between the information being requested in each section?

Bonus section: How can you answer both while also staying within the 20-page limit for the narrative?