Seliger + Associates enters grant writing oral history (or something like that)

Seliger + Associates has been toiling away in the grant writing salt mines for over two decades, and last week we got hired to review and edit a new client’s draft proposal for a federal program we’ve been writing for years.* They emailed their draft and we were delighted to see that it’s actually based on a proposal we wrote for some forgotten client ten to fifteen years ago. While the proposal has morphed over the years, we could easily find passages I likely wrote when Jake was in middle school.

We’ve encountered sections of our old proposals before, but this example is particularly obvious. The draft was also written to an archaic version of the RFP, so it included ideas that were important many years ago but that have since been removed or de-emphasized. We of course fixed those issues, along with others, but we also left some of our golden historic phrases intact for the ages. This version will undoubtedly also linger on into the future.

We’re part of what might best be termed the “oral history” of grant writing. We’re the Homer of the grant world, which is a particularly apt comparison because “Homer” may have been more than one person. For the first ten years or so of being in business, our drafts were mostly sent by fax, but we sent final files on CDs. For the past decade we’ve been emailing Word versions of all narratives and Excel budgets. Our proposals have probably been traded by nonprofits all over the country like Magic: The Gathering cards.** Still, unlike some other grant writers who will remain nameless, we never post or sell our proposals. But it seems that the digital age has caught up with us anyway.

In some ways, seeing shades of our old proposals makes me feel proud, as our impact will likely last as long as there are RFPs—which is another way of saying forever.

We don’t know what strange path brought the proposal we wrote to our current client. We’ve had hundreds of clients and written many more proposals of all stripes, and even if we wanted to trace its lineage we couldn’t.

As we’ve written before, grant writing at its most basic level is storytelling. Now our stories have assumed a digital afterlife of their own. While Titanic is neither my favorite film nor my favorite movie theme, I’ll paraphrase Celine Dion, as it does seem that . . . “our proposal words will go on and on.”


* Faithful readers will probably know which program I’m discussing, but we’ll keep it on the down low to protect the guilty and punish the innocent.

** When Jake was about 11, and just before his unfortunate discovery of video games, he was a huge Magic player and was always after me to buy yet more cards. As I recall, he and his little pals endlessly traded Magic cards for “value” that completely eluded me, a classic clueless dad. Eventually Jake grew up and lost interest, at which point the value of the cards became zero for him.

Good needs assessments tell stories: Data is cheap and everyone has it

If you only include data in your needs assessment, you don’t stand out from dozens or hundreds of other needs assessments funders read for any given RFP competition. Good needs assessments tell stories: Data is cheap and everyone has it, and almost any data can be massaged to make a given target area look bad. Most people also don’t understand statistics, which makes it pretty easy to manipulate data. Even grant reviewers who do understand statistics rarely have the time to deeply evaluate the claims made in a given proposal.*

Man is The Storytelling Animal, to borrow the title of Jonathan Gottschall’s book. Few people dislike stories and many of those who dislike stories are not neurologically normal (Oliver Sacks writes movingly of such people in his memoir On the Move). The number of people who think primarily statistically and in data terms is small, and chances are they don’t read social and human service proposals. Your reviewer is likely among the vast majority of people who like stories, whether they want to like stories or not. You should cater in your proposal to the human taste for stories.

We’re grant writers, and we tell stories in proposals for the reasons articulated here and in other posts. Nonetheless, a small number of clients—probably under 5%—don’t like this method (or don’t like our stories) and tell us to take out the binding narrative and just recite data. We advise against this, but we’re like lawyers in that we tell our clients what we think is best and then do what our clients tell us to do.

RFPs sometimes ask for specific data, and, if they do, you should obviously include that data. But if you have any room to tell a story, you should tell a story about the project area and target population. Each project area is different from any other project area in ways that “20% of the project area is under 200% of the Federal Poverty Line (FPL)” does not capture. A story about urban poverty is different from a story about recent immigration or a story about the plant closing in a rural area.

In addition, think about the reviewers’ job: they read proposal after proposal. Every proposal is likely to cite similar data indicating the proposed service area has problems. How is the reviewer supposed to decide that one area with a 25% poverty rate is more deserving than some other area with a 23% poverty rate?

Good writers will know how to weave data into a story, but bad writers often don’t know they’re bad writers. A good writer will also make the needs assessment internally consistent with the rest of the proposal (we’ve written before “On the Importance of Internal Consistency in Grant Proposals“). Most people think taste is entirely subjective, for bad reasons that Paul Graham knocks down in this excellent essay. Knowing whether you’re a good writer is tough because you have to know good writing to know you’re a bad writer—which means that, paradoxically, bad writers are incapable of knowing they’re bad writers (as noted in the first sentence of this paragraph).

In everyday life, people generally counter stories with other stories, rather than data, and one way to lose friends and alienate people is to tell stories that move against the narrative that someone wants to present. That’s how powerful stories are. For example, “you” could point out that Americans commonly spend more money on pets than people in the bottom billion spend on themselves. If you hear someone contemplating or executing a four- or five-figure expenditure on a surgery for their dog or cat, ruminate on how many people across the world can’t afford any surgery. The number of people who will calmly think, “Gee, it’s telling that I value the life of an animal close at hand more than a human at some remove” is quite small relative to the people who say or think, “the person saying this to me is a jerk.”

As you might imagine, I have some firsthand investigative experience in matters from the preceding paragraph. Many people acquire pets for emotional closeness and to signal their kindness and caring to others. The latter motive is drastically undercut when people are consciously reminded that many humans don’t have the resources Americans pour into animals (consider a heartrending line from “The Long Road From Sudan to America:” “Tell me, what is the work of dogs in this country?”).

Perhaps comparing expenditures on dogs versus expenditures on humans is not precisely “thinking statistically,” but it is illustrative about the importance of stories and the danger of counter-stories that disrupt the stories we desperately want to tell about ourselves. Reviewers want stories. They read plenty of data, much of it dubiously sourced and contextualized, and you should give them data too. But data without context is like bread instead of a sandwich. Make the reviewer a sandwich. She’ll appreciate it, especially given the stale diet of bread that is most grant proposals.


* Some science and technical proposals are different, but this general point is true of social and human services.

No Calls, No Bother: “Maker’s Schedule, Manager’s Schedule” and the Grant Writer’s Work

In “Maker’s Schedule, Manager’s Schedule,” Paul Graham writes:

There are two types of schedule, which I’ll call the manager’s schedule and the maker’s schedule. The manager’s schedule is for bosses. It’s embodied in the traditional appointment book, with each day cut into one hour intervals. You can block off several hours for a single task if you need to, but by default you change what you’re doing every hour. [. . .]

But there’s another way of using time that’s common among people who make things, like programmers and writers. They generally prefer to use time in units of half a day at least. You can’t write or program well in units of an hour. That’s barely enough time to get started.

People who make things are often experiencing flow, which is sometimes called “being in the zone.” It’s a state of singular concentration familiar to writers and other makers. Managers may experience it too, but in different ways, and their flow emerges from talking to another person, or from productive meetings—but a tangible work product rarely emerges from those meetings.

In part because of the maker’s schedule and the manager’s schedule, we try not to bother our clients. When we write proposals, we schedule a single scoping call, which is a little bit like being interviewed by a reporter. During that call we attempt to answer the 5Ws and H—who, what, where, when, why, and how—and hash out anything unique to a particular RFP or client. We ask our clients to send any background information they might have, like old proposals or reports. And then our clients usually don’t hear from us until the first draft of the proposal is finished.

Just because we’re not noisy doesn’t mean we’re not busy, however. We’re writing during that quiet period. Writing works best when it’s relatively uninterrupted. If you’re a part-time grant writer in an organization, you may be used to phone calls and emails and crises and all manner of other distractions that hit you at least once an hour. In those conditions you’ll rarely if ever reach a consistent state of flow. These problems have scuppered more than one proposal, as we know from candid conversations with clients’ on-staff grant writers.*

We only have a single scoping call or scoping meeting because we know we’re better off writing the best proposal we can given what we know than we are attempting to call our clients every hour when we don’t know something. Our methods have been developed over decades of practice. They work.

Writing isn’t the only field with flow issues. Software famously has this problem too, because in a way every software project is a novel endeavor. Software is closer to research than to manufacturing. Once you have a manufacturing process, you can figure out the critical path, the flow of materials, and about how many widgets you can make in a given period of time. That’s not true of software—or, in many cases, writing. This list of famous, failed software projects should humble anyone attempting such a project.

Ensuring that a project, like a proposal, gets done on time is simply quite hard (which is part of the reason we’re in business: we solve that problem). But it can be done, and we work to do it, and one way is by ensuring that we don’t waste our clients’ time.

We don’t call ourselves artists, at least in this domain, but, as Joe Fassler says, “Great Artists Need Solitude.” Writers need solitude. The best work gets done in chunks of undisturbed time—for Neal Stephenson, those chunks need to be about four hours, which sounds pretty close to the way we write.


* People are often surprised that we get hired by organizations that have full-time grant writers already on staff. But this is actually quite common.

Shorter Deadlines Are Sometimes Better for Organizations and Grant Writers

It seems intuitive that having more time to complete a task would result in a better final product. But in grant writing—and other fields—that’s sometimes not the case.

The reason is simple: more time sometimes allows organizations to edit their proposals into oblivion or let everyone contribute their “ideas,” no matter how poorly conceived or how poorly the ideas fit the proposal. We’ve been emphasizing these issues a lot recently, in posts like “On the Importance of Internal Consistency in Grant Proposals” and “The Curse of Knowledge in the Proposal World,” because consistency is incredibly important yet hard to describe concisely. Good proposals, like good novels, tend to emerge from a single mind that is weaving a single narrative thread.

The same person who writes the initial proposal should ideally then be in charge of wrangling all comments from all other parties. This isn’t always possible because the grant writer is often under-appreciated and has to accept conflicting orders from various stakeholders elsewhere in the organization. One advantage we have as consultants is that we can impose internal deadlines on clients that might otherwise tend towards disorder, requiring a single consolidated set of comments on each draft proposal. Sometimes that also makes clients unhappy, but the systems we’ve developed are in place to improve the final work product and increase the likelihood of the client being funded.

Short deadlines, by their nature, tend to reduce the ability of everyone to pour their ideas into a proposal, or for a proposal to be re-written once or repeatedly by committee. If the organization is sufficiently functional to stay focused on getting the proposal submitted, regardless of what else may be occurring, the proposal may turn out better because it’ll be more consistent and decision makers won’t have too much time to futz with it.

You’ve probably heard the cliché, “Too many cooks spoil the broth.” It exists for a reason. You may also have seen baseball games in which delays give the coach enough time to think himself into a bad pitching or hitting change. Although every writer needs at least one editor, a single person should be responsible for a proposal and should also have the authority and knowledge necessary to say “No” when needed.

On the importance of internal consistency in grant proposals

Grant writing, like most artistic pursuits, is an essentially solitary endeavor. No matter how many preliminary group-think planning meetings or discussions occur, eventually one person will face a blank monitor and contemplate an often cryptic, convoluted RFP.*

As a consequence of being written by a single person, most proposal first drafts are fairly internally consistent. A grant writer is unlikely to call the person in charge of the proposed initiative “Program Director” in one section and “Project Director” in another, or randomly use client/participant/student interchangeably. Inconsistencies, however, tend to emerge as the proposal goes through various drafts to get to the submission draft.

Let’s say three readers edit the first draft: Joe doesn’t like chocolate, MaryLou doesn’t like vanilla and Sally doesn’t like ice cream. Joe’s edits might change Program Director to Project Coordinator for some arcane reason, but only in some sections, while the other readers may make similar changes, some of which might be valid and some capricious. As the proposal goes through the remaining drafts, these inconsistencies will become embedded and confusing, unless the grant writer is very careful to maintain internal consistency; a change on page 6 has to be made on pages 12, 15, and 34. Even if the grant writer is careful, as she revises the drafts, it will become harder and harder for her to spot these problems because earlier drafts become entangled with later ones.

Inconsistencies often crop up in project staffing, for example. Most proposals have some combination of threaded discussions of what the project staff is going to do, along with a staffing plan (which usually includes summary job descriptions), organization chart, line-item budget, budget narrative, and/or attached actual position descriptions. If the staffing plan lists three positions, but the budget includes four and the budget narrative five, it’s “Houston, we have a problem” time. To a funding agency reviewer, these inconsistencies will stand out like neon signs, even if the grant writer can no longer see them. While some inconsistencies probably don’t matter much, some could easily be “sink the ship” errors.

In our consulting practice, we typically only prepare three drafts: the first, second and final or submission draft. We also provide clients with drafts in both Word and Acrobat, and we strongly suggest that only the Acrobat version be given to the reader list. This enables our contact person to return a single revised Word version and control the internal editing process.

But, like many of our suggestions, this is often ignored, so the final edited version we get from clients often has these various consistency problems in terms of both language and formatting. We overcome these by having the final draft flyspecked by one of our team members who has not closely read previous drafts. We also carefully compare the final draft to RFP requirements with respect to section headers, outline format, required attachments and so on. Nonetheless, we aren’t perfect and sometimes a sufficiently altered proposal can’t be effectively made consistent again.

Here’s another technique we often suggest to our clients to ferret out inconsistencies in language and formatting in final drafts: give the draft to someone who has good reading/writing skills but has never read the proposal and has no direct knowledge of the project concept, the services provided by your agency, or the RFP. For this person ignorance is strength. A retired uncle or aunt who taught high school English is perfect for this role. Such a reader will not only spot the inconsistencies, but will also likely find logic errors and so on.

Still, it’s important to complete this process well before the deadline. The closer the deadline looms, the more you risk either blowing the deadline or creating worse problems for yourself. A day or two before the deadline is a poor time for serious changes; we’ve seen numerous clients attempt them anyway, and drastic last-minute changes rarely turn out well.


* This assumes you haven’t made the mistake of parceling out different proposal sections for different people to write—as is said, a camel, not a horse, will inevitably result from this dubious practice.

The Curse of Knowledge in the Proposal World

Being too knowledgeable can actually hurt your proposal.

At first glance that seems wrong: Isn’t knowing more better than knowing less? Does anyone want to hire a web developer who says he doesn’t know how databases work? In most situations these questions have obvious answers, but in writing, knowing too much can be a hindrance rather than a help because you’ll assume that the reader has information the reader doesn’t actually have.

You’ll know so much that you’ll assume others know what you do. You’re a wizard. But non-wizards haven’t spent years studying your arcane subject, and they need extra mental scaffolding to understand it. This problem is even worse when you’re on a team of wizards, and you’re surrounded by other technical experts. You’ll begin to subconsciously think that everyone knows what you know (certain fields, like medicine, seem particularly subject to this problem).

We’ve read numerous proposals, provided by clients, that are riddled with internal acronyms, knowledge, and arcane systems. Readers don’t automatically know that your CBO will interface with the BSSG to commit to TCO improvements. Readers won’t automatically know that the HemiSystem is clearly better than the Vaso Company’s product. Readers need to build up to knowledge of the BSSG and HemiSystem.

The vast majority of proposals are read by non-wizards, and even peer-reviewed proposals are often also read by non-peers. “Peer” can be surprisingly vague (this is especially dangerous in writing National Institutes of Health (NIH), National Science Foundation (NSF), or Small Business Innovation Research (SBIR) proposals). A technical “peer” may still be far enough away from a particular problem area to not know the nuances of the specific proposal topic area. It’s often better to err on the side of more clarifying explanation rather than less.

There is no easy cure for this problem. Awareness helps—hence this post; we’re trying to make you a better writer and improve your life—but isn’t perfect. Feedback helps but also isn’t perfect. Wizards also tend to ignore the value of non-wizard feedback—if you’re not part of the guild, you don’t know enough to contribute—and that can create an echo chamber.

One strategy: give a proposal written by a wizard to an intelligent non-wizard and ask them to read it and mark confusing places, or stop when they stop understanding. If the reader stops midway through the abstract, there’s likely a problem. Another strategy is to have a non-subject area expert write the proposal—that, in essence, is what we do, and what many journalists do. We’re not experts in orthopedic surgery, or construction skills, or medical device development (to name three subjects we’ve worked in) and we don’t pretend to be. But we are experts at organizing information and telling stories. We’re experts in acquiring specialized knowledge and organizing that knowledge. That is itself a distinct skill and it’s one we have.

Even we can be susceptible to the curse of knowledge, however, and we watch for it in our proposals. You should watch for it in yours, and you should read experts at translating specialized knowledge into lay terms. Physicist Brian Greene is famously good at this, and books like The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory are excellent.

We’re also not the first to notice the curse-of-knowledge problem: Steven Pinker discusses it in The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century. Should you meet anyone cursed with knowledge, give them Pinker’s book. But although you can give a book, you can’t force someone to learn. That has to come from within. People who don’t read aren’t committed to knowledge. That’s just the way it is.

Cultural Sensitivity, Cultural Insensitivity, and the “Big Bootie” Problem in Grant Writing

This post is going to start in an incredibly boring fashion and then twist; first, the boring part: virtually every human and social service proposal, regardless of the target population, should at least nod to cultural sensitivity and related matters. Many RFPs specifically require applicants to address how project staff will be trained in cultural sensitivity and diversity to provide what is usually termed “culturally appropriate and specific services.”

But sometimes the impulse towards cultural sensitivity can go terribly wrong.

For one example of “cultural sensitivity gone wrong” check out “‘Bootie’ problem at CMS? Mom says offensive question went too far” or “Wrong Answer To The Wrong Question About A Big Bootie On High School Biology Test.” Both concern a question on a high school test about genetics:

“LaShamanda has a heterozygous big bootie, the dominant trait. Her man Fontavius has a small bootie which is recessive. They get married and have a baby named LaPrincess,” the biology assignment prompts students.

The assignment then continues to ask, “What is the probability that LaPrincess will inherit her mama’s big bootie?”
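(For the curious: setting the unfortunate wording aside, the underlying genetics question is a routine Mendelian cross. Here’s a minimal sketch of the arithmetic in Python—the variable names are ours, purely for illustration—crossing a heterozygous parent with a homozygous recessive one:)

```python
from itertools import product

mother = ["B", "b"]  # heterozygous: one dominant allele, one recessive
father = ["b", "b"]  # homozygous recessive

# The four equally likely allele pairings (a Punnett square, flattened):
crosses = list(product(mother, father))

# Any pairing containing the dominant "B" allele expresses the dominant trait.
dominant = [pair for pair in crosses if "B" in pair]

probability = len(dominant) / len(crosses)
print(probability)  # 0.5 -- a 50% chance of inheriting the dominant trait
```

So the intended answer is 50%, regardless of how the question was dressed up.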

Here at Seliger + Associates, we don’t have any more details about the story apart from what we see in the media, and it would not shock us if this story is a hoax or if there is more going on than what appears in these news blurbs.* The more you know about the media the more skeptical you should be of any given story.

Nonetheless, let’s take this at face value and attempt to imagine what might have been going through the teacher’s mind: first off, the teacher said the worksheet “had been passed down to her by other teachers,” which indicates that she might not have looked closely at it. Since I’ve taught plenty of college classes, I can vouch for an instructor’s desire to use what’s been tested and teach efficiently. Secondly, though, she’s probably been hearing discourse, through mandated professional development and elsewhere, about cultural sensitivity and incorporating non-dominant or non-Anglo cultures into her teaching for her entire career.

We’re not trying to defend the teacher, but we are saying that her thinking may be understandable, even if the execution is misplaced. Her conundrum, if it exists, can be stated simply: Where does cultural sensitivity end and cultural appropriation or cultural insensitivity begin?

We have no idea, and neither do most people, because each case has to be judged one by one. We don’t have a pithy answer to this conundrum. The need for introducing concepts around cultural sensitivity is real, but so is the danger of being offensive, either inadvertently or, conceivably, advertently. In the proposal world, the easiest way to avoid this problem is by praising and promising cultural sensitivity training without specifying what that will mean on the ground, which can help grant writers avoid obvious gaffes. As a grant writer you don’t want to introduce a big bootie-style problem into your proposal, but you also can’t ignore funders’ requirements. These requirements can sometimes lead to mistakes like the one described in the news articles above.


* Which often happens; it’s not uncommon for contemporary novels, like Tom Perrotta’s Election, Anita Shreve’s Testimony, or Tom Wolfe’s The Bonfire of the Vanities, to exploit the gap between shallow media understanding of an event and deeper understanding of an event.

Speech Codes, Microaggression and Grant Writing: Words that Shouldn’t (and Should) be Used in Proposals

One of the most unfortunate changes in the academic world since I left the warm bosom of the University of Minnesota in the Great Frozen North over 40 years ago is the rise of so-called “hate speech codes.” These Orwellian codes purport to regulate speech to prevent “hate speech,” as defined by the local campus Thought Police, and thus avoid dreaded microaggressions. This is pretty rich to someone who started at the U of M in 1968 during the height of campus free speech demonstrations regarding an essay, the title of which—”The Student as ________“— I can no longer put in print because of changing speech mores.*

George Orwell presaged the decline of real meaning in his 1946 essay “Politics and the English Language,” which is a must-read for any grant writer.

In grant writing, there’s a strict, albeit unwritten, speech code that budding grant writers would be wise to learn. Here are some words and concepts to avoid—or use—in grant writing and why:

  • Bureaucracy: The bureaucrats who typically read and score proposals might be offended if they’re reminded that they actually are bureaucrats and not saintly givers of OPM (other people’s money). Jake likes the word “bureaucrat,” which I find very annoying when I have to edit it out. By the way: don’t use the term “OPM,” either!
  • Victim: Never characterize the recipient of whatever human service you’re writing about as a “victim,” which is now seen as pejorative. For example, a homeless person is “experiencing” homelessness and a drug-addled teen is “living with the scourge of addiction.” They are not victims of their situation.
  • Ex-offenders: Never refer to a formerly incarcerated person as an ex-offender. The term now in use is “returning citizen.” To me it sounds like they got back from a cruise, but who am I to blow against the wind?
  • Win: If someone is characterized as “winning,” this implies a loser—and we can’t have losers in grant writing. Like grade school soccer in some precincts, all players are winners and get a trophy (dodge ball is out). You can, however, use the hoary, but acceptable “win-win”, or even better “win-win-win” phraseology to summarize the wonderful world that will exist in the afterglow of project funding and implementation.
  • Guardian: “Guardian” is a legal term and should be avoided. Instead, when writing about at-risk children and youth, it’s best to always refer to “parents/caregivers” rather than just “parents,” since many of them live in the ever-popular “single-parent household.” Parents/caregivers implies an extended “family constellation” (another great grant phrase that should be used) that is somehow looking after the interests of the young person, even though dad’s disappeared, mom’s incarcerated, but will soon be a returning citizen, and grandma’s “living with a disability.”
  • Disabled, and So On: No one is disabled. Instead, as above, they’re “living with a disability” or even better, “living with a condition of disability”. Why use four words when six will do? They can also be “differently abled.” Similarly, no one is blind, they “live with a visual impairment,” no one is deaf, they “live with a hearing impairment.”
  • Infected: People are not infected with HIV, but are rather “HIV positive,” or in shorthand, “HIV+”. This puts a positive spin on things, don’t you think? Or, I suppose you could try, “person of HIVness.” Phrases like “living on the down low” are acceptable, however. So is MSM (“men who have sex with men.”)
  • Of Color: Shorthand for minority residents is “residents of color.” Obviously, don’t say it the other way around!
  • Ethnic Capitalization: In a laundry list of ethnic groups living in a target area, do this: African American, Hispanic or Latino (Latino generally preferred in CA and the southwest), Asian and white.
  • Partnership/Collaboration: Every project is going to be implemented by a partnership or collaborative, even if it isn’t. Usually it isn’t.
  • She/he: It’s always “she/he” and “her/his,” not the other way around. Draw your own conclusion.
  • LGBTQ: The “Q” stands for “questioning” or “queer,” depending on your point of view, and has recently been added to the catchall LGBT, for sexual orientation/gender identity. The whole gender identity issue may throw my “she/he” convention into a cocked hat. Maybe I should start using “she/he/not sure” instead.
  • Poor: No one is ever poor; a person or family might be “economically disadvantaged” or “low income.” Describing the world in terms of “advantage” and “disadvantage” is good; contrasting “economically disadvantaged residents” with their “affluent, privileged” neighbors is particularly good.
  • Career Ladder: Any job training or education effort should lead to a “career-ladder job” with “living-wage potential.”

I could go on, as there are lots more examples, but I, of course, have to finish the proposal draft I’m working on. This list may be updated as we think of more examples.


* Camille Paglia and I are the last people alive who remember the real ’60s left, which bears only a passing resemblance to and shared name with the current left:

My essays often address the impasse in contemporary politics between ‘liberal’ and ‘conservative,’ a polarity I contend lost its meaning after the Sixties. There should be an examination of the way Sixties innovators were openly hostile to the establishment liberals of the time. In today’s impoverished dialogue, critiques of liberalism are often naively labeled ‘conservative,’ as if twenty-five hundred years of Western intellectual history presented no other alternatives.

The unsolvable standardized data problem and the needs assessment monster

Needs assessments tend to come in two flavors: one basically instructs the applicant to “Describe the target area and its needs,” and the applicant chooses whatever data it can come up with. For most applicants that’ll be some combination of Census data, local Consolidated Plan, data gathered by the applicant in the course of providing services, news stories and articles, and whatever else they can scavenge. Some areas have well-known local data sources; Los Angeles County, for example, is divided into eight Service Planning Areas (SPAs), and the County and United Way provide most data relevant to grant writers by SPA.

The upside to this system is that applicants can use whatever data makes the service area look worse (looking worse is better because it indicates greater need). The downside is that funders will get a heterogeneous mix of data that frequently can’t be compared from proposal to proposal. And since no one has the time or energy to audit or check the data, applicants can easily fudge the numbers.

High school dropout rates are a great example of the vagaries of data work: definitions of what constitutes a high school dropout vary from district to district, and many districts have strong financial incentives to avoid calling any particular student a "dropout." The GED situation in the U.S. makes dropout statistics even harder to understand and compare: if a student drops out at age 16 and earns a GED at 18, is he a dropout or a high school graduate? The mobility of many high-school-age students makes it harder still, as does the advent of charter schools, online instruction and the decline of the neighborhood school in favor of open enrollment policies. There is no universal way to measure this seemingly simple number.*

The alternative to the "do whatever" system is for the funder to say: You must use System X in manner Y. The funder gives the applicant a specific source and says, "Use this source to calculate the relevant information." For example, the last round of YouthBuild funding specified the precise Census tables for employment and poverty statistics. Every applicant had to use "S2301 EMPLOYMENT STATUS" and "S1701 POVERTY STATUS IN THE PAST 12 MONTHS," per page 38 of the SGA.

The SGA writers forgot, however, that not every piece of Census data is available (or accurate) for every jurisdiction. Since I've done too much data work for too many places, I've become very familiar with the "(X)" in American FactFinder tables—which indicates that the requested data is not available.
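For readers who pull Census data programmatically rather than through FactFinder's web interface, the same problem shows up as annotation markers in downloaded tables. The snippet below is a minimal sketch of screening rows for unavailable data before citing them in a needs assessment; the row contents, field names, and the exact set of missing-value markers are all hypothetical, invented for illustration rather than taken from any actual Census download:

```python
# Hypothetical sketch: screen Census-style table rows for unavailable
# data before relying on them. The rows and field names below are
# invented; real values would come from a Census table download, where
# unavailable figures are flagged with annotations such as "(X)".

MISSING_MARKERS = {"(X)", "N", "-", "None"}

def usable_rows(rows, required_fields):
    """Split rows into (usable, flagged) by whether every required
    field holds a real value rather than a missing-data marker."""
    usable, flagged = [], []
    for row in rows:
        if any(str(row.get(field)) in MISSING_MARKERS for field in required_fields):
            flagged.append(row)
        else:
            usable.append(row)
    return usable, flagged

rows = [
    {"NAME": "County A", "S2301_unemployment_rate": "9.4"},
    {"NAME": "County B", "S2301_unemployment_rate": "(X)"},  # not available
]
good, bad = usable_rows(rows, ["S2301_unemployment_rate"])
# County A is usable; County B gets flagged for manual follow-up.
```

The point of a check like this is simply to find the "(X)" jurisdictions before the deadline, not after, so the applicant can explain the gap in the proposal rather than discover it in the review comments.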

In the case of YouthBuild, the SGA also specified that dropout data be gathered using a site called Edweek. But dropout data can't really be standardized, for the reasons I only began to describe in the third paragraph of this post (I stopped there to make sure you don't kill yourself from boredom, which would leave a gory mess for someone else to clean up). As local jurisdictions experiment with charter schools and online education, the data in sources like Edweek is only going to become more confusing—and less accurate.

If a YouthBuild proposal loses a few need points because of unavailable or unreliable data sources, or data sources that miss particular jurisdictions (as Edweek does), it probably won't be funded, since an applicant needs an almost perfect score to get a YouthBuild grant. We should know: we've written at least two dozen funded YouthBuild proposals over the years.

Standardized metrics from funders aren’t always good, and some people will get screwed if their projects don’t fit into a simple jurisdiction or if their jurisdiction doesn’t collect data in the same way as another jurisdiction.

As often happens at the juncture between the grant world and the real world, there isn’t an ideal way around this problem. From the perspective of funders, uniform data requirements give an illusion of fairness and equality. From the perspective of applicants trapped by particular reporting requirements, there may not be a good way to resolve the problem.

Applicants can try contacting the program officer, but that’s usually a waste of time: the program officer will just repeat the language of the RFP back to the applicant and tell the applicant to use its best judgment.

The optimal way to deal with the problem is probably to explain the situation in the proposal and offer alternative data. That might not work. Sometimes applicants just get screwed, and not in the way most people like to get screwed, and there’s little to be done about it.


* About 15 years ago, Isaac actually talked to the demographer at the Department of Education who worked on dropout data. This was in the pre-Internet days, and he just happened to get the guy who worked on this stuff after multiple phone transfers. The demographer explained why true, comprehensive dropout data is impossible to gather nationally, and some of his explanations have made it into this blog post.

No one ever talks to people who do stuff like this, so when they find an interested party, they're often eager to chat about the details of their work.


More RFP Looney Tunes, This Time from the Centers for Medicare & Medicaid Services Health Care Innovation Award Program

Having been a grant writer since before the flood, I should not be flummoxed by a hopelessly inept RFP. I wasn’t flummoxed by the recently completed Centers for Medicare & Medicaid Services (CMS) Health Care Innovation (HCI) Awards Round Two process, but I was impressed by the sheer madness of it.

This Funding Opportunity Announcement (FOA, which is CMS-speak for “RFP”) was exceptionally obtuse and convoluted. I should expect this from an agency that uses 140,000 treatment reimbursement codes, apparently including nine codes for injuries caused by turkeys.

The HCI FOA was 41 single-spaced pages, which is fairly svelte by federal standards—but, in addition to the usual requirements for an abstract, project narrative, budget and budget narrative, it also included links to templates for a Financial Plan, Operational Plan, Actuarial Certification and—my personal favorite—the Executive Overview. The Financial Plan was a fiendishly complex Excel workbook, while the Operational Plan and Executive Overview were locked Word files.

Since the Word documents were locked, spell check and find/replace didn’t work in the text input boxes. Every change had to be made manually. Charmingly, the Operational Plan template had no place to insert the applicant’s name or contact info. So when the file is printed for review, which I’m sure it will be, and gets dropped on the floor with several other proposals, which is possible, there’ll be no way to tell which Operational Plan is which.

This could be a problem in an Operational Plan.

My vote for the most fabulously mis-titled form is the "Executive Overview." Remember: a one-page abstract was also required, so an Executive Overview seemed redundant—until I realized it was 13 single-spaced pages, with tons of inscrutable drop-down menus and fixed-length text input boxes. It seems that CMS is confused as to the meaning of "overview."

The Executive Overview was really another project narrative, disguised as a form. If one double-spaced the Executive Overview, it would be about 26 pages long. Although the FOA nominally allowed a 50-page project narrative, the length of the project narrative was effectively much shorter because of convoluted instructions that required the project narrative file to include other documents. Our project narrative ended up at 35 double-spaced pages—not all that much longer than the so-called Executive Overview.

This FOA also included four “innovation categories” that were obtuse and mostly interchangeable. The FOA required that the selected innovation category be listed four times, once in the abstract, twice in the project narrative and again in the Overview. Since the categories were confusing at best, our client changed their selection a couple of times during the drafting process, which meant it had to be changed in four different places each time.

The grant request amount had the same problem, except that it was also included in the Financial Plan, budget narrative, cover letter and Actuarial Certification, as well as the abstract, project narrative and Overview. So when the budget changed—which it inevitably did—each change had to be made in seven places to maintain internal consistency.

CMS, of course, never thought to link the various templates so that global changes could be made. But then again, why would they? After all, the authors of this FOA don’t write proposals and aren’t concerned with simplifying the process, which brings me back to the nine categories of turkey injury treatment. I wonder who keeps stats on turkey injuries. I would like to meet the GS-13 in charge of domestic fowl attacks at the Department of Agriculture.