Tag Archives: RFPs

Further Information Regarding the Department of Redundancy Department

Last week I discussed repetitive RFP questions and where they spring from, and this week, in honor of the RFPs themselves, I’ll go over the issue from the angle of SAMHSA’s “Targeted Capacity Expansion Program for Substance Abuse Treatment and HIV/AIDS Services (Short Title: TCE/HIV)” RFP (warning: .pdf link). It’s a model of modern inanity and also rich in the oddities that can make grant writing difficult or rewarding. The narrative allows 30 single-spaced pages to answer six pages of questions, and the RFP keeps reiterating its focus on client outreach and pretreatment services. These concepts are pounded in over and over again. Nonetheless, “Section C: Proposed Implementation Approach” asks in its first bullet, on page 25:

Describe the substance abuse treatment and/or outreach/pretreatment services to be expanded or enhanced, in conjunction with HIV/AIDS services, and how they will be implemented.

I then describe how this will be accomplished in great, scrupulous detail, including the outreach to be used and why it will be effective. Yet the penultimate bullet on page 26 says:

Provide a detailed description of the methods and approaches that will be used to reach the specified target population(s) of high risk substance abusers, their sex partners, and substance abusing people living with AIDS who are not currently enrolled in a formal substance abuse treatment program. Demonstrate how outreach and pretreatment projects will make successful referrals to substance abuse treatment.

This is part of the substance abuse treatment and/or outreach/pretreatment services to be expanded and, as such, it has already been answered. This repetition seems to be a symptom of using last year’s RFP to build this one, as the last two bullets are new, and I’m willing to bet that whoever wrote this RFP didn’t realize that the question had already been implicitly asked in the first bullet. Regardless, if the RFP writer wanted to ask this question explicitly, it should have been incorporated into the first bullet instead of making the applicant refer back to that bullet while also reiterating what it said. Isaac warned you not to submit an exact copy of a proposal you submitted the year before without making sure that it conforms to this year’s guidelines. If only RFP writers would give us the same courtesy.

Nonetheless, they often don’t, which leads to repeated questions and ideas. This example isn’t as egregious as some, but it is still bad enough to merit a post—and advice on what to do.

The best way of dealing with a problem like this is to note that you’ve already answered the question, but be sure to name where you answered it; in the TCE/HIV example, one might note that the question in the penultimate bullet has already been answered under the first bullet of Section C. This gives specific directions to the exact place where the question has already been answered and avoids having to repeat the same thing verbatim. If the proposal had no page limits, one could write “as previously noted in Section C…” and then copy, paste, and rewrite the passage slightly to prevent the reader from falling into a coma. Granted, such rewrites might cause the writer to fall into a coma, but I’m not sure this would negatively affect quality.

Be aware that this odd habit of RFPs asking repetitive questions is distressing in its ubiquity. It happened in the Service Expansion in Mental Health/Substance Services, Oral Health and Comprehensive Pharmacy Services application and in numerous other RFPs. Don’t fear those questions and try not to become overly frustrated by them. Just don’t ignore them. No matter how seemingly asinine a question in an RFP is, you must answer it anyway. In an older post I mentioned two forms of the golden rule:

The golden rule cliché says, “Do unto others as you would have them do unto you.” The almost-as-old, snarky version goes, “He who has the gold makes the rules.” If you want to make the rules about who gets funded, you have to lead a federal agency or start a software company, make more money than some countries’ GDP, and endow a foundation.

You want the gold and therefore have to follow the rules of those who distribute the filthy lucre. So answer the repetitive questions, no matter how silly they are. When you’ve written enough proposals, you’ll realize that RFP writers make mistakes like the ones listed above all the time. Your job, as the grant writer, is to work around those mistakes, even when an RFP asks the exact same question twice. In one final bout of silliness, the TCE/HIV RFP’s confidentiality section on page 29 asks:

Describe the target population and explain why you are including or excluding certain subgroups. Explain how and who will recruit and select participants.

Compare this to Section A, “Statement of Need,” and the first bullet point, which begins: “Describe the target population […]” Why they need to know the target population twice is a fine question. Or, I could say, explain why they need to know the target population again. There, I’ve just mirrored the problem by asking the same question twice, so I guess it’s time for me to apply for a job as a RFP writer for the Department of Education.

Yet one other structural problem bothers me: page five tells the applicant which groups must be targeted. These groups are so broad that they encompass an enormous swath of the population, which would be a fine subject for another post. Worse, the question in the confidentiality section comes after SAMHSA has already dictated who must be served, and then asks who will be served and why, even though the RFP has already asked and SAMHSA has already answered. I’m guessing applicants will swear they’ll serve only eligible populations because they want the money, even though they can’t say so; applicants who want the money answer earnestly. Too bad RFP writers don’t have to respond to the drivel they all too often emit, as there might be fewer outright bad RFPs issued if they did.

Studying Programs is Hard to Do: Why It’s Difficult to Write a Compelling Evaluation

Evaluation sections in proposals are both easy and hard to write, depending on your perspective, because of their estranged relationship with the real world. The problem boils down to this: it is fiendishly difficult and expensive to run evaluations that will genuinely demonstrate a program’s efficacy. Yet RFPs act as though the 5–20% of most grant budgets usually reserved for evaluations should be sufficient to run a genuine evaluation process. Novice grant writers who understand statistics and the difficulty of teasing apart correlation and causation, but who also realize they need to tell a compelling story to have a chance at being funded, are often stumped by this conundrum.

We’ve discussed the issue before. In Reading Difficult RFPs and Links for 3-23-08, we said:

* In a Giving Carnival post, we discussed why people give and firmly answered, “I don’t know.” Now the New York Times expends thousands of words in an entire issue devoted to giving and basically answers “we don’t know either.” An article on measuring outcomes is also worth reading, although the writer appeared not to have read our post on the inherent problems in evaluations.

That last link is to an entire post on one aspect of the problem. Now, The Chronicle of Higher Education reports (see a free link here) that the Department of Education has cancelled a study to track whether Upward Bound works.* A quote:

But the evaluation, which required grantees to recruit twice as many students to their program as normal and assign half of them to a control group, was unpopular from the start […] Critics, led by the Council for Opportunity in Education, a lobbying group for the federal TRIO programs for disadvantaged students, said it was unethical, even immoral, of the department to require programs to actively recruit students into programs and then deny them services.

“They are treating kids as widgets,” Arnold L. Mitchem, the council’s president, told The Chronicle last summer. “These are low-income, working-class children that have value, they’re not just numbers.”

He likened the study to the infamous Tuskegee syphilis experiments, in which the government withheld treatment from 399 black men in the late stages of syphilis so that scientists could study the ravages of the disease.

But Larry Oxendine, the former director of the TRIO programs who started the study, says he was simply trying to get the program focused on students it was created to serve. He conceived of the evaluation after a longitudinal study by Mathematica Policy Research Inc., a nonpartisan social-policy-research firm, found that most students who participated in Upward Bound were no more likely to attend college than students who did not. The only students who seemed to truly benefit from the program were those who had low expectations of attending college before they enrolled.

Notice, by the way, Mitchem’s ludicrous comparison of evaluating a program to the Tuskegee experiment: one divides a group into those who receive afterschool services that may or may not be effective and a control group that couldn’t have received services at equivalent funding levels anyway. The other cruelly denied basic medical care on the basis of race. The two examples are so different in magnitude and scope as to make him appear disingenuous.

Still, the point is that our friends at the Department of Education don’t have the guts or suction to make sure the program they’ve spent billions of dollars on actually works. Yet RFPs constantly ask for information on how programs will be evaluated to ensure their effectiveness. The gold standard for doing this is to do exactly what the Department of Education wants: take a large group, randomly split it in two, give one services and the other nothing, track both, and see if there’s a significant divergence between them. But doing so is incredibly expensive and difficult. These two factors lead to a distinction between what Isaac calls the “proposal world” and the “real world.”
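For the statistically curious, here’s a minimal sketch in Python of the design just described: randomly split a hypothetical cohort, “serve” half, and test whether graduation rates diverge. Every number in it is invented purely for illustration; no RFP asks for code, and a real study would track real students for years rather than simulating outcomes.

```python
# A toy version of the "gold standard" evaluation: randomly split a
# cohort, serve half, and test whether outcomes diverge significantly.
# All numbers are hypothetical, invented purely for illustration.
import random
from statistics import NormalDist

random.seed(42)

# Hypothetical cohort of 400 students, randomly assigned to two groups.
students = list(range(400))
random.shuffle(students)
treatment, control = students[:200], students[200:]

# Hypothetical outcomes (1 = graduated), as if tracked over years.
treated_grads = sum(1 for _ in treatment if random.random() < 0.72)
control_grads = sum(1 for _ in control if random.random() < 0.65)

def two_proportion_z(x1, n1, x2, n2):
    """Two-tailed two-proportion z-test on success counts."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_z(treated_grads, 200, control_grads, 200)
print(f"treatment: {treated_grads / 200:.1%}, control: {control_grads / 200:.1%}")
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 would suggest a real effect
```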

In the proposal world, the grant writer states that data will be carefully tracked and maintained, participants followed long after the project ends, and continuous improvements made to allow midcourse corrections in programs when necessary. You don’t necessarily need to say you’re going to have a control group, but you should be able to state the difference between process and outcome objectives, as Isaac writes about here. You should also say that you’re going to compare the group that receives services with the general population. If you’re going to provide the ever-popular afterschool program, you should say, for example, that you’ll compare the graduation rate of those who receive services with the rate of those who don’t as one of your outcome measures. This is a deceptive measure, however, because those who are cognizant enough to sign up for services probably also have other things going their way, which is sometimes known as the “opt-in problem”: those who are likely to present for services are likely to be those who need them the least. But this is the sort of problem you shouldn’t mention in your evaluation section, because doing so will make you look bad, and the reviewers of applications aren’t likely to understand the issue anyway.
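Here’s a toy simulation of that opt-in problem, again with invented numbers: a program with zero actual effect still looks good whenever the same underlying motivation drives both enrollment and graduation.

```python
# Toy simulation of the "opt-in problem": a program with zero effect
# still shows participants outperforming non-participants, because the
# same latent motivation drives both enrollment and graduation.
# All parameters are invented for illustration.
import random

random.seed(0)

population = []
for _ in range(10_000):
    motivation = random.random()                # latent trait in [0, 1]
    enrolls = random.random() < motivation      # motivated students opt in
    graduates = random.random() < 0.4 + 0.5 * motivation  # program plays no role
    population.append((enrolls, graduates))

def grad_rate(group):
    return sum(1 for _, g in group if g) / len(group)

participants = [p for p in population if p[0]]
others = [p for p in population if not p[0]]

print(f"participants:     {grad_rate(participants):.1%}")  # ~73%
print(f"non-participants: {grad_rate(others):.1%}")        # ~57%
```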

In the real world of grants implementation, evaluations, if they are done at all, usually bear little resemblance to the evaluation section of the proposal, leading to vague outcome analysis. Since agencies want to get funded again, it is rare that an evaluation study of a grant-funded human services program will say, more or less, “the money was wasted.” Rather, most real-world evaluations will say something like, “the program was a success, but we could sure use more money to maintain or expand it.” Hence the reluctance of someone like Mr. Mitchem to see a rigorous evaluation of Upward Bound—better to keep funding the program on the assumption that it probably doesn’t hurt kids and might actually help a few.

The funny thing about this evaluation hoopla is that even as one section of the government realizes the futility of its efforts to provide a real evaluation, another ramps up. The National Endowment for the Arts (NEA) is offering at least $250,000 for its Improving the Assessment of Student Learning in the Arts (warning: .pdf link) program. As subscribers learn, the program offers “[g]rants to collect and analyze information on current practices and trends in the assessment of K-12 student learning in the arts and to identify models that might be most effective in various learning environments.” Good luck: you’re going to run into the inherent problems of evaluations and the inherent problems of people like Mr. Mitchem. Between them, I doubt any effective evaluations will actually occur—which is the same thing that (doesn’t) happen in most grant programs.


* Upward Bound is one of several so-called “TRIO Programs” that seek to help low-income, minority, and/or first-generation students complete post-secondary education. It’s been around for about 30 years, and (shameless plug here) yes, Seliger + Associates has written a number of funded TRIO grants with stunningly complex evaluation sections.

The Danger Zone: Common RFP Traps

When first looking at a RFP, it is a good idea to remember Robbie the Robot from Lost in Space (the ’60s TV show, not the terrible movie remake) shouting “Danger, Will Robinson!”* because when you open a RFP, you’re entering THE DANGER ZONE.

Those innocent-looking RFPs are filled with traps. For example, if you are responding to a RFP that was previously issued and you have a proposal from a past submission, you will typically find that the funder has changed the RFP slightly, often in a subtle way. This might be by changing the order of questions, using different headers or outline patterns, requiring a specific font, and the like. Since such changes usually are not substantive, I assume this is done to trap novice or lazy applicants who just copy the previous proposal and change the date. It may be that program officers are basically bored and have nothing better to do, so they find cheap thrills in this, like rabbits racing across the road in front of a car. So, even if you submitted the same project concept for the same program last year, make sure that you carefully go through the RFP to find these public sector equivalents of “Easter eggs.” It’s also a good idea, of course, to update the data, polish your text, and find all those typos that slipped through the last editing process.

Another RFP trap is repetitive questions. It is not unusual to find the same question, more or less, asked several times. Whether this is an intentional trick or an artifact of committee members writing different RFP sections, such questions can be a real challenge for the grant writer, particularly if there are page limits. So, what to do? If there is room, simply rewrite the first answer over and over again. I know this results in a pretty boring read, but occasionally, such as with some HUD programs, reviewers may only read particular sections. The alternative, which we use when there are space limitations, is to refer back to the original answer (e.g., As noted above in Criterion 1, Section 6.a and Criterion 2, Section 2.c, Citizens for a Better Dubuque has extensive existing referral relationships with the full range of youth providers, which will be utilized to provide project participants with services beyond the project scope. Wow, what a great proposal sentence! Feel free to steal it.). However you handle the problem, never ignore questions, as doing so runs the risk of missing points or having the proposal declared technically deficient and not scored at all.

Sometimes the RFP asks lots of obtuse questions but never explicitly asks what you plan to do or how you plan to do it. I know this seems incredible, but the Department of Education, for example, often has RFPs like this. In this case, pick any spot you like and insert the project description (e.g., Within the above context of how the Dubuque After School Enrichment Initiative is articulated with Iowa learning standards, the following describes how academic enrichment services will be delivered:). No, that is not a smiley face, just a colon followed by a closing parenthesis. If I were going to use an emoticon, it would have a frowny face to evoke the experience of reading RFPs.

One of my favorite RFP traps is finding different instructions for ordering responses in different parts of the RFP. For example, there may be a series of outlined questions, followed by a series of criteria that ask the same questions, more or less, but in a different order. Since, unlike Schrödinger’s cat, the proposal can only have one “state,” the grant writer has to pick one order to follow. Before plunging into the writing, it’s not a bad idea to contact the program officer to raise this conundrum. Unfortunately, even if you are able to find the program officer, your question will usually be met with either giggles or a cold “read the RFP, it’s all there.” In either case, you’re back to having to pick one of the two orders.

Finally, be afraid, be very afraid, of RFPs for newly minted programs, because the writers of RFPs for new programs usually have no idea what they want from applicants. We’ve been working, for example, on an $8 million proposal being submitted to a California state agency on behalf of a public sector client. The program is new and the RFP is a mess of conflicting guidance, hidden requirements, and so on. Since some aspects of the RFP were beyond even our amazing deductive abilities, after leaving several messages over a week, we finally got the program officer on the phone. He sheepishly admitted that they had “forgotten” to include some of the instructions but planned to see what they got in responses and fix the RFP next year. It was good to find an honest man in Sacramento, and we put the submission package together in the most logical manner we could. Hopefully the state agency will straighten out the RFP next year.

We have seen our approach used in subsequent RFPs before, so this is not impossible. We wrote the first HUD YouthBuild proposal funded in Southern California in response to the first funding round in 1993. Not surprisingly, the Notice of Funding Availability (NOFA: HUD-speak for RFP) was a complete nightmare, and we had to develop a response format more or less on our own after a number of unproductive calls to HUD. Fortunately, when the next YouthBuild NOFA was issued, it bore a remarkable resemblance to our submission in terms of how the proposals were to be organized. It is always fun to drag the bureaucracy toward enlightenment, no matter how hard the slog. YouthBuild moved to the Department of Labor in FY 2007, and, yes, we successfully made the transition by writing yet another funded YouthBuild proposal last year, bringing our total of funded YouthBuild proposals to a baker’s dozen or so and proving that the funding agency is largely irrelevant to the grant writing process.


* Robbie actually made his screen debut in the wonderful 1956 film Forbidden Planet.

RFP Absurdity and Responding to Narrative Questions

I’ve written about stylistically bad language from government RFPs, but more common than the outright bad is the silly, the coy, the euphemistic, and the ridiculous. Now comes a fine example: section 1.d. on page 30 of the California 21st Century Community Learning Centers (CCLC) – Elementary & Middle Schools narrative:

Explain how all organizations involved in your collaborative have experience or the promise of success in providing educational and related activities that will complement and enhance the academic performance, achievement, and positive youth development of students.

So you need either (1) experience or (2) the “promise of success.” In other words, your level of experience is irrelevant because you can have a lot or none. The RFP* could’ve just asked, “Are the organizations involved able to provide educational services and, if so, how?” RFPs, however, seldom use 13 easy-to-understand words when 36 words designed to obfuscate meaning are available.

The requirement quoted above is particularly egregious because it has only one answer. Is any applicant going to claim that their organizations don’t have the promise of success? Of course not! And what does “the promise of success” mean? To my mind, the answer is “nothing.” Orwell would be aghast at this and many other RFPs—in “Politics and the English Language” he finds examples where “The writer either has a meaning and cannot express it, or he inadvertently says something else, or he is almost indifferent as to whether his words mean anything or not.” I’ve not read a better concise description of RFPs.

Still, you’re writing a proposal, and thus your output can and perhaps even should reflect the document that guides your input. Unlike most forms of writing, where brevity is beautiful (Write Right**: “If I were limited to one rule of style, Omit Unnecessary Words would be the hands down winner”), grant applications encourage bad writing because you (a) need to fill space and (b) need to answer obfuscated questions fully and completely. The best way to do so is by parroting back variations on what the application writer expects, and the best way to avoid irritating a reviewer is by filling your proposal with muck and jargon.

This peculiar kind of poor writing is similar to the peculiar kind of speciousness Isaac discussed in Writing Needs Assessments: How to Make It Seem Like the End of the World. You write a narrative by sending back what you get in the RFP, and when you get garbage in, you usually reflect garbage out. Most RFPs are merely asking you variations on who, what, where, when, why, and how, while most proposals are merely variations on the answers to those questions. Remember that when you’re writing, and consider which aspect you should be addressing in response to each RFP question. The apparently difficult sentence I quoted above from the 21st CCLC can be simplified further to “Who’s going to carry out the program?” There. Nothing to fear. Novice grant writers are often intimidated by the jargon in RFPs, but that’s often just an artifact of bad writing rather than an indication of actual difficulty.

In Studio Executives, Starlets, and Funding, I wrote, “Sometimes the funder will want agencies with long track records, sometimes new agencies.” Now I can say that sometimes funders want both, as long as you can somehow justify your experience, or the virtue of not having any, in the proposal. If you come across a narrative demand like the one above, play the RFP’s game. It’s the only way to win.


* Before I get irate e-mails from eagle-eyed readers, I’ll note that the 21st CCLC is a Request For Applications (RFA), but I just call them all RFPs for simplicity’s sake.

** If I had to recommend just one book to aspiring writers, regardless of the kind of writing, it would be this one. It’s short, pithy, accurate, and will do more to improve most writers in less time than virtually any other book I know. If I had to recommend two, the second would be William Zinsser’s On Writing Well.

Phoenix Programs

I noted earlier in Zombie Funding that programs can dwindle from a huge amount of available money to virtually nothing, but they can also rise from the ashes like a phoenix. Isaac also commented on this phenomenon in Zombie Funding – Six Tana Leaves for Life, Nine for Motion.

Now I’ve seen a more recent example of the monster: last year the School-Based Student Drug-Testing Programs had $1,680,000 available for 12 awards; this year it’s got $12,750,000 for 85 awards, as the link demonstrates. (That works out to 85 awards averaging $150,000, versus last year’s 12 averaging $140,000, so the new money buys more grantees rather than bigger grants.) I’m not sure why the program got an extra $11 million, but it leads to another important but perhaps not obvious point: rejection of an application one year doesn’t necessarily mean you shouldn’t apply the next, as changes in the program might make you more likely to be funded. The vast amount of extra money allocated to the School-Based Student Drug-Testing Programs could be a reaction to a large number of highly qualified applicants.

Or it could be random, but the additional money available still makes the School-Based Student Drug-Testing Programs more attractive to anyone who applied or thought about applying previously.