
Studying Programs is Hard to Do: Why It’s Difficult to Write a Compelling Evaluation

Evaluation sections in proposals are both easy and hard to write, depending on your perspective, because of their estranged relationship with the real world. The problem boils down to this: it is fiendishly difficult and expensive to run evaluations that will genuinely demonstrate a program’s efficacy. Yet RFPs act as though the 5–20% of a grant budget usually reserved for evaluation should be sufficient to run a genuine evaluation process. Novice grant writers who understand statistics and the difficulty of teasing apart correlation and causation, but who also realize they need to tell a compelling story to have a chance at being funded, are often stumped by this conundrum.

We’ve discussed the issue before. In Reading Difficult RFPs and Links for 3-23-08, we said:

* In a Giving Carnival post, we discussed why people give and firmly answered, “I don’t know.” Now the New York Times expends thousands of words in an entire issue devoted to giving and basically answers “we don’t know either.” An article on measuring outcomes is also worth reading, although the writer appeared not to have read our post on the inherent problems in evaluations.

That last link is to an entire post on one aspect of the problem. Now, The Chronicle of Higher Education reports (see a free link here) that the Department of Education has cancelled a study to track whether Upward Bound works.* A quote:

But the evaluation, which required grantees to recruit twice as many students to their program as normal and assign half of them to a control group, was unpopular from the start […] Critics, led by the Council for Opportunity in Education, a lobbying group for the federal TRIO programs for disadvantaged students, said it was unethical, even immoral, of the department to require programs to actively recruit students into programs and then deny them services.

“They are treating kids as widgets,” Arnold L. Mitchem, the council’s president, told The Chronicle last summer. “These are low-income, working-class children that have value, they’re not just numbers.”

He likened the study to the infamous Tuskegee syphilis experiments, in which the government withheld treatment from 399 black men in the late stages of syphilis so that scientists could study the ravages of the disease.

But Larry Oxendine, the former director of the TRIO programs who started the study, says he was simply trying to get the program focused on students it was created to serve. He conceived of the evaluation after a longitudinal study by Mathematica Policy Research Inc., a nonpartisan social-policy-research firm, found that most students who participated in Upward Bound were no more likely to attend college than students who did not. The only students who seemed to truly benefit from the program were those who had low expectations of attending college before they enrolled.

Notice, by the way, Mitchem’s ludicrous comparison of evaluating a program to the Tuskegee experiment: one would divide a group into those who receive afterschool services that may or may not be effective and a control group that wouldn’t be able to receive services at equivalent funding levels anyway. The other cruelly denied basic medical care on the basis of race. The two examples are so different in magnitude and scope as to make him appear disingenuous.

Still, the point is that our friends at the Department of Education don’t have the guts or suction to make sure a program they’ve spent billions of dollars on actually works. Yet RFPs constantly ask for information on how programs will be evaluated to ensure their effectiveness. The gold standard for doing this is exactly what the Department of Education wanted: take a large group, randomly split it in two, give one services and the other nothing, track both, and see whether there’s a significant divergence between them. But doing so is incredibly expensive and difficult. These two factors lead to a distinction between what Isaac calls the “proposal world” and the “real world.”
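Why is the gold standard so expensive? A toy simulation makes the point (all numbers here are invented for illustration, not drawn from Upward Bound data): even when a program genuinely raises a graduation rate by a few percentage points, any single randomized trial’s measured difference is noisy, which is what drives sample sizes, and budgets, up. A minimal sketch:

```python
import random
import statistics

random.seed(42)

def simulate_rct(n_per_arm=500, base_rate=0.70, true_effect=0.03):
    """One randomized trial: split a pool in two, serve one arm, track a
    yes/no outcome (say, graduation), and measure the observed difference."""
    control = [random.random() < base_rate for _ in range(n_per_arm)]
    treated = [random.random() < base_rate + true_effect for _ in range(n_per_arm)]
    return statistics.mean(treated) - statistics.mean(control)

# Repeat the trial many times: any single trial's observed difference bounces
# around the true 3-point effect, which is why detecting a modest program
# effect takes large samples and a large evaluation budget.
diffs = [simulate_rct() for _ in range(200)]
print(round(statistics.mean(diffs), 3))   # close to the true effect of 0.03
print(round(statistics.stdev(diffs), 3))  # per-trial sampling noise
```

With 500 students per arm, the trial-to-trial noise is roughly the same size as the effect being measured, so a single underpowered study can easily show nothing at all, even for a program that works.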

In the proposal world, the grant writer states that data will be carefully tracked and maintained, participants followed long after the project ends, and continuous improvements made to ensure midcourse corrections in programs when necessary. You don’t necessarily need to say you’re going to have a control group, but you should be able to state the difference between process and outcome objectives, as Isaac writes about here. You should also say that you’re going to compare the group that receives services with the general population. If you’re going to provide the ever-popular afterschool program, you should say, for example, that you’ll compare the graduation rate of those who receive services with the rate of those who don’t as one of your outcome measures. This is a deceptive measure, however, because those who are cognizant enough to sign up for services probably also have other things going their way, which is sometimes known as the “opt-in problem”: those who are likely to present for services are likely to be those who need them the least. This, however, is the sort of problem you shouldn’t mention in your evaluation section, because doing so will make you look bad, and the reviewers of applications aren’t likely to understand the issue anyway.
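The opt-in problem is easy to demonstrate with a toy simulation (the “support” score and all the numbers below are invented for illustration): if the same underlying advantage drives both signing up and graduating, a naive served-versus-unserved comparison will flatter even a program that does nothing at all. A sketch:

```python
import random
import statistics

random.seed(0)

# Made-up model of the "opt-in problem": each student has a background
# "support" score (motivation, family stability, etc.) that raises both the
# chance of signing up for the program AND the chance of graduating.
def simulate_opt_in(n=10_000, program_effect=0.0):
    grads_in, grads_out = [], []
    for _ in range(n):
        support = random.random()              # 0 = least supported, 1 = most
        signs_up = random.random() < support   # better-supported kids opt in
        p_grad = 0.4 + 0.4 * support + (program_effect if signs_up else 0.0)
        graduated = random.random() < p_grad
        (grads_in if signs_up else grads_out).append(graduated)
    return statistics.mean(grads_in) - statistics.mean(grads_out)

# Even when the program does literally nothing (program_effect=0.0), the
# opt-in group out-graduates the comparison group, purely from selection.
gap = simulate_opt_in(program_effect=0.0)
print(round(gap, 2))  # a clearly positive gap despite a useless program
```

This is exactly why the serve-versus-don’t-serve comparison reads well in a proposal while proving nothing, and why the only clean fix is the randomization the Department of Education tried to impose.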

In the real world of grants implementation, evaluations, if they are done at all, usually bear little resemblance to the evaluation section of the proposal, leading to vague outcome analysis. Since agencies want to get funded again, it is rare that an evaluation of a grant-funded human services program will say, more or less, “the money was wasted.” Rather, most real-world evaluations will say something like, “the program was a success, but we could sure use more money to maintain or expand it.” Hence the reluctance of someone like Mr. Mitchem to see a rigorous evaluation of Upward Bound—better to keep funding the program on the assumption that it probably doesn’t hurt kids and might actually help a few.

The funny thing about this evaluation hoopla is that even as one section of the government realizes the futility of its efforts to provide a real evaluation, another ramps up. The National Endowment for the Arts (NEA) is offering at least $250,000 for its Improving the Assessment of Student Learning in the Arts (warning: .pdf link) program. As subscribers learn, the program offers “[g]rants to collect and analyze information on current practices and trends in the assessment of K-12 student learning in the arts and to identify models that might be most effective in various learning environments.” Good luck: you’re going to run into the inherent problems of evaluations and the inherent problems of people like Mr. Mitchem. Between them, I doubt any effective evaluations will actually occur—which is the same thing that (doesn’t) happen in most grant programs.


* Upward Bound is one of several so-called “TRIO Programs” that seek to help low-income, minority, and/or first-generation students complete post-secondary education. It’s been around for about 30 years, and (shameless plug here) yes, Seliger + Associates has written a number of funded TRIO grants with stunningly complex evaluation sections.


Stuck on Stupid: Hiring Lobbyists to Chase Earmarks

A faithful Grant Writing Confidential reader and fellow grant writer, Katherine, sent an email wanting my take on a public agency hiring a lobbying firm to seek federal earmarks. For those not familiar with the term, it means getting a member of Congress to slip a favored local project into a bill, bypassing normal reviews and restrictions. The Seattle Times recently ran a nice article on the subject featuring our own Representative Jim McDermott, who is skilled at the art of earmarks. The only member of Congress I know of who doesn’t push earmarks is John McCain. For the rest of Congress, earmarks are a way of funneling money into often dubious projects, such as the infamous Bridge to Nowhere.

Back to the local school district where Katherine lives, which decided to hire a DC lobbying firm for $60K/year to get earmarks. She suspects this is a scam. I have no idea whether this particular lobbying firm is up to no good, but in my experience hiring lobbyists to chase earmarks will make the lobbyists happy and lead to lots of free lunches and dinners for public officials visiting DC to “confer” with their lobbyist and legislators, though it is unlikely to end with funding.

A small anecdote will demonstrate this phenomenon. About 20 years ago, when I was Development Manager for the City of Inglewood,* I was directed by the mayor via the city manager to contract with a particular DC lobbying firm to chase earmarks. Since the city manager and I knew this was likely a fool’s errand, we agreed to provide a token contract of $15K. I accompanied the mayor and a few others to DC for the requisite consultation with the firm. About 10 in the morning, we strolled from the Mayflower Hotel over to K Street, where all the lobbyists hang out, and were ushered into a huge conference room with a 25-foot-long table.

Over the next two hours or so, just about every member of the firm wandered in to opine on potential earmarks. Around 12:30, we all repaired to an expensive DC restaurant (is there any other kind?) for steaks and cocktails. We had a fine meal, and I met former Vice President Walter Mondale, who had morphed into a lobbyist himself and was taking his clients out for lunch. When I got back to Inglewood, I received an invoice from our lobbyist that exceeded the contract amount. Our contract paid for less than one meeting in DC and resulted in no earmarks. But I had a great time, since it is always fun to visit DC using somebody else’s money.

That experience schooled me on earmarks and about why Inglewood had gone about acquiring them in the wrong way. If a public agency wants to try for an earmark, the agency can do so just by contacting the chief field deputy for Senator Foghorn Leghorn. Congressional field deputies know all there is to know about the earmark process. If your representative is in a mood to support your project (e.g., needs help to get re-elected and wants to say they are standing up for schools), they will fall all over themselves directing their staff to push the earmark. If they don’t want to for some reason, all the lobbyists in the world won’t force the issue. In that situation, the school district might just as well use the money to buy lotto tickets in hopes of funding the project, rather than hiring a lobbyist. Furthermore, going through the congressional field office will avoid the EDGAR problems described below.

Another problem is that if you have almost all of the 535 members of Congress promoting various earmarks, the chances of your particular project being included are pretty slim. This is another reason we don’t recommend pursuing earmarks. If Katherine’s school district really wants to fund education projects, this is not the way to go about it. Instead, they should hire an experienced grant writing firm, like Seliger + Associates, to help them refine and prioritize project concepts, conduct grant source research, and start submitting high quality, technically correct proposals. If the concepts have merit, they will eventually be funded. The Department of Education and others provide billions of dollars in actual grant funds every year. This is a larger, more reliable source of funding than earmarks.

Finally, if an organization is lobbying, it can end up closing off grant funds. The “Education Department General Administrative Regulations” (EDGAR) govern grants and contracts made through the Department of Education, and they’re designed to prevent corruption, kickbacks, and the like. Subpart F, Appendix A, deals with lobbying. It says:

The undersigned certifies, to the best of his or her knowledge and belief, that:
(1) No Federal appropriated funds have been paid or will be paid, by or on behalf of the undersigned, to any person for influencing or attempting to influence an officer or employee of an agency, a Member of Congress, an officer or employee of Congress, or an employee of a Member of Congress in connection with the awarding of any Federal contract, the making of any Federal grant, the making of any Federal loan, the entering into of any cooperative agreement, and the extension, continuation, renewal, amendment, or modification of any Federal contract, grant, loan, or cooperative agreement.

And so on, which you can read if you’re a masochist. EDGAR basically means that an agency that pursues lobbying can end up screwing itself out of the much larger and more lucrative grant world.

Katherine has also found questionable math regarding the particular lobbyist’s probable effectiveness, and the lobbyist also makes the dubious claim that it has a “90% success rate.” But what does “success” mean in this context? Does it mean 90% of clients get some money? If so, how much? And from whom? And through which means? Seliger + Associates doesn’t keep “success” numbers for reasons explained in our FAQ. We constantly see grant writers touting their supposed success rates and know that whatever numbers they pitch are specious at best, for the reasons described in the preceding link.

Public agencies hiring lobbyists for earmarks is often a case of being stuck on stupid.


* “Inglewood always up to no good,” as 2Pac and Dr. Dre say in “California Love.”


FEMA Tardiness, Grants.gov, and Dealing with Recalcitrant Bureaucrats

The Federal Emergency Management Agency (FEMA)—the same guys who brought us the stellar job after Hurricane Katrina—issued the Assistance to Firefighters Grants program on what Grants.gov says is March 26, 2008. But the deadline was April 04, 2008, which is absurdly short by any standard, let alone those of a federal agency. I sent an e-mail to the contact person, Tom Harrington, asking if there was a typo. He responded: “No mistake. The Grants.gov posting was a little delayed. The application period for AFG actually started on March 3rd.” So, unless you have psychic powers, it is unlikely that you would have known about this opportunity. This “little delay” is for a program with $500,000,000 of funding. If anything has changed at FEMA since Katrina, it’s not obvious from my encounter with the organization; as President Bush said to FEMA’s chief after Katrina, “Brownie, you’re doing a heck of a job!”

I was curious about how and why this deadline foul-up occurred, leading to an e-mail exchange with Tom, who appears to be a master at not knowing about the programs for which he is the contact person. It’s instructive to contrast my experience with him against the one I had with the state officials I wrote about in Finding and Using Phantom Data. Bureaucrats come in a variety of forms, some helpful, like the ones who provided dental data to the best of their ability, and some not, such as Tom.

I replied to his e-mail and said, “Do you know who was responsible for the delay, or can you find that out? Three weeks is more than a ‘little’ delayed.” He gave me a wonderfully bureaucratic response: “I don’t know if there is anyone specific to blame; the process is to blame.” That’s rather curious, since processes don’t put grant opportunities on Grants.gov—people do, assuming that federal bureaucrats should be considered people. And even if the “process” is to blame, someone specific should change the process so problems like this one don’t recur. I replied: “If ‘the process is to blame,’ what will you do differently next year to make sure this doesn’t happen again?”

His response contradicted his earlier statement: “Assure that those responsible for the paperwork are informed that they are responsible for the paperwork.” In other words, someone is responsible for this year’s problem—but who is that person? I inquired: “I’m wondering who is responsible for the paperwork or who will be responsible for it.” And Tom responded: “As soon as the policy is written, we’ll know. At this time, there is no policy.” Notice how he didn’t answer my first question (who is responsible?) and instead used two clever constructions. He said that “we’ll know,” rather than naming himself or some specific person at FEMA; some nebulous “we,” with no particular individual attached to the group, will know. And the passive construction “there is no policy” avoids specifying a responsible person. Tom uses language to cloak the identity of whoever might be in charge of the FEMA policy regarding Grants.gov. The e-mail exchange went for another fruitless round before I gave up.*

If you were actually interested in finding the truth about who caused the delay, or how it will be avoided next year, you’d have to try to find out who is really in charge of the program, contact that person, and probably continue up the food chain when that person gives you answers similar to Tom’s. Normal people, however, are unlikely to ever try this, which is why Tom’s blaming of “the process” is so ingenious: he avoids offering any potential target. Someone caused this problem, and finding out who would probably take an enterprising journalist or an academic highly interested in the issue.**

Sadly, I’m not going to be that person, as I write this chiefly to show a) how bureaucracies work, which isn’t always in the positive way I described in “Finding and Using Phantom Data,” and b) why you should be cognizant of the potential drawbacks of Grants.gov. Regarding the former, if you need to find information from reluctant bureaucrats, you have to be prepared to keep trying to pin them down, or to become enough of a pest that they or their bosses would rather get rid of you by complying with your request than by stonewalling. If the bureaucrats at the health department from “Finding and Using Phantom Data” had been as unhelpful as Tom, I would’ve begun this process, because the data was more important than this program’s deadline.

This strange interlude in the Never Never Land of FEMA also tells us something important about Grants.gov: the primary website for notifying interested parties about government grants is only as good as the independent organizations that use it. Any fire department that depends on Grants.gov for announcements just got screwed. Despite the designation of Grants.gov as “a central storehouse for information on over 1,000 grant programs[…]”, and perhaps not surprisingly given FEMA’s past performance, that organization doesn’t appear interested in timeliness. Incidents like this explain Isaac’s wariness and skepticism toward Grants.gov, and why it, like so many government efforts, tends not to live up to its purpose. And when something goes wrong, whether it’s FEMA’s response to Hurricane Katrina or its propagation of information about grant programs, don’t be surprised if “the process is to blame.”

EDIT: You can see our follow-ups to this post in “FEMA Fails to Learn New Tricks With the Assistance to Firefighters Grant Program” and “FEMA and Grants.gov Together at Last.”


* Tolstoy wrote in the appendix to War and Peace (trans. Richard Pevear and Larissa Volokhonsky):

In studying an epoch so tragic, so rich in the enormity of its events, and so near to us, of which such a variety of traditions still live, I arrived at the obviousness of the fact that the causes of the historical events that take place are inaccessible to our intelligence. To say […] that the causes of the events of the year twelve are the conquering spirit of Napoleon and the patriotic firmness of the emperor Alexander Pavlovich, is as meaningless to say that the causes of the fall of the Roman Empire are that such-and-such barbarian led his people to the west […] or that an immense mountain that was being leveled came down because the last workman drove his spade into it.

You could say the same of trying to study the manifold tentacles and networks of the federal government, which is only moved, and then only to a limited extent, during a truly monumental and astonishing screw-up like Katrina, which is itself only a manifestation of problems that extend far backwards in time and relate to culture, incentives, and structure, but such failures are only noticed by the body politic at large during disasters.

** Even journalists get tired of fighting the gelatinous blob. As Clive Crook writes in The Atlantic, “Personally, and I speak admittedly as a resident of the District of Columbia, I find the encompassing multi-jurisdictional tyranny of inspectors, officers, auditors, and issuers of licenses—petty bureaucracy in all its teeming proliferation—more oppressive in the United States than in Britain, something I never expected to say.”


Rock Chalk, Jayhawk, KU! — Lessons from Basketball for Grant Writers

My daughter will graduate from the William Allen White School of Journalism & Mass Communications at the University of Kansas in May. Although I’m not much of a sports fan, over the past four years, I’ve learned to love Jayhawks basketball and was delighted to see the Jayhawks come back from a double-digit deficit last year to defeat Memphis at the storied Allen Fieldhouse, one of America’s true basketball shrines.

Last night, I watched with over 100 crazed KU alumni at a Seattle sports bar as “Super Mario” Chalmers sank a wild 3-pointer at the end of regulation play to send the Jayhawks into overtime against the Memphis Tigers in the NCAA championship game. This was an improbable feat, since KU was down 60-51 with 2:12 to go. KU went on to win, bringing the championship back to KU for the third time and sending thousands of delirious fans to celebrate on Massachusetts Street in downtown Lawrence.

KU’s spectacular comeback and victory got me thinking about how basketball relates to grant writing. Perhaps the most frequent questions I am asked are along the lines of, “What are my chances of getting a grant?” and “Who gets funded?” To the first, I invariably answer, “I’m a grant writer, not a fortune teller. Contact Miss Cleo.” My answer to the second is more complex, as it goes to the bedrock of people’s souls. Nonprofits are only as successful as their founders, board members, and staff. So asking who gets funded is like asking which high school student will get into an Ivy League school, which college grad will get into medical school, and which team will win a close basketball game. The answer, of course, is that whoever most wants whatever “it” is will likely achieve it, while the rest are left to wonder why.

Now, to get ready to win an NCAA basketball championship, it helps to have a squad of McDonald’s All-Americans and a great coach, just as having a chance to get into an Ivy or med school requires a 4.0 GPA and ultra-high test scores. Similarly, merely “wanting” a grant is necessary but not sufficient for nonprofits. The organization must have a 501(c)(3) Letter of Determination from the IRS, a clear charitable purpose that meets an identified need, be credible at delivering the service, have the necessary management and administrative infrastructure to manage the grant, be capable of identifying appropriate funding sources, and, of course, be able to write compelling and technically correct proposals that are submitted on time. In other words, want is just the start of the process: tenacity, combined with an ability to understand what makes a compelling proposal and organization, is also necessary to get proposals funded. As Randy Pausch said in his last lecture (available via YouTube or a Wall Street Journal article), “Brick walls are there for a reason. They let us prove how badly we want things.” He was talking about scientific achievement and life goals, but he could just as easily have been talking about grants or basketball.

At just over two minutes to go last night, tens of thousands of KU faithful and I felt another NCAA tournament slipping away. It seemed like Memphis wanted the title more than Kansas, but during the next seven minutes, including overtime, the Jayhawks stormed back and snatched the championship from Memphis. A scriptwriter could not have framed the turnaround better. During the four years I have followed the Jayhawks, they have always fielded great teams, but lost before getting to the Final Four. They never gave up believing in their dream, and last night simply wanted the championship more than Memphis. If you believe in your organization and the services it provides, and the need is clear, keep trying. Eventually, like the Jayhawks, you will succeed. In the words of the wonderfully named “Tubthumping,” by the one-hit-wonder band Chumbawamba, “I get knocked down / But I get up again, / You’re never going to keep me down.”

So, when the Department of Education, SAMHSA, or the Dubuque Community Foundation summarily rejects your proposal, get up, dust yourself off, polish the proposal, and find another funder. Like the Jayhawks, it may take some time. Like the Jayhawks, you’ll have to work hard if you want to win—almost no one last night saw the countless hours each player spends in the gym. In keeping with the rock theme, and to quote the Rolling Stones, “You can’t always get what you want, but if you try sometimes, well, you just might find you get what you need.” The Jayhawks wanted and needed an NCAA championship and got it. If you want funding for your organization, keep trying, develop a compelling game plan, and you may just get the funding you need.

In the meantime, chant “Rock Chalk, Jayhawk, KU” five times before you submit every proposal, and it will be just like being in Allen Fieldhouse before a Mizzou game.


Finding and Using Phantom Data in the Service Expansion in Mental Health/Substance Services, Oral Health and Comprehensive Pharmacy Services Under the Health Center Program

RFP needs assessments will sometimes request data that aren’t readily available or just don’t exist. The question then becomes what you, the grant writer, should do when caught between an RFP’s instructions and the reality of phantom data. When you can’t find the data, you’ll have to get creative.

The Service Expansion in Mental Health/Substance Services, Oral Health and Comprehensive Pharmacy Services Under the Health Center Program (see the RFP in a Word file here) presents a good example of this problem. The narrative section for “B. Oral Health Review Criteria” begins on page 44. Under “Review Criterion 1: Need,” “Subsection 2,” the RFP says “Applicant clearly describes the target population for the proposed oral health service, including […]” which goes to “Subsection c,” which says, “The oral health status and treatment needs of the target population (e.g., caries rate, edentulism, periodontal disease, fluoridation in community water, oral cancer).” Such data are not tracked nationally. To the extent anyone keeps data, they do so on a county-by-county or state-by-state basis, which can make finding the data hard—particularly for a service area that may not match up with a county or other jurisdictional boundary. Still, I had to answer the RFP, so I looked for information about oral health status online and could find little, if anything, through state, county, or city websites.

If you can’t find important data online, your next step is to contact whichever public officials might have it. The organization we worked for was in a state with dental health responsibility rolled into the Department of Health. Contact information was listed on the Department of Health’s website. So I called and e-mailed both the person at the state office and the person responsible for the organization’s county. Neither answered. I skipped the rest of 1.2.c until one of the state representatives replied—promptly, too!—with a Word file containing what data they had. While the information helped, and I cited what they offered, the statistics didn’t answer all of the named examples of health status and treatment needs. I still had an unfilled data gap.

This left two choices: say the data aren’t available, or write in generalities about the problems, extrapolating from what data are available. If you say the data aren’t there, you’ll likely score lower on the section than applicants who have data or who write in generalities. If you obfuscate and explain, there’s a chance you’ll receive some points. The latter is therefore almost always the better choice: discuss what data you have in generalities, tell anecdotes, appeal to organizational experience, and allude to known local health problems that don’t have specific studies backing them up. This usually means saying something to the effect of, “While specific data are not available for the target area, it can be assumed that…” and then continuing. I used a combination of these strategies, citing what data I had from the state and filling the gaps with generalities.

When you find requests for data, do everything you can to seek it, and don’t be afraid to contact public officials. If you still can’t find the data, construct as artful an explanation as you can and then move on. Chances are that if the RFP wants data so unusual that you can’t find it after a concerted effort, many other people won’t be able to find it either. You should also remember The Perils of Perfectionism: every hour you spend searching for data is an hour you’re not spending on other parts of the proposal. You shouldn’t invest hours and hours in finding trivial data; after you’ve made a reasonably strong effort to search the Internet and contact whoever you can, stop and move on. This is especially important because you might be searching for data that simply don’t exist, in which case they will never be found and you’re wasting time trying to find them. While it is fun to search for the last unicorn, you are more likely to find a horse with a cardboard horn strapped to her forehead than a mythical and elusive creature.