Another piece of the evaluation puzzle: Why do experiments make people unhappy?

The more time you spend around grants, grant writing, nonprofits, public agencies, and funders, the more apparent it becomes that the “evaluation” section of most proposals is only barely separate in genre from mythology and folktales—yet most grant RFPs request evaluations that are, if not outright bogus, then at least improbable: they’re not going to happen in the real world. We’ve written quite a bit on this subject, for two reasons: one is intellectual curiosity, and the other is reassuring clients who worry that funders want a real-deal, full-on, intellectually and epistemologically rigorous evaluation (hint: they don’t).

That’s the wind-up to “Why Do Experiments Make People Uneasy?”, Alex Tabarrok’s post on a paper about how “Meyer et al. show in a series of 16 tests that unease with experiments is replicable and general.” Tabarrok calls the paper “important and sad,” and I agree, but the paper also reveals an important (and previously implicit) point about evaluation proposal sections for nonprofit and public agencies: funders don’t care about real evaluations because a real evaluation will probably make the applicant, the funder, and the general public uneasy. Not only do real evaluations make people uneasy, but most people don’t even understand how one works in a human-services organization, how to collect data, what a randomized controlled trial is, and so on.

There’s an analogous situation in medicine; I’ve spent a lot of time around doctors who are friends, and I’d love to tell some specific stories,* but I’ll say that while everyone is nominally in favor of “evidence-based medicine” as an abstract idea, most of those who superficially favor it don’t really understand what it means, how to do it, or how to make major changes based on evidence. It’s often an empty buzzword, like “best practices” or “patient-centered care.”

In many nonprofit and public agencies, the same dynamic applies to evaluations and effectiveness: everyone putatively believes in them, but almost no one understands them or wants real evaluations conducted. Plus, beyond that epistemic problem, even when an evaluation does show that a program works in a given circumstance (it usually doesn’t), the results don’t necessarily transfer to other circumstances. If you’re curious about why, Experimental Conversations: Perspectives on Randomized Trials in Development Economics is a good place to start—and this is the book least likely to be read, out of all the books I’ve ever recommended here. Normal people like reading 50 Shades of Grey and The Name of the Rose, not Experimental Conversations.

In the meantime, some funders have gotten word about RCTs. For example, the Department of Justice’s (DOJ) Bureau of Justice Assistance’s (BJA) Second Chance Act RFPs award bonus points for RCTs. I’ll be astounded if more than a handful of applicants even attempt a real RCT—for one thing, there’s not enough money available to conduct a rigorous one, which typically requires recruiting a large sample and paying to track the control group over the long term. Whoever put the RCT language in this RFP probably wasn’t thinking about that real-world issue.
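To make the cost point concrete, here’s a back-of-the-envelope sketch (mine, not anything from the RFP) of the standard two-sample power calculation that drives RCT budgets. The formula and the effect-size value below are illustrative assumptions, not figures from the BJA program:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(effect_size: float, alpha: float = 0.05,
                        power: float = 0.80) -> int:
    """Normal-approximation sample size for a two-arm RCT:
    n per arm = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized effect size (Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # ~1.96 for a two-sided alpha of 0.05
    z_beta = z(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "small" program effect (d = 0.2, a common benchmark) needs
# 393 participants per arm—roughly 800 people to recruit, consent,
# randomize, and track for years, half of whom get no services.
print(sample_size_per_arm(0.2))  # → 393
```

The point isn’t the exact number; it’s that detecting the modest effects typical of human-services programs requires hundreds of tracked participants, which is where the money goes.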

It’s easy to imagine a world in which donors and funders demand real, rigorous evaluations. But they don’t. Donors mostly want to feel warm fuzzies and the status that comes from being fawned over—and I approve of those things too, by the way, as they make the world go round. Government funders mostly want to make Congress feel good, while cultivating an aura of sanctity and kindness. The number of funders who will make nonprofit funding contingent on true evaluations is small, and the number willing to pay for true evaluations is smaller still. And that’s why we get the system we get. The mistake some nonprofits make is thinking that the evaluation sections of proposals are for real. They’re not. They’re almost pure proposal world.


* The stories are juicy and also not flattering to some of the residency and department heads involved.