
More on Charities

A previous post linked to a Wall Street Journal post on charities; now the paper has released a full article (which may not be accessible to non-subscribers) on how donors evaluate the usefulness of a program, arguing that donors are becoming more engaged in measurement. One thing is missing: statistics showing this is actually a trend rather than a collection of anecdotes. The article mostly describes evaluation practices and leans on hedge words:

Wealthy people and foundations sometimes hire philanthropy consultants to help them gauge a charity’s effectiveness. But other donors who seek that kind of analysis usually have had to rely on guesswork or do it themselves, which makes it tough to figure out whether one approach to solving a problem is better than another.

“Sometimes” they hire consultants; other times they essentially use the hope-and-pray method. That’s not terribly different from how things have always been done. Most interesting, however, is a topic relevant to evaluations that we’ll return to below:

The problem is, it can be difficult — and expensive — to measure whether charitable programs are actually working, and most nonprofits aren’t willing to devote scarce resources to collecting such information.

Most federal programs have in effect chosen a tradeoff: they provide more money and almost no real auditing. Real auditing is expensive and generally not worthwhile unless a blogger or journalist snaps a picture of an organization’s Executive Director in a shiny new Ferrari. To really figure out what an organization is doing with $500,000 or $1,000,000 would cost so much in compliance that the audit would consume an appreciable portion of the grant; thus the hope-and-pray method becomes the de facto standard (more on that below).
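
To make that compliance arithmetic concrete, here’s a minimal sketch; the $150,000 audit figure is an assumption chosen purely for illustration, not a real quote:

```python
# Hypothetical illustration of how genuine auditing eats into a grant.
# The audit cost below is an assumed figure, not a real audit quote.

grant_sizes = [500_000, 1_000_000]
audit_cost = 150_000  # assumed cost of a genuinely thorough program audit

for grant in grant_sizes:
    share = audit_cost / grant
    print(f"${grant:,} grant: auditing consumes {share:.0%} of the award")

# Output:
# $500,000 grant: auditing consumes 30% of the award
# $1,000,000 grant: auditing consumes 15% of the award
```

At shares like those, most funders would rather hope and pray.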

The writers must also be pressed for space, or must not fully grok nonprofit evaluations, because they write:

Philanthropy advisers suggest first asking nonprofits about their goals and strategies, and which indicators they use to monitor their own impact. Givers should see how the charity measures its results both in the short term — monthly or quarterly — and over a period of years.

Measuring results isn’t a bad idea if it can be done, but such measurements often don’t occur precisely because they’re hard. Even when they do occur, you’re asking the organization to set its own goal markers, which makes them easy to set at very, ahem, modest levels. Set them higher and the measurement problems kick in.

If you’re going to decide whether an after-school program for middle-schoolers is effective, you’ll have to assemble a cohort, randomly divide it into those who receive services and those who don’t, and then follow everyone through much of their lives; in other words, you have to run a longitudinal study, which is expensive and difficult. That way, you’ll know whether the group that received services was more likely to graduate from high school, attend college, get jobs, and the like. But even if you divide the group in two, you can still end up with poisoned data: if you rely on those who present themselves for services, you’re often getting the cream of the high-risk, low-resource crop. And you still face numerous other confounding factors, like geography and culture.
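
Here’s a minimal simulation sketch of that selection problem; the population, the latent “motivation” variable, and the effect sizes are all made-up assumptions chosen to illustrate the statistics, not numbers drawn from any real program:

```python
# Hypothetical simulation: randomized assignment vs. self-selection.
# All numbers below are assumptions for illustration only.
import random

random.seed(42)

def graduated(motivation, got_program):
    """Graduation odds rise with underlying motivation; the program
    itself adds only a small true effect (5 percentage points)."""
    p = 0.5 + 0.3 * motivation + (0.05 if got_program else 0.0)
    return random.random() < p

N = 10_000
kids = [random.random() for _ in range(N)]  # latent motivation, 0..1

# Design 1: random assignment (the expensive longitudinal study).
random_treat = [graduated(m, True) for m in kids[: N // 2]]
random_ctrl = [graduated(m, False) for m in kids[N // 2 :]]

# Design 2: self-selection -- the most motivated kids show up for
# services, i.e., the cream of the high-risk/low-resource crop.
ranked = sorted(kids)
self_treat = [graduated(m, True) for m in ranked[N // 2 :]]
self_ctrl = [graduated(m, False) for m in ranked[: N // 2]]

def rate(outcomes):
    return sum(outcomes) / len(outcomes)

print("True program effect:     ~5.0 points")
print(f"Randomized estimate:     {100 * (rate(random_treat) - rate(random_ctrl)):.1f} points")
print(f"Self-selected estimate:  {100 * (rate(self_treat) - rate(self_ctrl)):.1f} points")
```

With random assignment the estimate lands near the true five-point effect; with self-selection, the naive treated-versus-untreated comparison roughly quadruples it, crediting the program for motivation the kids brought with them.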

The research can be far more costly than the project itself, and as much as donors dislike not knowing whether their money is effective, they’re going to like it even less if you spend 50–80% of the project budget on evaluating it. This is why the situation donors say they want to change is likely to persist regardless of what is reported.


EDIT: We wrote another, longer post on evaluations here.
