
Philanthropy is not being disrupted by Silicon Valley

The Atlantic writes that “Silicon Valley Has Disrupted Philanthropy.” A lovely article, except for one minor issue: Silicon Valley has not “disrupted” philanthropy. The evidence presented for the article’s thesis is an anecdote from a Boys & Girls Club, “a 2016 report about Silicon Valley philanthropy written by two women who run a consulting firm that works with nonprofits and donors” (we could write similar reports), and this:

The Silicon Valley Children’s Fund, which also works with foster youth, has contracted with a marketing firm that will help it “speak in the language of business and metrics,” Melissa Johns, the organization’s executive vice president, told me.

There are a few other anecdotes, too, though these anecdotes don’t even rise to the level of “How to Lie with Statistics.” The author, Alana Semuels, is likely correct that some nonprofits have learned to adjust their proposals to use the language of data and metrics. She’s also correct that “rising housing prices in Silicon Valley mean increased need for local services, and more expensive operations for nonprofits, which have to pay staff more so they can afford to live in the area.” But the solution to that is zoning reform, not philanthropy, as anyone who is data- and knowledge-driven will soon discover.

Still, it’s possible that philanthropists will eventually adopt the tenets of effective altruism en masse. But I doubt it. Some reasons for my doubt can be seen in “Foundations and the Future,” a 2008 post that has held up well, not because it was especially prescient but because it points to enduring features of human nature. In the ten years since I wrote that post, we’ve seen little substantive change in foundations. Other reasons can be seen in Kevin Simler and Robin Hanson’s book, The Elephant in the Brain: Hidden Motives in Everyday Life; the chapter on charity explains how most donors are chiefly interested in feeling good about themselves and raising their status in the eyes of their peers. Most donors don’t care deeply about effectiveness (although they do care about appearing to care about effectiveness), and caring deeply about effectiveness often invites blowback about donors being hard-hearted scrooges instead of generous benefactors. What do you mean, you want to audit all of our programs for effectiveness? You don’t just TRUST us? No one else wants to do this. Fine, if you must, you can, but I find it improper that you are so skeptical of our good works… you can see the youth we’re helping! They’re right here! Look into their eyes! You can tell me all you want about data, but I know better.

The real world of nonprofits and motivation is quite different from the proposal world. It’s also easier, far easier, to write about doing comprehensive cost-benefit analyses than it is to actually conduct epistemically rigorous ones. I know in part because I’ve written far more descriptions of cost-benefit analyses than have actually been performed in the real world.

It’s not impossible to do real evaluations of grant-funded programs; it’s just difficult and time-consuming. And when I say “difficult,” I don’t just mean “difficult because it costs a lot” or “difficult because it’s hard to implement.” I mean conceptually difficult. Very few people understand statistics well enough to design a true evaluation program. Statistics and regression analyses are so hard to get right that psychology and other social sciences are in the middle of a crisis over replication: many supposed “findings” in those fields are probably not true, or are artifacts of random chance. If you’d like to read about it, just Google the phrase “replication crisis,” and you’ll find no end of description and commentary.
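To make “due to random chance” concrete, here’s a toy simulation, a sketch of my own rather than anything drawn from Semuels’s article or the replication literature; the sample sizes, significance cutoff, and trial count are all arbitrary assumptions. It runs a thousand “studies” in which the treatment does nothing at all and counts how many clear the conventional significance bar anyway:

```python
import random
import statistics

# Toy sketch (illustrative assumptions throughout): run many "studies"
# on pure noise and count how often a naive two-sample comparison
# looks "significant" anyway.

random.seed(42)

def fake_study(n=30):
    """One "study": treatment and control drawn from the SAME
    distribution, so any observed difference is pure chance."""
    treatment = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = statistics.mean(treatment) - statistics.mean(control)
    # Standard error of the difference, computed by hand
    # (Welch-style) to stay dependency-free.
    se = (statistics.variance(treatment) / n
          + statistics.variance(control) / n) ** 0.5
    return abs(mean_diff / se)

studies = 1000
# |t| > 2 roughly corresponds to the conventional p < 0.05 cutoff
# at these sample sizes.
false_positives = sum(1 for _ in range(studies) if fake_study() > 2)

print(f"{false_positives} of {studies} null 'studies' look significant")
```

Roughly one in twenty of these null “studies” clears the bar, and that’s before p-hacking, flexible outcome definitions, or motivated analysts enter the picture; a nonprofit tracking a handful of outcome measures will stumble into an impressive-looking “effect” sooner or later.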

Medicine has seen similar problems, and John Ioannidis is the figure most associated with foregrounding them. In medicine, the stakes are particularly high, and even there, many published findings defy replication.

The point is that if even accomplished professors, who have a lot at stake in getting the data right, often do not or cannot design and implement valid, rigorous studies, it’s unlikely that many nonprofits will, either. And, on top of that, it’s unlikely that most donors actually want such studies (though they will say they do, as noted previously).

To be sure, lest my apparent cynicism overwhelm the point: I applaud the goal of more rigorously examining the efficacy of foundation-funded programs. I think effective altruism is a useful movement, and I’d like to see more people adopt it. But I’m also aware that whatever metrics are used to measure success will quickly be gamed by nonprofits, if they aren’t already. If a nonprofit hired me to write a whiz-bang report about how Numbers and Statistics show its program is a raging success, I’d take the job. I know the buzzwords and know just how to structure such a document. And if I didn’t do it, someone else would. A funder would need strong separation between the implementing organization, the evaluating organization, and the participants in order to have any shot at really understanding what a grant-funded program is likely to do.

It’s much easier for both nonprofits and funders to conduct cargo-cult evaluations, declare the program a success, and move on than it is to conduct a real, thorough evaluation that is likely to be muddled, show inconclusive results, and reduce the good feelings of all involved.* Feynman delivered “Cargo Cult Science” in 1974, long before The Elephant in the Brain, but I think he would have appreciated Simler and Hanson’s book. He knew, intuitively, that we’re good at lying to ourselves, especially when there’s money on the line.


* How many romantic relationships would survive radical honesty and periodic assessments by disinterested outside third parties? What should we learn from the fact that there is so little demand for such a service?
