
Philanthropy is not being disrupted by Silicon Valley

The Atlantic writes that “Silicon Valley Has Disrupted Philanthropy.” A lovely article, except for one minor issue: Silicon Valley has not “disrupted” philanthropy. The evidence presented for the article’s thesis is an anecdote from a Boys & Girls Club, “a 2016 report about Silicon Valley philanthropy written by two women who run a consulting firm that works with nonprofits and donors” (we could write similar reports), and this:

The Silicon Valley Children’s Fund, which also works with foster youth, has contracted with a marketing firm that will help it “speak in the language of business and metrics,” Melissa Johns, the organization’s executive vice president, told me.

There are a few other anecdotes, too, though they don’t even rise to the level of “How to Lie With Statistics.” The author, Alana Semuels, is likely correct that some nonprofits have learned to adjust their proposals to use the language of data and metrics. She’s also correct that “rising housing prices in Silicon Valley mean increased need for local services, and more expensive operations for nonprofits, which have to pay staff more so they can afford to live in the area.” But the solution to that is zoning reform, not philanthropy, as anyone who is data- and knowledge-driven will soon discover.

Still, it’s possible that philanthropists will eventually adopt the tenets of effective altruism en masse. But I doubt it. Some reasons for my doubt can be seen in “Foundations and the Future,” a 2008 post that was accurate but not especially prescient, because it points to features of human nature. In the ten years since I wrote that post, we’ve seen little substantive change in foundations. Other reasons can be seen in Robin Hanson and Kevin Simler’s book, The Elephant in the Brain: Hidden Motives in Everyday Life; the chapter on charity explains how most donors are chiefly interested in feeling good about themselves and raising their status in the eyes of their peers. Most donors don’t care deeply about effectiveness (although they do care about appearing to care about effectiveness), and caring deeply about effectiveness often invites blowback about donors being hard-hearted scrooges instead of generous benefactors. What do you mean, you want to audit all of our programs for effectiveness? You don’t just TRUST us? No one else wants to do this. Fine, if you must, you can, but I find it improper that you are so skeptical of our good works… you can see the youth we’re helping! They’re right here! Look into their eyes! You can tell me all you want about data, but I know better.

The real world of nonprofits and motivation is quite different from the proposal world. It’s also easier, far easier, to write about doing comprehensive cost-benefit analyses than it is to actually do epistemically rigorous ones. I know in part because I’ve written far more descriptions of cost-benefit analyses than have actually been performed in the real world.

It’s not impossible to do real evaluations of grant-funded programs—it’s just difficult and time-consuming. And when I say “difficult,” I don’t just mean “difficult because it costs a lot” or “difficult because it’s hard to implement.” I mean conceptually difficult. Very few people understand statistics well enough to design a true evaluation program. Statistics and regression analyses are so hard to get right that there’s a crisis going on in psychology and the other social sciences over replication—that is, many supposed “findings” in the social sciences are probably not true, or are artifacts of random chance. If you’d like to read about it, just Google the phrase “replication crisis,” and you’ll find a seemingly infinite amount of description and commentary.
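
To make the “due to random chance” point concrete, here is a minimal simulation sketch (in Python; this is not from the original post, and the sample sizes and number of outcomes are arbitrary assumptions). If a study measures enough outcomes on pure noise, some comparison will usually cross the conventional significance threshold and look like a “finding”:

```python
# Minimal sketch: how "significant" results emerge from pure noise.
# All parameters (50 subjects per group, 20 outcomes per study) are illustrative assumptions.
import random
import statistics

def noise_comparison(n=50):
    """Compare two groups drawn from the SAME distribution; any 'effect' is chance."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    return abs(diff / se) > 1.96  # roughly the conventional p < 0.05 cutoff

random.seed(1)
studies = 1000
outcomes_per_study = 20
false_positives = sum(
    any(noise_comparison() for _ in range(outcomes_per_study))
    for _ in range(studies)
)
print(f"{false_positives / studies:.0%} of no-effect studies still yield a 'significant' result")
```

With these assumptions, well over half of the studies of nothing produce at least one publishable-looking effect, which is a large part of the mechanism behind the replication problem.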

Medicine has seen similar problems, and John Ioannidis is the figure most associated with foregrounding them. In medicine the stakes are particularly high, and even there, many published findings defy replication.

The point is that if even accomplished professors, who have a lot at stake in getting the data right, often do not or cannot design and implement valid, rigorous studies, it’s unlikely that many nonprofits will, either. And, on top of that, it’s unlikely that most donors actually want such studies (though they will say they do, as noted previously).

To be sure, lest my apparent cynicism overwhelm the point, I applaud the goal of more rigorously examining the efficacy of foundation-funded programs. I think effective altruism is a useful movement and I’d like to see more people adopt it. But I’m also aware that the metrics used to measure success will quickly be gamed by nonprofits, if they aren’t already. If a nonprofit hired me to write a whiz-bang report about how Numbers and Statistics show their program is a raging success, I’d take the job. I know the buzzwords and know just how to structure such a document. And if I didn’t do it, someone else would. A funder would need strong separation between the implementing organization, the evaluating organization, and the participants in order to have any shot at really understanding what a grant-funded program is likely to do.

It’s much easier for both nonprofits and funders to conduct cargo-cult evaluations, declare the program a success, and move on than it is to conduct a real, thorough evaluation that is likely to be muddled, show inconclusive results, and reduce the good feelings of all involved.* Feynman wrote “Cargo Cult Science” in 1974, long before The Elephant in the Brain, but I think he would have appreciated Simler and Hanson’s book. He knew, intuitively, that we’re good at lying to ourselves—especially when there’s money on the line.


* How many romantic relationships would survive radical honesty and periodic assessments by disinterested, outside third parties? What should we learn from the fact that there is so little demand for such a service?


March Links: Reinventing Philanthropy, Bureaucrats in Action, Urbanism and Environment, Abstinence Education on Valentine’s Day, and More

* Google Finds It Hard to Reinvent Philanthropy. Seliger + Associates unsurprised.

* Bureaucrat acts like a jerk and attempts to silence smart guy. News at 11:00.

* Southwest Airlines pilot holds plane for murder victim’s family. Wow.

* Why you should never, ever use two spaces after a period.

* FYI: US manufacturing still tops China’s by nearly 46 percent, at least as measured by dollar output.

* The most amusing recent grant title: “Developing High-Throughput Assays for Predictive Modeling of Reproductive and Developmental Toxicity Modulated Through the Endocrine System or Pertinent Pathways in Humans and Species Relevant to Ecological Risk Assessment.” Say it three times fast.

* You Can’t Be Against Dense, Urban Development and Consider Yourself an Environmentalist.

* An important rap song: Julian Smith’s “I’m Reading a Book,” complete with bagpipes at the end.

* The Facebook fast, with lessons learned.

* What do twin adoption studies show?

* The quality of nonfiction versus fiction. We don’t think it matters much what grant writers read as long as they do read.

* Slate’s review of Tyler Cowen’s The Great Stagnation: “[...] The Great Stagnation makes an ambitious argument whose chief present advantage (and greatest eventual liability) is that it’s impossible to assess in real time.” This might be the most important book of the year, and at the very least it is dense with argument and novel thought.

* Why I don’t care very much about tablets anymore, from Jon Stokes, except I never did care about tablets in the first place. A sample:

A Google Image search turns up the above, quite typical picture of a scribe practicing his art. You’ll notice that the scribe’s desk contains two levels, where the topmost level holds an exemplar document and the bottom holds the document that he’s actually working on. The scribe in the picture could be a copyist who’s making a copy of the exemplar, or he could be a writer who’s using the top copy as a source or reference. Either way, his basic work setup is the same as my modern monitor plus keyboard setup, in that it’s vertically split into two planes, with the top plane being used for display and the bottom plane being used for input.

The key here is that the scribe’s hands aren’t in the way of his display, and neither are mine when I work at my desktop or laptop. My hands rest on a keyboard, comfortably out of sight and out of mind.

With a tablet, in contrast, my rather large hands end up covering some portion of the display as I try to manipulate it. In general, it’s less optimal to have an output area that also doubles as an input area. This is why the mouse and keyboard will be with us for decades hence—because they let you keep your hands away from what you’re trying to focus on.

When you write a full proposal on a tablet, let me know. And not just as a stunt to say, “I could do it,” either.

* Why publishers are scared of ebooks—the standard reasons and Amanda Hocking as symbol.

* In an amusing twist, Texas publishes its Abstinence Education Services RFP on Valentine’s Day.


What Does a Grant Proposal Look Like Exactly? 13 Easy Steps to Formatting a Winning Proposal

I was having dinner with some friends who are consultants for a multinational company, and they wanted to know who handles the “graphics” in our proposals. They are used to preparing elaborate business presentations and were startled to learn that the proposals we prepare are usually simple text documents. That got me thinking about how proposal styles have come full circle and we’ve gone Back to the Future, thanks largely to digital submission requirements.

When dinosaurs walked the earth and I started writing proposals in the early 1970s, I literally wrote them—longhand on legal pads. When I was finished, I would either type them myself or, if I was lucky enough to be working for an agency that had a secretary, the proposals would be typed for me. In either case, the proposals more or less looked like ransom notes, with blotchy corrections (anyone old enough to remember Liquid Paper?) and virtually no formatting, except for using tabs and the hyphen key (-----) to create separator lines. The good news was that proposals were much shorter, since they were so hard to physically produce. Eventually I moved up the managerial food chain and earned a secretary with shorthand skills. I got pretty good at dictating proposals, but they still looked pretty much like high school term papers when typed.

Flash forward to more “thrilling days of yesteryear” (the phrase that opened every episode of my favorite TV show as a kid, The Lone Ranger), when we were starting our business in 1993 and PCs had come of age. We began producing fairly elaborate proposals, with color covers, pie charts, embedded org charts and flow diagrams (using Object Linking and Embedding technology), comb binding, and a professional appearance. We kept upgrading our color printers, and the proposals were getting pretty slick as we mastered the art of formatting.

Now enter The Time Tunnel with me again (another guilty TV pleasure from the ’60s) and emerge around 2001, when we ran into digital submissions. The Feds rolled out two different digital submission platforms, finally settling on grants.gov, while state/local agencies and foundations came up with endless variations. Given the vagaries of the diverse digital submission systems, however, we soon learned that there was little point in dressing up our proposals, since the chance of file corruption was simply too great. The formatting party stopped, and once again our proposals are simple text documents, stripped of the bells and whistles. Yes, I know Acrobat can be used to tart up proposals, but one dirty little secret is that most digital submissions are not reviewed digitally; they are printed and xeroxed—so much for saving trees—and Acrobat does not always faithfully reproduce the original formatting. This is a potential sink-the-ship problem when, for example, there are page limits.

So, in this age of digital submissions, what should a proposal look like? Simple and neat is the best approach. Here are some tips to make sure that your proposals are easy to read and look great:

  • Read the RFP carefully for formatting instructions and follow them precisely. For example, if the RFP says the proposal is to be double-spaced and does not make an exception for tables, double-space all tables, no matter how silly this looks. The Department of Education, for example, will often reject proposals for non-compliance with just such nitpicking instructions.
  • It is generally not a good idea to bind or staple proposals, unless otherwise directed in the RFP (e.g., sometimes a 3-ring binder will be required). Instead, fasten with a binder clip or rubber bands.
  • If you want to use a cover page, keep the fonts and colors subdued. An agency logo is a nice touch, but skip the photos unless they are highly evocative.
  • Make sure you put the agency name and program title/RFP number in the header on each page. Make sure they are right.
  • Avoid odd fonts and stick with Times New Roman when space is an issue or Arial if you have lots of room. The newer Microsoft Word defaults, Calibri (for body text) and Cambria (for headings), are probably also okay.
  • Learn to love outlines. If the RFP has an outline format, reproduce it. If not, develop a simple outline format of your own, indenting .2 or .25 inches as the outline descends. It is easy to do this in Word by using paragraph styles: make Outline 1 “A” with no indent, Outline 2 “1” with a .2-inch indent, Outline 3 “a” with a .4-inch indent, and so forth (see the sketch after this list).
  • Never use the tab key or multiple spaces for indentation purposes. Just set up additional paragraph styles to align text paragraphs with outline styles (see above).
  • Use tables, rather than charts, unless you are positive the reviewers will not be xeroxing the proposal. Also, it is generally not worth the time to format charts. Instead, put your time into research and writing.
  • Avoid bold, ALL CAPS, underlining, and other forms of text screaming, with the exception of bolding/underlining the start of outlined/bulleted sections. If your words are good enough, the reader will get the idea, and, if they’re not, all the bolding in the world won’t matter.
  • We prefer justified text, but some may disagree on stylistic grounds.
  • Do not try to squeeze in extra words by condensing the character spacing or narrowing the margins. This will simply make the proposal hard to read, which is not a good idea, since you want reviewers to savor every golden word. We almost never use less than one-inch margins all around or tighten the text.
  • Place footnotes at the bottom of each page or on a literature citation page, which is easily done in Word.
  • Finally, buy a sequential numbering stamp and number each page. This way, when the reviewers drop the proposal on the floor, it can be reassembled. This also helps when creating a table of contents.
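
For those who build proposal shells programmatically rather than directly in Word, here is a minimal sketch of the outline-style scheme mentioned in the list above, using the python-docx library. The style names and indents mirror the tip; the sample headings and file name are placeholders, and this is an illustration rather than a description of our actual workflow:

```python
# Sketch of the outline paragraph styles described above, using python-docx.
# Style names and indents follow the tip; the sample headings are placeholders.
from docx import Document
from docx.enum.style import WD_STYLE_TYPE
from docx.shared import Inches

doc = Document()
outline_indents = {"Outline 1": 0.0, "Outline 2": 0.2, "Outline 3": 0.4}

for name, indent in outline_indents.items():
    style = doc.styles.add_style(name, WD_STYLE_TYPE.PARAGRAPH)
    style.base_style = doc.styles["Normal"]              # inherit the body font
    style.paragraph_format.left_indent = Inches(indent)  # indent deepens as the outline descends

doc.add_paragraph("A. Project Description", style="Outline 1")
doc.add_paragraph("1. Needs Assessment", style="Outline 2")
doc.add_paragraph("a. Target Area Demographics", style="Outline 3")
doc.save("proposal_outline_demo.docx")
```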

There you have it—13 easy steps to proposal formatting. Simple, clean, and consistent are your best friends with formatting, because they help the formatting get out of the way of what matters: the text. Now, go forth and write.


More on Charities

A previous post linked to a Wall Street Journal post on charities; now the paper has released a full article (which may not be accessible to non-subscribers) on how donors evaluate the usefulness of a program, arguing that donors are becoming more engaged in measurement. One thing is missing: statistics showing that this is actually part of a trend, rather than just a collection of anecdotes. The article is mostly descriptive of the practices around evaluating effectiveness, and it uses hedge words:

Wealthy people and foundations sometimes hire philanthropy consultants to help them gauge a charity’s effectiveness. But other donors who seek that kind of analysis usually have had to rely on guesswork or do it themselves, which makes it tough to figure out whether one approach to solving a problem is better than another.

“Sometimes” they hire consultants; other times they essentially use the hope-and-pray method. That’s not terribly different from how things have always been done. Most interesting, however, is a topic relevant to evaluations that we’ll comment on more later:

The problem is, it can be difficult — and expensive — to measure whether charitable programs are actually working, and most nonprofits aren’t willing to devote scarce resources to collecting such information.

Most federal programs have in effect chosen a tradeoff: they provide the money and do almost no real auditing. This is because real auditing is expensive and generally not worthwhile unless a blogger or journalist takes a picture of an organization’s Executive Director in a shiny new Ferrari. To really figure out what an organization is doing with $500,000 or $1,000,000 would cost so much in compliance that it would come to represent an appreciable portion of the grant: thus, the hope-and-pray method becomes the de facto standard (more on that below).

The writers are also either pressed for space or don’t fully grok nonprofit evaluations, because they write:

Philanthropy advisers suggest first asking nonprofits about their goals and strategies, and which indicators they use to monitor their own impact. Givers should see how the charity measures its results both in the short term — monthly or quarterly — and over a period of years.

Measuring results isn’t a bad idea if it can be done, but the reason such measurements often don’t occur is precisely that they’re hard. Even when they do occur, you’re asking the organization to set its own goal markers—which makes them easy to set at very, ahem, modest levels. If you set them at higher levels, the measurement problems kick in.

If you’re going to decide whether an after-school program for middle-schoolers is effective, you’ll have to get a cohort together, randomly divide them into those who receive services and those who don’t, and then follow them through much of their lives—in other words, you have to run a longitudinal study, which is expensive and difficult. That way, you’ll know whether the group that received services was more likely to graduate from high school, attend college, get jobs, and the like. But even if you divide the group in two, you can still have poisoned data: if you rely on those who present for services, you’re often getting the cream of the high-risk/low-resource crop. And there are numerous other confounding factors, like geography and culture.
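
As a rough illustration of what “randomly divide them” means in practice, here is a minimal sketch; the cohort size, student IDs, and seed are placeholders, not anything from a real program:

```python
# Minimal sketch of random assignment for an evaluation cohort.
# Cohort size and student IDs are illustrative placeholders.
import random

def assign_cohort(student_ids, seed=42):
    """Randomly split a cohort into treatment (receives services) and control."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible and auditable
    shuffled = list(student_ids)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

cohort = [f"student_{i:03d}" for i in range(200)]
treatment, control = assign_cohort(cohort)
print(len(treatment), "receive services;", len(control), "do not")
# Years later: compare graduation, college attendance, employment, etc. across the two groups.
```

The split itself takes a minute; following both groups for years afterward is what makes the study expensive.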

The research can be far more costly than the project itself, and as little as donors like not knowing whether their money is effective, they’re going to like it even less if you spend 50–80% of the project budget on evaluating it. This is why the situation donors say they want to change is likely to persist regardless of what is reported.
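
To put the post’s own numbers together, here is a back-of-the-envelope sketch (nothing more) using the grant sizes mentioned above and evaluation consuming 50–80% of the project budget:

```python
# Back-of-the-envelope: evaluation's share of the budget, using the figures cited above.
for grant in (500_000, 1_000_000):
    for share in (0.50, 0.80):
        evaluation = grant * share
        services = grant - evaluation
        print(f"${grant:,} grant, {share:.0%} to evaluation -> "
              f"${evaluation:,.0f} for evaluation, ${services:,.0f} left for services")
```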


EDIT: We wrote another, longer post on evaluations here.