
Write proposals for grant reviewers, not your colleagues or bureaucratic peers

We’ve written many times in many ways that the golden rule of grant writing is “he who has the gold makes the rules”—and that means organizations that want to be funded should follow the funding guidelines as closely as possible. While “follow the funding guidelines” seems like an obvious point, a temptation frequently arises to write not to the funder and its reviewers, but to one’s peers in an organization, real or imagined. Don’t fall into this trap—your audience is always the reviewers.

That temptation arises when the writer or editor fears what their peers may think, or how their peers, supervisors, board members, city council members, etc., conceptualize the organization or project. Giving in and targeting that audience causes the writer to shift focus away from the proposal and funder at hand—often fatally. The writer may lose the ability to put the most important thing in the first sentence. The writer will forget that, even with grant reviewers, attention is a valuable, rare resource:

You don’t have your reader’s attention very long, so get to the point. I found it was very difficult to get even really smart businesspeople to get to the point. Sometimes it was because they really couldn’t tell you what the point was.

We can, and will, tell you what the point is—which we do whenever we write a proposal. We try to use the reader’s attention as best we can. But when the audience shifts from the reviewer to some other audience, the coherence and quality of the proposal often drops. Don’t do this.

(As you may imagine, we’ve seen many examples in which clients forgot to write for the reviewer, but we can’t cite specifics here. Nonetheless, if you find yourself thinking, “What is the Board going to think about this description?” instead of “What is the funder going to think about this?”, you are entering the danger zone.)


Yours is not the only organization that isn’t worried about long-term grant evaluations

Ten years ago, in “Studying Programs is Hard to Do: Why It’s Difficult to Write a Compelling Evaluation,” we explained why real program evaluations are hard and why the overwhelming majority of funders don’t demand them for grant-funded programs; instead, they want cargo cult evaluations. Sometimes true evaluations or honest follow-up data for programs like YouthBuild are actively punished:

As long as we’re talking about data, I can also surmise that the Dept. of Labor is implicitly encouraging applicants to massage data. For example, existing applicants have to report on the reports they’ve previously submitted to the DOL, and they get points for hitting various kinds of targets. In the “Placement in Education or Employment” target, “Applicants with placement rates of 89.51% or higher will receive 8 points for this subsection,” and for “Retention in Education or Employment,” “Applicants with retention rates of 89.51% or higher will receive 8 points for this subsection.” Attaining these rates with a very difficult-to-reach population is, well, highly improbable.

That means a lot of previously funded applicants have also been. . . rather optimistic with their self-reported data.

To be blunt, no one working with the hard-to-serve YouthBuild population is going to get 90% of their graduates in training or employment. That’s just not possible. But DOL wants it to be possible, which means applicants need to find a way to make it seem possible / true.

So. That brings us to a much more serious topic, in the form of “The Engineer vs. the Border Patrol: One man’s quest to outlaw Customs and Border Protection’s internal, possibly unconstitutional immigration checkpoints,” which is a compelling, beautiful, and totally outrageous read. It is almost impossible to read that story and not come away fuming at the predations of the Border Patrol. Leaving that aspect aside, however, this stood out to me:

Regarding Operation Stonegarden, the DHS IG issued a report in late 2017 that was blunt in its assessment: “FEMA and CBP have not collected reliable program data or developed measures to demonstrate program performance resulting from the use of more than $531.5 million awarded under Stonegarden since FY 2008.”

Even in parts of government where outcomes really matter, it’s possible to have half a billion dollars disappear, and, basically, no one cares. If FEMA can lose all that money and not even attempt to measure whether the money is being spent semi-effectively, what does that communicate to average grant-funded organizations that get a couple of hundred thousand dollars per year?

We’re not telling you to lie in evaluation sections of your proposal. But we are reminding you, as we often do, about the difference between the real world and the proposal world. What you do with that information is up to you.