Tag Archives: Research

Charrettes live: Cite them as a planning tool in your proposal

Ten years ago we advised that grant writers and nonprofit Executive Directors “know your charrettes!” (the exclamation point is in the original title). Since then, though, we’ve heard less about charrettes than we really should. Until this week, that is, when charrettes hit me from two separate angles. The first is from Steven Berlin Johnson’s book Farsighted: How We Make the Decisions That Matter the Most. The book itself is highly recommended; large swaths of it could make their way into many proposals.* This passage, though long, has special resonance for me:

A few years ago, the water authority in the Greater Vancouver region faced a decision not unlike the one that confronted the citizens of New York two hundred years ago as to the fate of Collect Pond. A growing urban population meant that the region’s existing freshwater sources were not going to be able to meet demand in the coming years. New sources would have to be tapped, with inevitable impact on local environment, commerce, and communities. The city’s home in the rainy Pacific Northwest gave it the luxury of many potential options: three reservoirs could be expanded, new pipelines could be built to a number of distant lakes, or wellfields could be drilled along one prominent river. Like filling or preserving Collect Pond, this was a decision whose consequences would likely persist for more than a century. (Water from the Capilano River, for instance, was first delivered to Vancouver residents in the late 1800s, and continues to be a major water source for the city.) But this decision began with an earnest attempt to model all the important variables from a full-spectrum perspective. It built that model by consulting a wide range of stakeholders, each contributing a different perspective on the problem at hand: local residents living near each of the water sources being considered; indigenous people with sacred ties to the land being surveyed; environmental activists and conservationists; health and water-safety regulators; even local citizens who used the various bodies of water for boating, fishing, or other water sports. Stakeholders evaluated each option for its impact on a wide range of variables: “aquatic habitat, terrestrial habitat, air quality, visual quality, employment, recreation, traffic and noise, and property values.”

The approach taken by the Vancouver Water Authority has become commonplace in many important land use and environmental planning deliberations. The techniques used to bring those different voices together vary depending on the methodologies embraced by the planners (or the consultants they have hired to help run the process). But they share a core attribute: a recognition that mapping a decision as complex as establishing new sources of drinking water for a metropolitan center requires a network of diverse perspectives to generate anything resembling an accurate map of the problem. The most common term for this kind of collaborative deliberation is a “charrette.” The word derives from the French word for wagon; apparently architecture students at the École des Beaux-Arts in the 1800s would deposit their scale models and drawings in a small wagon that would be wheeled out to collect student submissions as the deadline for a project approached. Students making last-minute tweaks to their projects were said to be working en charrette—adding the finishing touches as the wagon made its rounds. In its modern usage, though, the design charrette does not refer to a last-minute cram session, but rather to an open, deliberative process where different stakeholders are invited to critique an existing plan, or suggest new potential ideas for the space or resource in question. The charrette makes it harder for a complex decision to be evaluated purely from the narrowband perspective of a single business group or government agency.

One way in which charrettes differ from the more traditional forum of a community meeting is that they conventionally take the form of a series of small-group meetings, not one large gathering. Keeping the groups separate reduces the potential for open conflict between groups that have competing values, of course, but it also generates a more diverse supply of ideas and assessments in the long run.

The term “charrette” is under-used today, even though many RFPs include planning-process questions that are best answered by describing a charrette-like process. I’m not sure whether I’ll quote this passage directly in future proposals, or quote small sections and paraphrase the rest, but I’m confident the concepts will appear.

The second charrette sighting came from a client, who said that her organization was founded following a series of local planning charrettes. We’ve rarely heard origin stories like this; most nonprofits start the same way businesses do, when an individual or small group of people creates a nonprofit corporation and files for a 501(c)(3) determination letter. The charrette origin is unusual, and it struck me because it’s so rarely used. Too rarely used, one could say. An organization with that kind of origin story should flaunt it. Which we, being good grant writers, will.


* Remember that reading is one of the open secrets of grant writing. Read a lot and incorporate what you find into your proposals.

“Your methods are unorthodox”

As GWC readers know, getting information about state and local grants is often tricky. Every state and municipality is different, and, like foundations, few if any make an effort at standardization or a decent user experience; most just assume that the usual suspects will apply for grants, and consequently they end up forming de facto cartels. In theory, all government grant information is also public information, but that’s a little like the theory that DMV employees are public servants who work on behalf of taxpayers: connecting theory to practice can be hard, as naive visitors to the DMV quickly learn.

Anyway. I spent some time attempting to get into the Wisconsin “Division of Public Health Grants and Contracting (GAC) Application” page, which is stashed behind a password wall for no reason I can discern. In the process I ended up emailing “Yvette A Smith,” a contracting specialist, to request access, and in reply, she told me that “Your request is unorthodox.” While not quite as good as “Your methods are unsound,” I did actually laugh out loud; I do like to imagine I’m the grant-world equivalent of Captain Willard talking to Colonel Kurtz in Apocalypse Now.

And Yvette is right: our methods are unorthodox and we do disturb the fabric of the grant/proposal world. That’s part of the reason we’re effective.

Still, I had no idea that there’s an orthodoxy in the State of Wisconsin. And if there is, what is that orthodoxy? Is it John 16:10 that describes how users should access GAC Application information? Or does orthodoxy emerge from other texts?

Alas, I didn’t inquire that far, and I also never quite got access to the GAC Application Page, but I was able to find the information I needed elsewhere. Still, I did learn just a little about the quality of governance in Wisconsin. A famous paper looks at “Cultures of Corruption: Evidence From Diplomatic Parking Tickets,” and the authors find that “diplomats from high corruption countries (based on existing survey-based indices) have significantly more parking violations, and these differences persist over time.” I wonder if my own experiences interacting with local and state governments are similar: the worse the quality of random bureaucrats, the worse the overall level of governance.

Good needs assessments tell stories: Data is cheap and everyone has it

If you only include data in your needs assessment, you don’t stand out from the dozens or hundreds of other needs assessments funders read for any given RFP competition. Good needs assessments tell stories: Data is cheap and everyone has it, and almost any data can be massaged to make a given target area look bad. Most people also don’t understand statistics, which makes it pretty easy to manipulate data. Even grant reviewers who do understand statistics rarely have the time to deeply evaluate the claims made in a given proposal.*

Man is The Storytelling Animal, to borrow the title of Jonathan Gottschall’s book. Few people dislike stories and many of those who dislike stories are not neurologically normal (Oliver Sacks writes movingly of such people in his memoir On the Move). The number of people who think primarily statistically and in data terms is small, and chances are they don’t read social and human service proposals. Your reviewer is likely among the vast majority of people who like stories, whether they want to like stories or not. You should cater in your proposal to the human taste for stories.

We’re grant writers, and we tell stories in proposals for the reasons articulated here and in other posts. Nonetheless, a small number of clients—probably under 5%—don’t like this method (or don’t like our stories) and tell us to take out the binding narrative and just recite data. We advise against this, but we’re like lawyers in that we tell our clients what we think is best and then do what our clients tell us to do.

RFPs sometimes ask for specific data, and, if they do, you should obviously include that data. But if you have any room to tell a story, you should tell a story about the project area and target population. Each project area is different from any other project area in ways that “20% of the project area is under 200% of the Federal Poverty Line (FPL)” does not capture. A story about urban poverty is different from a story about recent immigration or a story about the plant closing in a rural area.

In addition, think about the reviewers’ job: they read proposal after proposal. Every proposal is likely to cite similar data indicating the proposed service area has problems. How is the reviewer supposed to decide that one area with a 25% poverty rate is more deserving than some other area with a 23% poverty rate?

Good writers know how to weave data into stories, but bad writers often don’t know they’re bad writers. A good writer will also make the needs assessment internally consistent with the rest of the proposal (we’ve written before “On the Importance of Internal Consistency in Grant Proposals”). Most people think taste is entirely subjective, for bad reasons that Paul Graham knocks down in this excellent essay. Knowing whether you’re a good writer is tough because you have to know good writing to recognize bad writing, which means that, paradoxically, bad writers are incapable of knowing they’re bad writers.

In everyday life, people generally counter stories with other stories, rather than data, and one way to lose friends and alienate people is to tell stories that move against the narrative that someone wants to present. That’s how powerful stories are. For example, “you” could point out that Americans commonly spend more money on pets than people in the bottom billion spend on themselves. If you hear someone contemplating or executing a four- or five-figure expenditure on a surgery for their dog or cat, ruminate on how many people across the world can’t afford any surgery. The number of people who will calmly think, “Gee, it’s telling that I value the life of an animal close at hand more than a human at some remove” is quite small relative to the people who say or think, “the person saying this to me is a jerk.”

As you might imagine, I have some firsthand investigative experience in matters from the preceding paragraph. Many people acquire pets for emotional closeness and to signal their kindness and caring to others. The latter motive is drastically undercut when people are consciously reminded that many humans don’t have the resources Americans pour into animals (consider a heartrending line from “The Long Road From Sudan to America:” “Tell me, what is the work of dogs in this country?”).

Perhaps comparing expenditures on dogs versus expenditures on humans is not precisely “thinking statistically,” but it is illustrative about the importance of stories and the danger of counter-stories that disrupt the stories we desperately want to tell about ourselves. Reviewers want stories. They read plenty of data, much of it dubiously sourced and contextualized, and you should give them data too. But data without context is like bread instead of a sandwich. Make the reviewer a sandwich. She’ll appreciate it, especially given the stale diet of bread that is most grant proposals.


* Some science and technical proposals are different, but this general point is true of social and human services.

The unsolvable standardized data problem and the needs assessment monster

Needs assessments tend to come in two flavors: one basically instructs the applicant to “Describe the target area and its needs,” and the applicant chooses whatever data it can come up with. For most applicants that’ll be some combination of Census data, the local Consolidated Plan, data gathered by the applicant in the course of providing services, news stories and articles, and whatever else they can scavenge. Some areas have well-known local data sources; Los Angeles County, for example, is divided into eight Service Planning Areas (SPAs), and the County and United Way provide most data relevant to grant writers by SPA.

The upside to this system is that applicants can use whatever data makes the service area look worse (looking worse is better because it indicates greater need). The downside is that funders will get a heterogeneous mix of data that frequently can’t be compared from proposal to proposal. And since no one has the time or energy to audit or check the data, applicants can easily fudge the numbers.

High school dropout rates are a great example of the vagaries in data work: definitions of what constitutes a high school dropout vary from district to district, and many districts have strong financial incentives to avoid calling any particular student a “dropout.” The GED situation in the U.S. makes dropout statistics even harder to understand and compare; if a student drops out at age 16 and gets a GED at 18 is he a dropout or a high school graduate? The mobility of many high-school age students makes it harder still, as does the advent of charter schools, on-line instruction and the decline of the neighborhood school in favor of open enrollment policies. There is no universal way to measure this seemingly simple number.*
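To make the ambiguity concrete, here is a small sketch with invented numbers and deliberately simplified definitions. It shows how an “event” dropout rate and a four-year “cohort” rate, computed honestly from the same hypothetical district, yield very different figures depending on how GED earners are counted:

```python
# Hypothetical district data, simplified for illustration only.
# Real definitions vary by state and district.

# Event rate: share of enrolled students (grades 9-12) who left
# school during a single year without transferring or graduating.
enrolled_9_12 = 4000
left_this_year = 120
event_rate = left_this_year / enrolled_9_12  # 3.0% per year

# Cohort rate: share of an entering 9th-grade class that failed to
# graduate within four years. Students who left and later earned a
# GED are counted differently depending on who is measuring.
entering_9th = 1000
graduated_in_4_years = 780
got_ged_instead = 60

# GED counted as a dropout: (1000 - 780) / 1000 = 22%
cohort_dropout_ged_as_dropout = (entering_9th - graduated_in_4_years) / entering_9th

# GED counted as a graduate: (1000 - 780 - 60) / 1000 = 16%
cohort_dropout_ged_as_grad = (
    entering_9th - graduated_in_4_years - got_ged_instead
) / entering_9th

print(f"event rate: {event_rate:.1%}")
print(f"cohort rate (GED = dropout): {cohort_dropout_ged_as_dropout:.1%}")
print(f"cohort rate (GED = graduate): {cohort_dropout_ged_as_grad:.1%}")
```

The same district could honestly report 3%, 16%, or 22%, which is exactly why cross-district comparisons of this “simple” number are treacherous.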

The alternative to the “do whatever” system is for the funder to say: You must use System X in manner Y. The funder gives the applicant a specific source and says, “Use this source to calculate the relevant information.” For example, the last round of YouthBuild funding required the precise Census topic and table name for employment statistics. Every applicant had to use “S2301 EMPLOYMENT STATUS” and “S1701 POVERTY STATUS IN THE PAST 12 MONTHS,” per page 38 of the SGA.

The SGA writers forgot, however, that not every piece of Census data is available (or accurate) for every jurisdiction. Since I’ve done too much data work for too many places, I’ve become very familiar with the “(X)” in American FactFinder 2 tables—which indicates that the requested data is not available.
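For anyone pulling Census tables programmatically, here is a minimal, purely illustrative sketch of the cleanup step: treat “(X)” and similar not-available markers as missing values rather than numbers. The place names and figures below are invented:

```python
# Hypothetical cleanup of values copied from a Census table export.
# "(X)" means the estimate is not available for that jurisdiction;
# treating it as zero would silently distort a needs assessment.

def parse_estimate(raw):
    """Return a float, or None when the cell is marked unavailable."""
    raw = raw.strip()
    if raw in {"(X)", "-", "N", "**"}:  # common not-available markers
        return None
    return float(raw.replace(",", ""))

rows = {
    "Big City":    "12,450",
    "Tiny Hamlet": "(X)",     # too small for a reliable estimate
    "Suburb":      "3,210",
}

estimates = {place: parse_estimate(v) for place, v in rows.items()}
available = {p: v for p, v in estimates.items() if v is not None}
print(available)  # only the jurisdictions with real numbers
```

The point of the sketch is the failure mode: an applicant whose jurisdiction lands in the “(X)” bucket simply has no number to report, no matter what the SGA demands.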

In the case of YouthBuild, the SGA also specifies that dropout data must be gathered using a site called Edweek. But dropout data can’t really be standardized for the reasons that I only began to describe in the third paragraph of this post (I stopped to make sure that you don’t kill yourself from boredom, which would leave a gory mess for someone else to clean up). As local jurisdictions experiment with charter schools and online education, the data in sources like Edweek is only going to become more confusing—and less accurate.

If a YouthBuild proposal loses a few need points because of unavailable or unreliable data sources, or data sources that miss particular jurisdictions (as Edweek does), it probably won’t be funded, since an applicant needs an almost perfect score to get a YouthBuild grant. We should know, as we’ve written at least two dozen funded YouthBuild proposals over the years.

Standardized metrics from funders aren’t always good, and some people will get screwed if their projects don’t fit into a simple jurisdiction or if their jurisdiction doesn’t collect data in the same way as another jurisdiction.

As often happens at the juncture between the grant world and the real world, there isn’t an ideal way around this problem. From the perspective of funders, uniform data requirements give an illusion of fairness and equality. From the perspective of applicants trapped by particular reporting requirements, there may not be a good way to resolve the problem.

Applicants can try contacting the program officer, but that’s usually a waste of time: the program officer will just repeat the language of the RFP back to the applicant and tell the applicant to use its best judgment.

The optimal way to deal with the problem is probably to explain the situation in the proposal and offer alternative data. That might not work. Sometimes applicants just get screwed, and not in the way most people like to get screwed, and there’s little to be done about it.


* About 15 years ago, Isaac actually talked to the demographer who worked on dropout data at the Department of Education. This was in the pre-Internet days, and he just happened to reach the guy who worked on this stuff after multiple phone transfers. The demographer explained why true, comprehensive dropout data is impossible to gather nationally, and some of his explanations have made it into this blog post.

No one ever talks to the people who do this kind of work, so when they find an interested party, they’re often eager to chat about the details of their work.

The Census During Hard Times: A Gift That Keeps On Giving

One of the best things that can happen to a grant writer is to have the Census roll around during a time of economic crisis, because decennial Census data hangs around for about ten years. It takes the Census Bureau about two years to publish the latest data, which then gets used until the next turn of the census screw. The “2010 Census” will really be used as the 2012–2022 Census.

While the Census Bureau and other data miners produce interim data, such data are mostly a hodgepodge of extrapolations, which is another word for educated guesses. It’s possible for a city or county to request a special mid-decade census, but it’s doubtful that many have the money for it, so grant writers are pretty much stuck with whatever the Census produces. It’s our job to craft compelling Needs Assessments, whether the data is good, bad or indifferent. The task becomes a lot easier when the data shows economic calamity.

Given the recent economic collapse, incomes will be down, poverty up, etc., in the 2010 Census for the kinds of target areas we usually write about. When the Census coincides with better times, such as the 2000 Census, it’s much harder to make the case that things are tough because incomes and so forth will be relatively high, but a good grant writer will make this case anyway, pointing out the lingering effects of the last recession, the coming recession, or the ever popular refrain, “the target area is an island of misery in a sea of prosperity.” But lousy census data means happy times for grant writers. The 2010 Census will be a case in point, as we will be using the dismal economic data to good effect until the year 2022 or so!

Being as old as mud, I started using census data from the 1970 Census. In 1978, I was hired as the Grant Writing Coordinator for the City of Lynwood, CA, which is located next to Compton and Watts in LA County. By the time I got to Lynwood, most residents were African American and very low-income, but one would never know it by looking at the 1970 Census data. The 1970 Census painted Lynwood as a largely middle class, white community, which it was when the Census was taken. Like its much better known neighbor, Compton, which has been immortalized in endless rap songs like N.W.A.’s “Straight Outta Compton,” Lynwood was the victim of blockbusting and turned almost overnight from white to Black. It’s just that Compton metamorphosed immediately after the Watts Rebellion and before the Census was taken, while Lynwood changed demographically just after the Census was taken. I left Lynwood before 1980 Census data became available, so I spent three years writing proposals in which I had to explain away the available census data. While annoying, this helped hone my grant writing skills.

One interesting factoid about the Census, reported (albeit obliquely) by the Pew Research Center in Census History: Counting Hispanics, is that Hispanics were not actually counted until the 1980 Census, and the questions relating to Hispanic status change each census cycle, making it very challenging to draw the kind of comparisons that are the stuff of needs assessments. This is compounded by the fact that the Census Bureau does not consider “Hispanic” to be a race. One can be counted as a Hispanic of any race, so adding the race percentages to the Hispanic percentage can easily total more than 100% of the population. There are various workarounds, the easiest of which is to check with the local city or county to see if they have sorted out the percentage of “Hispanics” in their jurisdiction.
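A tiny sketch, with invented tract numbers, of why naively adding Census race percentages to the Hispanic percentage overshoots 100%, and what the defensible figure looks like:

```python
# Hypothetical tract, illustrating why Census race categories plus
# "Hispanic" can sum past 100%: the Census treats Hispanic origin as
# an ethnicity, reported separately from race.

total_pop = 10_000

race = {            # race alone; these sum to total_pop
    "White": 6_500,
    "Black": 1_500,
    "Asian": 800,
    "Other/Two+": 1_200,
}

hispanic_any_race = 3_200   # overlaps the race counts above

naive_sum_pct = (sum(race.values()) + hispanic_any_race) / total_pop
print(f"naive sum: {naive_sum_pct:.0%}")  # 132%: double counting

hispanic_pct = hispanic_any_race / total_pop
print(f"Hispanic (of any race): {hispanic_pct:.0%}")  # the defensible figure
```

In a needs assessment, the second number is the one to report; the first is the trap a reviewer (or a careless writer) falls into when totals exceed 100%.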

We are currently writing an OJJDP FY 2010 Youth Gang Prevention and Intervention Program proposal for a nonprofit in Southern California. The target area was largely middle class and white at the time of the 2000 Census but is now Hispanic and low-income. So, for me it’s 1978 again, and I am struggling with the same data issues I faced in Lynwood.

The wheel of time turns, and grant writers must use out-of-date census data for at least two more years. Look on the bright side: data from the 2010 Census will be absolutely awful, and you can use it to your advantage for many years to come.

Seliger + Associates Hitches Up the Wagons and Heads Out to Where the Pavement Turns to Sand

We’ve more or less completed our move to sunnier climes in Tucson, AZ. This is the fourth location for Seliger + Associates in 16 and a half years in business, starting in Danville, CA, before migrating to Bellevue and then Mill Creek, WA. So it’s goodbye to coffee and mold and hello to incredible Sonoran food and unlimited mountain/desert vistas. As Neil Young sang in Thrasher:

It was then I knew I’d had enough,
Burned my credit card for fuel
Headed out to where the pavement turns to sand

Faithful readers will remember that whenever I go on a road trip, a blog post integrating grant writing and my innate desire to see what is around the next curve follows. In preparation for the 1,700-mile drive, I read William Least Heat-Moon’s latest book-length paean to the American thirst for the open road, Roads to Quoz: An American Mosey (see Blue Highways: Reflections of a Grant Writer Retracing His Steps 35 Years Later for earlier thoughts on Least Heat-Moon’s Ur-travel essay Blue Highways).

Although we had planned to drive south on US Highways 95/395 through Oregon and California, an excellent blue highway route, our mover decided to drive like the World War II Red Ball Express down US 93 from Twin Falls, ID to Tucson—another great blue highway. He is better at his job than we are at ours, so he was going to arrive before us, leaving us to the tender mercies of I-5. But all was not lost, as we were able to take CA 58 east from Bakersfield over the Tehachapis through the Mojave Desert, where we found our long lost US 95, going from Needles to Blythe on 100 miles of roller coaster two-lane highway before our own race to Tucson on I-10 through Tonopah, AZ. Least Heat-Moon would be proud. The upshot of this rambling paragraph is that, 35 years after seeing Lowell George and Little Feat perform for the birthday party of a minor LA celebrity a friend of mine knew at the celebrity’s Malibu “ranch” in 1974, I finally got to drive from Tehachapi to Tonopah, as immortalized in Little Feat’s Willin’:

I’ve been from Tucson to Tucumcari
Tehachapi to Tonopah
Driven every kind of rig that’s ever been made
Driven the back roads so I wouldn’t get weighed
And if you give me: weed, whites, and wine
and you show me a sign
I’ll be willin’, to be movin’*

I only need to find time to drive from Tucson to Tucumcari for the circle to be complete.

This blather does have something to do with grant writing: as I have observed before, at their most basic level, grant writers are simply story tellers who often tell stories about places they have never seen. Long distance driving, preferably on blue highways, is an exceptional way to stay in touch with America—not the America of CNN or Fox News or the New York Times and Washington Post, but the America that is really being blasted by the Great Recession. As we rolled through small towns in the Central Valley and Mojave Desert, we saw endless Main Street desolation: forlorn vacant restaurants with fading “For Sale” signs, car dealers, either closed or with the few cars they had spread out across expanses of display lots with balloons tied to antennas in a sad attempt at normalcy, and, perhaps most troubling, piles of broken stuff—cars, appliances, farm equipment and mounds of unidentifiable crapola that apparently no one wants.

Perhaps no one cares enough to haul this junk away, or maybe there is no place to haul it to. Although politicians from Washington, D.C. to Seattle chatter on about the “greening” of America and the importance of using resources wisely, to me it seems more like the “rusting” of America. Least Heat-Moon found the same disturbing panorama in Roads to Quoz, preventing him from seeing the scenery beyond the roadway:

Miles of abandoned buildings, of decaying house-trailers steadily vanishing under agglomerations of cast-off appliances, toys, rusted vehicles (autos, busses, riding mowers, tractors, trucks, a bulldozer, a crane, a forklift), and a plethora of cheap things.

Least Heat-Moon wrote about Oklahoma, which I discussed in the “Blue Highways” post noted above, but the junkification of rural America has worsened considerably in the last year, presumably because of the enervating effects of the recession. If any interested rural nonprofits are reading this, Project JUNC (Joint Undertaking to Negate Crap) would be a great Stimulus Bill grant concept; JUNC would train unemployed folks to pick up stuff, haul it, sort it, and recycle it. I’ll even provide a 20% discount to write the proposal because, like Least Heat-Moon, it’s hard to admire a 19th Century courthouse or church when you have to look past blocks of detritus. It pains me to see much of rural America being buried in kipple.**

I am about to write a HUD Neighborhood Stabilization Program 2 (NSP) proposal for a rural city in California, which involves rehab of vacant, foreclosed houses. Since endless newspaper stories describe how vacant houses get stripped of copper plumbing, appliances, etc., I was going to include this idea, as I usually do when writing about housing rehab, in the proposal. But my recent sojourn through rural OR, CA, and AZ gives me pause. Why would anyone bother breaking into a house to steal metal when tons of metal are piled along rural roadways, there for the picking? This is a real-world demonstration of how road trips benefit grant writers. Grant-writing readers should get out of their Aerons, fire up their Priuses (in my case, a BMW ragtop), and unleash their inner Kerouac by going On the Road.


* Not to worry: no weed or whites were abused on this drive—just a little wine to take the edge off at the end of each day’s drive, after finding a motel that would take our golden retriever.

** Kipple is the accumulated junk of modern society and is best described by Philip K. Dick (one of my favorite SF writers) in Do Androids Dream of Electric Sheep?, which was made into the nearly perfect 1982 movie, Blade Runner. For cognoscenti of this film, Harrison Ford’s despairing Deckard is actually a replicant.

What to do When Research Indicates Your Approach is Unlikely to Succeed: Part I of a Case Study on the Community-Based Abstinence Education Program RFP

The Community Based Abstinence Education Program (CBAE) from the Administration on Children, Youth and Families is a complicated, confusing, and poorly designed RFP based on suspect premises. That makes it an excellent case study in how to deal with a variety of grant writing problems that relate to research, RFP construction, and your responses.

CBAE is simple: you’re supposed to provide abstinence and only abstinence education to teenagers. That means no talk about condoms and birth control being options. In some ways, CBAE is a counterpoint to the Title X Family Planning funding, which chiefly goes to safe-sex education and materials rather than abstinence education. Its premise is equally simple: if you’re going to have sex, use condoms and birth control. Congress chooses to fund both.

Were I more audacious regarding CBAE proposals, I’d have used George Orwell’s 1984 as a template for the programs, since almost everyone in the novel conforms to the numbing will of an all-powerful state and many belong to the “Junior Anti-Sex League,” complete with scarlet sashes. I hope someone turned in a CBAE application proposing scarlet sashes for all participants.

More on point, however, page two of the RFP says:

Pursuant to Section 510(b)(2) of Title V of the Social Security Act, the term “abstinence education,” for purposes of this program means an educational or motivational program that: […]

(B) Teaches abstinence from sexual activity outside marriage as the expected standard for all school age children

Who is enforcing this “expected standard?” Society in general? A particular person in society? But it gets better:

(D) Teaches that a mutually faithful monogamous relationship in the context of marriage is the expected standard of human sexual activity;

This requirement ignores decades of anthropological research into indigenous societies as well as plenty of research into our own society, which Mary Roach described in Bonk, which Alfred Kinsey described (using imperfect methods) in his famous but flawed research in the ’50s, and which Foucault described in his History of Sexuality. It also ignores the sexuality of other cultures and even our own, as discussed in books like Conceiving Sexuality: Approaches to Sex Research in a Postmodern World, or, better yet, Culture, Society and Sexuality: A Reader, which describes the way societies and others build a social model of sex. Through the CBAE program, Congress is building one such model by asserting it is true and using “expected standard” language, without saying who is the “expecting” person or what is the “expecting” body. It’s an example of what Roger Shuy calls in Bureaucratic Language in Government and Business a term that “seems to be evasive,” as when insiders “use language to camouflage their message deliberately, particularly when trying to avoid saying something unpleasant or uncomfortable.” In this case, the evasion is the person upholding the supposed standard.

Furthermore, the abstinence conclusion isn’t well supported by the research that does exist, including research from previous years of the program, which is at best inconclusive. A Government Accountability Office report (warning: .pdf file) says things like, “While the extent to which federally funded abstinence-until-marriage education materials are inaccurate is not known, in the course of their reviews OPA [Office of Population Affairs] and some states reported that they have found inaccuracies in abstinence-until-marriage education materials. For example, one state official described an instance in which abstinence-until-marriage materials incorrectly suggested that HIV can pass through condoms because the latex used in condoms is porous.”

The one comprehensive study conducted by a nonpartisan firm is "Impacts of Four Title V, Section 510 Abstinence Education Programs" by Mathematica Policy Research (no relation to the Wolfram software of the same name). The study was prepared for DHHS itself, and it says such encouraging things as, "Findings indicate that youth in the program group were no more likely than control group youth to have abstained from sex and, among those who reported having had sex, they had similar numbers of sexual partners and had initiated sex at the same mean age." The programs it studied are built around the same methods the CBAE demands organizations use, all of which boil down to inculcating a fear of sex outside of marriage. The social stigma the program recommends centers on STDs and whether you'll get into college (although an editorial in the L.A. Times argues otherwise), and, to a lesser extent, on altering peer norms. Even in Puritan times this approach was not entirely effective, as Bundling by Henry Reed Stiles explains. Bundling meant sleeping in the same bed with one's clothes on, a solution to the problems of inadequate heat and space. But, as Jacques Barzun says in From Dawn To Decadence: 500 Years of Western Cultural Life, "Experience showed the difficulty of restraint and […] the rule was made absolute that pregnancy after bundling imposed marriage […] So frequent was this occurrence that the church records repeatedly show the abbreviation FBM—fornication before marriage."

There are counter-studies that purport to show abstinence education is effective, like this one from a crew that, not surprisingly, sells abstinence education materials. But it, like most others, buries little bon mots amid intimidating numbers and verbose language, like, "In addition, the high attrition rate limits our ability to generalize the findings to a higher-risk population" (strangely, the .pdf file is set to disallow copying and pasting, perhaps to discourage irate bloggers like me). The study doesn't list the attrition rate, making it impossible to tell how severe the problem is. And even if it did, the population selected might suffer from cherry-picking problems of various kinds: organizations are more likely to serve the participants who are most likely to be receptive to services and, concomitantly, least likely to do things like have early sex. This is an easy and tempting way to make a program look good: only let in the kids who are likely to benefit. And it's a hard problem to tease out in studies.

So be wary of dueling studies: if you don't read them carefully, it's easy to accept their validity, and even if you do read them carefully, it's easy to nitpick. This is why peer review is so helpful in science, and it's part of the reason evaluations are so difficult. Furthermore, many of the studies, including Heritage's, come from biased sources, a problem Megan McArdle writes about extensively in a non-abstinence-related context. (See her follow-up here.) Most of you haven't followed the blizzard of links I put up earlier or read the books I cited, for good reason: who has time to sift through all this stuff? No one, yet everyone has an opinion, even pseudoscience combined with anecdote like this article in New York Magazine (hint: be wary of anyone whose title contains the word "evolutionary").

Given this research, which is hard to miss once you begin searching for information about the efficacy of abstinence instruction, how is a grant writer to create a logic model that, as page 44 says, should list "[a]ssumptions (e.g., beliefs about how the program will work and its supporting resources. Assumptions should be based on research, best practices, and experience)" (emphasis added)?

Two words: ignore research. And by "ignore research," I mean any research that doesn't support the assumptions underlying the RFP. If you want to be funded, you simply have to pretend "Impacts of Four Title V, Section 510 Abstinence Education Programs" and the GAO study don't exist, and your proposal should be consistent with what the RFP claims, even if it's wrong. This is, I suspect, one of the hardest things for novice grant writers to accept: you're not trying to be right in the sense of the scientific method of discerning the natural world through experimentation. You're trying to be right in the Willie Stark sense of playing the game for the money. No matter how tempting it is to cite accurate research that contradicts the program, don't, unless it's to knock the research.

Remember too that the grant writer is to some extent a mythmaker, a subject Isaac will address more fully in a future post. The vital thing to consider is that the mythology you need to create isn't always the same as the reality on the ground. As in politics, the way events are portrayed is often different from how they actually are. David Broder wrote an article about inventing political narratives, which occasionally match reality; your job as a grant writer is inventing grant narratives. We hope these match reality more often than not. Sometimes the myth doesn't match, as in this application, and when that happens, you're obligated to conform to the RFP's mythology, even if it isn't your own.

The second part of this post continues here.

Links: Finnish kids, computers in schools, bureaucrats, race, Playboy (?), and more!

* The Wall Street Journal ran "What Makes Finnish Kids So Smart? Finland's teens score extraordinarily high on an international test. American educators are trying to figure out why" (the article is accessible to subscribers only). Part of the answer may be a culture that values reading, but the article also says:

Finnish high-school senior Elina Lamponen saw the differences [between the U.S. and Finland] firsthand. She spent a year at Colon High School in Colon, Mich., where strict rules didn’t translate into tougher lessons or dedicated students, Ms. Lamponen says. She would ask students whether they did their homework. They would reply: ” ‘Nah. So what’d you do last night?'” she recalls. History tests were often multiple choice. The rare essay question, she says, allowed very little space in which to write. In-class projects were largely “glue this to the poster for an hour,” she says.

In other words, the numerous rules imposed by U.S. schools might not actually help educational attainment.

* High school evaluation news continues, with a paper from the Urban Institute saying that Teach For America teachers are more effective than the regular ones in the same schools.

* Years ago, a variety of federal and state programs were designed to get computers into schools. We wrote countless proposals for just that purpose, though my experience in public schools was that computers were almost always poorly used at the time—they didn't help me learn anything about reading, writing, or math, but they were great for Oregon Trail. Now researchers studying a Romanian program in which households received vouchers for computers have found that:

Children in households that won a voucher also report having lower school grades and lower educational aspirations. There is also suggestive evidence that winning a voucher is associated with negative behavior outcomes.

(Hat tip Slate.com).

* A concrete example of the kind of citation that can help get programs funded. But I’m not moving to Needles if I can avoid it. Which moves us right into…

* Megan McArdle has an excellent post on federal assistance to depressed rural areas. I've read elsewhere in The Atlantic that urban and rural areas are essentially subsidized by the suburbs through various forms of tax redistribution, which should be at least somewhat apparent to longtime newsletter subscribers, who see numerous grant programs targeted at rural and urban areas but virtually none targeting suburbs.

* McArdle is so good that I’m linking to her twice. Regarding bureaucrats, she says:

Having a ridiculous reaction to something is not the fault of the person who did it–even if that person is a terrorist attempting horrific acts. I don't mind removing my shoes, particularly–indeed, my parents will testify that they had quite a problem teaching me to keep them on. I achieved minor renown in college for walking around Philadelphia barefoot all summer. But the act of moving in compliant herds through the TSA lines, mindlessly adhering to the most ridiculous procedures the government can think up, contributes to making us what Joseph Schumpeter called "state broken". Citizens should not acquire the habit of following orders with no good reason behind them.

After flying entirely too often in the last few months, I’ve come to loathe the TSA bureaucrats and the herd mentality in airports. Similar principles are at work regarding FEMA and Grants.gov.

* In other news about incompetent bureaucracies, check out this from the Washington Post.

* Whether you want to take race into account in programs or not, you’re bound to be criticized. Get used to it.

* In the “Who knew?” category, Playboy has a foundation and is accepting applications from a “Noteworthy advocate for the First Amendment.” I’m guessing they’re not shooting for those upholding the right to petition the government for redress of grievances. A quick quiz: the First Amendment actually has six components—can you name them all? (Answers in the second link).

Foundations and the Future: How Funder Incentives Affect Nonprofits, Grants, and Grant Writing

“New Voices Of Philanthropy”* is running an occasional series in which they invite bloggers involved in the nonprofit world to contribute; I’m tardy to this month’s question:

Will the Foundation of the Future only fund programs that benefit puppies and children? Will it be run by people that have attained the elusive PhD in Philanthropy? Will the Foundation of the Future actually be the donor advised fund of the future, since foundations are outlawed by Congress in 2016?

The short version: I think foundations in the future will be run much as they are in the present.

The longer version: most foundations seem to be run chiefly for the social prestige and well-being of the people running them. The evidence I can discern falls into two categories, the first stronger and more important than the second: foundations tend to give away only the minimum 5% of their assets every year required by law, and they tend to make the process through which nonprofits acquire funding unnecessarily arduous.

The New York Times has reported on a study by a Barnard economics professor which found that "[c]haritable foundations could give away 60 percent more money than they do now without eroding the total value of their assets" (emphasis added). His paper is available here (warning: .pdf link). The upshot is that foundations appear to be hoarding money, and, as the study itself says:

[…] Congress intended to keep tax-favored foundations from becoming mere warehouses of wealth. To the extent that the foundation sector operates as though it were a non-endowment system, paying out new giving while allowing existing assets to compound in perpetuity, the foundation sector is in danger of appearing to be exactly what Congress wanted to prevent […] To the extent that individual foundations reduce payout to the legal minimum simply in order to increase their assets under management, they defeat the real social purpose of their privileged tax status […]

In a similar vein, Akash Deep and Peter Frumkin wrote a paper, "The Foundation Payout Puzzle," tracing "the average payout rate of this sample of foundations over time, as the policy regime has shifted slightly from a flat 6 percent, to the greater of total investment income or 5 percent, to a flat 5 percent" from 1972 to 1996. This, they later conclude, is bad because a dollar spent today would probably be more effective than a dollar spent tomorrow, assuming the needs being addressed are important to the recipients.

Since they're scholars, they give a long and detailed discussion of why foundations don't increase their payouts. Since I'm a blogger, I'll be short, mean, and accurate: foundations, like most organizations, are chiefly invested in their own interests and would thus rather propagate themselves into the future. Saul Alinsky argued that the only thing that matters in community organizing is identifying the "self-interest" of those you are trying to organize, and in this case the self-interest that matters is the foundations' own. If they were purely motivated by the public good—which would seem the primary counterargument to mine—they would presumably raise their payout rates.

Is there any way to counteract this dynamic and thus implicitly change the way foundations operate? It seems improbable. Occasionally a foundation may take a principled stand, much as Harvard recently used its endowment to cut tuition, but the paper by Deep and Frumkin argues that the situation is getting worse, not better. While foundations are only required to give away five percent of their assets every year, American stock market indices have returned about 11% per year on average since World War II. There is apparently some pressure to increase the payout rate, but I don't think this will actually happen.
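To see why the 5% minimum payout lets endowments compound rather than shrink, here is a back-of-the-envelope sketch in Python. It assumes the figures above (roughly 11% average nominal returns, a 5% payout taken at year's end) and ignores inflation, fees, and market volatility, so treat it as an illustration, not a projection:

```python
def project_endowment(principal: float, annual_return: float,
                      payout_rate: float, years: int) -> float:
    """Compound an endowment that earns `annual_return` on its assets
    and then pays out `payout_rate` of assets at the end of each year."""
    for _ in range(years):
        principal *= (1 + annual_return)  # investment gains
        principal *= (1 - payout_rate)    # required charitable payout
    return principal

# $100M endowment, 11% nominal return, 5% minimum payout, 30 years
final = project_endowment(100e6, 0.11, 0.05, 30)
print(f"After 30 years: ${final / 1e6:.0f}M")  # ≈ $491M: nearly quintupled
```

Under these assumptions the endowment's net growth rate is about 1.11 × 0.95 − 1 ≈ 5.45% per year, which is why the study quoted above can conclude that payouts could rise substantially without eroding principal.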

I'm skeptical because foundations themselves are unlikely to reduce their power and longevity by increasing payouts en masse, and Congress is equally unlikely to force them, because the very rich who donate to congressional campaigns would immediately get every Congressman to whom they ever gave money on the phone and demand that any higher payout rule be changed back to 5%. Why? Because the very rich tend to be men, and their wives tend to be the ones sitting on nonprofit boards, running foundations, donating to museums, and whatnot.

The minute Congress tries to alter this arrangement, the wives of whoever endowed the foundation are going to rise up in arms until the status quo is restored. Isaac pointed this out to me once when I read in the newspaper that Congress was threatening to cut National Endowment for the Arts (NEA) funding: he said it would never happen for just the reason I described, and he was right. The NEA is particularly unlikely to suffer deep cuts because it represents a very small but highly visible part of the government; besides, it's only a small part of discretionary social spending, which is dwarfed by mandatory spending, interest, and defense. This, incidentally, is why Alan Greenspan has been running around talking about why Medicare—not the war in Iraq, or interest, or any number of other things—is the biggest long-term budget problem facing the U.S.

That was a long enough tangent; the main point remains that since the same people who tend to fund foundations also fund Congressional campaigns, it seems unlikely that Congress will tamper with foundations. So foundations are unlikely to give more unless they want to. But the question of why funders give remains.

Maimonides was a 12th-century rabbi who described eight levels of giving, the highest loosely being those who help anonymously and without expectation of reward, and the lowest being those who give grudgingly and with the expectation of recognition. As you might have guessed, foundations tend to end up toward the bottom of Maimonides' ladder, meaning they want to perpetuate themselves, put their names on things, and the like. This makes them highly unlikely to raise the payout rate and thus endanger their existence.

Now I'll more fully discuss the second point: how difficult foundations make it to apply for money. They seem uninterested in improving the grant-making process for those requesting the grants: questions are too often absurd and forms are poorly thought out (a great example will be discussed in a forthcoming post). In the fifteen years Seliger + Associates has been in business, the number of times funders have called us asking how the process might be improved is zero.

Never. Not once. Procter & Gamble, Microsoft, Boeing, and virtually every other large company or organization probably spend millions of dollars trying to figure out how to improve their products and services, but foundations do not appear to, or, if they do, they don't ask the people who write the proposals. As a result, they raise the cost of acquiring funding and allow a proportionally smaller amount to go to actual services, a phenomenon tangential to the one I discussed here.

Arguably, one could say that foundations make it difficult to receive money so that the most interested, and hence most deserving, nonprofits end up with funding: the application becomes a signaling device. There is some merit to the argument, but it also implies that foundations cause nonprofits that are already successful not to bother applying, while wasting the time of those that do apply by forcing them to play signaling games.

These perverse incentives, coupled with the relative power of foundations compared with grant recipients, the vanity of being perceived as charitable, and the lack of discipline imposed on foundations, will probably result in foundations of the future that look mostly like the ones of the present. Perhaps a few will buck the trend and spend significantly more than 5% per year, but this seems more likely to be the exception than the norm, especially after the initial funders die. After all, if you were running a foundation, would you be inclined to shut its doors and thus deprive yourself of management fees, free travel to study problems/applicants, or social prestige? Maybe you, the individual, would, but the plural you, who run foundations, wouldn't.

And I haven’t even discussed how tax advantages work.

I don't perceive much change in the foundation world, just as Isaac hasn't seen much change in the overall world of grants in his 35 years of experience. In Charlie Wilson's War (the movie version), Julia Roberts asks Tom Hanks why Congress says one thing and does another, and he drolly replies, "Well, tradition, mostly." The same could be said of the U.S. nonprofit world, and I bet that in 30 years the problems and perils of foundation giving, and many other aspects of grant writing, will be the same as they are today.


* As of 2018, it appears that Seliger + Associates has outlasted "New Voices of Philanthropy," as the URL that used to be here dead-ends into a spam site. I guess sometimes the old voices endure longer than the new ones.

Self-Esteem—What is it good for? Absolutely Nothing

Roberta Stevens commented on “Writing Needs Assessments: How to Make it Seem Like the End of the World” by saying she was “having trouble finding statistics on low self esteem in girls ages 12-19.” This got me thinking about the pointlessness of “self-esteem” as a metric in grant proposals. A simple Google search for ‘“self-esteem” girls studies reports’ yielded a boatload of studies, but if you look closely at them, it is apparent that most are based on “self-reports,” which is another way of saying that researchers asked the little darlings how they feel.

When my youngest son was in middle school, he was subjected to endless navel gazing surveys and routinely reported confidentially that he had carried machine guns to school, smoked crack regularly and started having sex at age seven. In short, he thought it was fun to tweak the authority figures and my guess is that many other young people do too when confronted by earnest researchers asking probing questions.

Although such studies often reveal dubious alleged gender differences in self-esteem, I have yet to see any self-esteem data that correlate with meaningful outcomes for young people. Perhaps this is obvious, since self-esteem is such a poor indicator of anything in the real world: Stalin appears to have had plenty of self-esteem, even if his moral compass was off target, while arguably our best President, Abraham Lincoln, was by most accounts wracked with self-doubt and low self-esteem. More recent Presidents Lyndon Johnson and Richard Nixon, both with questionable presidencies, did not seem short in the self-esteem department either.

If I were to use self-esteem in a needs assessment for a supportive service program for teenage girls, I would find appropriately disturbing statistics (e.g., the pregnancy rate is twice the state rate, the dropout rate among teenage girls has increased by 20%, etc.) and "expert" quotes ("we've seen a rise in suicide ideation among our young women clients," says Carmella Kumquat, MSW, Mental Health Services Director) to paint a suitably depressing picture, then top it off with the ever-popular statement: "Given these disappointing indicators, the organization knows anecdotally from its 200 years of experience in delivering youth services that targeted young women exhibit extremely low self-esteem, which contributes to their challenges in achieving long-term self-sufficiency." I know this is a nauseating sentence, but it is fairly typical of most grant proposals and is why proposals should never be read just after lunch.

So, to paraphrase Edwin Starr: "Self-esteem, what is it good for? / Absolutely nothing."

(In the context of gangs, Jake has also commented on suspect or twisted needs indicators.)


EDIT: A more recent post, Self-Efficacy—Oops, There Goes Another Rubber Tree Plant, takes up the issue of finding a metric more valuable than self-esteem for both grant writers and program participants.