Category Archives: How-to

Another piece of the evaluation puzzle: Why do experiments make people unhappy?

The more time you spend around grants, grant writing, nonprofits, public agencies, and funders, the more apparent it becomes that the “evaluation” section of most proposals is only barely separate in genre from mythology and folktales, yet most grant RFPs include requests for evaluations that are, if not outright bogus, then at least improbable—they’re not going to happen in the real world. We’ve written quite a bit on this subject for two reasons: the first is my own intellectual curiosity; the second is for clients who worry that funders want a real-deal, full-on, intellectually and epistemologically rigorous evaluation (hint: they don’t).

That’s the wind-up to “Why Do Experiments Make People Uneasy?”, Alex Tabarrok’s post on a paper about how “Meyer et al. show in a series of 16 tests that unease with experiments is replicable and general.” Tabarrok calls the paper “important and sad,” and I agree, but the paper also reveals an important (and previously implicit) point about evaluation proposal sections for nonprofit and public agencies: funders don’t care about real evaluations because a real evaluation will probably make the applicant, the funder, and the general public uneasy. Not only do real evaluations make people uneasy, but most people don’t even understand how a real evaluation works in a human-services organization, how to collect data, what a randomized controlled trial is, and so on.

There’s an analogous situation in medicine; I’ve spent a lot of time around doctors who are friends, and I’d love to tell some specific stories,* but I’ll say that while everyone is nominally in favor of “evidence-based medicine” as an abstract idea, most of those who superficially favor it don’t really understand what it means, how to do it, or how to make major changes based on evidence. It’s often an empty buzzword, like “best practices” or “patient-centered care.”

In many nonprofit and public agencies, the situation with evaluations and effectiveness is the same: everyone putatively believes in them, but almost no one understands them or wants real evaluations conducted. Plus, beyond that epistemic problem, even if an evaluation shows that a program works in a given circumstance (it usually doesn’t), the results don’t necessarily transfer. If you’re curious about why, Experimental Conversations: Perspectives on Randomized Trials in Development Economics is a good place to start—and this is the book least likely to be read, out of all the books I’ve ever recommended here. Normal people like reading Fifty Shades of Grey and The Name of the Rose, not Experimental Conversations.

In the meantime, some funders have gotten word about RCTs. For example, the Department of Justice’s (DOJ) Bureau of Justice Assistance’s (BJA) Second Chance Act RFPs award bonus points for RCTs. I’ll be astounded if more than a handful of applicants even attempt a real RCT—for one thing, there’s not enough money available to conduct a rigorous RCT, which typically requires paying control-group members to participate in long-term follow-up tracking. Whoever put the RCT language in this RFP probably wasn’t thinking about that real-world issue.

It’s easy to imagine a world in which donors and funders demand real, true, and rigorous evaluations. But they don’t. Donors mostly want to feel warm fuzzies and the status that comes from being fawned over—and I approve of those things too, by the way, as they make the world go round. Government funders mostly want to make Congress feel good, while cultivating an aura of sanctity and kindness. The number of funders who will make nonprofit funding contingent on true evaluations is small, and the number willing to pay for true evaluations is smaller still. And that’s why we get the system we get. The mistake some nonprofits make is thinking that the evaluation sections of proposals are for real. They’re not. They’re almost pure proposal world.


* The stories are juicy and also not flattering to some of the residency and department heads involved.

USDA Community Connect program: Technological change and bringing broadband to rural America

The USDA just released the Community Connect Grant program RFP, which has $30 million to fund 15 projects that will provide broadband in underserved rural communities. We’ve written a bunch of proposals related to rural Internet access, most during the heyday of the Stimulus Bill around 2010. Almost all of those projects involved, on some level, either digging a trench or stringing a wire. Both activities are very, very expensive, so not that many people can be served.

Google has discovered as much, albeit in urban areas: the company famously launched an effort to roll out gigabit fiber Internet about eight years ago, but relentless and ferocious legal and regulatory pressure from incumbents has led the company to scale back its plans. The combination of regulatory capture from other Internet providers and the inherent cost of digging and stringing defeated even Google.

But, at the same time, Google has also announced plans to offer wireless gigabit services in some cities, by placing antennas on the roofs of multifamily buildings and using an antenna-to-antenna system to bypass the digging-or-stringing-a-wire problem.

By now, you can probably see where I’m going. In the old world—like, the world of ten years ago—Community Connect-style programs only really worked with wires. But today, wired hubs combined with radios or lasers may allow projects to deploy broadband to far more locations with far less funding. I can’t speak to the technical feasibility of such projects (though we often write scientific and technical grants). But it doesn’t take an electrical engineering degree to know that “costs less” and “provides more” is a winning argument. I think that smart rural utilities will be looking into wireless systems for last-mile connections. The technology, it would appear, is here; it wasn’t in 2010. As the title of this post suggests, grant writers who can argue that the technology is here should also be able to demonstrate cost advantages over fully wired systems.

We may also be in an interregnum period: While SpaceX has proposed low-latency satellite Internet, that technology is in the prototype stage and is not here yet. Ten years from now, low-orbit satellites may provide latency times as low as 25ms.

Overall, technological change should drive a change in the way Community Connect proposals are written. Many human-service grant programs change very little over time; eight or nine years ago, we began mentioning social media in proposals, but for the most part human service programs have the same fundamental structure: an organization gets people with problems to come to a facility to receive services, or it sends expert workers out to deliver services to people with problems. Even today, most human services nonprofits don’t make much use of social media in service delivery, although they often use it for volunteer recruitment, donations, and the like. Technical grant proposals like those being written for Community Connect, though, can and should be driven by technical change.

The HRSA Uniform Data System (UDS) Mapper: A complement to Census data

By now you’re familiar with writing needs assessments and with using Census data in the needs assessment. While Census data is useful for economic, language, and many other socioeconomic indicators, it’s not very useful for most health surveillance data—and most health-related data is hard to get. This is because it’s collected in weird ways, by county or state entities, and often compiled into reports for health districts and other non-standard sub-geographies that don’t match up with census tracts or even municipal boundaries. The collection and reporting mess often makes it difficult to compare various areas. Enter HRSA’s Uniform Data System (UDS) Mapper tool.

I don’t know the specifics of the UDS Mapper’s genesis, but I’ll guess that HRSA got tired of receiving proposals that used a hodgepodge of non-comparable data derived from a byzantine collection of sources, some likely reliable and some likely less so. To take one example we’re intimately familiar with: the eight Service Planning Areas (SPAs) by which LA County aggregates most data. If you’ve written proposals to LA City or LA County, you’ve likely encountered SPA data. While SPA data is very useful, it doesn’t contain much, if any, health care data. Healthcare data is largely maintained by the LA County Health Department and doesn’t correspond to SPAs, leaving applicants frustrated.

(As an aside, school data is yet another wrinkle, since it’s usually collected by school or by district, and those sources usually don’t match up with census tracts or political subdivisions. There’s also Kids Count data, but that is usually sorted by state or county—not that helpful for a target area within huge LA County and its population of 10 million.)

The UDS Mapper combines Census data with reports from Section 330 providers, then sorts that information at the zip code, city, county, and state levels. It’s not perfect and should probably not be your only data source, but it’s surprisingly rich and robust, and most non-FQHCs don’t yet know about it.

Everyone knows about Census data. Most know about Google Scholar, which can be used to improve the citations and scholarly framework of your proposal (this is a grant proposal, so no one checks the cites, but reviewers do notice whether they’re there or not). HRSA hasn’t done much to promote UDS data outside the cloistered confines of FQHCs. So we’re doing our part to make sure you know about this data goldmine.

How to write grant proposal work plans

In addition to the ever-present requirement for a project narrative, some RFPs require a “work plan.” For many novice grant writers, confronting the work plan raises a sense of dread similar to having to prepare a logic model. Unlike logic models, which involve a one-page diagram that displays project elements in a faux flow-chart format, work plans are usually structured as multi-column tables, like the simple illustration in this PDF (or try here for the Word version).

As the attached file shows, the work plan usually contains a blank for goals, with blanks for objectives under each goal and activities for each objective. Other columns may include timeframes, responsibilities, deliverables, data to be collected, and so on.

While it’s possible to create a 10- or even 20-page work plan (the work plan is usually not counted against the project narrative page limit), there’s little reason to do so, unless the RFP requires it. Instead, one overarching goal statement is generally enough. A goal statement might be, for example:

The project goal is to improve employment and life outcomes for formerly incarcerated cyclops by providing a range of culturally and linguistically appropriate wraparound supportive services.

Use that goal to develop three or four specific and measurable objectives, along with three or four activities for each objective. This will result in a work plan ranging from one to five pages. Each additional goal will (probably pointlessly) increase the page count and the chance of creating continuity errors. A compact work plan will clearly summarize why and how the project will be implemented, and it will be easy for readers/scorers to understand. That’s enough for a work plan.
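To illustrate the format, here’s a hypothetical objective-and-activities set (the target numbers and position titles are invented, so adjust them to fit your project and budget):

Objective 1.1: By the end of Year 1, at least 75 participants will be enrolled and will complete individual service plans.

Activity 1.1.1: Conduct outreach through partner agencies and self-referrals (Months 1–12; Outreach Specialists).
Activity 1.1.2: Complete intake assessments and individual service plans within 30 days of enrollment (ongoing; Case Managers).
Activity 1.1.3: Enter participant data into the tracking database for quarterly reporting (ongoing; Data Clerk).

In the table version, the timeframe and responsible position each get their own column, as in the sample linked above.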

It’s easy to introduce continuity errors between the work plan and narrative because goals, objectives, activities, timelines, etc., may be sprinkled throughout the narrative, budget, logic model, and/or forms, depending on the RFP requirements. Details in the work plan must be precisely consistent with all other proposal components. The more you edit each proposal draft, the less you will be able to spot internal inconsistencies within the narrative or between the narrative and the work plan. Inconsistencies will, however, stand out in neon to a reviewer reading the entire proposal for the first time.

We’re experienced grant writers, so we draft work plans after the second proposal draft is completed. But novice grant writers will find it useful to draft the work plan before writing the first draft, as this will help organize the draft. Novices should also read up on the differences among goals, objectives, and activities before tackling the work plan.

Should your startup seek Small Business Innovation Research (SBIR) grants?

In response to Sam Altman’s great post “Hard tech is back,” someone on Hacker News pointed out that hard tech companies should apply for Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) grants (both programs provide funding to small companies that are commercializing research). There are excellent reasons to apply, which we’ll recapitulate: most Federal agencies are required to make SBIR/STTR funds available; grants for Phase I go up to $225,000, and Phase II grants go up to $2 million; a large amount of money is available (most years see SBIR/STTR budget allocations in the billions); unlike venture capital (VC) funding, federal money doesn’t require giving up shares in return for funding; and, finally, the feds may fund ideas VCs won’t. The “feds may fund ideas VCs won’t” point is particularly but not exclusively true of hard tech projects.

So far, so good. But while the upsides are real, and we’re incentivized to emphasize them, so are the downsides. One is simple timing: if the appropriate SBIR/STTR funding cycle just concluded for the year, your startup may have to wait another year to apply.* Then another two to three months for a decision. Then longer for final budget approval and contract execution. A year is a very long time for a startup. The other day a potential client called whose best potential SBIR source had had a deadline the month prior.

Second, Phase I grants can just be too small for the amount of effort that goes into them.

Third, SBIRs/STTRs don’t come with the advice, community, or expertise of good VCs. Applicants may still get to meet some professors in their field or other experts, but those connections seem to be weaker than the connections good VCs generate.

Fourth, applications take a lot of effort to prepare, and for first-time grant writers they can be quite hard. The alternative is to hire someone like us. While I’m biased towards doing that for obvious reasons, we also cost money. I can’t say whether our fees are low, high, or just right—as discussed at the link, we get all three reactions—but our fees are real and no qualified grant writer will ever work for contingent fees.**

Just finding the appropriate SBIR/STTR program and RFP can be hard, since different Federal departments publish RFPs at different times and focus areas typically differ in each competition. Reading the RFP is hard for the uninitiated, for the same reason that reading legal documents is hard for the uninitiated. Most of us who don’t know Python would find Python source code hard to understand.

Fifth, I can’t think of any major companies that got started through SBIRs/STTRs. I did do some searching, and the NIH website gives us some examples, like Genzyme, Martek, Sonicare, and Abbott Medical Optics. There must be others, and if you know of them I’d love to hear more. In contrast to SBIR/STTR-funded companies, the list of VC- and Y Combinator-backed startups is too long to bother reciting, especially since it includes almost every large tech company.

While I don’t want to talk anyone out of applying for an SBIR/STTR, I do want to emphasize that the downsides are considerable. For many if not most startups, applying to Y Combinator is going to be more efficient than seeking SBIRs/STTRs. Still, it’s possible to do both, and for some hard tech companies “both” may be a more interesting answer than either one on its own.

EDIT: A few readers (and some callers) are incredulous that we can write scientific and technical grants; this explains how we do it, as well as some of our strengths and limitations. We’re not experts in any scientific, engineering, or technical disciplines, but we are very good at integrating material from a particular discipline for a particular project, and we’re also good at asking questions, listening to the answers, and using those answers.


* We’re using the word “startup” as VCs and founders themselves use it—as a term that denotes a small company that plans to grow fast and become a big company, usually via technology and technological innovation / deployment. In this sense most small businesses are not startups: They’re restaurants and consulting practices and nail salons and so on.

** One time I talked to a Y Combinator-backed nonprofit that wanted us to work for contingent fees, and my contact person didn’t grok why grant writers won’t work that way.

The Ooma Office Business VoIP Phone System: Trials, Tribulations, Frustrations, Fiascoes, Success (sort of), Or, Our Review

UPDATED 11/11/15, GOOD NEWS RE THE FAX!

After two months of frustration, we’ve finally figured out how to get the Ooma Office VoIP system to successfully send and receive faxes. Here’s the hack, which works with an HP LaserJet Pro M521:

You must have a fax machine that allows users to change the fax or “baud” speed. Most newer fax machines default to the fast V.34 standard; change this to the slow V.29. Next, turn off ECM (error correction mode). Then connect the fax machine’s phone line directly to the Ooma desktop device, not to a Linx wireless device. Voila, faxes work, albeit slowly. You’ll have to make some effort to find the speed and ECM settings, which will be buried in your fax machine’s menus. In my case, the info is not in the product manual, but by googling I found a 160-page Troubleshooting Guide for the M521, which explains how to do this. Our previous fax machine, a roughly seven-year-old Xerox WorkCentre 4250, does not have user-changeable speed or ECM controls. My guess is that newer fax machines have these changeable settings due to the increasing popularity of VoIP, which is not inherently compatible with the high-speed fax protocol but sometimes works with the slowest setting and ECM turned off.

The Ooma Office VoIP system works well for people in single offices who don’t need a fax machine. If you have more than one office and need a fax machine, Ooma Office may be a nightmare to set up, maintain, and get working consistently and properly (as it has been for us). Still, it does mostly work as of this writing, and we ended up teaching Ooma about a segment of their market that they didn’t know existed—so maybe they’ll improve over time.

About two months ago we decided to finally replace our fairly old, but very reliable, Avaya Partner Mail VS PBX POTS phone system with a VoIP system. Based on a very positive user survey from a large tech magazine, we picked Ooma Office.*

Although many of you will feel your eyelids get heavy around the time you finish this sentence, we’ll start by saying that replacing our Avaya landline phone system with Ooma Office turned out not to be one of our better equipment/vendor decisions. Several times during the setup process I screamed with total primal rage (not a good thing). Our tale likely won’t interest you unless you’re a) trying to pick a VoIP system for your small business, or b) starting a startup, in which case the company-client interaction dynamic should interest you greatly. We’ve written before about the “Small Business Blues: Trying to Get and Keep the Attention of Equipment Vendors is a Challenge.” This post is in its own way a continuation of that saga.

First, the good.

Ooma Office’s sound quality is high, albeit after much struggle to find the right phones. In addition, the initial hardware costs are modest and our monthly phone bills are much lower than with the old Verizon, landline-based Avaya system. A cautionary note: the Ooma Office basic service (not including 800-number charges, other frills, and taxes) is $10 per line or extension, while telcos only charge per line, often with unlimited long distance bundled. A complex Ooma system can get fairly expensive quickly compared to landlines.

The design of the Ooma Office desktop box is also excellent. So excellent that I have little to say about it apart from the fact that it could have been made by Apple. The design of the wireless “Linx” devices, which plug into wall outlets to add extensions, is similarly excellent, as is the Ooma Office Manager administrator web portal.

Ooma’s customer support is very good if you have a common problem that their front-line people can handle and is pretty good if you know how to work your way into the real support people found at “Level 3.” We’ve spent an incredible amount of time on the phone with Ooma’s tech support as we attempted to get our system working correctly.

To finish off “The Good,” Ooma has a fairly reliable iPhone app that allows an employee without an Ooma box in their office, or any employee on the go, to receive and make Ooma calls, without call forwarding. While the app is a little buggy, we view it like the dog playing the piano: It’s not that the dog plays well, it’s that he plays at all. In addition, software can be rapidly improved through updates, and we expect the app to get better over time.

The challenges.

Most of the online reviewers of Ooma Office have a single office, which might be home-based or not. If you have a single central office with up to 20 employees/extensions per Ooma box, Ooma Office should work well for you. But most online reviewers aren’t set up like Seliger + Associates: we have two offices, one in Santa Monica and one in New York City, as well as other staff who never come into either office. We need a single system dispersed across two separated offices and roaming staff, so that anyone who calls any of our numbers can reach any of us. Ooma Office doesn’t do that by default because of arcane telephony regulatory rules. It’s possible through dark arts to make this work by “merging,” or remotely linking, two or more Ooma boxes, but it’s not easy. It’s not possible for a user to set up more than one Ooma box, unless both boxes are in the same location, without a lot of Level 3 tech support.

Let’s also talk about the phone instrument issue. Most VoIP providers either sell compatible phones or provide a list of phones that have been tested with their system—RingCentral, for example, has a page listing dedicated phones. For no apparent reason, Ooma does neither. Most VoIP systems also use modern IP phones, but Ooma Office is oddly incompatible with IP phones and instead supports only analog (or POTS) phones.

In a low moment after tech support struggles I sent this to Ooma’s support and to Ooma’s CEO (some cursing to follow, but hey, that was my mindset at the time; I like to think I’m moderately eloquent even when frustrated):

We’ve been trying to get an Ooma system set up properly, and the process has been, charitably speaking, a fucking nightmare. I’m sitting here and seething with rage and frustration at the latest problem.

We bought two generic random Panasonic landline phones to use with Ooma. They sound terrible. Consequently we’re trying to find phones that don’t sound like OEM equipment Alexander Graham Bell might have used. Ideally, that equipment should also have a 3.5mm headset port, but that is apparently impossible with this class of phone. Even a 2.5mm headset port would be an improvement.

Unfortunately, finding phones that aren’t terrible is itself like searching for a needle in the proverbial haystack. There are hundreds of phones, all of which appear to have been designed in 1980 and made for people who are more than willing to buy the $21.96 phone over the $22.23 phone because it is twenty-seven cents cheaper. That is not us. We want phones that actually work. Trying to find phones that actually work has proven to be a gigantic hassle. At one point, many moons ago, Avaya was the standard. Or AT&T. Now there is no standard.

What I’d really like is a page on Ooma.com that says, “These handsets aren’t terrible.” Do you notice how, if you go to, say, Apple.com, you’ll only find stuff that actually works? That’s what I’d like. Digging through these fucking Amazon reviews for phones all of which appear superficially identical is making me nuts. The word “curated” has been debased by millions of bloggers and morons on Facebook, but it is nonetheless what I seek in this domain because I know nothing about the domain.

I called a support person who suggested I find something at Wal*Mart or Target. I live in Manhattan. This is not a helpful suggestion. You deal with phones every day. What I’d like is for someone to sort through the crap on the Internet, give us three or five good options, and then let us pick between them.

Let us consider RingCentral by comparison. There is a page, right here, that lists phones, none of which are (allegedly) shit. I could find a list of phones here, but only after much work. This shouldn’t be so hard. I can’t even find a support email address. At the moment I’m tearing out my hair and yelling at my computer in frustration. I don’t want to become a professional phone reviewer, buying and returning these things. I’m already a professional writer. One occupation is enough.

One page, with five good phones. That’s it. I can’t find it. Not on Ooma.com, not anywhere. Any ideas?

(A side note about companies and organizations: In medium and large companies the head of the organization often doesn’t fully know what’s going on at the feet of the organization. A CEO and other C-level people also only have so much attention. Sometimes politely and intelligently bringing a problem to the CEO’s attention is a way to get that problem fixed not only for the person sending the note but for everyone else who is having the problem.)

We know that Ooma is aware of the phone problem: conventional analog phones are stuck in the 1990s, when real companies and engineers were last interested in selling analog phones. Today is 2015 and the models still being sold are going to grandmas and legacy users and very occasionally to small business users like us. The people at Ooma are smart enough to realize this and smart enough to realize that they need to get their system working with IP phones or lose customers. IP phones are really just specialized computers, much as your iPhone is a specialized computer.

Analog phones, as I said previously, have not been of interest for a long time; one model we tried is so old that its default date is 2002! Think about the world of 2002 and the world of 2015 and you’ll quickly see the problem. There are no good modern analog phones. Zero. Zip. They don’t exist. Not anymore. All the R & D and product development today goes into IP phones. We did eventually find some Panasonic phones that aren’t offensive and that claim to support “HD Voice,” which is important because the increasing digitization of the phone system means that we’re moving towards a world with better audio quality.

Audio quality is more important to us than price because garbled or messed up words can cause us to lose important jobs. We’d rather spend more for quality than get the cheapest possible system.

Then there are fax issues. We heard an enormous amount of BS about faxes from Ooma support. The simple truth is that Ooma is not compatible with fax machines; virtually no VoIP systems are. This has to do with the fax protocol itself, baud rates, and other arcana. To use a physical fax machine, one needs a device called a fax bridge or ATA (analog telephone adapter) that converts incoming and outgoing faxes for transmission over VoIP. Ooma Office does not support a fax bridge or ATA, so reliable and easy faxing remains an unsolved problem for us. Ooma finally gave us a free Virtual Fax extension, which is worth about what we pay for it. Like the Ooma app, the Virtual Fax software more or less works, but it is very hard to use (I won’t bore you with the details).

Essentially, Ooma support told us to use their Virtual Fax, install landlines for our existing fax machines, or buy a cloud-based fax solution from some other vendor. As of this writing, Ooma Office does not offer a reliable integrated fax solution. This is really hard to fathom, as many small offices, like doctors’ and CPAs’, still need faxes. Don’t even think about Ooma Office if you send or receive more than a couple of faxes a week.

Ooma and the modern tech world.

Working on the Ooma Office problems is a reminder of Apple’s tremendous influence over the last decade of change. I’m just old enough to remember portable music players before the iPod. They were terrible, and they were terrible in the exact same way the Panasonic phones we bought are terrible. They were designed by someone more like me—that is to say, with no design sense—rather than someone like Jonathan Ive (that linked article is great, and if you get lost reading it and don’t come back I wouldn’t blame you one bit).

The amazing thing about contemporary life is not how many products work incredibly well but how many work shockingly poorly, or, even more commonly, almost well. That “almost” is a key factor in frustration and is probably the driving force behind consolidated review sites like The Sweethome and The Wirecutter. Just figuring out what the good stuff is can be a full-time job. The Internet has in some ways made this better—everyone starts with a Google search—and in some ways made it worse—how authoritative is the person on the other side of that search? Among Amazon reviewers, the absolute worst products tend to get trashed, but almost every other product has a mix of positive and negative reviews. Crapware like analog phones is a great example of this.

Ooma Office probably works well for people in a single office. For people like us, the system doesn’t quite work, and things that don’t quite work can be highly frustrating—especially when it’s obvious that Ooma has taken some cues from Apple and has done some things extraordinarily well (the wireless “Linx” extenders are an example of an elegant Ooma Office solution).

One of the most-read things I’ve written, ever, is a review of the modern Model M keyboard. It’s been so read in part, I think, because a) I know what I’m talking about, b) I know the problem domain well and exist in it every single day, and c) whatever my personal flaws may be, I can write a coherent sentence. Actually, I should also add d): no one is paying me to write the review. I found a product so good that I had to write about why it’s so good and why it’s better than the sea of crap keyboards out there. Professional writers and programmers are not a large segment of the keyboard-using population, but we are a segment that has particular needs that until recently weren’t being well met.

One way to read this piece is as a review of the Ooma Office system. A second, Straussian way is as an essay about the pervasive influence of Apple. There may be others.

I’m not the first person to wonder why phone quality still sounds like crap. The best quality I’ve heard is via Apple’s Facetime Audio feature, but that requires two people on iPhones (or other Apple devices) and for Facetime Audio to be specifically selected. Still, Jeff Hecht describes the larger issues in “Why Mobile Voice Quality Still Stinks—and How to Fix It: Technologies such as VoLTE and HD Voice could improve sound quality, but cellular carriers aren’t deploying them fast enough,” which I encourage you to read.

We’re not sure Ooma is going to last as a company. Ooma’s IPO appears to have failed (see also here). The company has a couple of serious problems: at the margins, many people who once would’ve bought dedicated phones are using cell phones, and archaic regulatory BS around legacy telephony means that Ooma can’t sell and configure distributed systems in a way that really makes sense. As noted above, the Ooma iPhone app is impressive in that it sort of works, but it’s not really how we want to use the system most of the time.

Perhaps the most obvious thing we’ve learned is that one should never buy a system like this if the vendor doesn’t sell all the parts. Ooma doesn’t sell any instruments. Avaya did. RingCentral does. That’s a key issue. Maybe RingCentral would’ve been no better than Ooma, and just as difficult to set up. We might yet find out.


* Don’t confuse Ooma Office with Ooma Telo, a low-end VoIP solution for the home, like MagicJack.

Seliger’s Quick Guide to Developing Grant Proposal Staffing Plans

The staffing plan is usually one of the easier and shorter parts of a grant proposal. That’s because the project description will usually imply the staffing plan. For example, a project that conducts “outreach” or health navigation is going to consist of a director or supervisor of some sort, possibly a marketing specialist to create and execute ads, and some number of navigators or outreach specialists.* There may also be an administrative assistant or data clerk.

For healthcare projects run by a Federally Qualified Health Center (FQHC), though, a staffing plan might consist primarily of healthcare providers: two doctors, three physician assistants, two nurses, a patient care assistant (PCA), a case manager, and front office staff.

It’s almost always a good idea to propose a full-time Project Manager, Project Coordinator, Program Director, or similar position, unless the budget is too small. Many federal RFPs require a full-time manager or an explanation of why one is not considered necessary.

Most staffing plans follow a generic bulleted format in which a position is listed (like a Program Director), a percentage of time devoted to the project is listed (like 100%), a description is offered (like “Will oversee day-to-day activities”), and minimum qualifications are established (like “must have at least three years of relevant management experience, along with at least a B.A. (M.A. preferred) in an appropriate field”). In many staffing plans for new programs, most staff members will be unknown, with the possible exception of the Program Manager. Consequently, you should put in a line that says something like, “Staff members will be hired following an open and fair recruitment process, in keeping with the organization’s Personnel Policy.”

That may not strictly be true—we’re well aware that many organizations have candidates in mind for particular positions—but it should be part of the proposal anyway. If it sounds like you’ve already hired and are already paying your potential staff members, you raise the dreaded specter of supplantation.

Most staff positions in most proposals should have three to four short sentences about what the staff person will do. In many cases, the explanation will be obvious even by the standards of doltish federal grant reviewers. For example, if you’re running an after school education program, you might have a listing like “Teachers (300%): At least three full-time, state-certified teachers will be hired to provide educational enhancement to at-risk youth. Teachers will cover reading, math, cuneiform, Python, and art, as noted in the project description. Teachers will have at least one year of relevant experience, as well as a bachelor’s degree in an appropriate field.” You can also say that all staff will have at least 20 hours of pre-service training, and in that training they will learn effective teaching techniques, why they should not sext with students, and how to handle disciplinary issues.

As noted above, shorter is generally better. Also, the staffing level should be large enough to cover the plausible range of activities but small enough not to blow the budget. For particularly small programs, like those with less than $100,000 per year, 1.5 or 2 Full-Time Equivalent (FTE) staff might be appropriate. In most social and human service programs, personnel costs will be the largest cost category, since each warm body is going to cost a minimum of $30,000 per year, plus benefits, which can exceed 30% for some nonprofits and most public agencies. Each additional body adds costs fast. A good rule of thumb is to assume about 80% of the budget for salaries and benefits, with 20% for everything else. There are exceptions—many job training proposals like YouthBuild include participant stipends, which can easily consume 25% of the budget—but relatively few proposed projects have this kind of large non-personnel budget cost.
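To make the rule of thumb concrete with hypothetical numbers: on a $500,000 annual budget, 80% for personnel is $400,000. If an average position costs $40,000 in salary plus 30% fringe benefits, each warm body costs $52,000, so that $400,000 supports roughly 7.5 FTEs. Run this quick arithmetic before finalizing the staffing plan; it’s the fastest way to spot a plan that’s too rich for the budget.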

It’s also a good idea to avoid oddball percentages in your staffing plan. Don’t say someone is going to have 17% or 4% of their time devoted to the project. Most positions are full-time (100%), half-time (50%), or, in rare cases, quarter-time (25%). If you have to calculate positions on an hourly basis, keep in mind that the federal standard is 2,080 person-hours in a person-year. Thus, a .5 FTE = 1,040 hours and a .25 FTE = 520 hours.

Evaluators and other consultants (e.g., social media consultants or curriculum consultants) can be listed in the staffing plan in the proposal narrative but generally are not included in the Personnel Object Cost Category in the budget, as they will usually be hired on an hourly basis or via a subcontract. In federal budgets, these costs are included in the Contractual or Other Object Cost Categories.

Finally, remember that in federal budgeting, the most important part of proposal staffing plans/budgets is that the proposed services discussed in the project description be plausible. Staffing plans and budgets don’t have to exactly match reality and will likely change, as the grantee will have to negotiate a detailed budget after the notice of grant award is received. Still, they have to meet the plausibility test that reviewers will apply.


* Always check the acronyms for proposed positions. For example, it would not be a good idea to list Community Outreach Workers for a women’s health outreach project, as you would be proposing hiring COWs. Another unintentionally funny position name we see fairly often is Peer Outreach Workers (POWs).

Good needs assessments tell stories: Data is cheap and everyone has it

If you only include data in your needs assessment, you don’t stand out from the dozens or hundreds of other needs assessments funders read for any given RFP competition. Good needs assessments tell stories: Data is cheap and everyone has it, and almost any data can be massaged to make a given target area look bad. Most people also don’t understand statistics, which makes it pretty easy to manipulate data. Even grant reviewers who do understand statistics rarely have the time to deeply evaluate the claims made in a given proposal.*

Man is The Storytelling Animal, to borrow the title of Jonathan Gottschall’s book. Few people dislike stories and many of those who dislike stories are not neurologically normal (Oliver Sacks writes movingly of such people in his memoir On the Move). The number of people who think primarily statistically and in data terms is small, and chances are they don’t read social and human service proposals. Your reviewer is likely among the vast majority of people who like stories, whether they want to like stories or not. You should cater in your proposal to the human taste for stories.

We’re grant writers, and we tell stories in proposals for the reasons articulated here and other posts. Nonetheless, a small number of clients—probably under 5%—don’t like this method (or don’t like our stories) and tell us to take out the binding narrative and just recite data. We advise against this, but we’re like lawyers in that we tell our clients what we think is best and then do what our clients tell us to do.

RFPs sometimes ask for specific data, and, if they do, you should obviously include that data. But if you have any room to tell a story, you should tell a story about the project area and target population. Each project area is different from any other project area in ways that “20% of the project area is under 200% of the Federal Poverty Line (FPL)” does not capture. A story about urban poverty is different from a story about recent immigration or a story about the plant closing in a rural area.

In addition, think about the reviewers’ job: they read proposal after proposal. Every proposal is likely to cite similar data indicating the proposed service area has problems. How is the reviewer supposed to decide that one area with a 25% poverty rate is more deserving than some other area with a 23% poverty rate?

Good writers will know how to weave data into a story, but bad writers often don’t know they’re bad writers. A good writer will also make the needs assessment internally consistent with the rest of the proposal (we’ve written before “On the Importance of Internal Consistency in Grant Proposals“). Most people think taste is entirely subjective, for bad reasons that Paul Graham knocks down in this excellent essay. Knowing whether you’re a good writer is tough because you have to know good writing to know you’re a bad writer—which means that, paradoxically, bad writers are incapable of knowing they’re bad writers.

In everyday life, people generally counter stories with other stories, rather than data, and one way to lose friends and alienate people is to tell stories that move against the narrative that someone wants to present. That’s how powerful stories are. For example, “you” could point out that Americans commonly spend more money on pets than people in the bottom billion spend on themselves. If you hear someone contemplating or executing a four- or five-figure expenditure on a surgery for their dog or cat, ruminate on how many people across the world can’t afford any surgery. The number of people who will calmly think, “Gee, it’s telling that I value the life of an animal close at hand more than a human at some remove” is quite small relative to the people who say or think, “the person saying this to me is a jerk.”

As you might imagine, I have some firsthand investigative experience with the matters described in the preceding paragraph. Many people acquire pets for emotional closeness and to signal their kindness and caring to others. The latter motive is drastically undercut when people are consciously reminded that many humans don’t have the resources Americans pour into animals (consider a heartrending line from “The Long Road From Sudan to America”: “Tell me, what is the work of dogs in this country?”).

Perhaps comparing expenditures on dogs versus expenditures on humans is not precisely “thinking statistically,” but it is illustrative about the importance of stories and the danger of counter-stories that disrupt the stories we desperately want to tell about ourselves. Reviewers want stories. They read plenty of data, much of it dubiously sourced and contextualized, and you should give them data too. But data without context is like bread instead of a sandwich. Make the reviewer a sandwich. She’ll appreciate it, especially given the stale diet of bread that is most grant proposals.


* Some science and technical proposals are different, but this general point is true of social and human services.

The unsolvable standardized data problem and the needs assessment monster

Needs assessments tend to come in two flavors: one basically instructs the applicant to “Describe the target area and its needs,” and the applicant chooses whatever data it can come up with. For most applicants that’ll be some combination of Census data, the local Consolidated Plan, data gathered by the applicant in the course of providing services, news stories and articles, and whatever else they can scavenge. Some areas have well-known local data sources; Los Angeles County, for example, is divided into eight Service Planning Areas (SPAs), and the County and United Way provide most data relevant to grant writers by SPA.

The upside to this system is that applicants can use whatever data makes the service area look worse (looking worse is better because it indicates greater need). The downside is that funders will get a heterogeneous mix of data that frequently can’t be compared from proposal to proposal. And since no one has the time or energy to audit or check the data, applicants can easily fudge the numbers.

High school dropout rates are a great example of the vagaries of data work: definitions of what constitutes a dropout vary from district to district, and many districts have strong financial incentives to avoid calling any particular student a “dropout.” The GED situation in the U.S. makes dropout statistics even harder to understand and compare: if a student drops out at age 16 and gets a GED at 18, is he a dropout or a high school graduate? The mobility of many high-school-age students makes it harder still, as do the advent of charter schools and online instruction and the decline of the neighborhood school in favor of open enrollment policies. There is no universal way to measure this seemingly simple number.*

The alternative to the “do whatever” system is for the funder to say: You must use System X in manner Y. The funder gives the applicant a specific source and says, “Use this source to calculate the relevant information.” For example, the last round of YouthBuild funding required the precise Census topic and table name for employment statistics. Every applicant had to use “S2301 EMPLOYMENT STATUS” and “S1701 POVERTY STATUS IN THE PAST 12 MONTHS,” per page 38 of the SGA.

The SGA writers forgot, however, that not every piece of Census data is available (or accurate) for every jurisdiction. Since I’ve done too much data work for too many places, I’ve become very familiar with the “(X)” in American FactFinder 2 tables—which indicates that the requested data is not available.
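If you’d rather check availability programmatically than click through tables, the Census Bureau also publishes this data through a public API. Below is a minimal Python sketch, not something from the SGA: the dataset year and the variable name S2301_C04_001E, which should correspond to the S2301 unemployment rate for the population 16 and over, are assumptions to verify at api.census.gov before citing any numbers in a proposal.

# Minimal sketch: pull one S2301 figure for Los Angeles County from the
# Census Bureau's ACS five-year subject-tables API. The year and variable
# name are assumptions; confirm both at api.census.gov before relying on them.
import json
import urllib.request

BASE = "https://api.census.gov/data/2022/acs/acs5/subject"
QUERY = "?get=NAME,S2301_C04_001E&for=county:037&in=state:06"  # LA County, CA

with urllib.request.urlopen(BASE + QUERY) as resp:
    header, *rows = json.load(resp)

for row in rows:
    record = dict(zip(header, row))
    # Values arrive as strings; an unavailable figure may come back as None
    # or a sentinel value, the API's rough equivalent of FactFinder's "(X)".
    print(record["NAME"], "unemployment rate:", record["S2301_C04_001E"])

If the variable comes back empty for your jurisdiction, you have the same problem as an “(X)” in a table, just discovered faster.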

In the case of YouthBuild, the SGA also specifies that dropout data must be gathered using a site called Edweek. But dropout data can’t really be standardized for the reasons that I only began to describe in the third paragraph of this post (I stopped to make sure that you don’t kill yourself from boredom, which would leave a gory mess for someone else to clean up). As local jurisdictions experiment with charter schools and online education, the data in sources like Edweek is only going to become more confusing—and less accurate.

If a YouthBuild proposal loses a few need points because of unavailable or unreliable data sources, or data sources that miss particular jurisdictions (as Edweek does), it probably won’t be funded, since an applicant needs almost a perfect score to get a YouthBuild grant. We should know, as we’ve written at least two dozen funded YouthBuild proposals over the years.

Standardized metrics from funders aren’t always good, and some people will get screwed if their projects don’t fit into a simple jurisdiction or if their jurisdiction doesn’t collect data in the same way as another jurisdiction.

As often happens at the juncture between the grant world and the real world, there isn’t an ideal way around this problem. From the perspective of funders, uniform data requirements give an illusion of fairness and equality. From the perspective of applicants trapped by particular reporting requirements, there may not be a good way to resolve the problem.

Applicants can try contacting the program officer, but that’s usually a waste of time: the program officer will just repeat the language of the RFP back to the applicant and tell the applicant to use its best judgment.

The optimal way to deal with the problem is probably to explain the situation in the proposal and offer alternative data. That might not work. Sometimes applicants just get screwed, and not in the way most people like to get screwed, and there’s little to be done about it.


* About 15 years ago, Isaac actually talked to the demographer who worked on dropout data at the Department of Education. This was in the pre-Internet days, and he just happened to reach the guy who worked on this stuff after multiple phone transfers. He explained why true, comprehensive dropout data is impossible to gather nationally, and some of his explanations have made it into this blog post.

People who do this kind of work rarely hear from outsiders, so when they find an interested party they’re often eager to chat about the details of what they do.

Don’t Trust Grants.Gov, Which Makes a $200,000,000 Mistake: An Example from the Teaching Health Center Graduate Medical Education (THCGME) Program

I’m preparing our weekly e-mail grant newsletter and see the Affordable Care Act – Teaching Health Center Graduate Medical Education (THCGME) Program, which, according to Grants.gov, has $20,250,000 available. Twenty million: that’s not bad but isn’t spectacular either. Good enough to include in the newsletter, especially since it appears that community health centers (CHCs) and organizations that partner with CHCs are good applicants.

Then I hunt down the RFP, which is located at an inconvenient, non-obvious spot.* The second page of the RFP says there is $230,000,000 available—more than ten times as much as the Grants.gov listing. That’s a huge difference. So huge that I’m using bold, which we normally eschew because it’s primarily hacks who resort to typographical tricks to create impact. But in this case, the magnitude of the difference necessitates extreme measures.

If you see an RFP that looks interesting, always track down the source, even if the amount of money available or number of grants available doesn’t entice you. Don’t trust grants.gov. As with chatting up strangers in a bar, you never know what you’ll find when you look deeper.


* This is why subscribing to our newsletter is a good idea: I do this kind of tedious crap so you don’t have to.