
Permitting is the big barrier to wind energy right now (beyond batteries and fundamental research)

The Department of Energy’s (DOE) Office of Energy Efficiency & Renewable Energy (EERE) wrote and posted the 2022 Offshore Wind Market Report,* which contains an astounding table showing that permitting—not research and development or manufacturing—is the big barrier to wind energy right now. Many of us assume that better batteries and more fundamental research are the main constraints on large-scale wind power deployment, but no: permitting is actually the biggest barrier, and it is blocking at least 442 times as much wind power as is presently operating:

[Table: offshore wind capacity, by development phase]

So: 42 megawatts (MW) are operating, 932 MW are under construction, and 18,581 MW are stuck in permitting processes—many of them likely related to the National Environmental Policy Act (NEPA), which is supposed to “protect” the environment but has instead been used as a cudgel to preserve the status quo and prevent substantial changes in power and transportation policy. The status quo in U.S. power production is pretty bad: high in methane and carbon emissions. Beyond the 18,581 MW in permitting purgatory, another 15,996 MW are in the “site control” phase of wind projects.
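The report’s figures are worth a quick back-of-the-envelope check; the script below is just illustrative arithmetic using the numbers from the table:

```python
# Offshore wind capacity figures from the 2022 Offshore Wind Market Report table.
operating_mw = 42        # currently operating
construction_mw = 932    # under construction
permitting_mw = 18_581   # stuck in permitting
site_control_mw = 15_996 # in the "site control" phase

# Permitting alone blocks ~442 times the operating capacity.
ratio = permitting_mw / operating_mw
print(f"Permitting backlog is {ratio:.0f}x operating capacity")

# Against ~1.2 million MW of total U.S. generating capacity,
# the backlog is still a small share of the overall grid.
total_us_mw = 1_200_000
print(f"Backlog share of U.S. capacity: {permitting_mw / total_us_mw:.1%}")
```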

Before seeing this report, I knew that NEPA (and CEQA in CA), along with parochial local political issues, were stopping wind power from being developed, but I didn’t realize the extent to which permitting was blocking wind projects. At Seliger + Associates, we’ve got some personal and business interests in seeing renewable energy projects succeed, because we’ve written many proposals for organizations and companies that are working on renewable energy projects. Consequently, and above and beyond the obvious need for renewable energy, we don’t like to see our work wasted: the United States can do a tremendous amount of R & D, but if the fruits of the R & D can’t be deployed, the R & D is in effect wasted.

Currently, it’s already technically feasible to install large amounts of offshore wind power generating capacity: we’re just not doing it. And the EERE has, as of this writing, a Funding Opportunity Announcement on the street for “The Systems Integration Solar and Wind Grid Services and Reliability Demonstration FOA.” That FOA obviously isn’t for the direct creation of new solar or wind technologies, but it’s an indication of the importance of wind-related, grant-funded projects. Total U.S. power generating capacity is vast—one source reports “1.2 million megawatts of generation capacity” as of Feb. 2022—so even 18,000 MW is a small proportion, about 1.5 percent. But permitting challenges likely dissuade would-be operators from attempting to install more. The Inflation Reduction Act (IRA), passed earlier this month, is supposed to be paired with reform of the permitting process, which is a major culprit in U.S. reliance on fossil fuels. The sooner reform hits, the better. Offshore wind minimally affects birds, so it’s one of the easiest climate- and power-related wins available to us, and much of the U.S. population is clustered along the coasts.


* The link goes to a PDF download.


“Currently, [Census] data is not loading properly:” DOL’s YouthBuild FY ’21

Needs assessment experts and data nerds know that factfinder.census.gov, the old primary portal into Census data, is dead, while the new census data portal, data.census.gov, is only somewhat alive. Last year, I started a post about the ways that data.census.gov is broken, but I abandoned it because it was too boring, even for me; last year, data.census.gov was hellaciously slow, often taking 10 seconds for a query (a needs assessment may require dozens or hundreds of queries), and many internal links simply didn’t work. Some of that seems to have been fixed: back then, for example, trying to find specific sub-data sets, like educational attainment, for a given zip code, didn’t work. I sent some feedback to the Census contact person, who was very helpful, and eventually most of the problems disappeared.

But not all, it seems; this year’s DOL YouthBuild NOFA includes a humorous instruction regarding data requirements: pages 84 – 86 offer a 20-step algorithm for acquiring poverty data. That the algorithm has 20 steps and three pages is obviously bizarre: instruction 17 notes, “A table will come up showing the Total Population, the Number in Poverty, and the Poverty Rate. Currently, the data is not loading properly and at first only the overall U.S. data will load and you will not be able to scroll any further to the right to see anything else.” Oh? “Currently, the data is not loading properly:” that seems as if it could be the theme of the new Census interface.

About 10 years ago, there was a popular link-sharing site called Digg; it introduced a now-notorious redesign that users hated, and those users abandoned it en masse, fueling the rise of Reddit, a now-popular link-sharing site. If Digg had been more careful, it probably would have maintained its previous site design for those who wanted it, while introducing the new design as a default, but not mandatory, experience. And then Digg could have iterated on the new design, figuring out what works. Reddit has somewhat learned this lesson: it now maintains two interfaces, the old one primarily living at old.reddit.com for people highly familiar with “the old Reddit,” and a newer one available by default at reddit.com. This bifurcation strategy allows a smooth transition between interfaces. The Census didn’t follow this strategy, and instead killed the old interface before the new one was really ready. Thus, bugs: the ones I’ve noticed, and the ones the Dept. of Labor noticed and mentioned specifically in the YouthBuild NOFA. The more general lesson is fairly clear: be wary of big user-interface changes. If you need Census data, though, you’ll have to use the interface as-is, since it’s the only one available.

For some reason—perhaps latent masochism?—Isaac continues to use MS Office 365 Outlook (not the free version) as an email client, instead of Apple’s Mail.app or Thunderbird, and he tells me that every time he opens Outlook, he gets an invitation to try “the new Outlook” interface. So far, he’s resisted, but he also points out that most change is positive: when S + A started in 1993, there was effectively no commercial Internet, and the only way to get Census data was to go to a Census office (if you were near a big enough city), city hall, or a large library, where it was possible to thumb through the impenetrable Census books and maps. After a year or two in business, some vendor got the idea of putting the 1990 Census data on CDs (remember those?), for quite a high price. Even though S + A was struggling to control costs, Isaac bought the CDs, since they were better than hours in a Census office or library. But then he had to buy, and install, CD drives in the Pentium PCs (remember those?) we used. A couple of years later, he stumbled into a Census data portal set up by a random university, which worked! So he tossed the CDs. When the 2000 Census came out, the feds essentially copied the university’s interface, creating factfinder.gov, and all was well until data.census.gov came along. It’ll probably be better than the old interface, at some point.

Complaining is easy and making things better is hard. In the Internet era, both complainers and makers have been empowered, and I appreciate the difference between the two. People who have fundamental responsibility for a product, service, or organization, including the responsibility for making hard decisions that aren’t going to be popular with everyone, have a different perspective than those who can just complain and move on. So I don’t want to be a drive-by complainer, as so many are on “social” media, which seems poisonous to institutional formation and coherence. But, despite those caveats, the instruction from DOL regarding the Census being broken is perversely funny.


A description of scientific and technical grant writing, found in an unexpected place

I have rarely, if ever, seen an explicit, reasonably accurate description of the grant application process in a general nonfiction book—until this weekend, when I was reading Steve Levine’s book The Powerhouse: America, China, and the Great Battery War. It’s an excellent and highly recommended book in general, but it also has a passage detailing Argonne National Laboratory’s efforts to become a major battery research and development hub. Becoming an R&D hub required a response to a complex federal Funding Opportunity Announcement (FOA, which is another term for RFP). Levine characterizes the process this way:

The assessment [of Argonne’s proposal] lasted two days. Madia was harsh. Argonne’s vision—its “story”—did not shine through. The narrative was “buried deep in the science.” The scientific sections were adequate as far as they went, but the team’s priority was to craft the story so that a nonexpert—like members of the judging team who were not battery guys—could understand it. As the proposal stood, it failed to meet this standard.

More problematically, the proposal seemed actually to ignore some provisions in the FOA. The FOA had stipulated a serious commitment to applied science. Madia judged the appropriate balance at about 60 percent research and the rest development and deployment. The Argonne team had proposed an 80 percent emphasis on basic research—clearly too much.

He raised a couple of other points—there were too many “whats” and not enough “hows”; each time the proposal said the team intended to do something, it should provide an example of how it would be done. Madia was troubled. The previous summer, he had seen a preliminary draft and said much the same.

In the version of the story told by Levine, however, Madia doesn’t fix the proposal himself, which isn’t very helpful. Nonetheless, Grant Writing Confidential readers should notice many points we regularly make in posts like “Proposals need to tell stories.” Scientific and technical proposals are not exempt from this rule. In addition, he who has the gold makes the rules: you must follow the FOA guidelines, or follow them as best you can.

The “follow the FOA guidelines” rule has been much on our minds in the last several weeks because we’ve been working on Early Head Start (EHS) applications for the Department of Health and Human Services, and the EHS RFP contains this instruction: “Applicants must prepare the project description statement in accordance with the following instructions while being aware of the specified evaluation criteria in Section V.1. Criteria.” In other words, the funder put the mandatory headers in one section, starting on page 33 of the RFP, but also included other required material under “Application Review Information” on page 54. The two sections don’t match, either. The evaluation criteria say, for example, “Evidence of community engagement in the proposed geographic locations that is designed to improve service delivery, increase access to services, and prevent duplication,” but the instructions on page 33 omit that. A would-be applicant who attends only to page 33 will miss the vital material on page 54.

The Argonne grant-writing team evidently faced a similar problem, as shown by the misplaced balance between basic and applied research. The cited difference between the “what” and the “how” is more interesting, though. Some technical proposals don’t have a lot of “how” in them because the proposer doesn’t know how the task will be achieved. If the proposer already understands how the task can be achieved, he sometimes doesn’t need the money—because the problem has already been solved. If we already know how to do something, it’s not research, it’s implementation.

Sometimes, though, the “how” is fairly well-known. A few months ago, we finished working on a technical proposal for an alternative energy technology R & D project, and the Department of Energy has funded the application. The “how” on that application appeared to be fairly clear. I want to explain what made it clear, but I can’t really do so without giving away too much of the client’s domain expertise.

Overall, Levine’s Argonne story demonstrates why many people choose to specialize in specific fields, then hire experts in fields not their own. The scientific luminaries and visionaries at Argonne haven’t specialized in the grant writing process. They may be working on battery breakthroughs essential to the future of the world, but knowing how to conduct research and knowing how to explain the research to the rest of the world are different skills.


First HRSA, Now DOL: Simpler Forms and Reasonable Templates in the FY ’16 YouthBuild FOA

A few weeks ago we noticed that “HRSA made it harder for NAP applicants to shoot themselves in the foot;” now it appears that DOL is getting in the game. In this year’s YouthBuild SGA, DOL includes a form called “WORKSHEET_weighted_average.xlsx,” which models what previous YouthBuild SGAs have only instructed applicants to do regarding unemployment rates. Years ago applicants could do pretty much whatever they wanted regarding unemployment rates, using any data sources, but over time DOL has gotten more and more specific, presumably so that they’re comparing homogeneous numbers.

Today, calculating weighted average unemployment rates isn’t hard, exactly, but we’d bet that DOL got all kinds of interesting, incompatible responses to these instructions, from the 2015 YouthBuild FOA:

The applicant must provide weighted average unemployment rate (rounded to one decimal place) of the combined cities or towns identified as part of the target community(ies) compared to the national unemployment rate as of the latest available comparable data. This data is broken into two youth age subsets: 16 – 19 and 20 – 24. Applicants will have to average the unemployment rate for these two age groups by adding the populations together and then dividing by the total population.

We know how to model this in Excel, but we shouldn’t have had to: DOL should’ve included a template long ago. Last year we wrote a post about how “Funders Could Provide Proposal Templates in Word,” and doing so would likely raise the quality of the average proposal submitted while simultaneously reducing the busy work of applicants. Funders aren’t incentivized to do this, save by the knowledge of what they’ll get if they don’t provide templates, and consequently they don’t.*
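For the curious, the calculation DOL describes is easy to sketch outside of Excel. Here’s a minimal illustration in Python; the city names and labor-force counts are entirely hypothetical:

```python
# Hypothetical labor-force and unemployed counts for two target cities,
# broken into DOL's two youth age subsets: 16-19 and 20-24.
# Each tuple is (labor force, number unemployed).
cities = {
    "City A": {"16-19": (1_200, 240), "20-24": (2_000, 300)},
    "City B": {"16-19": (800, 200),   "20-24": (1_500, 270)},
}

# The weighted average: sum populations and unemployed counts across all
# cities and age groups, then divide -- rather than averaging the rates,
# which would over-weight small cities.
total_labor_force = sum(lf for city in cities.values() for lf, _ in city.values())
total_unemployed = sum(u for city in cities.values() for _, u in city.values())

weighted_rate = 100 * total_unemployed / total_labor_force
print(f"Weighted average youth unemployment rate: {weighted_rate:.1f}%")
```

The same aggregate-then-divide logic is what the DOL worksheet encodes; a plain average of the four rates would give a different (and wrong) answer whenever the groups differ in size.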

Still, there are downsides to the DOL approach. Applicants must now collect and aggregate specific data points for all the zip codes they’re serving, rather than choosing a different geographical unit, like a city or county, that ordinary humans understand. Few people say, “I really love living in zip code 66666.” But they might say, “Austin is great!”

Those of us who’ve done data work on large numbers of zip codes know how irritating that can be. I’m thinking of a particular project I worked on a couple months ago that had dozens of zip codes in the target area, and I never could figure out how to really expedite the process via the Census’s powerful, yet maddeningly Byzantine, website. There was (and is) probably an efficient way of doing what I was doing, but I never figured it out. The Census website is hardly the first piece of software with fantastically sophisticated abilities that most users never learn because the learning curve itself is so steep.
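For what it’s worth, the Census Bureau also exposes a data API that can sidestep the website for bulk zip-code work. The sketch below builds a batch query for ZCTA-level total population; the endpoint vintage, the variable code, and the sample zip codes are assumptions for illustration — verify them against the Census API documentation before relying on them:

```python
from urllib.parse import urlencode

# Sketch: request total population (ACS table B01003) for a batch of
# ZCTAs (zip code tabulation areas) via the Census API instead of
# clicking through data.census.gov one geography at a time.
BASE = "https://api.census.gov/data/2021/acs/acs5"  # assumed vintage/endpoint

def build_zcta_query(variables, zctas):
    """Build a Census API URL requesting `variables` for the given ZCTAs."""
    params = {
        "get": ",".join(["NAME"] + list(variables)),
        "for": "zip code tabulation area:" + ",".join(zctas),
    }
    return BASE + "?" + urlencode(params)

# B01003_001E is the ACS "total population" estimate; zips are examples.
url = build_zcta_query(["B01003_001E"], ["90401", "90404", "10001"])
print(url)
# Fetching is omitted here: urllib.request.urlopen(url) plus json.load
# would complete the pipeline, since the API returns rows as JSON arrays.
```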

Overall, though, the simple, included form in this year’s YouthBuild SGA will probably lead to better proposals. We’re a little sad to see it, though, because conforming to the form makes it harder for crafty grant writers like us to weave threads of cherry-picked and obfuscated data into an elegant, but sometimes specious, needs assessment tapestry that is the coin of our realm.


* Given the unstated role of signaling in proposals, which we write about at the link, funders might be incentivized to make the grant process harder, not easier.


The Ooma Office Business VoIP Phone System: Trials, Tribulations, Frustrations, Fiascoes, Success (sort of), Or, Our Review

UPDATED 11/11/15, GOOD NEWS RE THE FAX!

After two months of frustration, we’ve finally figured out how to get the Ooma Office VoIP system to successfully send and receive faxes. Here’s the hack, which works with a HP LaserJet Pro M521:

You must have a fax machine that allows users to change the fax or “baud” speed. Most newer fax machines default to the fast V.34 standard; change this to the slow V.29. Next, turn off ECM (error correction mode). Then connect the fax machine’s phone line directly to the Ooma desktop device, not a Linx wireless device. Voila, faxes work, albeit slowly. You’ll have to make some effort to find the speed and ECM settings, which will be buried in your fax machine’s menus. In my case, the info is not in the product manual, but by googling I found a 160-page troubleshooting guide for the M521 that explains how to do this. Our previous fax machine, a roughly seven-year-old Xerox WorkCentre 4250, does not have user-changeable speed and ECM settings. My guess is that newer fax machines have these settings due to the increasing popularity of VoIP, which is not inherently compatible with the high-speed fax protocol but sometimes works with the slowest setting and ECM turned off.

The Ooma Office VoIP system works well for people in single offices who don’t need a fax machine. If you have more than one office and need a fax machine, Ooma Office may be a nightmare to set up, maintain, and get working consistently and properly (as it has been for us). Still, it does mostly work as of this writing, and we ended up teaching Ooma about a segment of their market that they didn’t know existed—so maybe they’ll improve over time.

About two months ago we decided to finally replace our fairly old, but very reliable, Avaya Partner Mail VS PBX POTS phone system with a VoIP system. Based on a very positive user survey from a large tech magazine, we picked Ooma Office.*

Although many of you will feel your eyelids get heavy around the time you finish this sentence, we’ll start by saying that replacing our Avaya landline phone system with Ooma Office turned out not to be one of our better equipment/vendor decisions. Several times during the setup process I screamed with total primal rage (not a good thing). Our tale likely won’t interest you unless you’re a) trying to pick a VoIP system for your small business, or b) starting a startup, in which case the company-client interaction dynamic should interest you greatly. We’ve written before about the “Small Business Blues: Trying to Get and Keep the Attention of Equipment Vendors is a Challenge.” This post is in its own way a continuation of that saga.

First, the good.

Ooma Office’s sound quality is high, albeit after much struggle to find the right phones. In addition, the initial hardware costs are modest and our monthly phone bills are much lower than with the old Verizon, landline-based Avaya system. A cautionary note: the Ooma Office basic service (not including 800-number charges, other frills, and taxes) is $10 per line or extension, while telcos only charge per line, often with unlimited long distance bundled. A complex Ooma system can easily get fairly expensive quickly compared to landlines.

The design of the Ooma Office desktop box is also excellent. So excellent that I have little to say about it apart from the fact that it could be made by Apple. The design of the wireless “Linx” devices that plug into wall outlets to extend the number of extensions is similarly excellent, as is the Ooma Office Manager administrator web portal.

Ooma’s customer support is very good if you have a common problem that their front-line people can handle and is pretty good if you know how to work your way into the real support people found at “Level 3.” We’ve spent an incredible amount of time on the phone with Ooma’s tech support as we attempted to get our system working correctly.

To finish off “The Good,” Ooma has a fairly reliable iPhone app that allows an employee without an Ooma box in their office, or any employee on the go, to receive and make Ooma calls, without call forwarding. While the app is a little buggy, we view it like the dog playing the piano: It’s not that the dog plays well, it’s that he plays at all. In addition, software can be rapidly improved through updates, and we expect the app to get better over time.

The challenges.

Most of the online reviewers of Ooma Office have a single office, which might be home-based or not. If you have a single central office, with up to 20 employees/extensions for each Ooma box, Ooma Office should work well for you. Most online reviewers aren’t set up like Seliger + Associates: we have two offices, one in Santa Monica and one in New York City, as well as other staff who never come into either office. But we need a single system dispersed across two separated offices and roaming staff, so that anyone who calls any of our numbers can get any of us. Ooma Office doesn’t do that by default because of arcane telephony regulatory rules. It’s possible through dark arts to make this work by “merging” or remote linking of two or more Ooma boxes, but it’s not easy. It’s not possible for a user to set up more than one Ooma box, unless both boxes are in the same location, without a lot of Level 3 tech support.

Let’s talk, too, about the phone instrument issue. Most VoIP providers either sell compatible phones or provide a list of phones that have been tested with their system—RingCentral, for example, has a page with dedicated phones listed. For no apparent reason, Ooma does neither. Most VoIP systems also use modern IP phones, but Ooma Office is oddly incompatible with IP phones and instead only supports analog (or POTS) phones.

In a low moment after tech support struggles I sent this to Ooma’s support and to Ooma’s CEO (some cursing to follow, but hey, that was my mindset at the time; I like to think I’m moderately eloquent even when frustrated):

We’ve been trying to get an Ooma system set up properly, and the process has been, charitably speaking, a fucking nightmare. I’m sitting here and seething with rage and frustration at the latest problem.

We bought two generic random Panasonic landline phones to use with Ooma. They sound terrible. Consequently we’re trying to find phones that don’t sound like OEM equipment Alexander Graham Bell might have used. Ideally, that equipment should also have a 3.5mm headset port, but that is apparently impossible with this class of phone. Even a 2.5mm headset port would be an improvement.

Unfortunately, finding phones that aren’t terrible is itself like searching for a needle in a proverbial haystack. There are hundreds of phones, all of which appear to have been designed in 1980 and made for people who are more than willing to buy the $21.96 phone over the $22.23 phone because one is seven cents cheaper than the other. That is not us. We want phones that actually work. Trying to find phones that actually work has proven to be a gigantic hassle. At one point, many moons ago, Avaya was the standard. Or AT&T. Now there is no standard.

What I’d really like is a page on Ooma.com that says, “These handsets aren’t terrible.” Do you notice how, if you go to, say, Apple.com, you’ll only find stuff that actually works? That’s what I’d like. Digging through these fucking Amazon reviews for phones all of which appear superficially identical is making me nuts. The word “curated” has been debased by millions of bloggers and morons on Facebook, but it is nonetheless what I seek in this domain because I know nothing about the domain.

I called a support person who suggested I find something at Wal*Mart or Target. I live in Manhattan. This is not a helpful suggestion. You deal with phones every day. What I’d like is for someone to sort through the crap on the Internet, give us three or five good options, and then let us pick between them.

Let us consider Ring Central by comparison. There is a page, right here, that lists phones, none of which are (allegedly) shit. I could find a list of phones here, but only after much work. This shouldn’t be so hard. I can’t even find a support email address. At the moment I’m tearing out my hair and yelling at my computer in frustration. I don’t want to become a professional phone reviewer, buying and returning these things. I’m already a professional writer. One occupation is enough.

One page, with five good phones. That’s it. I can’t find it. Not on Ooma.com, not anywhere. Any ideas?

(A side note about companies and organizations: In medium and large companies the head of the organization often doesn’t fully know what’s going on at the feet of the organization. A CEO and other C-level people also only have so much attention. Sometimes politely and intelligently bringing a problem to the CEO’s attention is a way to get that problem fixed not only for the person sending the note but for everyone else who is having the problem.)

We know that Ooma is aware of the phone problem: conventional analog phones are stuck in the 1990s, when real companies and engineers were last interested in selling analog phones. Today is 2015 and the models still being sold are going to grandmas and legacy users and very occasionally to small business users like us. The people at Ooma are smart enough to realize this and smart enough to realize that they need to get their system working with IP phones or lose customers. IP phones are really just specialized computers, much as your iPhone is a specialized computer.

Analog phones, as I said previously, have not been of interest for a long time; one model we tried is so old that its default date is 2002! Think about the world of 2002 and the world of 2015 and you’ll quickly see the problem. There are no good modern analog phones. Zero. Zip. They don’t exist. Not anymore. All the R & D and product development today goes into IP phones. We did eventually find some Panasonic phones that aren’t offensive and that claim to support “HD Voice,” which is important because the increasing digitization of the phone system means that we’re moving towards a world with better audio quality.

Audio quality is more important to us than price because garbled or messed up words can cause us to lose important jobs. We’d rather spend more for quality than get the cheapest possible system.

Then there are fax issues. We heard an enormous amount of BS about faxes from Ooma support. The simple truth is that Ooma is not compatible with any fax machines; virtually no VoIP systems are. This has to do with the fax protocol itself, baud rates, and other arcana. To use a physical fax machine, one needs a device called a fax bridge or ATA (analog telephone adapter) that converts incoming and outgoing faxes to VoIP. Ooma Office does not support a fax bridge or ATA, so reliable and easy faxing remains an unsolved problem for us. Ooma finally gave us a free Virtual Fax extension, which is worth about what we pay for it. Like the Ooma app, the Virtual Fax software more or less works but is very hard to use (I won’t bore you with the details).

Essentially, Ooma support told us to use their Virtual Fax, install landlines for our existing fax machines, or buy a cloud-based fax solution from some other vendor. As of this writing, Ooma Office does not offer a reliable integrated fax solution. This is really hard to fathom, as many small offices, like those of doctors and CPAs, still need faxes. Don’t even think about Ooma Office if you send or receive more than a couple of faxes a week.

Ooma and the modern tech world.

Working on the Ooma Office problems is a reminder of Apple’s tremendous influence over the last decade of change. I’m just old enough to remember portable music players before the iPod. They were terrible, and they were terrible in the exact same way the Panasonic phones we bought are terrible. They were designed by someone more like me—that is to say, with no design sense—rather than someone like Jonathan Ive (that linked article is great, and if you get lost reading it and don’t come back I wouldn’t blame you one bit).

The amazing thing about contemporary life is not how many products work incredibly well but how many work shockingly poorly, or, even more commonly, almost well. That “almost” is a key factor in frustration and is probably the driving force behind consolidated review sites like The Sweethome and The Wirecutter. Just figuring out what the good stuff is can be a full-time job. The Internet has in some ways made this better—everyone starts with a Google search—and in some ways made it worse—how authoritative is the person on the other side of that search? Among Amazon reviewers, the absolute worst products tend to get trashed, but almost every other product has a mix of positive and negative reviews. Crapware like analog phones is a great example of this.

Ooma Office probably works well for people in a single office. For people like us, the system doesn’t quite work, and things that don’t quite work can be highly frustrating—especially when it’s obvious that Ooma has taken some cues from Apple and has done some things extraordinarily well (the wireless “Linx” extenders are an example of an elegant Ooma Office solution).

One of the most-read things I’ve written, ever, is a review of the modern Model M keyboard. It’s been so read in part, I think, because a) I know what I’m talking about, b) I know the problem domain well and exist in it every single day, and c) whatever my personal flaws may be, I can write a coherent sentence. Actually, I should also add d): no one is paying me to write the review. I found a product so good that I had to write about why it’s so good and why it’s better than the sea of crap keyboards out there. Professional writers and programmers are not a large segment of the keyboard-using population, but we are a segment with particular needs that until recently weren’t being well met.

One way to read this piece is as a review of the Ooma Office system. A second, Straussian way is as an essay about the pervasive influence of Apple. There may be others.

I’m not the first person to wonder why phone quality still sounds like crap. The best quality I’ve heard is via Apple’s Facetime Audio feature, but that requires two people on iPhones (or other Apple devices) and for Facetime Audio to be specifically selected. Still, Jeff Hecht describes the larger issues in “Why Mobile Voice Quality Still Stinks—and How to Fix It: Technologies such as VoLTE and HD Voice could improve sound quality, but cellular carriers aren’t deploying them fast enough,” which I encourage you to read.

We’re not sure Ooma is going to last as a company. Ooma’s IPO appears to have failed (see also here). The company has a couple of serious problems: at the margins, many people who once would’ve bought dedicated phones are using cell phones. Archaic regulatory BS around legacy telephony means that Ooma can’t sell and configure distributed systems in a way that really makes sense. As noted above, the Ooma iPhone app is impressive in that it sort of works, but it’s not really how we want to use the system most of the time.

Perhaps the most obvious thing we’ve learned is that one should never buy a system like this if the vendor doesn’t sell all the parts. Ooma doesn’t sell any instruments. Avaya did. Ring Central does. That’s a key issue. Maybe Ring Central would’ve been no better than Ooma, and just as difficult to set up. We might yet find out.


* Don’t confuse Ooma Office with Ooma Telo, a low-end VoIP solution for the home, like Magic Jack.

Posted on 2 Comments

Don’t Trust Grants.gov, Which Makes a $200,000,000 Mistake: An Example from the Teaching Health Center Graduate Medical Education (THCGME) Program

I’m preparing our weekly e-mail grant newsletter and see the Affordable Care Act – Teaching Health Center Graduate Medical Education (THCGME) Program, which, according to Grants.gov, has $20,250,000 available. Twenty million: that’s not bad but isn’t spectacular either. Good enough to include in the newsletter, especially since it appears that community health centers (CHCs) and organizations that partner with CHCs are good applicants.

Then I hunt down the RFP, which is located at an inconvenient, non-obvious spot.* The second page of the RFP says there is $230,000,000 available—more than ten times as much as the Grants.gov listing. That’s a huge difference. So huge that I’m using bold, which we normally eschew because it’s primarily hacks who resort to typographical tricks to create impact. But in this case, the magnitude of the difference necessitates extreme measures.

If you see an RFP that looks interesting, always track down the source, even if the amount of money available or number of grants available doesn’t entice you. Don’t trust grants.gov. As with chatting up strangers in a bar, you never know what you’ll find when you look deeper.


* This is why subscribing to our newsletter is a good idea: I do this kind of tedious crap so you don’t have to.

Posted on Leave a comment

Grants.gov Dies Again: The Race to the Top-District (RTTT-D) Competition Eschews the Feds’ Main Grant Portal

Last year, Isaac noted this about the vaunted Race to the Top-District competition:

Perhaps the strangest aspect of this oddball RFP process were the submission requirements. For reasons obscured by the fog of government ineptitude, the Department of Education chose not to use its G5 system, which recently replaced their “eGrants” digital submission portal, or our old pal, grants.gov.

Instead, we were suddenly back in 1997, with a requirement for an original and two hard copies, along with the proposal files on a CD! I guess the Department of Education has not read the digital memo about saving paper. One proposal we completed was 270 pages, with appendices. Another was 170 pages.

This year, page 5 of the RFP (as paginated at the footer; as paginated by Word, it’s page 6) says:

Applications for grants under this competition must be submitted in electronic format on a CD or DVD, with CD-ROM or DVD-ROM preferred, by mail or hand delivery. The Department strongly recommends the use of overnight mail.

We’ve had Grants.gov for about a decade. Every time we hear about government interest in technology and transparency and environmentalism, we think about putting a plastic disk in a FedEx envelope and launching it by truck/jet/truck to the Department of Education, where it is printed.

A lot of carbon emissions and folderol could be eliminated by a Grants.gov upload. The Department of Education also warns that, if they can’t open the files on your CD and print your application, they’ll simply throw it out.

Last year, by the way, it took us—people who do this all the time—hours to figure out how to create a technically correct submission package. We’ve learned, through blood and tears, the challenges of Grants.gov. Now we’ve got yet another weird system to slay, courtesy of Arne Duncan’s bureaucratic brain trust.

Posted on Leave a comment

How I track needs assessments and other grant proposal research

Bear with me. I’m about to discuss a topic that might recall horrific memories from high school history or college English, but I promise that, this time, I’m discussing research methods that are a) simple and b) relevant to your life as a grant writer in a nonprofit or other setting.

The single best way I’ve found to track grant research is described in Steven Berlin Johnson’s essay “Tool for Thought.” You can safely go read Johnson’s essay and skip the rest of this post, because it’s that good. I’m going to describe the way Johnson uses Devonthink Pro (DTP) and give some examples that show how useful this innovative program is in a grant writing context.

The problem is this: you’re a grant writer. If you’re any good, you’re probably writing/producing at least one proposal every three months, and there’s a solid chance you’re doing even more than that—especially if you have support staff to help with the production side of proposals. Every proposal is subtly different, yet each has certain commonalities. Many also require research. In the process of completing a proposal, you do the research, find a bunch of articles and maybe some books, write the needs assessment, cite a bunch of research there, and perhaps cite more research in the evaluation section or elsewhere, depending on the RFP.

You finish the proposal and you turn it in.

You also know “One of the Open Secrets of Grant Writing and Grant Writers: Reading.” You see something about your area’s economy in the local newspaper. You read something about the jobs situation in The Atlantic. That book about drug prohibition—what was it called again? Right, Daniel Okrent’s Last Call—has a couple of passages you should write down because they might be useful later.

But it’s very hard to synthesize any of this material in a coherent, accessible manner. You can keep a bunch of Word documents scattered in a folder. You can develop elaborate keyword systems. Such efforts will work for a short period of time; they’ll work when you have four or five or six proposals and a couple dozen key quotes. They won’t work when you’ve been working for years and have accumulated thousands of research articles, proposals, and quotes. They won’t work when you know you need to read about prisoner re-entry but you aren’t sure if you tagged everything related to that subject with prisoner re-entry.

That’s where DTP comes in: its “See Also” function, which performs associative searches on large blocks of text to find how things might be related in subtle ways. Maybe you use the word “jail” and “drugs” without using the word “prisons” in a paragraph. If you search for “prisons,” you might not find that other material, but DTP might. This is a contrived example, but it helps show the program’s power.
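DTP’s actual “See Also” algorithm is proprietary, but the general idea can be sketched in a few lines of Python. This toy version (all function names and sample notes are invented for illustration) ranks notes by shared-vocabulary overlap rather than exact keyword hits, which is why a note about “jail” and “drugs” can surface in a search that starts from a note about “prisons”:

```python
# A toy sketch of associative "See Also" matching; this assumes nothing
# about Devonthink Pro's real algorithm and simply ranks notes by
# shared-vocabulary overlap (Jaccard similarity) instead of exact
# keyword matches.
def words(text):
    """Crude tokenizer: lowercase the text and split on whitespace."""
    return set(text.lower().split())

def see_also(query_note, notes):
    """Rank notes by vocabulary overlap with the query note."""
    q = words(query_note)
    scored = []
    for title, body in notes.items():
        w = words(body)
        scored.append((len(q & w) / len(q | w), title))  # Jaccard score
    return sorted(scored, reverse=True)

notes = {
    "re-entry": "prisons release programs reduce recidivism among ex-offenders",
    "drug courts": "jail diversion for drugs offenses cuts recidivism",
    "jobs": "local manufacturing employment fell sharply last decade",
}
# A literal search for "prisons" finds only the first note; ranking by
# similarity to it also surfaces the jail/drugs note via the shared
# term "recidivism".
others = {k: v for k, v in notes.items() if k != "re-entry"}
ranked = see_also(notes["re-entry"], others)
```

Real tools use far more sophisticated statistics than this, but the principle is the same: related material is found through overlapping context, not through the one keyword you happened to remember.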

Plus, chances are that if you read an article six years ago—or, hell, six months ago—you’re probably not going to remember it. Unless you’re uncommonly organized, you’re not going to find the material you might really need. DTP lets you drop the information in the program to let the program do the heavy lifting by remembering it. I don’t mean to sound like an advertisement, but DTP works surprisingly well.

Let’s keep using the example I started above and imagine that your nonprofit provides re-entry services to ex-offenders. You’ll probably end up writing the same basic explanation of how your program conducts intake, assessment, planning, service delivery, and follow-up in myriad different ways, depending on the funder, the page limit, and the specific questions being asked. You want a way to store that kind of information. DTP does this very well. The trick is keeping text chunks between about 50 words and 500 words, as Johnson advises. If chunks run longer, you won’t be able to read through what you have and find material quickly.

Consequently, a 3,000-word project services section would probably overwhelm you next time you’re looking for something similar. But a 500-word description of your agency’s intake procedure would be very manageable.
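Johnson’s 50-to-500-word rule is easy to enforce mechanically. A hypothetical helper (the thresholds are his; the function itself is just an illustration) might gate each chunk before it goes into the library:

```python
# Hypothetical gatekeeper for a research library, applying Johnson's
# 50-500 word guideline from "Tool for Thought": chunks should be short
# enough to skim but long enough to stand alone.
def chunk_size_ok(text, low=50, high=500):
    """Return True if the chunk's word count falls within [low, high]."""
    n = len(text.split())
    return low <= n <= high

intake_note = " ".join(["word"] * 120)        # a 120-word description: fine
services_section = " ".join(["word"] * 3000)  # a 3,000-word section: too big
```

Here the 120-word intake note passes while the 3,000-word services section fails, which is exactly the judgment you’d make by hand: split the big section into its component procedures before filing it.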

The system isn’t perfect. The most obvious flaw is in the person doing the research: you need a certain amount of discipline to copy/paste and otherwise annotate material. The payoff is slow at first, because DTP libraries actually become more useful as they accumulate material. You also need to learn how to exploit DTP to the maximum feasible extent (free proposal phrase here). But once you’ve done that, you’ll have a very fast, very accurate way of finding things that can make your grant writing life much, much easier. (Incidentally, this is also how I organize blog posts, and DTP often refers me back to earlier posts I would otherwise have forgotten about.)

Right now, DTP is only available on MacOS, but there is similar functionality in programs like Evernote or Zoho Notebook, which are cross-platform. I can’t vouch for those programs because I’ve never used them, but others online have discussed them. Used correctly, however, DTP is a powerful argument for research-based writers to use MacOS.

Posted on 3 Comments

A Lesson in Passthrough Funds and Capacity Building: ACF’s Non-Profit Capacity Building Program NOFA

If you read this week’s grant newsletter, you probably saw the NOFA for the Administration for Children and Families’ “Non-Profit Capacity Building Program,” which I first thought meant “pass-through funds,” since the purpose is “to increase the capacity of a small number of intermediary grantees to provide specific assistance to improve the sustainability of and expand services provided by small and midsize nonprofits in communities facing resource hardship challenges.” Do you know what would help those agencies? Money.

Unfortunately, I was wrong: it initially looks like pass-through funds but isn’t.

If you dig into the NOFA, you’ll find that “specific assistance” means that applicants should propose activities like “a comprehensive strategy of various learning activities and methods to increase the knowledge, skills, and abilities of recipients to implement performance management systems as well as any other best practice areas to target for improvement.” If I were a small nonprofit, I’d prefer that “specific assistance” mean “direct funding,” but here it doesn’t; the best you can do is use “a small portion of funds [. . .] to provide minor capital investments in the capacity of certain recipients such as the purchase of specific software or systems to improve infrastructure.”

I’m guessing that, if you asked the nonprofits “facing resource hardship challenges” to be “helped” by well-meaning but paternalistic intermediary organizations whether they’d prefer “various learning activities and methods” or cold, hard cash, they’d prefer the latter. Isaac has written extensively on the challenges and opportunities nonprofits face in the current climate, and a dearth of training hasn’t been among them. As he said, when donations and contracts dry up, smart nonprofits turn to grants. Less smart ones disappear. I think the ACF’s nominal purpose in running this program is to help small nonprofits. Its real purpose, however, is to help the intermediary nonprofits that are supposed to run a variation on train-the-trainers.

How do you do that? The NOFA itself says that “applicants will focus their organizational development assistance program on developing and implementing performance management systems that enable organizations to measure their progress and improve their performance towards intended outcomes.” So it wants nonprofits to basically act like Accenture, IBM Global Services, or the other big consultants that are frequently the target of Dilbert. We’ve written a number of funded proposals over the years to do activities like this, and one key is understanding what I’ve laid out above: you’re passing out training, not money.

You don’t see a huge number of pass-through awards because they just increase administrative friction: ACF is already paying staffers to write the RFP, review applications, and so forth; it isn’t going to give grants to “intermediary” nonprofits to… write a mini-RFP, solicit applications, review applications, and so forth, probably at a cost of 10 – 30% of the grant. You’ll find pass-through grants at the state level, but very rarely lower than that.

Foundation appeal clients occasionally want to run variations on pass-through programs. Some clients, for example, will provide scholarships to people with a particular illness, like Groat’s disease. We tell them not to do this, however, because if the funder wants to fund any kind of cash payment scheme, they’ll do so directly and cut out the middleman. You want to look like something more than the middleman. Foundations mostly like direct services. As the “Non-Profit Capacity Building Program” shows, so do the feds.

Posted on 11 Comments

Why Winning an Olympic Gold Medal is Not Like Getting a Carol M. White Physical Education Program (PEP) Grant

A .01 second difference can separate an Olympic gold medalist from a silver medalist in swimming, and a five-minute difference may separate her from the hapless competitor representing Lower Slabovia. The fastest swimmers win medals and the slowest swimmers get new Speedos. Think of the intrepid ski jumper, Eddie the Eagle, in the 1988 Winter Olympics. He didn’t come close to winning a medal, but he seemed to enjoy competing and falling off the ski jump.

Many grant applicants, after years of watching the Olympics and similar sports competitions, are under the delusion that, if their application receives the highest review score, the grant will automatically be awarded. But regardless of what is true in the real world,* the proposal world is different.

We recently completed a Carol M. White Physical Education Program (PEP) proposal for a small, rural Midwestern school district (or local education agency (LEA) in edu-speak). Our contact, the superintendent, was an amiable fellow with about 30 years of experience as a school superintendent and about 30 minutes of experience as a grant applicant. When chatting at the end of the assignment, he said something along the lines of, “I hope our application gets the highest number of points so that we get funded.” I put him on hold, opened up the RFP, and found this version of the bad news language I knew would be lurking somewhere (in this case on page 127 of 152, in Section 5506, “Administrative Provisions,” Subpart b, “Proportionality,” rather than “grant award procedures,” where one would expect it):

(b) PROPORTIONALITY- To the extent practicable, the Secretary shall ensure that grants awarded under this subpart shall be equitably distributed among local educational agencies and community-based organizations serving urban and rural areas.

I explained to our incredulous client that grant awards are often made for reasons other than high point totals. In the example above, the Department of Education is reserving its right to use “proportionality” regarding “urban and rural areas” to divvy up the pot. I have no idea what “proportionality” means in this context, other than that it can be used to make an award to any applicant the Department feels like funding.

There is a caveat: the applicant usually has to submit a technically correct proposal and reach the minimum score. After that, apparently, anything goes. Funding decisions are often made for all kinds of reasons: urban/rural (in the example cited above, I guess no suburban applicants will be funded, since suburbs are not mentioned as a possibility), politics (upcoming elections tend to grab the attention of federal decision makers), geography (Senator Foghorn Leghorn to Secretary Arne Duncan: “Tell me again, Mr. Secretary, why have no PEP grants been awarded in Alabama in five years?”), perceived or stated target population (e.g., African American, Latino, children with special needs, etc.), experienced/inexperienced applicants, and who knows what else.

Our client was a bit crestfallen when I explained the above, but I told him to cheer up. We think we helped him submit a technically correct proposal, which is no small achievement given the fantastic complexity of the PEP RFP and spectacularly confusing directions. His district is also fairly representative of other small, rural school districts. If his application is one of only a few technically correct proposals from similar school districts in his state/region, the chances of funding will go up enormously. Since I know from decades of experience that many more urban districts are likely to apply for PEP than rural districts, and a lot of these are likely to screw up their applications, our client’s chances are probably pretty good. I’ll find out along with everyone else when the funding announcements are made in a few months, because, as I always tell callers, we’re grant writers, not fortune tellers.

In case you think I’m picking on PEP, here are a few other examples of the same weasel words from other recent federal and state RFPs selected at random for this post:

  • From the “Office of Safe and Drug-Free Schools Grants for the Integration of Schools and Mental Health Systems” RFP: “Review and Selection Process: Additional factors we consider in selecting an application for an award are the equitable distribution of grants among the geographical regions of the United States and among urban, suburban, and rural populations.”
  • From the “Intellectual Property Enforcement Program: FY 2010 Competitive Grant Announcement:” “Absent explicit statutory authorization or written delegation of authority to the contrary, all final grant award decisions will be made by the Assistant Attorney General (AAG), who may also give consideration to factors including, but not limited to, underserved populations, geographic diversity, strategic priorities, past performance, and available funding when making awards.”
  • From the “Teen Pregnancy Prevention Community Challenge Grant (CCG) Program” from the California Department of Public Health: “Additionally, OFP will seek to achieve equitable and balanced funding via geographic distribution across California at its discretion.”

To try this exercise at home, put on safety glasses and a rubber apron, then search for the words “the secretary” or “geographical” in almost any federal RFP and you will find some version of the above.
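If you’d rather automate the exercise (minus the safety glasses), a short Python sketch can flag those phrases in an RFP you’ve already converted to plain text (e.g., with pdftotext); the phrase list and sample passage below are illustrative assumptions, not an official list:

```python
# A sketch of the "try this at home" search: scan an RFP's extracted
# plain text for the discretionary-award phrases discussed above.
import re

# Illustrative phrase list; add whatever weasel words you collect.
WEASEL_PHRASES = ["the secretary", "geographical", "equitable distribution"]

def find_weasel_words(rfp_text):
    """Return each phrase mapped to the sentences it appears in."""
    hits = {}
    # Naive sentence split: break after periods or semicolons.
    sentences = re.split(r"(?<=[.;])\s+", rfp_text)
    for phrase in WEASEL_PHRASES:
        hits[phrase] = [s for s in sentences if phrase in s.lower()]
    return hits

sample = ("Additional factors we consider in selecting an application for an "
          "award are the equitable distribution of grants among the "
          "geographical regions of the United States. The Secretary shall "
          "ensure that grants are equitably distributed.")
```

Running `find_weasel_words(sample)` flags all three phrases in this two-sentence excerpt, which is roughly the hit rate you should expect from an actual federal RFP.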

This curious aspect of grant writing can play out in strange ways, as confirmed in this recent Wall Street Journal article by Jonathan Weisman and Alex P. Kellogg, “Obama Courts Stimulus Doubters”. Oddly, the relatively nondescript Holland, MI, is, according to this article, “a community awash in stimulus dollars.” Holland “has seen a big infusion of cash from the president’s economic stimulus plan: hundreds of millions of dollars for new automotive battery plants, tens of millions for schools, as well as millions more for housing, small businesses, university research and transportation.”

Pretty strange for a city with a population of about 20,000 in Ottawa County, which has around 250,000 residents. Call me cynical, but, unless there is a hidden nest of grant writers in Holland, the reason for this tsunami of stimulus dollars is likely that this region of Michigan used to have lots of automotive-related manufacturers, most of which have long since gone the way of the Studebaker. It would make a great story, particularly for the 2012 election, if a sprinkling of federal fairy dust in the form of stimulus grants caused green-job industries to flourish.

While I have no way of confirming this, I suspect there are pin maps in various federal agencies with a bullseye on Holland and other charmed communities. As Bob Dylan put it in “Idiot Wind,” “I can’t help it if I’m lucky.” It seems Holland is lucky. And while grant applicants can’t make their luck, they can work hard to submit compelling, technically correct proposals, ideally with some aspect of program design that makes them stand out, and wait for that congratulatory phone call from their congresswoman letting them know that the Secretary of Whatever Federal Department has used “other factors” to shove their proposal to the top of the funding heap.

But this assumes their proposal is complete and technically correct. Until you get at least that far, you have virtually no chance at all.

EDIT: Also see our follow-up post, “True Tales of a Department of Education Grant Reviewer.”


* For an incredibly confusing take on the “real world” versus the “non-real world of dreams,” pack an overnight bag and go see the imaginative but interminable Inception. Jake observed that none of the characters use computers or cell phones in this terminally hip film, while I noted that all the male actors wore suits and there was no swearing and no sex. It is like being in an IBM sales office circa 1970. Too bad Ross Perot didn’t have a cameo.