I’m watching an oncoming train wreck in slow motion.
Recently, the Commonwealth of Massachusetts launched an experiment that will reward two groups of social service nonprofits if they can prove their techniques work, or pay them little or even no money if they fail.
That is a risky proposition, and if I were a nonprofit executive, I’d be nervous:
The agencies will fund the upfront cost of their programs by borrowing money from foundations, philanthropists, and other investors, who will receive a portion of the additional payments from the state if these efforts succeed. If they don’t, the investors will likely be out their upfront investment.
The idea is to transfer the risk of failure from the state to the financial backers of nonprofits, and to insulate taxpayers from programs that don’t work.
Left unsaid is that the nonprofit’s credibility will also take a hit. That’s a bigger risk than investors losing modest amounts of money.
So what is driving this experiment in risk management? Two things:
- New technology and data-collection efforts can more accurately measure outcomes.
- The poor economy has encouraged states to adopt a business mindset.
So here’s the oncoming train wreck: The Massachusetts Legislature authorized up to $50 million for these initiatives with the goal of better results at a lower cost, and taxpayers pay only if certain benchmarks are reached.
Does this mean Massachusetts is going to tell these groups what the outcomes should be and what to measure so the investors can feel their money was well-spent?
Don’t get me wrong: Statistics are necessary because you want to be sure that your programs are doing what you think they’re doing. These days, though, it seems that the list of required statistics grows longer as funders want 100% certainty that their grants are being used effectively.
There’s a saying among statisticians: Correlation is not causation. Or, put another way: just because A and B are related doesn’t mean that A caused B. If a child in your after-school tutoring program gets an A in math, your program probably contributed to that outcome, but other variables you’re not measuring likely contributed as well.
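For the data-minded, here’s a toy simulation of exactly this trap. It’s a purely hypothetical sketch in Python (made-up numbers, not from any real program): the tutoring program has zero direct effect on grades, yet enrolled kids still score higher, because a hidden variable, family support, drives both enrollment and grades.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hidden confounder: family support drives BOTH enrollment
# in the tutoring program AND math grades.
family_support = rng.normal(0, 1, n)

# Enrollment isn't random -- supportive families enroll more often.
enrolled = (family_support + rng.normal(0, 1, n)) > 0

# Grades depend ONLY on family support in this toy model;
# the program itself has zero direct effect.
grade = 70 + 10 * family_support + rng.normal(0, 5, n)

# Yet enrolled students still score higher on average,
# and enrollment correlates with grades.
print(f"enrolled mean grade:     {grade[enrolled].mean():.1f}")
print(f"not-enrolled mean grade: {grade[~enrolled].mean():.1f}")
print(f"correlation(enrolled, grade): {np.corrcoef(enrolled, grade)[0, 1]:.2f}")
```

A funder looking only at that correlation would credit (or blame) the program for something it didn’t do, which is exactly why a single metric can mislead.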
This is the sort of thinking that sets Mazarine Treyz’s hair on fire. Treyz, who runs the Wild Woman Fundraising site, posted recently:
Real, lasting change cannot be reduced to a single metric like overhead or numbers of people “served”. Changing a culture or an institution is typically too sloppy, random, never-ending, and elusive to be captured by a mathematical formula or metric.
This is one of the things I find incredibly frustrating about working on social issues like homelessness or addiction. You cannot reasonably expect someone who has been abusing drugs or alcohol for 20 years to clean themselves up in 30 days. If a client relapses while in your program, does that mean your program isn’t successful? No; that’s the nature of addiction. So when would the outcome be considered “achieved”?
Another example: If your organization advocates for the civil rights of immigrant women and children who are fleeing violence, there isn’t necessarily an economic outcome attached to that mission even though it is a valuable outcome. So where would this type of organization fit in a “pay for performance” scheme?
Another concern I have is that this hyper-focus on data could end up stifling innovation even more than it is already stifled: “It’s a near-religious belief that organizations must not risk donor funds intended for charitable purposes on some new endeavor that might lose money.”
Do we really think we can solve societal problems by applying market-based solutions? Count me among the skeptics.
I’m going to let Alison Bernstein have the last word from her article, “Metrics Mania: The Growing Corporatization of U.S. Philanthropy” (hat tip to Mazarine):
The challenge posed by metrics mania and false bottom lines is the assumption of a one-size-fits-all model. Foundations are too diverse and the problems they hope to address effectively are too complex to be reduced to a metric model.
Thanks for reading!