When trying to understand your social impact, whether as a donor or a nonprofit, there is a tension between what counts as meaningful measurement and what is practical to collect.
On the one hand, reporting simple activity numbers—like dollars spent or hours volunteered, known as inputs—falls short of showing actual results.
On the other, expecting nonprofits to conduct lengthy academic studies just to satisfy funders is often impractical and runs afoul of Trust-Based Philanthropy.
The sweet spot lies somewhere in between: metrics that are impactful yet achievable.
We like to visualize this tension in a diagram we call The Impact Continuum.
To help you determine if your process hits that balance, we suggest starting with three key questions. These questions are designed to help you think critically about the data you’re gathering, ensuring it’s not only manageable but also meaningful.
Are you an investor or a management consultant? (Outcome vs. process measures)
Consider the following standard philanthropy metrics: how much money you invested, to how many nonprofits, funding how many trainings, which accompanied how many donated goods or services, delivered to how many beneficiaries. Whew! That’s a lot of data. But is it useful?
If you’re like most corporate foundations or community relations departments, not really. Most likely, you are an investor interested in achieving specified goals in areas like education, the environment, health and social services, or the arts. Instead of tracking those process measures, seek metrics that reflect a philanthropic investor mindset: for example, how many homeless individuals are housed, how many kids reach reading proficiency, or how many cases of disease and their ripple effects are avoided because of your investments.
If we’ve learned nothing else from the Trust-Based movement, it’s that process decisions belong to the organizations and individuals with boots on the ground (a.k.a. your nonprofit partners).
Is perfection holding you back? Behold the power of proxy data.
Will your board wait 20 years for you to report on the long-term outcomes of your investments in early childhood education? Or even for those investments with more immediate social impact, does your staff have the time, expertise, or funding to help all of your grantees design and implement randomized controlled trials (i.e., the academic gold standard of evaluation) to isolate your social impacts? Probably not.
Rather than give up on outcome measurement as impractical, consider the power of proxy data: results from comparable programs and activities that can stand in for direct measurement of your own.
Your nonprofit partners can rely on program sampling, historical performance, or third-party studies of comparable programs to make reasonable estimations of what their own programs are currently achieving. (By the way, this is standard operating procedure in virtually every other business function that also lacks access to perfect information.)
You'll be on solid ground as long as you are transparent about your estimates and your sources are credible. One helpful practice is to incorporate a few simple, short-term performance measures into your programs to confirm they are tracking with your assumptions.
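To make the idea concrete, here is a minimal sketch of how a proxy-based estimate might work, assuming a hypothetical job-training program and a benchmark rate drawn from a third-party study of a comparable program (all figures and names below are invented for illustration):

```python
# A minimal sketch of a proxy-based outcome estimate. All figures and names
# are hypothetical; substitute your own program data and benchmark source.

def estimate_outcomes(participants: int, benchmark_rate: float) -> float:
    """Estimate program outcomes by applying a benchmark success rate
    (e.g., from a third-party study of a comparable program) to the
    number of participants actually served."""
    return participants * benchmark_rate

# Example: a job-training program served 250 participants; a published
# evaluation of a comparable program reports a 40% sustainable-employment rate.
participants_served = 250
comparable_program_rate = 0.40  # proxy: third-party study of a similar program

estimated_jobs = estimate_outcomes(participants_served, comparable_program_rate)
print(f"Estimated sustainable jobs: {estimated_jobs:.0f}")
# Be transparent: report the estimate alongside its source and assumptions.
```

The point is not precision but transparency: report the estimate alongside the benchmark it came from, so readers can judge the assumption for themselves.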
Are your reporting guidelines generating mere data points or useful information?
Let’s imagine a hypothetical: your board is filled with hard-nosed business types who live and breathe bottom-line value, accountability, and continuous improvement. So you’ve structured your grantee reporting guidelines around long-form questions that attempt to capture outcomes in 3,000 words or less:
- What was the impact of your program?
- What are your greatest successes?
- How did your program create value for your beneficiaries, and how did your actual performance compare with your expectations?
Such open-ended questions are seductive: they invite the grantee to think expansively about their impacts, potentially resulting in a richer haul of results to report. The problem is that the responses arrive in narrative form, which can’t be rolled up, compared, or otherwise data-vizzed the way numbers can.
Long-form narrative questions also leave it to each grantee to define progress on their own terms, invariably resulting in a mix of “apples and oranges” that makes the responses impossible to compare, even if they could somehow be aggregated. What’s left is a lot of data (unorganized facts) but not many insights (data structured in a useful, meaningful way).
Using closed-ended questions that guide nonprofits toward standardized units of measure (e.g., “How many individuals achieved sustainable jobs through your program?”) allows you to do two things at once:
- Quickly calculate the cumulative results of investments across multiple programs
- Promote continuous improvement by identifying the program designs that achieve the greatest success and sharing them with programs that are underperforming (see the sketch below)
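Here is a minimal sketch of what those two benefits look like in practice, using invented grantee reports that all answer the same closed-ended question about sustainable jobs (program names and numbers are hypothetical):

```python
# A minimal sketch of why standardized units of measure pay off. The grantee
# reports below are hypothetical; each answers the same closed-ended question
# ("How many individuals achieved sustainable jobs through your program?").

grantee_reports = [
    {"program": "Job Readiness A", "participants": 250, "sustainable_jobs": 100},
    {"program": "Job Readiness B", "participants": 180, "sustainable_jobs": 45},
    {"program": "Job Readiness C", "participants": 300, "sustainable_jobs": 165},
]

# 1. Roll up cumulative results across all programs.
total_jobs = sum(r["sustainable_jobs"] for r in grantee_reports)
print(f"Total sustainable jobs across portfolio: {total_jobs}")

# 2. Identify the strongest program designs by success rate,
#    so their approaches can be shared with underperforming programs.
ranked = sorted(
    grantee_reports,
    key=lambda r: r["sustainable_jobs"] / r["participants"],
    reverse=True,
)
for r in ranked:
    rate = r["sustainable_jobs"] / r["participants"]
    print(f"{r['program']}: {rate:.0%} of participants placed")
```

Because every grantee reports in the same unit, the roll-up takes one line and the ranking immediately surfaces which program designs are worth sharing, which is exactly what narrative responses can’t do.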
In our 20+ years of experience at True Impact, these are the fundamental principles and practical techniques we rely on to establish metrics that help you prove, and improve, the value of your community investments.
Want to see how our platform can make this easy and produce meaningful results that you can use to prove the value you are creating for your company and community? Let’s find time to connect and talk through it.