Tracking the Business and Social ROI of Grantmaking

CECP 2012 Philanthropy Summit

This month’s excellent 2012 CECP Corporate Philanthropy Summit made plenty of room for tackling important measurement issues.

As host of one of the breakout sessions—Tracking the Business and Social ROI of Grantmaking—I had the pleasure of leading a rich, interactive discussion among practitioners from companies such as Citigroup, Dow Chemical, Credit Suisse, and Lockheed Martin, in which we explored several key challenges and potential solutions.  A few of the conceptual highlights:

  • Are you an investor or a management consultant? (Outcome vs. process measures).  Consider the following common philanthropy metrics: how much money you invested, to how many nonprofits, funding how many trainings, which accompanied how many donated goods or services, delivered to how many beneficiaries.  Whew!  That’s a lot of data.  But is it useful?

    If you’re like most corporate foundations or community relations departments, not really.  Most likely, you are an investor interested in achieving specified goals in areas like education, the environment, health and social services, or the arts.  So, seek metrics that reflect those interests: for example, how many homeless individuals are housed, how many kids reach reading proficiency, or how many diseases and their ripple effects are avoided as a result of your investments.  With limited staff, you’re not in a position to analyze the processes used to achieve these outcomes and provide improvement guidance.  Leave that – and the process measures that go with it – to the management consultants.
     
  • Is perfection holding you back?  (The power of proxy data).  Will your board wait 20 years for you to report on the long-term outcomes of your investments in early childhood education?  Or even for those investments with more immediate social impact, does your staff have the time, expertise, or funding to help all of your grantees design and implement randomized controlled trials (i.e., the academic gold standard of evaluation) to isolate your social impacts?  Probably not.

    But rather than give up on outcome measurement as impractical, consider the power of proxy data: results from other activities – for example, program sampling, historical performance, or third-party studies of programs similar to your own – that permit you to make reasonable estimations of what your own programs are currently achieving.  (And by the way, this is standard operating procedure in virtually every other business function that also lacks access to perfect information.)  As long as you are transparent about your estimations – and your sources are credible – you’ll be in good standing.  Just try to incorporate some simple, short-term performance measures into your programs to help confirm that actual results are tracking with your assumptions.
     
  • Are your reporting guidelines generating mere data points or true information? (Ensuring useful results).  Your board is stuffed with hard-nosed business types who live and breathe bottom-line value, accountability, and continuous improvement.  So, you’ve structured your grantee reporting guidelines with questions that have a laser-focus on outcomes: What was the impact of your program?  Describe your greatest successes.  How did your program create value for your beneficiaries, and how did your actual performance compare with your expectations?

    Such open-ended questions are seductive: they invite the grantee to think expansively about their impacts, potentially resulting in a richer haul of results to report.  But the problem is, the responses will be in narrative form (which can’t be rolled up, like numbers can), and they leave it to each individual grantee to define the terms of their own progress, invariably resulting in a mix of “apples and oranges” (making the responses incomparable, even if they could be rolled up).  What’s left is a lot of data (unorganized facts) but not much information (data structured in a way that is useful).

    Using more close-ended questions that prescribe the units of measure – e.g., How many previously unemployed individuals achieved sustainable jobs (i.e., lasting over 12 months) through your program? – lets you easily calculate the cumulative results of investments across multiple programs.  It also promotes continuous improvement: you can identify the program designs that achieve the greatest success and share them with the investments that are underperforming.
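To make the proxy-data idea concrete, here is a minimal sketch of how a benchmark rate from a credible third-party study might be applied to your own reach numbers to produce an estimated outcome range rather than a false-precision point value. All figures and names below are hypothetical, not drawn from any actual program.

```python
# Hypothetical proxy-based outcome estimate: apply a success rate taken from
# a third-party study of similar programs to your own beneficiary counts,
# and report a range to reflect the uncertainty of the estimate.

def estimate_outcomes(beneficiaries_served: int,
                      proxy_success_rate: float,
                      margin: float = 0.10):
    """Return a (low, high) estimate range instead of a single point value."""
    point = beneficiaries_served * proxy_success_rate
    return (point * (1 - margin), point * (1 + margin))

# Illustrative inputs: 2,000 kids served; a comparable third-party study
# found 35% reached reading proficiency.
low, high = estimate_outcomes(beneficiaries_served=2_000,
                              proxy_success_rate=0.35)
print(f"Estimated proficient readers: {low:.0f}-{high:.0f}")
```

Being transparent about the source of the proxy rate, and about the margin you applied, is what keeps an estimate like this credible.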
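The payoff of close-ended questions with prescribed units is that responses can be summed and compared mechanically. A minimal sketch, using hypothetical grantee names and figures:

```python
# Illustrative roll-up of close-ended grantee responses. Because every grantee
# reports the same unit (individuals placed in jobs lasting 12+ months),
# results can be both accumulated and compared across programs.

grantee_reports = [
    {"grantee": "Program A", "placed_12mo": 45, "participants": 120},
    {"grantee": "Program B", "placed_12mo": 80, "participants": 150},
    {"grantee": "Program C", "placed_12mo": 30, "participants": 140},
]

# Cumulative result across the whole portfolio.
total = sum(r["placed_12mo"] for r in grantee_reports)

# Comparable placement rates, for identifying top-performing designs.
rates = {r["grantee"]: r["placed_12mo"] / r["participants"]
         for r in grantee_reports}
best = max(rates, key=rates.get)

print(f"Total sustainable placements: {total}")
print(f"Highest placement rate: {best} ({rates[best]:.0%})")
```

Narrative responses permit neither operation: they cannot be summed, and they cannot be ranked against one another.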

In our experience at True Impact, these are some of the fundamental principles and practical techniques for establishing metrics that help both prove and improve the value of your community investments.

What would you add?
