As usual, Jed Emerson pushes the ball forward with his cogent article on assessing program performance, “But Does It Work?” (Stanford Social Innovation Review, Winter 2009 - subscription required).
By describing the Edna McConnell Clark Foundation’s three-level hierarchy of effectiveness, he shares a valuable roadmap for outcome measurement among nonprofit programs: apparent effectiveness (tracking outcomes), demonstrated effectiveness (comparing the outcomes to those of program non-participants), and proven effectiveness (comparing participant and non-participant outcomes via formal, randomized controlled trials).
However, for the vast numbers of nonprofits that forgo evaluation entirely because such concepts seem overwhelming and beyond their capacity, codifying a concept of “estimated effectiveness” as a preliminary assessment stage might help.
In our experience, encouraging such organizations to at least estimate their own outcomes, based on the results of programs successfully implemented elsewhere, promotes the kind of critical thinking and “logic modeling” that too often go ignored. The simple process of mapping cause-and-effect assumptions not only does worlds of good for strategic planning and program design, but frequently also clarifies which inputs, outputs, and outcomes might be tracked to reach that first – often surprisingly accessible – stage of measuring effectiveness.