University of Chicago Professor Harold Pollack addresses how the term "metric" can be used in two different ways -- and thus serve as either a force for good or ill -- in this post about job training programs.
A snippet characterizing the all-too-common "ill" side:
The federal government and other funders indeed require an incredible profusion of program activity data. These data are useful to document that you've actually spent their money to deliver services, and to characterize the people you have served. Such performance data -- metrics, if you will -- are collected in nice binders and are placed on the shelf, where they generally reside, blissfully undisturbed.
These binders don’t get much use because they can’t really tell policymakers such as Senator Conrad what’s actually working and how we should target (say) job training resources to do the most good.
In the social sector, one of the contributors to such DRIP (data rich, information poor) situations is the traditional logic model. These cause-and-effect illustrations can be useful for fleshing out how an organization expects its investment to ultimately improve a social condition (its "theory of change"), but they too often lead to overly complex and burdensome measurement regimes, focused on input and output process metrics at the expense of outcome metrics. A sketch of the distinction follows.
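To make the distinction concrete, here is a minimal, hypothetical sketch of a simplified logic model expressed as a data structure. The stage names (inputs, activities, outputs, outcomes) follow the conventional logic-model chain; the specific metrics are invented for illustration and are not drawn from any particular program or tool.

```python
# A hypothetical, simplified logic model. Stage names follow the conventional
# inputs -> activities -> outputs -> outcomes chain; the example metrics are
# invented for illustration.
logic_model = {
    "inputs":     ["grant dollars spent", "trainer hours"],        # process metrics
    "activities": ["workshops delivered"],                         # process metrics
    "outputs":    ["participants completing training"],            # process metrics
    "outcomes":   ["participants employed after 6 months",         # outcome metrics
                   "average wage gain"],
}

def count_metrics(model):
    """Tally process vs. outcome metrics to show where measurement effort lands."""
    process = sum(len(v) for stage, v in model.items() if stage != "outcomes")
    outcome = len(model["outcomes"])
    return process, outcome

process, outcome = count_metrics(logic_model)
print(f"process metrics: {process}, outcome metrics: {outcome}")
# A regime dominated by process metrics is the DRIP pattern described above:
# plenty of activity data, little evidence of actual impact.
```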
We look forward to addressing this issue in greater detail soon. For now, consider the simplified logic model structure we've incorporated into an online tool.