Research Blog

Measurement-related studies, resources, and initiatives


Monetizing brand value


Q. We have many anecdotes that suggest our programs positively affect our company’s brand.  We track PR impressions, but how do we calculate ROI?

Companies spend billions of dollars a year on advertising and PR to build and protect their brands.  And for good reason: brand can communicate value to customers, recruits, or business partners — and thus prove vital to increasing (or protecting) sales or reducing costs.

The key insight here is that brand strength is not an end in itself; rather, it is valuable only to the degree it influences behaviors that in turn benefit the bottom line.  It is these bottom-line outcomes that you’ll need to track (or estimate) in order to monetize the value.

So, your tracking of media impressions is a good start.  Your goal, however, should be to gather more information along the chain of assumed cause-and-effect: Are these impressions reaching your target audience?  Are they generating awareness or changing attitudes?  And ultimately, are they changing behaviors among customers, recruits, business partners, or other stakeholders in ways that either increase revenues (e.g., higher margins or sales volume) or reduce costs (e.g., lower recruiting, training, or issues-management costs)?

Impractical?  Not necessarily.  Keep in mind that “more information” doesn’t mean “perfect information.”  Indeed, try the following exercise: map out your cause-and-effect “logic model” and then make up estimates for each juncture point.  You had a million impressions.  What percentage reached your target audience?  Put 50%.  What percentage of those people gained new awareness of your company?  Put 50% again.  And so on.  At the end of the chain will be a number that represents how many people have changed a sales- or cost-related behavior according to your logic model.  Multiply this number by the average sales increase or average cost savings, and the result represents the monetary value of that particular brand impact.
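To make the exercise concrete, here is a minimal sketch in Python of how such a logic-model chain might be computed.  Every figure in it is an invented placeholder: the 50% guesses come from the exercise above, while the 2% behavior-change rate and $25 average value are made up purely for illustration.

    # Hypothetical logic-model walkthrough: all figures are placeholders,
    # to be replaced with your own estimates.
    impressions = 1_000_000       # total media impressions tracked
    pct_target_audience = 0.50    # share of impressions reaching the target audience
    pct_new_awareness = 0.50      # share of those gaining new awareness
    pct_behavior_change = 0.02    # share ultimately changing a sales- or
                                  # cost-related behavior
    avg_value_per_change = 25.00  # average sales increase or cost savings
                                  # per changed behavior, in dollars

    people_changed = (impressions * pct_target_audience
                      * pct_new_awareness * pct_behavior_change)
    brand_impact_value = people_changed * avg_value_per_change

    print(f"People changing behavior: {people_changed:,.0f}")       # 5,000
    print(f"Estimated monetary value: ${brand_impact_value:,.2f}")  # $125,000.00

Written this way, each assumption sits on its own line, ready to be swapped out as better data surface.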

Of course, this calculation is meaningless because you’ve put arbitrary numbers in your model.  But now consider how you might improve your various estimates.  Do the PR, advertising, or marketing departments have results from past campaigns, research reports, or pilot tests to inform your model?  How about industry averages?  Even pure speculation by experienced colleagues can be valuable.  Often a bit of scavenging can surface valuable data and produce useful ballpark estimates.  And if more precise data are required, your logic model can help guide a more formal data-collection process.

(NB: Macro-level brand valuation methodologies – such as those pioneered by Interbrand – deduce the earnings value attributable to a company’s brand overall.  Although useful in certain contexts, such methodologies cannot measure the impacts of specific programs or activities.)



But Does It Work? How best to assess social program performance


As usual, Jed Emerson moves the ball forward with his cogent article on assessing program performance, “But Does It Work?” (Stanford Social Innovation Review, Winter 2009 - subscription required).

By describing the Edna McConnell Clark Foundation’s three-level hierarchy of effectiveness, he shares a valuable roadmap for outcome measurement among nonprofit programs: apparent effectiveness (tracking outcomes), demonstrated effectiveness (comparing the outcomes to those of program non-participants), and proven effectiveness (comparing participant and non-participant outcomes via formal, randomized controlled trials).

However, for the vast number of nonprofits that forgo evaluation entirely because such concepts seem overwhelming and beyond their capacity, codifying a concept of “estimated effectiveness” as a preliminary assessment stage might help.

In our experience, encouraging such organizations to at least estimate their own outcomes, based on the results of programs successfully implemented elsewhere, promotes the kind of critical thinking and “logic modeling” that too often goes ignored.  The simple process of mapping cause-and-effect assumptions not only does worlds of good for strategic planning and program design, but frequently also clarifies which inputs, outputs, and outcomes might be tracked to achieve that first – often surprisingly accessible – stage of measuring effectiveness.

