Monday, March 28, 2016
I’m a fervent advocate of the use of performance information as an important management and decision-making tool.  So, it’s not a natural act for me to raise this kind of question.

However, several articles in a recent issue of Public Administration Review have caught my attention and challenge some of my assumptions. 

Performance measurement is about measuring the performance of government and its programs – but what about measuring the effectiveness of performance measures themselves?  Ironically, it turns out that this is harder than you might think.

Business gurus Tom Peters and Peter Drucker have often been cited as saying: “What gets measured gets done.”  That has been a mantra of many managers over the years.  But the authors of these articles show that it isn’t that simple . . . and maybe not even true.

 

Background.  First, though, why am I an enthusiast?  I have seen cases where performance management has been used to help organizations focus on and achieve results.  In recent years, the federal government has documented instances where the introduction of performance management systems has led to:

  • Reduced crime on Indian reservations
  • Increased use of electronic deposits by federal aid recipients
  • Virtually eliminated chronic homelessness among veterans
  • Reduced the incidence of smoking
  • Improved health and education outcomes
  • Reduced wait times for government customers

And when performance data is combined with analyses of non-performance data (such as weather or day of the week) and with evidence-based approaches to big data, it has helped predict and reduce crime in local communities.

But shouldn’t there be a longer list?  Both the Chief Financial Officers Act of 1990 and the Government Performance and Results Act of 1993 (GPRA) presumed that if financial and performance information were made available to career managers and political decision makers, better decisions would be made.  Reports over the past 20 years by the Government Accountability Office have found that only about one-third of managers in the executive branch use performance information to any great extent.  The Congressional Research Service concludes that congressional use “remains to be seen.”

New laws – the GPRA Modernization Act of 2010 and the Digital Accountability and Transparency Act – have tried to close this gap.  The first requires senior leaders to convene regular forums for performance-related conversations and decisions, and to report publicly on a more frequent basis.  The second presumes that more immediate, more granular financial information will be more useful to managers.

This all brings us to a pair of recent academic articles. 

 

Measuring Doesn’t Matter (Part 1).  Professor Ed Gerrish, at the University of South Dakota, examines in his article “the current state of performance management research” using quantitative techniques.  He concludes: “The act of measuring performance may not improve performance, but managing performance might.”

To reach this conclusion, he conducted an Internet search based on selected keywords that yielded 24,737 records, of which only 49 qualified as acceptable studies of public performance management for inclusion in the final analysis.  These 49 studies contained 2,188 “effect sizes,” and these “effects” were the units used in his analysis.  The 49 studies were sorted into six policy areas (such as law enforcement, healthcare, and social services), then ordered by date of publication (1988–2014).
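For readers unfamiliar with the mechanics, here is a minimal sketch of how a meta-analysis pools “effect sizes” across studies.  This is not Gerrish’s actual procedure, and the numbers are invented; it shows only the simplest inverse-variance (fixed-effect) pooling:

    import numpy as np

    # Hypothetical effect sizes (standardized mean differences) and their
    # sampling variances for a handful of made-up studies.
    effects = np.array([0.05, 0.12, 0.30, -0.02, 0.18])
    variances = np.array([0.010, 0.020, 0.015, 0.008, 0.025])

    # Inverse-variance weighting: more precise studies (smaller variance)
    # count more toward the pooled average effect.
    weights = 1.0 / variances
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))

    print(f"pooled effect: {pooled:.3f} (standard error {pooled_se:.3f})")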

He found that the effect of simply measuring performance was “typically considered to be negligible.”  However, he also found that “performance management systems that use best practices are two to three times more effective than the ‘average’ performance management system.”

Best practices included training, leadership support, voluntary (rather than mandated) adoption, and a mission orientation.

Does the policy area affect whether performance management is effective?  Apparently not: “It does not appear that the impact of performance management in particular policy areas is nearly as systematic as how they are implemented.”

“Meta-regressions find that performance management systems tend to have a small but positive average impact on performance in public organizations.” . . . “When combined with performance management best practices . . . the mean effect size is much larger; two or three times as large.”  And while this may seem small, “it is larger than other recent examinations of policy interventions.”
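A meta-regression, sketched minimally, regresses each study’s effect size on characteristics of the study.  The toy example below uses hypothetical numbers and unweighted ordinary least squares (a real meta-regression would weight studies by their precision); it shows how a “best practices” indicator would surface as a premium over the average effect:

    import numpy as np

    # Hypothetical effect sizes plus a 0/1 flag for whether the system
    # followed best practices (training, leadership support, etc.).
    effects = np.array([0.05, 0.12, 0.30, -0.02, 0.18, 0.35])
    best_practice = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])

    # Ordinary least squares: the intercept is the average effect without
    # best practices; the slope is the extra effect when they are present.
    X = np.column_stack([np.ones_like(effects), best_practice])
    (intercept, slope), *_ = np.linalg.lstsq(X, effects, rcond=None)
    print(f"average effect: {intercept:.3f}, best-practice premium: {slope:.3f}")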

So, the impact is at least positive!  Still, Dr. Gerrish hedges his conclusions, noting: “there is still unexplained variation in the impact of performance management on performance in the public sector.”

When I consulted with some colleagues and experts in the field, they noted that the studies were done before the Modernization Act took effect (which they think will have a positive impact on the use of performance information – at least hope springs eternal).  They also noted that the literature review did not take into account studies undertaken by governmental audit or evaluation offices, or by think tanks (e.g., RAND, MITRE) – where practitioners might come to different conclusions than academic articles do.

 

Measuring Doesn’t Matter (Part 2).  While the conclusions of the first article might be easily explained away, the conclusions of a second article are a bit more distressing.  In this case, a pair of Danish academics at Aarhus University, Martin Baekgaard and Soren Serritzlew, used experimental methods that focus on “subjects’ ability to correctly assess information rather than on their attitudinal responses.”

Using an experimental design, they surveyed 1,784 Danish citizens, who were representative of the population as a whole.

They asked various subsets of citizens about their prior beliefs, and then presented them with unambiguous performance information and asked them to interpret the data.  The survey asked respondents to interpret data on hip operations in a “public” vs. a “private” hospital, and to rate which hospital had performed better.  For another subset of citizens, they reversed the labels on the data (switching “public” and “private”) and re-ran the survey.  They then asked a third subset, with similar characteristics, the same performance questions, but with the hospitals labeled “Hospital A” and “Hospital B.”

The data the first subset of citizens was asked to interpret involved how often operations were performed with and without complications.  Respondents concluded that “complications were much more likely to occur in the public than in the private hospital” – even though the data showed the reverse.  Only when the data was labeled “Hospital A” vs. “Hospital B” did survey respondents correctly identify which hospital had performed better!
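To make the design concrete, here is a toy sketch of the three labeling conditions (the complication rates are invented, not the authors’ data).  The underlying table never changes; only the labels do, so the objectively correct answer is the same in every condition, yet respondents erred under the partisan labels:

    # The same complication data is shown under three different labelings.
    complication_rates = {"X": 0.04, "Y": 0.09}  # hospital X truly does better

    labelings = {
        "original": {"public": "X", "private": "Y"},
        "reversed": {"public": "Y", "private": "X"},
        "neutral": {"Hospital A": "X", "Hospital B": "Y"},
    }

    for condition, labels in labelings.items():
        shown = {label: complication_rates[h] for label, h in labels.items()}
        correct = min(shown, key=shown.get)  # fewer complications = better
        print(f"{condition}: shown {shown} -> correct answer: {correct!r}")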

The authors noted:  “The introduction of performance information in public administrations was based on the idea that information on organizational performance may improve decision making and ultimately lead to greater public value for taxpayer money.”  However, the results of their survey suggest that individuals’ existing political values and beliefs “may also affect individuals’ ability to correctly interpret even unambiguous performance information.”

They conclude: “The fact that performance information is systematically misinterpreted . . . calls into question the potential of performance information. . . . The findings may also help explain why the introduction of performance information systems in many cases has limited effects on the actual performance of government institutions.”

They do raise the question of “whether misinterpretations can be overcome, for instance, by presenting performance information in a different way.”  An academic article in another journal, by Dr. Donald Moynihan, University of Wisconsin, suggests that this might be the case – and the solution.  And yet another academic, Dr. Beryl Radin, Georgetown University, warns: “While the concept of evidence-based decisions may have great appeal, information is rarely neutral and, instead, cannot be disentangled from a series of value, structural, and political attributes.”  All worth pondering!

But academics aside, practitioners might say that Winston Churchill’s diagnosis of democracy might well apply to performance management:  “Democracy is the worst form of government, except for all the others.”  

 

Note:  The two articles in Public Administration Review cited and linked were made available courtesy of the American Society for Public Administration.