Tuesday, October 11, 2011
Academics sometimes hit the nail on the head! University of Wisconsin professor Donald Moynihan, a thoughtful observer of the evolution of performance management in the U.S., along with colleague Stephane Lavertu from Ohio State, examines historical GAO survey data to understand why recent federal performance improvement initiatives haven't resulted in the hoped-for increased use of performance information to make decisions.

Will the third time be a charm?  Moynihan and Lavertu dig behind the data to find out why the first and second efforts to embed the use of performance information into the government faltered, and offer hints as to whether the most recent effort – the GPRA Modernization Act of 2010 -- will do any better.

Background

Moynihan and Lavertu say the Government Performance and Results Act of 1993 (GPRA) was created “at least in part, to foster performance information use.”  The Act requires agencies to create strategic plans, annual performance plans, and measures of progress, and to report annually on their progress.  The original Act, however, “. . . was subsequently criticized by the Bush and Obama administrations and Congress for failing in this task,” the authors observe.  Agencies developed plans, measures, and reports, but this did not seem to generate much use of the data by managers.

The Government Accountability Office (GAO) validated the weak use of GPRA-created performance information in a series of surveys of federal managers, which have been conducted about every three years since 1996.  These surveys found that only about 40 percent of managers use performance information when making decisions, and this figure has declined somewhat over time.

The Bush administration, in its effort to spur an increased use of performance data, tried a different approach.  Rather than collecting and reporting data agency-wide, it focused at the program level, creating a Program Assessment Rating Tool (PART) to establish effectiveness ratings for more than 1,000 major government programs.  This, however, did not generate additional usage of performance data either.

The third attempt to spark the use of performance data is currently underway.  The GPRA Modernization Act of 2010, signed by President Obama earlier this year, mandates agencies to conduct quarterly management reviews of the progress of “priority goals” set by their leadership.  Whether this will have the intended effect will likely not be known for several years.

The GAO Surveys

GAO administered surveys to federal managers on their experience with performance information in 1996, 2000, 2003, and 2007.  What makes these surveys interesting is how responses changed over time.  In its 2007 report, GAO observed:

“. . .  there were two areas relating to managers’ use of performance information in management decision making that did change significantly between 1997 and 2007. First, there was a significant decrease in the percentage of managers who reported that their organizations used performance information when adopting new program approaches or changing work processes. . . .  Second, there was a significant increase in the percentage of managers who reported that they reward the employees they manage or supervise based on performance information.”

Analytic Findings

So while GAO found that federal managers did not increase their use of performance information, it was not able to answer why.  This is what Moynihan and Lavertu try to tease out of the data.

Moynihan and Lavertu start with theory:  “Organizational theory suggests that behavioral change among employees can be fostered by altering their routines. . . GPRA and PART, in different ways, both established organizational routines of data collection, dissemination, and review.”  The GPRA Modernization Act and the Obama administration’s performance management initiatives “continue to be premised on the notion that establishing performance management routines is central to promoting performance information use.  The key difference with the earlier reforms seems to be the choice of routines employed.”

“We estimate a series of [statistical] models using data from surveys of federal employees administered by the Government Accountability Office (GAO) in 2000 and 2007.”  In their models, they assessed the extent to which survey respondents engaged in “purposeful” use of performance information versus a “passive” use – e.g., doing the minimum required to comply with the procedural requirements (but not really using the data).

They found that the original GPRA and PART “were more highly correlated with passive rather than purposeful performance information use. . . “  They found the routines created by GPRA and PART did not sufficiently influence managerial behavior. But they say “this does not mean that routines do not matter. . . The ‘stat’ approach to performance management [as required by the quarterly review process in the new GPRA requirements] involves embedding such routines. . . “  

But what was really interesting in their statistical analyses was that “Additional attention from OMB actually seems negatively related to other forms of use, such as problem-solving and employee management.”  That is, by focusing on compliance, OMB may actually be undercutting the use of GPRA for management!

What Really Matters

Moynihan and Lavertu’s analysis offers a sobering conclusion:  “government-wide performance reforms such as PART and GPRA have been most effective in encouraging passive forms of performance information use, i.e., in directing employees to follow requirements to create and improve performance information.”

In the end, the authors say that their findings “tell us something about the limits of any formal government-wide performance requirements to alter the discretionary behavior of individual managers. . . .”

But how can success occur?  The authors observe that “a series of organizational factors – leadership commitment to results, the existence of learning routines led by supervisors, the motivational nature of task, and the ability to infer meaningful actions from measures – are all positive predictors of performance information use.”

But, as good academics, they end by saying that more research needs to be done.  So here’s where you can follow it as it happens!