Empirically Based Intelligence Management: Using Operations Research to Improve Programmatic Decision Making

 

There are some fundamental and historic changes occurring in how organizations use data to improve performance.

Summary

Monday, July 2nd, 2012 - 15:02

On May 14, 2012, the Office of Management and Budget (OMB) issued a memorandum directing all departments and agencies to use evidence throughout their Fiscal Year 2014 budget submissions. The memorandum lays out four pages of issues and approaches for using evidence in the development, evaluation, and management of government programs. OMB also encourages agencies to strengthen program evaluation through a dedicated senior leader, such as a chief evaluation officer reporting directly to the secretary or deputy secretary.

Reasoning and decision making with data and facts are not inventions of modern management science, as a host of ancient philosophers, scientists, jurists, and others have demonstrated. However, there are some fundamental and historic changes occurring in how organizations use data to improve performance. Information and communications technologies are enabling organizations not only to gather and analyze billions of data records but also to make sense of that data in real time. New operations technologies discern context and meaning in large volumes of data. These technologies provide organizations with meaningful performance measurements. More important, they enable the best organizations to act faster, with more accuracy and efficiency.

This opportunity also means that agency leaders and their senior staffs must equip themselves with an understanding of evaluation methods and technologies. As the OMB memorandum implies, leaders cannot delegate the evaluation function down multiple levels to specialists and still expect big impacts on program performance and management. While some of the detailed work must be performed at lower levels by specialists, senior leaders must have enough understanding of methods and technologies, as well as the program content, to lead. A recent series of defense evaluations illustrates the point.

In 2006, the Department of Defense found itself in a “grave and deteriorating” situation in Iraq, according to the congressionally commissioned Iraq Study Group. As part of the department’s response to this crisis, the vice chairman of the Joint Chiefs of Staff at the time, General James “Hoss” Cartwright, directly oversaw a range of data-driven evaluations. Many of these evaluations focused on intelligence needs, alternative solutions to those needs, and priorities among those solutions. The evaluations directly shaped multibillion-dollar investment decisions, both for and against particular capabilities. It was an intensely practical environment in which to learn about the strengths and weaknesses of evaluation methods.