Wednesday, February 17, 2021
President Biden, in one of his first actions in office, encouraged his agency heads to support evidence-based policymaking, including “evidence-building plans,” which are more commonly called “learning agendas.” What’s the back story on this?

In 2017, Congress gave tax breaks to investors in designated Opportunity Zones around the country. The idea was that this would generate substantial private sector investment in more than 8,700 lower-income census tracts. Did it work as intended? This is one of scores of research questions raised in the Department of Housing and Urban Development’s (HUD) most recent learning agenda.

HUD has pioneered the use of economic- and housing-related research and evaluation agendas since 2013. But in 2019, a new law required all federal agencies to develop evidence-building research agendas to assess how well their programs work. Dubbed “learning agendas,” the first set of these plans is due to Congress in early 2022.

What’s been the experience of early pioneers in developing learning agendas? What are some good models to follow? What are the potential pitfalls? A new IBM Center report by a research team composed of Kathy Newcomer, Karol Olejniczak, and Nick Hart tackles these questions.

Background.  A 2017 congressional commission recommended greater use of evidence in policy decision making. One element of the commission’s recommendations was for agencies to develop research-based, evidence-building evaluation plans to methodically examine the effectiveness and impact of their programs. These plans are colloquially called “learning agendas,” a term used by pioneering agencies in the Obama administration.

The commission’s recommendations were incorporated into the bipartisan Foundations for Evidence-Based Policymaking Act of 2018, which mandated that agencies develop learning agendas in conjunction with their strategic plans, which are refreshed every four years. As a result, the first set of completed learning agendas will be due to Congress in early 2022.

Agencies were also required to designate chief evaluation officers, who are leading the development of their agencies’ learning agendas. The Office of Management and Budget (OMB) supported them with guidance in 2020 that includes, for example, standards for judging program evaluation practices.

Early signals from the Biden-Harris administration suggest strong support for continuing this effort. For example, in a memorandum shortly after taking office, President Biden declared: “Heads of agencies shall ensure that the scientific-integrity policies of their agencies consider, supplement, and support their plans for forming evidence-based policies, including the evidence-building plans required [by law in the Evidence Act].”

What Is a Learning Agenda? OMB guidance in 2019 describes a learning agenda as a multi-year evidence-building plan: “a systematic plan for identifying and addressing policy questions relevant to the programs, policies, and regulations of the agency.” While OMB did not prescribe a format, it did note that such plans would need to address the following elements:

  • A list of policy-relevant questions for which the agency intends to develop evidence to support policymaking.
  • A list of data the agency intends to collect, use, or acquire to facilitate the use of evidence in policymaking.
  • A list of methods and analytical approaches that may be used to develop evidence to support policymaking.

The guidance also asks agencies to identify potential challenges to developing evidence – such as statutory restrictions – and to develop annual evaluation plans for implementing the multi-year learning agenda, along with an assessment of the agency’s capacity to actually carry it out.

Avoiding a Compliance Exercise. The IBM Center report’s researchers note that the greatest fear of proponents of evidence-based policymaking is that the learning agenda becomes a compliance exercise to “placate oversight officials” rather than a meaningful tool. They also note the possibility that, if program officials are not engaged in the development process, “the substance reaches a level of abstraction that makes implementation difficult.”

Based on their observations of successful early adopters, such as the Department of Housing and Urban Development and the Small Business Administration, they found that developing and implementing learning agendas “requires participation from a range of stakeholders and internal program staff.” Such participation grounds the agenda in insights about aspects of programs that could lead to short-term operational improvements, and it yields success stories that can demonstrate the value of evidence-building efforts.

The Small Business Administration (SBA), for example, collected feedback internally from program managers and externally from trade groups, think tanks, and researchers. It constructed its learning agenda around the four strategic priorities in its strategic plan, with long-term and short-term efforts to address questions such as: “What impact does lending have on long-term job creation, revenue growth, and export sales?” SBA’s agenda also identifies research the agency intends to fund, the relevant databases that researchers could access for such projects, and relevant literature for reference by the evidence-building community.

Emerging Practices.  Newcomer, Olejniczak, and Hart identify three emerging practices for developing an effective learning agenda:

  • Agency leaders and program managers need to identify and agree upon their agency’s key mission objectives and goals. If there is not a shared understanding about core mission objectives, it is difficult to agree on relevant research questions and priorities.
  • Staff and stakeholders have to be willing to participate in the learning agenda development process and commit to using its results to promote evidence-based decision making and learning within the agency. Developing a plan, then failing to provide the resources to implement it or to use the results in decision making, is a recipe for a compliance exercise.
  • Agency leaders need to define the “unit of analysis” for which the agenda will be developed. Will it be organized around agency programs, organizational divisions, or broad policy outcomes? Will it be developed in conjunction with other federal agencies that share related programs or desired policy outcomes, such as climate change? Will it be developed in conjunction with state and local governments or other sectors, such as nonprofits, on issues that require collaboration to solve, like the social determinants of health?

Sorting out these issues before developing a learning agenda will make the resulting product more meaningful and actionable for both evaluators and decision makers.

Next Steps. Building on these emerging practices, the research team identified a set of desired characteristics for learning agenda-building exercises: ensuring they are user-oriented by including program managers as co-designers in the development process, making the development process both interactive and iterative, and ensuring grassroots input so that the resulting evidence plans are grounded in information that actually exists or can be collected.

Newcomer, Olejniczak, and Hart also describe a seven-step process that uses design sprint methods to develop a learning agenda reflecting the three characteristics above. These steps are grounded in the importance of intentional and broad stakeholder engagement. They include, for example, developing a stakeholder map to ensure key players are identified, pinpointing key points in agency decision processes and their timing so that evidence is available when decisions will be made, and cataloging the needs of various decision makers.

They say that co-developing these elements of a learning agenda increases joint ownership of the result and can increase the likelihood of the agenda being used to support evidence-building that is relevant to decision makers. After all, the ultimate goal is to help program managers, agency heads, budget officers, the public, and Congress answer the questions: Does it work? And if not, what’s next?