Do Tiered Evidence Grants Work?


Tuesday, July 11th, 2017 - 9:22
Results for America is a leading proponent of distributing the $600 billion in federal grants based on evidence of whether grant programs actually work.

It recently released a nine-point agenda recommending actions that Congress can take to ground funding decisions on this basis.  One of its recommended actions is to expand the use of “tiered evidence” grants.

But do they work?

The Government Accountability Office (GAO) examined this relatively new grant-making approach in 2016.  According to GAO: “Under this approach, agencies establish tiers of grant funding based on the level of evidence of effectiveness provided for a grantee’s service model. Agencies award smaller amounts to promising service models with a smaller evidence base, while providing larger amounts to those with more supporting evidence.”

GAO further noted: “Proponents of tiered-evidence grants contend that they create incentives for grantees to use approaches backed by strong evidence of effectiveness, encourage learning and feedback loops to inform future investment decisions, and provide some funding to test innovative approaches.”

Results for America says there are five authorized federal evidence-based innovation programs that use a tiered-evidence approach: the Social Innovation Fund; the Education Innovation and Research program; the Teen Pregnancy Prevention Program; the Maternal, Infant and Early Childhood Home Visiting Program; and Development Innovation Ventures. 


Patrick Lester, who heads the Social Innovation Research Center, completed an evaluation of one of these five programs for the IBM Center to determine if this approach makes a difference.  He examined the impact of 44 of the individual grants made under the Education Innovation and Research (EIR) grant program in the Department of Education where formal evaluations had been completed (out of 117 grants made in the early years of the program).  This program, initially dubbed the “Investing in Innovation (i3)” program, has provided over $1.4 billion in grants for education projects over the past eight years. These projects focused on turning around low-performing schools, decreasing dropout rates, and increasing student achievement (such as reading scores).

The Case of EIR Tiered-Evidence Grants.  The Department of Education adopted the use of tiered-evidence grants to pilot innovations in elementary and secondary education in 2009, and the current administration has committed to continue building on this approach.  The Administration’s fiscal year 2018 budget proposes funding the existing Education Innovation and Research program at a level of $370 million, including resources to build evidence on the effectiveness of different approaches to school choice programs.

According to Lester, the i3 / EIR program “is part of a larger national effort to build and increase the use of evidence-based programs and practices in education.” This broader effort includes substantial investments in both education research as well as the dissemination of research findings to national, state, and local educators and policymakers.

Through its grants, the program has attempted to create a “pipeline” of projects, with each operating at one of three different tiers of development – early-stage innovations, mid-level programs with some evidence, and initiatives with substantial evidence that should be expanded nationally. Each of these tiers is addressed by one of three different types of program grants – development, validation, and scale-up.

In 2015, Congress updated the program as part of the Every Student Succeeds Act, which added eligible applicants to the program but kept most of the essential features of the original program.

Progress in EIR Research Priority Areas.  To qualify, grant applicants had to apply in one of the Department of Education’s chosen focus areas, which were selected based on the need for greater evidence of what worked in that area. The topics differed from year to year. For example, in 2010 there were four priority areas: (1) teachers and principals; (2) data use; (3) standards and assessments; and (4) school turnarounds.  

In Lester’s review of the program evaluations for supporting effective teachers and principals, he concludes: “the poor overall performance in this category may not be surprising.  Previous studies have suggested that most teacher professional development is ineffective.”

In terms of the priority area of school turnarounds, he looked at an initiative called Diplomas Now, launched by Johns Hopkins University with a 2010 validation grant. Lester writes that the initiative is in 32 middle and high schools, with the goal of increasing high school graduation and college readiness.  He notes that “While the project is not yet complete . . .  its early results are still noteworthy. . . after two years it reduced the percentage of students exhibiting one or more early warning signs that a student will drop out, including poor behavior, low attendance, or poor academic performance.”

Interestingly, he found: “The highest success rate of all was experienced by grantees that were focused on reading and literacy.”

Observations Based on Types of Grant Recipients.  Grant recipients included local school districts, universities, and non-profits running charter schools.  They received grants for development, validation, and scale-up.

Lester found that local school districts: “appeared to have some advantages and disadvantages compared to the other grantees. Advantages were tied to their location in the schools. They were often closer to the work, had an easier time with buy-in, easier access to data, and budgets that could sustain a program if it was working . . . Disadvantages included lower capacity in some critical areas of expertise (especially in evaluations), less specialization or experience with the chosen intervention, and a lack of direct access to national experts.”

The scale-up grantees – KIPP Charter Schools, Teach for America, Success for All, and a Reading Recovery program launched by Ohio State University – expanded their evidence-based programs, and as a group, “performed better than local school districts” that also received grants but worked independently.  Lester concludes: “. . . . strong intermediaries may be needed to successfully scale evidence-based programs in low-performing schools.”

Front Line Factors for Success or Failure.  Lester tentatively identified ten factors that may explain why some individual projects succeeded while others failed.  These include:

  • Difficulty of program objectives. Projects that were too ambitious, such as whole school turnarounds, tended to fail.
  • Experience with the chosen intervention technique. Organizations with experience tended to be more likely to succeed.
  • Genuine buy-in from the schools. “You want a stable and supportive school district that is willing to make changes when changes are requested,” said one grantee. “The odds of making major change are low if you don’t bring in advocates among the teachers and others on the ground,” said another. “Innovations are too easily blocked otherwise.”
  • Strength of evaluation designs. Poorly designed or implemented evaluations, along with data access issues, were a driver of poor results overall.

Overall, Lester’s study found that “a higher percentage of the program’s scale-up and validation grants, which required more evidence, have produced positive impacts (50 percent).  A smaller share of development grants, which required less evidence, did so (20 percent).”  He concludes, with some cautions, that “these rates of success appear to exceed those in other areas of education research.”

Recommendations.  GAO’s report was relatively optimistic about the value of the tiered evidence approach.  It recommended that OMB create a cross-agency working group to “broadly disseminate information on leading practices and lessons learned” on tiered evidence grant programs.

In addition, based on his reviews of individual grants within the EIR program, Lester recommends a number of steps to be taken to fine tune its operations, such as:

  • The EIR program should support faster research; “six years is too long to wait for results.”  This could be done “by providing more grants to programs that already have operations underway in multiple schools.”
  • EIR should be better connected to other publicly-funded education programs because “successfully scaling evidence-based programs may require the involvement of high-capacity intermediaries . . . . Every Student Succeeds Act has laid the groundwork for increased use of evidence through several of its provisions.”

In general, tiered-evidence grant-making seems to be a promising approach with bipartisan support.  It will be interesting to see if GAO’s recommendation to create a community of practice takes hold as the administration seeks out new reform ideas.