Why Evidence-Based Practices Don’t Work: Part I

 

Note: This is part three in our 2019-2020 Learning to Improve series, the primary focus of the School Performance Institute (SPI) blog this year. In the series, we spotlight issues related to building the capacity to do improvement science in schools while working on an important problem of practice.

To be clear, I am in favor of building a strong education R&D sector. However, it’s important to acknowledge the serious shortcomings of the current system. It is because of this current state that I argue evidence-based practices don’t work.

I’m making two claims:

  • Claim #1: The current evidence base in education research is extremely thin at best and completely misleading at worst.

  • Claim #2: By their very design, studies that result in evidence-based practices discount externalities instead of solving for them. This is not helpful for educators working in real schools and classrooms. The observation that many interventions are effective in some places but almost none work everywhere is so common in education research that the phenomenon has its own name: effects heterogeneity.

This post addresses the first claim; Part II, coming next month, will address the second.


Defining Evidence-Based Practices

School leaders tasked with improving student outcomes face an extremely complex series of decisions about the interventions and strategies they will use to bring about that improvement. These leaders are often encouraged to choose evidence-based interventions. By definition, evidence-based interventions are tools, materials, or sets of routines, typically grounded in theoretical principles, that have been subject to rigorous empirical study.

There are two important assumptions here. The first is that the intervention has been subject to rigorous empirical study in the form of randomized controlled trials, but the reality is that many education studies do not meet this gold standard of research design. Second, even if the intervention has been subject to well-designed research, we only know that it can work because it has worked somewhere. However, it is very likely that the “somewhere” in the study differs in key ways from where my improvement effort is taking place.

Misleading Evidence Base

An example using Ohio’s Evidence-Based Clearinghouse will illustrate my first point. Let’s say that I am an elementary principal at a high-poverty school with a high percentage of struggling readers. I go to the clearinghouse to find a reading intervention program for my struggling students. The first thing I have to do is understand the Evidence Level system on the site. Ohio’s Clearinghouse uses the system prescribed in the Every Student Succeeds Act (ESSA), which outlines three primary Evidence Levels for interventions:

  • Level 1 (Strong Evidence): At least one well-designed and well-implemented experimental (i.e. randomized) study.

  • Level 2 (Moderate Evidence): At least one well-designed and well-implemented quasi-experimental (i.e. matched) study.

  • Level 3 (Promising Evidence): At least one well-designed and well-implemented correlational study with statistical controls for selection bias.

These levels can be misleading. The Level 1 designation of Strong Evidence speaks to the strength of the research design of the studies used to evaluate the intervention, not to the strength of the student outcomes the intervention actually produces. I think this would come as a surprise to most people using the clearinghouse.

Thin Evidence Base

Let’s assume for a minute that as the elementary principal researching reading programs, I can get past the issue with the Evidence Levels. I start searching for elementary reading interventions that meet the Level 1 threshold. One of the programs that meets this criterion is Corrective Reading. On the Ohio Clearinghouse, there is a short description of the program and a note that additional information is housed on the Evidence for ESSA site. So, I continue on this trail to the Evidence for ESSA site and find that the Corrective Reading evaluation was based on a single study of 206 students in urban and suburban areas surrounding Pittsburgh, Pennsylvania. Significant positive effects were found on the Woodcock Word Attack assessment (effect size = +0.15), and these qualified Corrective Reading for the ESSA “Strong” category. However, most other outcomes (including state reading tests) did not show positive effects, so the overall average effect size was only +0.06.
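For readers unfamiliar with the metric, effect sizes in reviews like Evidence for ESSA are typically reported as standardized mean differences. A rough sketch of that calculation, assuming the familiar Cohen’s d-style formula, helps put +0.15 in perspective:

$$
\text{effect size} \;=\; \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{SD}
$$

Under that reading, an effect size of +0.15 means treatment students scored about 0.15 of a standard deviation higher than comparison students on Word Attack, and the overall +0.06 works out to roughly one-sixteenth of a standard deviation averaged across all measures.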

I then move on to the single study, referenced by both clearinghouses (Ohio’s and the Evidence for ESSA site), on which the Strong rating is based. Beyond the fact that the evidence base rests on a single study with a relatively small number of students, there are several other very concerning issues with serious implications for the elementary principal:

  • The effects of the program did not carry over to outcomes with which schools would be most concerned (e.g. state test scores).

  • Teachers in the study received 70 hours of professional development, which included three phases: intensive training, practice, and implementation.

  • Three primary groups were studied: African-American, white, and economically disadvantaged. The program was only effective with the white students in the study.

So, if I were that elementary principal researching reading interventions and I used only Ohio’s Evidence-Based Clearinghouse to identify my intervention, I likely would have grossly misunderstood the research underlying Corrective Reading. First, the program wasn’t effective in changing outcomes on meaningful assessments such as state reading tests. Second, implementation required a significant amount of teacher professional development, but this expensive training produced only small effects on an obscure assessment tool. And those effects were demonstrated only with the white students in the study, not with African-American or economically disadvantaged students.

A Better Way

There has to be a better way to help frontline educators identify effective interventions. As designed, Ohio’s Evidence-Based Clearinghouse is as likely to mislead and confuse as it is to help. Even under the best of conditions, strong studies that result in evidence-based strategies only answer the question “What works?” As we saw in the case of Corrective Reading, it is debatable whether even that question was answered.

But real educators in real schools need answers to an altogether different set of questions: “What works, for whom, and under what conditions?” This is the evidence of know-how, and it is the subject of Part II of this series.


More on How We’re Learning to Improve

School Performance Institute serves as both the improvement advisor and project manager for School-Based Improvement Teams working to improve student outcomes. Through an intensive study of improvement science as well as through leading improvement science projects at the four schools that make up United Schools Network, we’ve gained significant experience with its tools and techniques.  

We’re also opening our doors to share our improvement practices through our unique Study the Network workshops that take place throughout the school year. Our next workshop will take place at Columbus Collegiate Academy - Main St. in Columbus, Ohio on December 5th.


John A. Dues is the Managing Director of School Performance Institute and the Chief Learning Officer for United Schools Network. The School Performance Institute is the learning and improvement arm of United Schools Network, an education nonprofit in Columbus, Ohio. Send feedback to jdues@unitedschoolsnetwork.org.