Lessons in Improvement

 

Note: Here’s part eleven in our Learning to Improve series, the primary focus of the School Performance Institute (SPI) blog this year. In it, we spotlight issues related to building the capacity to do improvement science in schools while working on an important problem of practice.

Earlier this year, I kicked off our Learning to Improve blog series by discussing seven early lessons from our improvement science work at United Schools Network. Looking back, I think all of those lessons were on point, but we’ve also learned a great deal about using improvement science methodology through our 8th Grade On-Track project this school year. In this post, I’ll expand on those early lessons and share some of our new learning.


Lesson #1: Identify an advisor for your improvement project.

Having an improvement advisor alongside you during a complex improvement project is critical. It’s a lot like having a coach by your side as you work on professional goals, or a trainer by your side as you work on fitness goals. School leaders have incredibly demanding schedules, limited time to sift through our sector’s often contradictory research reports, and no easy way to figure out what will work to improve outcomes for their students. That’s why a skilled improvement advisor is such a useful resource: the advisor brings specialized knowledge of improvement techniques, helps track progress on key measures, and performs a number of project management functions.

Lesson #2: Improvement science work is a learning journey, not a one-time workshop.

The methodology involves going on an improvement journey. It’s not a one-time workshop or a top-down change mandate. Instead, it’s an effort to increase an organization’s capacity to reliably produce successful outcomes for different groups of students, taught by different teachers, in varied contexts. This focus on meaningful long-term outcomes is very different from typical reform efforts, which often result only in short-term increases in standardized test scores.

Lesson #3: This work requires a culture of improvement.

Improvement requires experimentation and occasional failure. At the beginning of our 8th Grade On-Track work, we introduced our team to a core ethos of improvement science: our ideas are probably wrong and definitely incomplete, especially early on. We first saw this phrase used during an improvement summit hosted at the Carnegie Foundation for the Advancement of Teaching. Improvement science counsels humility about how much must be learned to get a change idea to work reliably in practice. These ideas run counter to our sector’s accountability-driven culture, which disincentivizes experimentation and learning from mistakes.

Lesson #4: Start small, learn fast.

One of the counterintuitive ideas of improvement science is that while the goal is to improve outcomes for all students, the work often starts with a single teacher, student, or classroom. USN’s 8th Grade On-Track project started by focusing on individual off-track students in two classrooms at Columbus Collegiate Academy. Starting small allows teams to manage the complexity inherent in educational improvement work. Only after change ideas are tested and data verify their effectiveness under a variety of conditions are they considered for spread to other students, classrooms, grade levels, and schools.

Lesson #5: Equip School-Based Improvement Teams (SBITs) as the drivers of improvement.

All attempts to improve schools are social, human-resource-intensive activities. The critical question for these activities is not “What works?” but “What works, for whom, and under what set of conditions?” We believe that School-Based Improvement Teams working with an improvement advisor are in the best position to answer these questions. When deliberately assembled and trained in improvement science methodology, these teams can function as the engines that drive improvement work within schools. School-Based Improvement Teams are largely made up of teachers and other school-based staff who serve the students on whom the improvement work is focused. They are asked to be researchers who engage in rapid cycles of learning, testing change ideas under the conditions, and with the students, where those ideas would ultimately be used.

Lesson #6: Plan-Do-Study-Act (PDSA) cycles are the fuel that makes those SBIT engines run.

If School-Based Improvement Teams are the engines that drive improvement, PDSA cycles are the fuel that makes those engines go. Each PDSA cycle is a mini-experiment designed to test a change idea: observed outcomes are compared to predictions, and the differences between the two become the learning that drives decisions about next steps with the intervention. The know-how generated through each successive PDSA cycle ultimately becomes practice-based evidence that a process, tool, or modified staff role or relationship works effectively under a variety of conditions and will reliably produce quality outcomes.

Lesson #7: There’s a significant difference between goals for accountability and goals for improvement.

Far too often, these two types of goals get conflated during school improvement projects, and this can have unintended consequences. Goals for accountability measure end-of-the-line outcomes, with the purpose of identifying exemplary or problematic teachers, schools, or districts. They have limited use for improving practice because the data are not tied to specific practices and results are typically reported after the school year has concluded. Goals for improvement, on the other hand, measure the work processes that are the object of change, with the purpose of determining whether an educational change is an improvement. They have formative value, and data are shared in low-stakes, safe environments conducive to change; but because a premium is placed on practicality, there can be challenges with collecting data efficiently and reporting it rapidly.

Too often, we look to accountability tools such as Ohio’s School Report Cards as if they provide a road map to improvement. That’s simply not the case. It’s not that one type of goal is better than the other, but it’s critical to know that they serve different purposes.


More on How We’re Learning to Improve

With funding support from the Martha Holden Jennings Foundation, the School Performance Institute spent the past school year partnering with Columbus Collegiate Academy to improve its high school readiness rates using improvement science methodology. SPI serves as both the improvement advisor and project manager for the School-Based Improvement Teams working to improve student outcomes. Through intensive study of improvement science, and through leading improvement science projects at USN schools, we’ve gained significant experience with its tools and techniques. If you are interested in learning more about our improvement science work, please email us at jdues@unitedschoolsnetwork.org.

We’re also opening our doors to share our improvement practices through our unique Study the Network workshops that take place throughout the school year. The workshop calendar for the 2019-2020 school year will be shared later this summer.

John A. Dues is the Managing Director of School Performance Institute and the Chief Learning Officer for United Schools Network. The School Performance Institute is the learning and improvement arm of United Schools Network, an education nonprofit in Columbus, Ohio. Send feedback to jdues@unitedschoolsnetwork.org.