It’s no surprise that teachers all across the country are eager to learn and grow in their practice—and professional learning plays a critical role in preparing them to deliver high-quality instruction. But measuring the impact and effectiveness of professional learning is often easier said than done.
That’s why New York City Public Schools, the largest school system in the nation, has embraced improvement science, an innovative method for implementing and measuring success. And as one of 20 districts and charter organizations participating in Learning Forward’s Redesign PD Community of Practice, New York City is making major improvements in the way it supports teachers and students.
We spoke with Julie Leopold, Executive Director of Instructional Policy at the New York City Department of Education, about her district’s experience with the Redesign PD Community and the improvement science approach.
Professional learning is a challenging area for a lot of districts. What are the main challenges that your district has faced with professional learning?
The challenge that we were tackling through the Learning Forward Redesign PD Community of Practice was finding a way to measure the impact of professional learning. Oftentimes, people come to a professional learning event, and it’s easy to measure the reactions in the room. But what’s hard is measuring whether they actually took what they learned back to the classroom and applied it. And the biggest question is, did it positively affect students?
We focused our time with the Redesign PD Community of Practice on the change-in-practice piece. We said, “Let’s have faith that if they change their practice, they’ll check whether it has an impact on students.” So we were trying to get better at checking for impact on classroom practices.
What drew you to take an improvement science approach to address that issue?
We believe in empowering teachers to solve problems in their classrooms. So we’ve asked teachers to really think about their impact on students and what’s getting in the way, and to work in cycles on the different changes they can try that will affect students.
The emphasis with improvement science is on small changes and rapid cycles, and we felt that was a good remedy for the schools that were spending a lot of time researching problems but not actually taking action. Improvement science encourages you to take action and doesn’t emphasize formal measurement as much. It asks teachers to focus on how they know what’s working in their classroom. So we spent a lot of time talking to schools about the three questions improvers ask in an improvement science approach: What’s the problem you’re trying to solve, what’s the change you’ll try and why, and how will you know if it’s working?
Can you tell us about some examples of the improvement science approach in practice?
One of the schools that my team worked closely with this past year knew that students’ scores on English Language Arts standardized tests weren’t where they needed to be. They were able, through an improvement science approach, to really narrow that down and say, “We don’t have students able to sustain their reading for significant periods of time, and we need to change that.”
They figured out that if they could break through there, it would open up their performance in all subject areas, across all grade levels. When it comes to improvement science, you want to address problems that are, as Jonathan Kozol says, “big enough to matter, small enough to win.” So tackling reading stamina felt doable.
They tried a few different things—first with three students, then six, then the whole class—and eventually they were able to show a real impact on how long students could sustain their reading.
How has being part of the Redesign PD Community of Practice helped you and your team do this work?
It has kept us focused on measuring impact, because that’s the major problem of practice in our district. The community also has helped us make sense of what we’re learning. It gave us a really helpful framework at the beginning that enabled us to organize around what we were already measuring, what we weren’t measuring, and what we wanted to get better at measuring.