Lessons from our i3 study

In 2008, Kim Marshall wrote a “user’s guide” to interim assessments, pointing out promising practices for using these assessments to improve teaching and learning. Like so much of his writing, the user’s guide was hugely respected by educators because it was thoroughly grounded in the real experiences of schools and it contained guidance that was both well researched and practical. Marshall also pointed out many “common glitches” of interim assessments. As he said in his paper, “As I’ve watched well-intentioned, hard-working educators make these mistakes, I’ve realized that interim assessments are a lot harder to implement well than a lot of us thought.”

We know what Kim meant.

Over the last 10 years, more than 800 schools have partnered with Achievement Network (ANet) to implement practices that embed the routines of standards-based planning and of using interim assessment data to support strong teaching and learning. We've worked to address each of these "common glitches" in our model of support to schools and to ensure the practices make a positive difference for leaders, teachers, and students.

In 2010 we received a “development” grant through the Investing in Innovation (i3) program of the Department of Education. Grants at the development level were meant to give organizations that were starting to see results a chance to try out new innovations. For us, this meant testing our unique combination of tools and training, which was working in the charter world, with district partners.

As part of the grant, the Center for Education Policy Research at Harvard (CEPR) conducted an independent evaluation of our program. CEPR created matched pairs of schools in which one school was randomly assigned to work with ANet for the next two years, while the other was assigned to a control group that implemented interim assessments without ANet's help. In addition to comparing changes in educator practice and student achievement between the two groups, we wanted to learn more about the conditions under which our model could best support schools. We felt that the study could help educators (including our own staff) more consistently overcome Kim's "glitches."

CEPR has now finalized the study, and we are learning from a wealth of findings. First, the basics. The results reinforce Marshall's overarching observation: schools benefit from using standards and data. Student proficiency rates improved on average across the schools in the study over the evaluation time period—whether they took their own approach to using interim assessments or received help from ANet. It was also gratifying for us to see that schools that partnered with ANet made statistically significant improvements in nearly all of the important practices we help schools develop, and that many of those practices were correlated with increases in student achievement. Many schools we supported made significant student achievement gains above and beyond the schools taking their own approach to interim assessments. Yet many schools is not all schools, and taken as a whole, schools that worked with ANet in the study performed similarly to the control group.

As it often has, Marshall's work inspired us to look more closely at the data. For us, many schools is not enough. We want to do right by all schools, and so we dug into the data to understand why some schools made significant improvements while others did not.

ANet is an organization that thrives on learning. Like our school partners, we consistently use our data to understand how we can get better over time. Below, we've shared three big lessons we are taking from CEPR's evaluation. By clicking on each lesson, you'll be able to access a deeper discussion of that lesson, as well as what we are doing about it.

TAKEAWAY 1: Schools deserve differentiation the same way students do.

ANet schools in which the right "readiness" conditions were present significantly outperformed their matched pairs in student achievement gains in both math and ELA. This translated roughly to an additional four to seven months of learning over the two-year study period for students in those schools. Click here to learn more about what those conditions were and what we are doing to meet schools where they are in order to maximize the effectiveness of our support.

TAKEAWAY 2: It’s no wonder educators can find data and assessment distracting—they can be.

Schools that conducted more frequent analysis of data—and then used that analysis to shape their instruction—made greater gains than schools that analyzed and used data less frequently. However, schools that analyzed data more frequently without using that data to shape instruction were, if anything, less effective in raising student achievement. Click here to learn more about this finding and what we are doing to help schools move from analysis to application for improving instruction.

TAKEAWAY 3: Teachers matter most, but the rest of us can give them a tailwind.

Research shows that teacher effectiveness is the most important in-school factor influencing student achievement. But we learned that helping district leaders provide coherent support for high-quality instruction can create "tailwinds" behind teachers that accelerate student learning. Click here to learn more about how working more closely with district leadership helped us create the conditions for better teaching and learning and what we are doing going forward.
