ANet schools in which the right “readiness” conditions were present significantly outperformed their matched pairs in student achievement gains in both math and ELA. This translated roughly to an additional four to seven months of learning over the two-year study period for students in those schools.
So, what do we mean by “readiness” and what were these conditions?
Before the i3 study began, we asked schools interested in participating to complete a questionnaire to gauge their readiness to partner with ANet. We were deliberate about this: we didn't want to identify schools with especially skilled educators, or "higher capacity" schools as they are often called in the literature. Identifying those schools would not serve our mission to help all children. Rather, we wanted to know what conditions needed to be in place for schools to benefit most from our support.
Specifically, we asked: 1) whether integrating standards and data more fully into teachers' planning was part of a clear, manageable set of priorities in each school, 2) whether the school had set aside time for its teachers to make meaning of the data and take action together, 3) whether the school had established a leadership team to guide and lead the work, and 4) whether the district supported this priority and was aligned on the school's need for the partnership with us. Here's a link to the detailed rubric we used.
All of these conditions are in line with the guidance in Marshall's "user's guide" and likely seem intuitive to many people. But we often ask schools to take on the use of assessments when these conditions are not in place. Just take that first one as an example: how many times have interim assessments been added as number four or five on an already-too-lengthy list of priorities at a school? How many times have they been added to a school's to-do list without asking teachers and school leaders whether they fit their priorities?
For us, the finding that schools with certain conditions benefit most from our support confirmed a truth we all know but don't always live by: there are no silver bullets. Providing each school with the same thing won't get the same results everywhere, and we shouldn't expect or require the same changes in all schools. As we've worked alongside schools since our founding, we've seen that each school is different in terms of people, priorities, and place in time. And we've seen that we have to differentiate our support for schools, just the way educators do for their students.
These findings from our i3 trial have galvanized us as an organization to get clearer internally, and with our system partners, about WHEN a school is best positioned to focus on this work and HOW we all build the conditions that help schools improve. We owe it to the teachers in the schools we partner with and ultimately the students we serve. We're already making organizational moves to act on this finding, and we wanted to share them:
Take more time to understand each school before we partner. This means observing teaching in classrooms. It means meeting with both leaders and teachers to understand their priorities, past successes, and areas for growth. It means ensuring the school understands what our partnership entails, from a time and people standpoint as well as culturally. These things matter because they help both of us recognize whether the right conditions are in place for this work to be effective. We have started using these questions to help us do an even better job of understanding each school, aligning with the school on what we see, and determining how we can best help the school given its context.
Give schools a partnership that aligns to their priorities. We hear all the time that effective interventions should be implemented in all schools at once. If it works, the logic goes, then replicate and scale it. This concept is often well-intentioned: "We believe in equality," many of our partners say to us. "Every school should be using ANet's assessments because they offer the rigor we need and are designed first and foremost to give teachers the information they need." But we've learned that schools don't need equality; they need equity. Equality means giving every school the same thing. Equity means giving each school what it needs. For us, this has come to mean that the first year of our partnership might focus on planning from standards and putting in place the right PD for teachers to internalize the Common Core, rather than on use of our interim assessments. In fact, at the beginning of our i3 trial, upwards of 90% of our coaching time with school leaders was focused on use of interim assessment data. Today, we differentiate our coaching much more to meet the needs of the school: only about 40% of our time is focused on helping partners use data and assessments, while 60% is focused on helping them build expertise in standards, set priorities, support teacher development, and address other needs specific to the context of the school.
Build the conditions for success when they aren't in place. These findings don't mean we will work only with schools that already have all these conditions in place. If we did that, it would feel like we were "creaming." It would also mean abandoning our conviction that helping many schools isn't enough; we want to help all schools. So rather than partner only with schools that have the right conditions to begin with, we are starting to work with school leaders and system staff to put these conditions in place where they are missing. Sometimes this means figuring out how to set aside consistent collaborative time so educators can focus on this work together. Sometimes it means working with system and school-based staff to ensure alignment between curriculum, instruction, and assessment. We still have a lot to learn here, and we will study this closely alongside our school partners: will our efforts to help establish these conditions lead to better results when we deliver our full partnership? Or will we learn that sometimes these conditions are better developed by schools, systems, or other partners without ANet? If our data shows us the circumstances where others are better suited to put the conditions in place, can we know enough about what's available to recommend the right strategies?
At the end of his user’s guide, Marshall pointed out something that our team often encounters in school cultures when we begin our work with them: more focus on supervision of process than on purposeful implementation. None of us wants to see schools miss out on promising ideas that we know can help students. But when we try a one-size-fits-all approach to school support, without regard to the priorities or context of the school, we end up with process over purpose. The findings from our i3 trial are helping us communicate better with our partners about the conditions under which purpose can win out over process.
—The ANet team