by Molly Depasquale
In education, we’re obsessed with data. We want data about how much students have grown, what percent of the standards they’ve mastered, and how they’ll perform on the state summative. But how much of this data is really useful? What data do teachers and parents need to make decisions about how to help students learn?
Our obsession with data reminds me of something Dylan Wiliam says: “Weighing the pig doesn’t fatten it.” Has our focus on data made us lose sight of what is really important? Have we started to believe that data alone will help our students succeed?
Our recent i3 study cautioned against spending too much time looking at data and student work without committing to action. We found that too much data can actually be distracting and that spending too much time analyzing it is negatively correlated with outcomes for students.
Our preoccupation with data has also led to demand for different kinds of data, which the market has raced to meet. Assessments are available to “weigh the pig” in every way imaginable. How much does the pig weigh in comparison to other pigs? What should we expect the pig to weigh in the spring? How much has the pig grown in the last couple of months?
But which, if any, of these data actually help us take action that will result in increased student learning? A list of students who grew less than expected is not inherently useful. We need to prioritize data that tells us the most about what students need so that we can spend less time analyzing it and more time turning insights into action. True instructional assessments are designed with this purpose in mind.
Let’s change the conversation from “weighing the pig” to asking what we can do to “fatten” it! Let’s prioritize instructional assessments and spend more time reflecting on what students need so we can take strategic action to improve their learning. Let’s make sure we’re carving out time to use any data we collect. When we don’t, we are contributing to over-testing.
Data reports that enable assessments to be used for instructional purposes
Here’s a list from the Measures that Matter report, published by the Aspen Institute and Student Achievement Partners, of criteria for strong instructional assessments—those designed to provide information about where student understanding broke down so teachers can respond constructively.
An instructionally useful assessment must:
Offer a variety of item types, including constructed response and technology-enhanced items.
Consist of high-quality items clearly linked to grade-level content standards so teachers can easily identify which standards are in need of re-teaching.
Be a good fit within the curriculum pacing.
Provide qualitative insights about student misconceptions beyond just a numeric score.
Give teachers access to the actual items students struggled with and to examples of student work to guide their data analysis.
Include well-crafted multiple-choice distractors tied to common misconceptions so teachers can identify what students misunderstand from the answers they chose.
Be coupled with PD for teachers on corrective steps to address student misconceptions.
Produce clear, accessible reports and offer quick data turnaround.
Molly is ANet’s managing director for program development. She has 15 years’ experience as a middle school math teacher and instructional leader, including as a master educator for DC Public Schools and an ANet coach.