The pitfalls and disappointments of item banks

We regularly hear from principals and district leaders who are thinking about using item banks to create formative and interim assessments for their schools.

There are a lot of reasons school leaders turn to item banks. They like the low price point. They think they’ll gain the ability to create short assessments that teachers can use to understand student mastery in real time. They imagine flexibly creating materials customized to what’s being taught. And they’re often impressed by the slick interfaces that item banks offer for assessment creation and data analysis.

In our experience, few of these potential benefits ever pan out. Several of our school partners have told us that their experience with item banks didn’t live up to their expectations. Here are a few reasons why:

Item banks seem cheap, but they end up extremely costly once you factor in staff time.

Here’s an example: if a mid-sized district or CMO of 8 schools had one person in each school spend just two days per month creating assessments from an item bank, it would total the equivalent of a full-time employee for the year. That could represent an investment of $100,000 before accounting for any fees associated with the content itself.
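To make that math concrete (a rough sketch, assuming a 12-month cadence and a typical school-year calendar of roughly 185 to 195 staff days):

8 schools × 1 person × 2 days per month × 12 months = 192 staff days, or about one full school-year position.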

Choosing items that give insight into student mastery is hard.

Item banks advertise the number of items they contain as a point of pride, with many providers boasting 60,000 or more. That number sounds great (lots of options!), but when a teacher actually sits down to create an assessment for a math standard she has recently taught, the sheer volume can be overwhelming.

The task of choosing just three or four items from a bank of 60,000—while ensuring that those items capture the whole breadth of the standard and reveal the specific struggles students are having—is more than most teachers want to tackle. They’d rather spend their time teaching.

Alignment with what’s been taught is not the same as alignment with the standards.

We’ve had many school leaders say to us, “When I look at the assessments that we create, our students are doing well on them throughout the year; but then we get to the summative and our students struggle. What’s going on?” Creating assessments that align to what’s been taught is important, but it can set up school leaders, teachers, and students for disappointment at the end of the year if the items they select don’t also reflect the true expectations of the standards.

Reports from item banks encourage evaluation, not reflection.

Educators choose item banks because they believe customized assessments will inform instruction better than off-the-shelf products. However, item banks often return results in the form of a “red, yellow, green” chart. These charts don’t tell teachers anything about why students are struggling or how to help them. They simply rank students, so they don’t actually serve the instructional purpose most educators have in mind when they decide to develop their own assessments. These reports can create an atmosphere of evaluation, rather than genuine space to reflect and improve.

We share these thoughts not to dismiss item banks entirely. They can be useful for some purposes (such as exit tickets and other short formative assessments) when teachers receive the right support. But they seldom live up to the expectations educators bring when using them to create interim assessments.

Concerned about over-testing?

Download our new white paper on refining your assessment strategy.
