In last week’s post, I talked about the questions statewide assessments cannot answer and why asking them leads to more confusion than clarity.
This is the other half of that conversation.
The results from these tests can be useful, very useful in fact, but only when we ask the right questions of them.
Statewide assessments are not microscopes. They are not designed to tell you exactly what happened in a classroom on a Tuesday in October. They are designed to show patterns across groups of students over time. They are system-level indicators of how things are going.
Once you understand that, the way you use the results changes.

Start with Trends, Not Moments
One of the most important shifts is also one of the simplest.
Stop asking what happened over the past few months.
Start asking what has been happening over the past few years.
These assessments were not built to confirm short-term changes. They are not especially sensitive to new initiatives that have not yet taken hold. They are designed to show whether sustained effort is making a difference across a system.
The signal you are looking for is not immediate. The signal will be gradual but directional.
When you start looking for that kind of signal, you begin to see something meaningful.
A Story About Writing Across a School
Consider a high school that decides to focus on writing. Not just in English classes, but across all content areas.
- Students are writing in science to explain their thinking before running an experiment.
- They are writing in history to build arguments and analyze perspectives.
- They are writing in mathematics to justify their reasoning.
- They are writing in music and art to reflect, critique, and explain.
This is not a short-term initiative. It is a shift in how students engage with content every day, across classrooms.
Now fast forward two or three years.
When those students take a statewide assessment that includes writing, the results begin to reflect that shift. Not as a dramatic spike in a single year, but as a pattern:
- More students can organize their thinking.
- More students demonstrate proficiency.
- Fewer students struggle to communicate their ideas through writing, especially under time constraints.
The assessment cannot tell you why that happened.
It can only show you that something changed across the system.
A Story About Mathematics and Staying the Course
Now consider an elementary school implementing a new mathematics curriculum with a focus on foundational concepts like fractions and factoring.
The implementation is thoughtful. Teachers are supported. Instruction is aligned to the level of rigor expected at each grade level. Plus (this is the part that matters most!) the focus is sustained for more than a year.
This is where many systems lose momentum.
After a few months, the question becomes: Is it working?
Then when results do not immediately reflect a change, the instinct is to shift direction.
Large-scale assessments are not designed to pick up changes that happen over just a few months.
After a full year or more of consistent implementation, the results can begin to show whether students are developing stronger understanding. You may see gradual improvement. You may see fewer students struggling with foundational ideas.
That is not instant feedback; it is system-level evidence.
System-level change only shows up when the work is sustained long enough to matter.
This Only Works If the System Holds Steady
There is an important condition underneath all of this:
These assessments can show meaningful change across a system only if there is actually a consistent shift happening across that system.
When priorities change every few months, or implementation is uneven, or the focus fades before winter break (or worse, all of these things happen) then there is no clear signal to detect.
That is not a failure of the assessment. It is a reflection of the system.
Sustained focus is not just a nice idea. It is what makes measurable change possible.
Look for Direction, Not Drama
When results arrive, the question is not whether there was a dramatic jump. Despite expectations for huge gains, dramatic changes across large groups of students are not likely.
The question is whether there is movement in a consistent direction.
Small changes (up or down) of less than a percentage point or two are noise. That is not where meaning lives.
Instead, ask:
- Are more students demonstrating proficiency over time?
- Are fewer students in the lowest performance levels?
- Are gains holding from one year to the next?
These are the indicators that signal whether something is actually shifting across the system.
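The direction-over-noise test described above can be sketched in a few lines of Python. This is a hypothetical illustration: the 1.5-point noise band, the function name, and the sample numbers are assumptions for the sketch, not thresholds or data from any actual assessment program.

```python
# Hypothetical sketch: classify multi-year "percent proficient" results as
# consistent movement or noise. The 1.5-point noise band is an illustrative
# assumption echoing "a percentage point or two," not an official threshold.

def trend_direction(pct_proficient, noise_band=1.5):
    """Classify year-over-year changes in percent proficient."""
    changes = [b - a for a, b in zip(pct_proficient, pct_proficient[1:])]
    # Keep only changes large enough to clear the noise band.
    meaningful = [c for c in changes if abs(c) > noise_band]
    if not meaningful:
        return "noise"        # small wobbles, no clear direction
    if all(c > 0 for c in meaningful):
        return "improving"    # consistent upward movement across years
    if all(c < 0 for c in meaningful):
        return "declining"
    return "mixed"            # direction flips year to year

# Three years of hypothetical results
print(trend_direction([52, 55, 58]))      # improving
print(trend_direction([52, 52.5, 51.8]))  # noise
```

The point of the sketch is the question it encodes: not "did we jump this year?" but "do the changes that clear the noise band all point the same way?"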
The Timeline Problem We Don’t Talk About
Here is where this gets uncomfortable.
Most school and district leaders are expected to show improvement within a single school year.
However, most meaningful instructional shifts do not fully take hold within a single school year. Real shifts may take up to three years to yield measurable results.
That is often longer than policymakers are willing to wait, so the pressure remains.
A new curriculum is adopted.
Another shift in instructional practice is rolled out.
These take time to implement consistently across classrooms, and even more time for that consistency to show up in system-level results.
At the same time, large-scale assessments are not designed to detect early, localized change. A strong implementation in one grade level, one department, or one strategy may matter deeply for students, but it may not yet be visible in statewide results.
So we end up asking the wrong question at the wrong time: Why aren’t we seeing results yet?
When the better question is: Have we implemented this well enough, long enough, and consistently enough for results to show up at the system level?
Using Direction to Make Real Decisions
This is where direction over time becomes powerful.
When you see consistent movement across years and across cohorts of students, that is evidence of measurable change.
Evidence to:
- Stay the course
- Protect the focus
- Invest more deeply where the work is taking hold
Not because the change is dramatic, but because it is real.
Just as important, when there is no clear direction over time, that is also information. It tells you to look more closely at implementation, alignment, and support before changing strategy entirely.
This is how results become useful beyond the classroom.
They provide system-level evidence to support continued focus, continued investment, and continued alignment to what matters most.
Ask Better Questions Before Results Are Available
Have you been in one of these meetings?
Charts on the table. Markers out. Facilitator says, “Let’s explore the data together.”
People stare at results that moved a point or two (if at all) and are asked, “What does this tell us?”
That is not leadership. That is not direction. That is not data-informed anything. That is a waste of time.
If there is no clear framing, the conversation will go nowhere. If there are strategies brainstormed, they will be replaced by the next new initiative anyway.
So set the meeting up differently.
Start here:
We prioritized X
so that we could shift Y
and this assessment is designed to measure Z.
So we would expect to see movement in Z over time.
Now the question becomes: Do we see evidence of that anticipated shift?
If yes, stay the course and keep investing. Double down on efforts, and celebrate the win!
If not, do not jump to the test. The test is just the measuring tape. It tells you how far the system traveled. It cannot tell you why.
If the results are not showing the shift you expected, the break in the chain is almost always in implementation:
- Was the strategy actually used consistently?
- Was it supported well enough for teachers to implement it as intended?
- Was it sustained long enough across classrooms to matter?
This will produce a very different conversation. It is focused, actionable, and it respects both the limits of the data and the reality of the work. Plus, don’t be surprised if even the most historically disengaged teachers actually engage with these questions!
Use the Data for What It Is Designed to Do
At the center of all of this is alignment.
Alignment between what you prioritized, what was taught, and what is being measured.
When those things are aligned, statewide assessments can do exactly what they are designed to do. They provide credible, system-level indicators of how things are going.
Not everything, and not immediately, but meaningfully.
The question is not whether you have data. You will get results. The question is whether you are using that data with purpose.
When you ask better questions that are grounded in your actual work, these test results stop being something you react to.
They become something you use as a real scoreboard in a game that is winnable (for the first time for some educators!).
That is the difference between data that sits in a report and data that supports better decisions.
Otherwise, we are just playing with numbers and performing data theater.
Everyone in the room knows the difference.