✳️ The Wrong Exam: When the System Stops Measuring What Matters
- Nov 4
Two exam headlines have dominated Australian education news in recent weeks.
In Queensland, Ancient History students discovered they’d spent the year studying the wrong topic. In Victoria, English students were blindsided by a random, unattributed quote that left even their teachers confused.
For the students involved, these weren’t just slip-ups. They were moments that revealed something most of us already know - that the system we’ve built to measure learning is no longer fit for purpose.
Because maybe the real mistake isn’t the wrong question on the paper. Maybe the mistake is believing there’s ever been just one right question at all.
Uniformity Disguised as Fairness
External exams are sold as the great equaliser - everyone sits the same test, under the same conditions, marked to the same standard. Fair, right?

Except fairness isn’t sameness.
Why should every Ancient History student explore the same period of time in the same way? Why should every English student interpret a text through a single quote, written response, and word limit?
Somehow, we’ve confused control with credibility.
A system obsessed with uniformity leaves no space for learner voice, agency, or choice.
There’s no room for students to show how they think, create, or connect, only how well they can conform. As Deci and Ryan’s Self-Determination Theory reminds us, autonomy and purpose aren’t luxuries; they’re the foundations of motivation and deep learning. When we strip them away, we also strip away curiosity.
In our pursuit of fairness, we’ve designed an experience that’s anything but human.
The Big Lie: Exams as the Ultimate Proof of Learning
We’ve built an entire industry on the belief that external exams represent the pinnacle of academic achievement - that what happens in a silent room for three hours can somehow capture twelve years of learning.
But what are we really measuring? Speed. Memory. Compliance.
Researcher Guy Claxton argues that exams reward a narrow slice of cognition: recall and regurgitation. They measure performance under pressure, not the capacity for critical thinking, collaboration, or creativity. Yong Zhao calls this “the tyranny of testing” - a system that standardises young people into predictability instead of empowering them to be original.
We say we want lifelong learners, but we judge them with the world’s shortest-term tool.
The Human Cost
Behind every exam paper is a student who’s been told for years that this moment defines them. Teachers who spend months preparing students for “the format.” Parents who hold their breath until results arrive.
The Australian Psychological Society reports that nearly two-thirds of Year 12 students experience high or extreme stress during exam periods. But beyond the anxiety is something deeper: a quiet disengagement. The love of learning replaced by the fear of failure.
In the name of objectivity, we’ve erased individuality.
And when the inevitable system error happens — a misplaced quote, a wrong topic, a flawed rubric — we act surprised. Yet the truth is, these aren’t accidents. They’re the natural by-product of a model designed for efficiency, not growth.
The System is Broken, But the Solution is Here
Reforming assessment doesn’t mean lowering standards; it means redefining them. Around the world, schools and systems are proving that rigour and relevance can coexist.
So what might meaningful assessment look like?
Learner Voice and Choice: Students co-design aspects of how they demonstrate understanding, such as through exhibitions, multimedia, oral defences, or written analysis. Because understanding isn’t proven by sameness; it’s proven by depth.
Continuous Assessment Frameworks: Teachers gather evidence of learning over time (projects, reflections, performances) and moderate collaboratively. This doesn’t just measure learning; it supports it.
Learner Portfolios: Ongoing documentation of growth across subjects, capturing thinking, iteration, and creativity. Portfolios show who a learner is becoming, not just what they can recall.
Collaborative Moderation and AI-enhanced Feedback: Trusting teachers to make professional judgements, supported by technology that makes feedback faster, fairer, and more formative. AI can help analyse patterns, but the human relationship remains the heartbeat of assessment.
This isn’t idealism; it’s already happening in innovative schools across Australia and internationally. What’s missing is the systemic courage to let go of control.
What Are We Afraid Of?
Maybe we hold on to external exams because they make learning look neat. They give governments data, university rankings, and parents reassurance. But learning isn’t neat. It’s complex, relational, and beautifully unpredictable.
Maybe what we’re really afraid of is what authentic assessment would reveal: that learning can’t be ranked, reduced, or regulated.
And that’s the point.
The EduShift Challenge
If we say we value curiosity, collaboration, and critical thinking, then our assessments must reflect those values.
If we want young people who thrive in complexity, then our systems must embrace it.
If we believe in human potential, then we must design for it, not test against it.
It’s time to stop designing education around what’s easy to measure, and start measuring what’s meaningful to learn.
Because the problem isn’t that students studied the wrong history or faced the wrong quote. The problem is that the system itself has become the wrong exam.