Assessing and measuring what goes on in the classroom is an attempt to make learning visible. Most of the time, the way we do this is to get pupils to produce some kind of work. We can’t see what is in their heads, so we ask them to represent their understanding by writing something, drawing something, solving a problem or ticking a box.
What matters is not what they write or draw, but whether we can use what they’ve produced to work out if they have understood.
If assessment questions are ambiguous or badly designed, pupils who do understand the material can still get the question wrong.
Assessment problems
My favourite example of such a question is this one.
Put these words in alphabetical order: take, value, use.
The “right” answer is: take, use, value. That’s the order in which you’d find these words in a dictionary.
But what about the pupil who writes: aekt, aeluv, esu?
This pupil has put things in alphabetical order too: the letters within each word, rather than the words themselves. It’s just not what the examiner was expecting.
This problem doesn’t just affect pupils. Most of the teacher-accountability systems we see are attempts to make teaching visible: to identify whether teachers are causing learning to happen.
Triple problems
One way of doing this is to look at pupils’ assessment results. But other methods involve lesson observations and book scrutinies - and with these, too, badly designed assessments can give us misleading information.
The development of “triple marking” is a good example of this problem. Schools started with the evidence-based principle that feedback helps pupils improve.
They then moved to the next stage: how can teachers give feedback to pupils - and how can managers and inspectors check that they have done so?
Surely, if a teacher has written a comment, a pupil has responded and the teacher has replied to that response, it’s proof of a feedback loop?
And so triple marking was born, with many negative consequences. It turned what should have been an immediate five-second classroom conversation into a process that took hours. It forced feedback into the straitjacket of a written dialogue, making it harder to use examples, images and quick oral questions.
Repeated issues
Triple marking did not, therefore, give us an accurate idea of who was and wasn’t giving quality feedback. Worse still, the focus on triple marking probably made teachers less likely to give quality feedback, because they were spending all their time writing comments in books rather than thinking about how they could adapt their next lesson based on what they knew about their pupils.
Triple marking crowded out the process it was supposed to measure. Not only did it give us inaccurate information about who was doing the right thing, but it also made it harder for teachers to do the right thing in the first place.
What’s the solution to this problem?
Unfortunately, there aren’t any quick fixes. One good general piece of advice, though, is to think through the consequences of any new assessment of teachers or pupils. Any assessment will encourage some behaviours and discourage others, and we should be clear about what those are before we introduce it.
Daisy Christodoulou is director of education at No More Marking and the author of Making Good Progress? and Seven Myths about Education. She tweets @daisychristo