Here’s a statement for you to get your teeth into: “[The frameworks for the assessment of writing] do not provide sufficient flexibility for teachers to reach judgements which are representative of pupils’ overall ability in this subject.”
This is just the sort of sentence that could have come from NAHT’s “Redressing the Balance” report on primary assessment, which was widely reported in Tes when it was published in January. In fact, the source is the government itself. The statement comes straight from the pages of the government’s consultation on primary assessment, which closes on 22 June.
Many will be aware of the proposals within the assessment consultation, which have been widely reported and debated, but far fewer will have had the time and space to read the full document. In relation to “secure-fit” assessment for writing, the government has been surprisingly candid about the rationale for change, given how recently it was introduced.
The framework is fundamentally flawed. It does not lead to valid judgements of pupils’ overall ability in this subject, not only because of “secure fit” but because of the disproportionate emphasis on the technical aspects of writing.
I found this admission refreshingly honest. While I suspected that many in the Department for Education recognised this failing, I was surprised that they said so publicly. But the admission raises one very big question: why are they still using it?
On 29 June, schools will be required to submit their 2017 teacher assessment results. “Secure-fit” assessment of writing will mean that, once again, many children who are clearly excellent writers will be incorrectly labelled as working below the expected standard, simply because teachers are not permitted to use their own judgement about the overall balance of each pupil’s abilities.
There is light at the end of the tunnel. From the 2017-18 academic year, the assessment framework for writing will change and “secure fit” will end - something that we have wholeheartedly welcomed. But promises of a better tomorrow do little to soften the blows felt in the here and now.
Teachers and school leaders rightly feel deep unease about having to label children as failing to reach a set standard when they are otherwise proficient writers. They worry about the impact that these labels might have on the more vulnerable pupils and about how these unfair and inaccurate judgements will ultimately feed into school accountability data.
Dodgy data
We all know that it does not matter how many caveats or explanations you issue alongside dodgy data - as soon as it is out there, people believe it has value and will use it. Last year, NAHT secured a commitment from the government to instruct Ofsted and regional schools commissioners not to intervene on the basis of this data. And today we have written to the government seeking confirmation that this commitment applies to 2017 data as well as 2016 data.
We have also gone further: we have made the case that there is no valid reason for publishing this data in the first place. Let’s remove the risk that the data will be taken out of context and misused. And while serious question marks hang over teacher assessment judgements in writing, it makes no sense for these judgements to be used within floor or coasting standards either.
As we attempt to move towards a system that parents, teachers and school leaders can have confidence in, these three measures would be a sensible short-term step. A sensible medium-term step is the end of “secure-fit” teacher assessment for writing from the 2017-18 academic year. But this won’t solve all the issues related to teacher assessment of writing. Not least, we have serious concerns about the disproportionate time and resources involved in collating evidence and moderating judgements.
So what might the sensible long-term step be? NAHT supports suggestions from experts such as Daisy Christodoulou that, rather than assessing writing against a predetermined list of criteria or a rubric, teachers could be presented with two pieces of writing side by side and asked to make a comparative judgement between the two.
Simply: which is better? This can be done for individual pieces of writing or for whole portfolios of work. Early, small-scale studies have found high levels of reliability compared with the standard rubric approach. On this basis, comparative judgement looks like a very interesting place to start.
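For readers curious about how a pile of quick “which is better?” decisions becomes a ranking, the sketch below is a minimal Python illustration of one standard approach: the Bradley-Terry model, fitted with a simple iterative update. It is not any particular assessment platform’s implementation; the function name, script labels and example judgements are all hypothetical.

```python
from collections import defaultdict

def bradley_terry(judgements, iterations=100, prior=0.01):
    """Turn pairwise "which is better?" judgements into quality scores.

    judgements: list of (winner, loser) pairs of script identifiers.
    Returns a dict of script -> score; higher means judged better.
    Uses the standard Bradley-Terry model with an iterative
    minorisation-maximisation update; `prior` is a small pseudo-win
    that stops scripts with no wins from collapsing to zero.
    """
    wins = defaultdict(float)   # total wins per script
    pairs = defaultdict(int)    # number of comparisons per unordered pair
    scripts = set()
    for winner, loser in judgements:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1
        scripts.update((winner, loser))

    scores = {s: 1.0 for s in scripts}
    for _ in range(iterations):
        new_scores = {}
        for s in scripts:
            # Sum, over every pair involving s, of n / (score_s + score_other)
            denom = sum(
                n / (scores[s] + scores[other])
                for pair, n in pairs.items()
                if s in pair
                for other in pair - {s}
            )
            new_scores[s] = (wins[s] + prior) / denom if denom else scores[s]
        # Rescale so the mean score stays at 1 (the model is scale-free)
        total = sum(new_scores.values())
        scores = {s: v * len(scripts) / total for s, v in new_scores.items()}
    return scores

# Hypothetical example: three scripts, four teacher judgements
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
ranking = sorted(bradley_terry(judgements).items(), key=lambda kv: -kv[1])
print(ranking)  # script A, having won all its comparisons, comes out on top
```

The point of the sketch is that reliability comes from aggregating many quick, holistic decisions across many judges, rather than from any single teacher working through a long rubric.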
Nick Brook is deputy general secretary of the NAHT school leaders’ union. He tweets as @nick_brook