Rubbish in, rubbish out. This is a toned-down version of an expression in management circles that sums up what happens if the quality of the data you base your decisions on is bad.
And that’s one potential stumbling block for the government’s new literacy and numeracy benchmarking tool for primary schools, which is gradually making its way into councils. Many question the quality of the information that is being fed into it.
The tool will - for the first time - enable primary headteachers to easily compare their school’s performance in literacy and numeracy with any school in Scotland. It will also generate a virtual comparator based on the characteristics of their pupils - including factors such as levels of disadvantage, gender and ethnicity - to ensure they are as far as possible able to compare like with like.
The tool will tell headteachers about the real schools operating in contexts similar to theirs, with the intention of sparking conversations and “learning and collaborating with others”, as the education secretary John Swinney puts it.
It was two years ago that the government began reporting whether pupils in P1, P4, P7 and S3 were attaining the expected level for their age and stage in reading, writing, listening and talking and numeracy, based on teacher judgements. The figures show huge fluctuations in pupil performance between councils. What remains unclear is whether these fluctuations are caused by children excelling in some areas and floundering in others, or by differences in the way teachers rate pupil performance.
‘Experimental’ figures
The government has previously warned against using the figures to compare councils, saying they are "experimental" and that the deprivation context of councils must be considered when interpreting the statistics. It says the data will become more reliable over time with the advent of the new standardised assessments in literacy and numeracy, which pupils are due to sit for the first time this year. The results of the assessments will not be recorded on the tool, but with teachers using the tests to inform their judgements, the hope is that consistency improves.
It is questionable how much credence headteachers will give the new platform. If they are told they are being outranked by the virtual comparator, will they dismiss the findings, believing the data to be flawed? And if they compare themselves with the bricks-and-mortar "similar schools" the tool will generate for them and find they come out badly, will they take heed?
The second issue is that the tool is extremely narrow. Primary school is about much more than learning how to read, write and count. Colin Sutherland, the retired headteacher who helped design Insight, the benchmarking tool for secondary schools, warned that you have to be careful what you measure because it becomes the focus. He predicted that a tool for benchmarking in primary would focus on the three "responsibility of all" areas of the curriculum: literacy, numeracy and health and wellbeing.
Health and wellbeing is conspicuous by its absence. But this, of course, is the tool’s first iteration and it may go on to become about more than literacy and numeracy.
In the meantime, we need councils and the new regional improvement collaboratives to work hard to make sure these figures can be trusted, striving for consistency in judgement between teachers, between schools and between councils.
If they don’t, questions about the quality of the data will keep coming - and that could lead to the tool being populated not with teacher judgements, but with standardised assessment results instead.
If that comes to pass, Scotland will end up with the high-stakes testing regime everyone is so desperate to avoid.
@Emma_Seith