From R numbers to death rates, statistics have been at the centre of our Covid-19 dystopia, and education is not escaping.
The first signs of the exams data controversy emerged when July’s International Baccalaureate results failed to match expectations, prompting complaints that the fates of hardworking students were being determined by an opaque “algorithm”.
Then, last week, the news that more than a quarter of Scotland’s teacher-assessed Higher grades were changed (with cuts for those in deprived areas twice as large) sparked protests and forced education secretary John Swinney into a U-turn.
In England, at the time of writing, pressure was building on ministers. Tes revealed in July that as many as 40 per cent of teacher-assessed GCSE and A-level grades could be moderated downwards, with insiders fearing a backlash because results would look “terrible”.
And whatever happens this week, ministers and Ofqual will not be able to breathe a sigh of relief when it’s over. They may well face more of the same fury that greeted Mr Swinney when next week’s GCSE grades are released. But will it be justified?
The first thing to restate is that the statistical model that Ofqual has used to determine final grades is nothing new. Often known as comparable outcomes, it has been countering grade inflation for the best part of a decade.
The approach has a significant flaw: by favouring stability, it makes it very difficult to recognise genuine improvements in standards. But it is tried and tested.
Second, although huge numbers of teacher-assessed grades are being ignored when calculating final grades, teacher judgements have still played a crucial role. It is teachers who drew up the rank orders that decide which student gets which result within the overall grading distribution.
In 2020, the absence of any actual exam papers meant Ofqual decided to fill the gap by setting grade distributions at school, as well as national, level. This allowed it to crosscheck and, as Tes uncovered, often override teacher-assessed grades.
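The mechanics of that allocation can be illustrated with a simplified sketch. This is not Ofqual’s actual model — the function, grade shares and student names below are invented for illustration — but it shows the basic principle: the school-level distribution fixes how many of each grade the cohort receives, and the teacher’s rank order decides which student gets which of them.

```python
# Simplified, hypothetical illustration of rank-order grade allocation
# (not Ofqual's actual model). The distribution fixes how many of each
# grade the cohort may receive; the teacher rank order decides which
# student receives which of those grades.

def allocate_grades(rank_order, distribution):
    """rank_order: students listed from strongest to weakest.
    distribution: {grade: count}, best grade first; counts must
    sum to the number of students."""
    assert sum(distribution.values()) == len(rank_order)
    # Expand the distribution into a best-first list of grade slots.
    grades = [g for g, n in distribution.items() for _ in range(n)]
    # Pair each ranked student with the next available grade slot.
    return dict(zip(rank_order, grades))

# Hypothetical cohort of five students, ranked by their teacher.
cohort = ["Ana", "Ben", "Cara", "Dev", "Eve"]
# A school-level distribution — in reality derived from past results.
expected = {"A": 1, "B": 2, "C": 2}

print(allocate_grades(cohort, expected))
# → {'Ana': 'A', 'Ben': 'B', 'Cara': 'B', 'Dev': 'C', 'Eve': 'C'}
```

Note what this toy version makes visible: each student’s final grade depends only on their rank and on the school’s expected distribution, not on the grade their teacher actually submitted — which is why historically weak results at a school can cap what even a strong current cohort receives.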
This is inherently unfair for students in those schools whose standards have risen overall but whose previous poor performances are being used as grade benchmarks.
But it does not mean, as some suggest, that high-achieving students in historically low-performing schools are bound to be disadvantaged. Ofqual’s model is sophisticated enough to consider individuals’ prior achievement.
Another potential unfairness is in fact caused by teacher assessment, with research showing that it is likely to disadvantage minority students compared with actual exams.
But the biggest problem Ofqual faces is the huge clash between its two proxies for exam results. This year’s teacher grades produced A-level results 12 per cent higher than its modelling and GCSE results 9 per cent higher. The regulator, with its remit to maintain credibility and standards, had a responsibility to ask whether such a sudden leap in achievement was plausible. Its answer has effectively been a straightforward “no”.
But things can change. This week, a BBC TV news anchor suggested that those huge rises should stand and be celebrated. It is also argued that deciding that historical differences between schools should continue “bakes in” inequalities. But would a dramatic, anomalous change in a single year help to tackle the real causes of that inequality?
There will doubtless be many individual injustices emerging from this year’s results and a hastily cobbled-together system that was never going to be perfect. But any quick fixes need to be carefully considered to ensure that they do not come at the expense of those with results from other years or of the wider credibility of the exams system.
@wstewarttes
This article originally appeared in the 14 August 2020 issue under the headline “Those eyeing up exam change need to go beyond a 2020 vision”