5 reasons why grading U-turn was an ‘avoidable crisis’
The government’s climb-down over A-level grading, and the confusion that followed, amounted to an “avoidable crisis”, because the statistical modelling used to moderate grades was not grounded “in the real world”, a leading thinktank has said.
In a blog for the Education Policy Institute, Jon Andrews, the organisation’s deputy head of research, lays out five reasons why Monday’s U-turn could have been avoided.
A-level and GCSE grade bias was not addressed
Ofqual’s original approach to grading A levels and GCSEs this year - using a combination of teacher-assessed grades and a rank order of students, followed by statistical moderation using schools’ past exam performance data - did not account for teacher or institutional bias against groups of students.
“It did not address any within-school bias in reference to grading and underrepresented groups,” Mr Andrews writes.
“The adjustments made as part of the process were to the overall performance of the school with no changes to the relative performance of pupils within them [the rank order],” he says.
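The moderation described above can be illustrated with a minimal, hypothetical sketch: a school’s historical grade profile is applied to its pupils in rank order, so the school-level distribution shifts while no pupil moves relative to another. All names, data and the one-grade-per-pupil simplification are illustrative, not Ofqual’s actual model.

```python
def moderate(rank_order, historical_grades):
    """Assign the school's historical grade profile to pupils in rank order.

    rank_order: pupil names, best first, as submitted by the school.
    historical_grades: grades from the school's past cohorts, best first,
        one per pupil (a deliberate simplification for illustration).
    """
    assert len(rank_order) == len(historical_grades)
    # Pupils keep their relative positions; only the grade labels change.
    return dict(zip(rank_order, historical_grades))

# Teachers ranked four pupils; past cohorts suggest this grade profile.
ranking = ["Asha", "Ben", "Chloe", "Dev"]
past_profile = ["A", "B", "B", "C"]

print(moderate(ranking, past_profile))
# {'Asha': 'A', 'Ben': 'B', 'Chloe': 'B', 'Dev': 'C'}
```

The sketch makes the criticism concrete: whatever grades the teachers originally proposed, the output is fixed by the school’s history and the ranking, so any bias within the ranking itself passes through untouched.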
Teacher assessment should have come after the algorithm’s grades
Having teachers predict students’ grades prior to any statistical moderation was putting the cart before the horse, says Mr Andrews.
“The proposal required teachers to generate grades for individual students from scratch, without any statistical-based starting point despite having to then conform to a statistical profile at the next stage,” he says.
“In short, we thought the ordering of teacher judgement followed by a statistical model was the wrong way round.”
He says that schools should have been shown what their rank order would look like “if pupils followed national patterns from recent years based on prior attainment and characteristics and the performance of the school”.
Teachers would then have been able to use their professional judgement to see how the rankings differed from students’ class work, coursework and homework.
Centre-assessed grades lacked validation checks
Had this happened, there could have been validation checks as to whether schools were being overly optimistic about their grades - or whether particular groups of students were being disadvantaged within their school’s rank order.
These “could highlight where the decisions of schools have disproportionately moved the ranking of particular groups up or down or indeed the whole cohort up or down and schools would then need to justify those changes had they had a material impact on grades,” Mr Andrews writes.
While this approach could have resulted in a different grade distribution than that of previous years, “we considered that, on balance, fairness to pupils was a more important factor than a neat and consistent grade distribution.”
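A validation check of the kind Mr Andrews describes could be as simple as flagging schools whose submitted grades run well above their recent average, so they can be asked to justify the change. The following sketch is hypothetical; the threshold and figures are invented for illustration.

```python
def flag_optimistic(submitted_avg, historical_avg, tolerance=0.5):
    """Flag a school whose mean submitted grade points exceed its
    historical mean by more than `tolerance` grade points."""
    return submitted_avg - historical_avg > tolerance

# A school averaging 5.6 grade points in recent years submits 6.4:
# flagged, and asked to justify the jump.
print(flag_optimistic(6.4, 5.6))  # True
# A modest rise stays within tolerance and passes.
print(flag_optimistic(5.8, 5.6))  # False
```

In practice such a check would also compare outcomes for particular groups of pupils against national patterns, as the blog suggests, rather than looking only at cohort-level averages.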
Ofqual’s statistical model was not grounded ‘in the real world’
While last week ministers celebrated the fact that the A-level grade distribution largely matched previous years’, this “does not matter if your total number of grades is correct [but] a large number of them have been assigned to the ‘wrong’ candidates”.
Results should have been tested in schools and colleges.
This is “why any statistical model needs to be grounded and tested in the real world for the purpose for which they are intended,” Mr Andrews says.
He adds: “If I do analysis at a national level of school funding allocations for EPI and I inadvertently get things wrong for two hundred schools and assign their money elsewhere, it is embarrassing, it is a failure of quality assurance, but it probably does not affect my results and it is highly unlikely to directly affect anyone in schools.”
“If I did it within the DfE when dealing with actual allocations to individual schools then we would have a funding crisis and calls for ministers to resign.”
Ofqual should have built uncertainty into its model
The uncertainty at every stage of the moderated-grading process went unacknowledged.
“There was uncertainty in the ranking of students, there was uncertainty in the baseline performance of the school (based as it is on a ‘sample’ of students who attended in the last few years), so there is uncertainty in the outputs of the model,” Mr Andrews writes.
“Yet for the student, it resulted in a single grade. Could some of the controversy have been eased by presenting results with a confidence interval, if not explicitly as that then as some kind of ‘band’ of results?”
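The “band” of results floated above could look something like the following hypothetical sketch, where a model’s point estimate is reported together with the neighbouring grades it could plausibly have been. The one-grade uncertainty width is invented for illustration.

```python
# UK A-level grade ladder, lowest to highest.
GRADES = ["U", "E", "D", "C", "B", "A", "A*"]

def grade_band(point_estimate, uncertainty=1):
    """Return the (lowest, highest) plausible grades around a model's
    point estimate, clipped to the ends of the grade ladder."""
    i = GRADES.index(point_estimate)
    lo = max(0, i - uncertainty)
    hi = min(len(GRADES) - 1, i + uncertainty)
    return GRADES[lo], GRADES[hi]

print(grade_band("B"))   # ('C', 'A')
print(grade_band("A*"))  # ('A', 'A*')
```

Reporting a band rather than a single grade would have made visible exactly the uncertainty that, as the blog argues, the single-grade output concealed.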
Mr Andrews concludes that reverting to the use of teacher-assessed grades was the “most pragmatic and fairest” approach in the circumstances.
He says EPI now expects Ofqual to “publish updated equalities analysis with these changes and these will need careful scrutiny - particularly in relation to GCSE results”, where systemic inequality is more prevalent because of the wider range of ability within the cohort.
“We urgently need a fully independent review of what happened this year so that the errors made are clearly understood, and so that the right lessons are learned for the future,” he says.
“We urge Ofqual and the government to develop a credible contingency plan in case the Covid-19 pandemic is still affecting schools next spring and they need to do that now.”
Ofqual has been contacted for comment.