‘It’s impossible to meet the SQA grading demands’

SQA grading guidelines after the cancellation of exams ‘don’t make sense - statistically or morally’, says this teacher
28th April 2020, 10:09am


Last week, the Scottish Qualifications Authority (SQA) published its promised public statement on how qualification grades, based on teacher estimates, will be awarded this year amid the challenges of Covid-19. The finer details for teachers are set out in a 10-page PDF document.

I have no previous major gripes against the SQA, and have huge respect for the teachers and other professionals who work hard every year to develop, deliver and mark our Scottish exams and coursework. This year, they are working under the same difficult isolation restrictions as everyone else, and have the unenviable task of finding ways to certify courses that are as fair as possible to pupils, without having any standardised results. I wouldn’t want to be in their shoes.

However, the new SQA document is confusing and disheartening, to say the least.


Many have argued that the SQA’s plans to adjust results (based on how schools performed in previous years or how accurate that school’s estimates have been previously) could impact most heavily on disadvantaged pupils from low-income families. But I’ll focus on why the technicalities that teachers are expected to implement are impractical and don’t make sense - statistically or morally.


The new SQA document summarises the key challenge for teachers:

“An estimate is a holistic professional judgement based on a candidate’s attainment in all aspects of the course (ie, all course components) and should reflect the candidate’s demonstrated and inferred attainment of the required skills, knowledge and understanding for the predicted grade and band estimated.”

Most teachers will have already done this, using all the data we have - prelim exams, class tests, coursework, considerations of pupil illness, support needs and more - to estimate the standard 1-9 grade bands for each pupil, where the smallest band is 5 percentage points wide (see tables here).

It can be heartbreaking when we optimistically suspect, but can’t prove, that a certain student might have upped their game to reach a band beyond what we have evidence for - but we are professionals and we want to strive for objectivity. I’m confident that the 1-9 bands I’ve estimated for my students are fair predictions of what they would get in a final exam. I feel it is impossible for me to be more accurate than that, and I have a lot of good evidence for my classes - many other teachers may not be so lucky.

But now the SQA is demanding the impossible. It has divided up the original grade bands so that there are now 19, some of them just 2 percentage points wide. We now have to not only place each student into one of these narrower bands, but also rank them relative to the other students in that band.

What is the SQA’s most important job this year? It might say it is to ensure that all results are as objective as possible. This is easy in a normal year because there are standardised exams and coursework, all marked in the same way. However, its new demands won’t reduce subjectivity, but increase it.

Here’s why. The only practical way I can see to meet the SQA’s demands is by estimating a percentage score for each student. Some subjects could get this easily from a prelim exam, but we have been rightly told that our judgements should be holistic and based on more than just that one event.

With the best will in the world, using all our data and striving for objectivity, we can’t predict a now-not-going-to-happen future exam result for any student with any great degree of certainty.

Given that our estimates already have wide error bars, there’s no way we should be allocating each student to an even smaller band. In fact, because the bands are now half the size, the chances of a pupil being placed in the wrong band could be doubled. If you make the bullseye smaller, it’s harder to hit it with a dart.
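One way to see the scale of the problem: suppose, purely for illustration, that a teacher’s holistic estimate misses a pupil’s ‘true’ exam percentage by a random error of around 3 points (my assumption, not anything the SQA has published - the real spread will vary by subject, class and teacher). A rough simulation sketch along those lines shows how much more often pupils end up in the wrong band once the bands are roughly halved in width.

```python
import random

def wrong_band_rate(band_width, error_sd, trials=100_000):
    """Toy model: how often does a pupil's 'true' mark fall outside
    the band implied by a noisy teacher estimate?"""
    wrong = 0
    for _ in range(trials):
        true_mark = random.uniform(0, 100)
        estimate = true_mark + random.gauss(0, error_sd)  # assumed error spread
        if int(estimate // band_width) != int(true_mark // band_width):
            wrong += 1
    return wrong / trials

# Assumed 3-point estimation error: 5-point bands vs roughly halved bands.
print(wrong_band_rate(5, 3))    # around 0.45 in this toy model
print(wrong_band_rate(2.5, 3))  # around 0.68 - markedly worse
```

The exact figures depend entirely on the error you assume, but the direction never changes: the narrower the bands, the more pupils get misplaced.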

Then we must rank all the students within each band. The SQA document is clear that “unique rankings with no ties are expected within each refined band for most courses”, with some exceptions for large cohorts such as English. So even more statistically impossible precision is demanded of us.

Anyway, let’s say I’ve estimated percentage scores for every student in a subject cohort of 50, and put them into the refined bands. Say I have three students whom I placed in the SQA’s new refined band 8 (notional range 62-64 per cent), so I have to rank them 1-3. There is only a two-in-nine chance that those pupils will have conveniently landed in that band with three different scores of 62, 63 and 64 per cent. All other combinations involve ties.
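For anyone who wants to check that two-in-nine figure: if each of the three pupils is treated as equally likely to sit on 62, 63 or 64 per cent (again, an assumption made purely for the sake of the arithmetic), only six of the 27 possible combinations give three different scores.

```python
from itertools import product

# Three pupils, each assumed equally likely to score 62, 63 or 64 per cent.
outcomes = list(product([62, 63, 64], repeat=3))
tie_free = [o for o in outcomes if len(set(o)) == 3]
print(f"{len(tie_free)} of {len(outcomes)} combinations avoid a tie")  # 6 of 27, ie 2 in 9
```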

This is already splitting hairs to an unjustified degree. With likely errors of 3 percentage points or more, no scientist would claim that 62, 63 and 64 are meaningfully different numbers.

Regardless, at this point the SQA wants us to separate students even further if, say, two score 62 per cent. In such a situation it’s extremely difficult not to be influenced by thoughts like “Sally mucked around a bit while Suzie always worked hard, so I’ll rank Suzie higher”. This sort of subconscious, subjective thinking is only human, but it’s unfair and unprofessional. By complicating the banding system, the SQA will force teachers into bad judgements.

Overall, what the SQA is encouraging is called false precision. It’s well illustrated by the old joke about a museum guide saying “that fossil is 65,000,004 years old”, because when they started the job four years ago they were told it was 65 million years old. Ironically, any student doing this in an SQA science exam would lose a mark.

The usual nine bands, each at least 5 percentage points wide, are all the precision that can be justified. That’s what is happening in England, Wales and Northern Ireland - teachers are placing students in the usual bands, with no convoluted changes.

I do hope there is a valid reason for the changes which the SQA hasn’t revealed yet. Will these extra measures somehow help it with checking for anomalies in teacher estimates compared to previous years, and, if so, how?

The SQA must fully explain why this extremely complicated new system is needed and how it will be applied, using crystal-clear statistical arguments and addressing the concerns above. Many public bodies suffer from the attitude that “the general public won’t be interested in the technicalities”, when actually there are a lot of geeks like me who would very much like to see and understand the finer details of the procedures applied to our kids’ data. That’s called accountability - show us all the maths.

The exam mess this year is nobody’s fault but the coronavirus. But it can’t be solved by adding more mess on top of it. Just trust the teachers please, SQA.

The writer is a teacher in a Scottish secondary school
