The use of artificial intelligence (AI) in coursework places a “significant burden” on teachers, an exam board director has warned.
A debate about how coursework can be redesigned to make the most of AI while protecting against “undeclared” use is needed, Dr Steve Evans, deputy director of product for the exam body OCR, told a Westminster Education Forum on AI in education.
He said using external resources such as AI to gather information and develop ideas is an “important” part of coursework, but said its use puts additional pressure on teachers.
“It must be acknowledged that AI does place a significant burden on teachers, who are already under an enormous amount of pressure,” he said. “It means that coursework requires extra time and scrutiny.
“If teachers are going to spot undeclared use of AI, they need a close relationship with each of their students and a sound knowledge of each of their true capabilities,” he added.
AI and cheating
His comments come after a report by the think tank EDSK last year warned that the rise of AI would make it almost impossible to know if students were cheating on coursework.
Guidance from the Joint Council for Qualifications in 2023 said some coursework may need to be completed under direct supervision to prevent cheating.
The Department for Education has previously said that teachers need guidance to help them identify and manage pupils’ use of AI and respond to any cheating.
Christina Jones, CEO of the River Tees Multi Academy Trust, who was also speaking at the conference this morning, warned that “the focus on teachers being responsible for identifying use of AI puts a huge amount of pressure on teachers and schools”.
She called for a “wider debate on how to ensure teachers have the time, capacity and skills to really know and understand learners, and make sure their coursework is at the right level”.
More useful AI systems needed
Manjinder Kainth, chief executive of the AI marking platform Graide, said the traditional detection algorithms used to flag issues such as plagiarism are less effective at detecting AI-generated content, and are often biased against English language learners and students from diverse backgrounds.
He instead argued for an approach that tracks students’ work over time and identifies elements such as changes in typing cadence, and copying and pasting, allowing teachers to “identify red flags and intervene proactively”.
He added that the AI classification systems that may help teachers save time on marking often have a “black box” nature, meaning the reasons behind their decisions can be obscured.
Mr Kainth said the sector would need to advocate for AI systems that produce conclusions teachers can interpret, challenge and, if needed, override.