My AI experiment: how many students can beat detection software?

Humanities teacher Jack Dougall set 100 of his students the challenge of cheating on their homework with AI without being detected – and the results were shocking
10th September 2024, 5:00am

The rapid rise of generative AI tools like ChatGPT has introduced a new challenge into classroom life for teachers. Many of us have turned to AI detection software in an attempt to maintain academic integrity, some even encouraged by our institutions or exam boards.

But the evidence is quite clear: at best, AI detection software doesn’t work and, at worst, it persecutes the innocent, especially those with English as a second language.

Many research studies have examined the efficacy of AI detectors with undergraduate students, but there has been little study at school level. I suspected that these detectors would be even less effective in schools, where children write less, giving detectors even less to work with.

Detecting students cheating with AI

Moreover, most studies retrospectively test existing written work, ignoring the fact that in 2024 children are aware of detection methods and are actively trying to beat them. A quick search on social media brings up a host of videos explaining how to outsmart these detectors.

So I decided to put these AI detectors to the test myself, and conducted a simple experiment involving about 100 students aged 12 to 17.

The task was straightforward: complete their homework using only generative AI tools like ChatGPT, with the challenge of avoiding detection by a paid-for AI detector.

I ran all of their AI-written homework through one of the leading AI detection sites, and the results were not just eye-opening; they were alarming:

When children set themselves the specific aim of avoiding detection:

  • 50 per cent of the 12-year-olds bypassed detection.
  • 38 per cent of the 13-year-olds bypassed detection.
  • 100 per cent of the 17-year-olds bypassed detection.
Let that sink in. Half of the 12-year-olds, more than a third of 13-year-olds and every single 17-year-old in the study managed to fool the AI detector. The detector reported clear confidence that their work had been written entirely by humans, unaided by AI.

What’s more, 62 per cent of students who successfully evaded detection reported that either it was their first time using generative AI or they had very limited experience with it. Evading detection wasn’t a feat achieved by tech-savvy students with extensive AI knowledge; it was accessible to novices.

‘Always one step ahead’

These results underscore a crucial point: AI detectors are fundamentally unreliable, especially when students are actively trying to bypass them.

You might think that beating these detectors would require significant time and effort. However, 44 per cent of students completed their AI-generated homework and successfully bypassed detection in just five to 10 minutes, demonstrating that AI detectors pose little challenge to determined students.

Some educators argue that these detectors are still useful as deterrents or conversation starters about AI use. However, this creates a troubling dynamic in which we accuse students without concrete proof, and they have no way to definitively disprove the accusation. This approach erodes trust, and because detector results have no legal standing, those social media videos advise students to simply deny, deny, deny.

As educators seek more innovative ways to detect AI use - such as tracking Google Docs history - students are quickly adapting. Again, social media is rife with ways to circumvent these methods, and students will always remain one step ahead in this technological arms race.

Exam boards place the responsibility for upholding academic integrity on to teachers, but I believe it’s time for schools to push back. Coursework, internal assessments and extended essays have always been vulnerable to abuse, and AI only exacerbates this.

Exam boards know that true integrity comes at a high cost - secure exams, invigilators, spot checks - yet they still ask teachers to tick an “authentication box” for work completed outside controlled conditions. Forgive me for being more than a little cynical.

Jack Dougall is a humanities and business teacher at The British School of Gran Canaria
