A 4-point plan for transparent AI use in schools

A former teacher who has written a book on artificial intelligence explains how an AI transparency statement could help ease concerns over its use in schools
22nd April 2024, 6:00am


In the past few years the role that artificial intelligence (AI) could play in education has gone from being an esoteric sci-fi concept to something at the heart of numerous areas of debate and discussion.

The education secretary has said that AI could take the “heavy lifting” out of teacher workloads, global school groups are appointing dedicated staff to oversee its use, warnings have been issued about its impact on homework and coursework, and numerous schools are now teaching how to use it.

In short, AI is something that is very much here, and the question that schools and their teachers are facing is how to use it: embrace it and give free rein? Or put a blanket ban in place?

An open approach to AI

The latter is perhaps tempting, but research tells us that a lot of generative AI use is hidden, which makes a ban very hard to enforce.

What’s more, even where AI is not banned - where students are allowed to use it, for example, or teachers use it to help with their lesson plans - people are often reticent about admitting to it, and it’s seen as a bit of a hack. So it’s hard to know what is human-created and what is machine-made.

This is something I had to wrestle with when writing my new book God-like: a 500-year history of artificial intelligence, and it is why I included an AI transparency statement at the start.

I wanted to be clear where I had - and had not - used the technology, because I know that there is so much scepticism and cynicism about the provenance of what we read and see.

The four-point plan

To that end, I created a four-point checklist to outline what I had, or had not, used AI for. This list is as follows:

  1. Has any text been generated using AI? And if so, has this then been edited?
  2. Has any text been suggested using AI? This might include asking ChatGPT for an outline, or having the next paragraph drafted based on previous text.
  3. Has any text been “improved” using AI? This might include an AI system like Grammarly offering suggestions to reorder sentences or words to increase a clarity score.
  4. Has the text been proofed using an AI system? And if so, have suggestions for spelling and grammar been accepted or rejected automatically or based on human discretion?


For my own book, the answers were no, no, no, yes - but with my own decisions on each spelling suggestion flagged.

What has been particularly notable, though, is the reaction to this checklist across the education sector. As a former teacher, I have heard from former colleagues who see it as part of the solution to their dilemmas around AI use.

Providing this checklist to students can allow them to be transparent about where they’ve used generative AI in a clear and concise manner. It sets clear parameters on what the different uses of AI are.

It also gives teachers the ability to make it clear which type of use is allowed for different pieces of work - rather than having a free-for-all or total ban.

This checklist could work for teachers, too, in modelling the same openness and being clear with students where they have used AI in preparing lessons or schemes of work.

Moreover, for senior leaders this could help to set some parameters that they may wish to use when creating communications to parents or putting new policies online. It could bring some sense of transparency and ethical clarity over a technology where so much discourse is negative.

Embracing change

Of course, the four-point framework is built on trust - chiefly between teachers and students, as there can be no guarantee that students are doing what they say they are.

But there has always been a trust dimension to homework, and if teachers use the framework to offer transparency too, it models a responsible approach.

Ultimately, AI is here to stay and future generations of students will only become more adept at its use. Education can’t shy away from this - instead we must consider how we can create the processes and policies that help us to incorporate it into our working lives.

Kester Brewin taught mathematics in London secondary schools for 25 years. He tweets @kesterbrewin and his new book, God-like: a 500-year history of AI, which includes the AI transparency statement, is available here
