3 AI safeguarding issues schools should know about

As AI becomes ever more embedded into everyday life, there are some emerging safeguarding concerns that schools should consider
13th November 2023, 6:00am

In recent months a lot has been written about the growing use of artificial intelligence in education.

This discussion has mostly focused on the educational aspects of what AI can offer - from the Department for Education issuing a call for evidence about the impact of AI on the education of young people to £2 million in funding being given to Oak National Academy to develop its own AI teaching tools.

However, while there are clearly positives that AI can bring to education, we must also be aware that its rise, both in schools and beyond, is starting to reveal worrying safeguarding issues that schools and their designated safeguarding leads need to be ahead of.

AI in schools: safeguarding issues

1. AI for therapy

A number of organisations have promoted the use of AI programmes as potential replacements for human therapists. The idea is intriguing and, in an era when child and adolescent mental health services’ waiting lists can leave young people waiting years for support, it sounds as though it could offer real help.

In practice, though, this sort of AI use could be extremely dangerous, with young people revealing incredibly sensitive information to a machine, and into a dataset owned by some unseen entity. Chats are often saved, too, so if an account were compromised, those conversations would be accessible to anyone who wished to see them.

What’s more, the sophistication of AI chatbots is perhaps not at the level we would like to think.

For example, in a quick test of some of the most popular AI tools, I was able to discuss how to self-harm and ways to hide anorexia worryingly easily. There is also the potential for AI responses to actually exacerbate mental health concerns through incorrect answers or a lack of context.

2. AI-generated images

While most of the focus on AI has been around its ability to produce text at speed, it can also produce images.

The danger of this was made clear by a news report from Spain in September revealing that AI-generated naked images of young girls were being created by local boys aged between 12 and 14, who were feeding everyday social media pictures of the girls into an AI tool.

That such images can be generated by pretty much anybody on any computer is a huge concern with regard to the creation and distribution of child pornography, as in this case.

Even if such content is not distributed, the fact that any young person could generate this sort of illegal and harmful material could be very damaging for them too, potentially normalising illegal images and extreme pornography without any regulation as to how they are produced.

Other types of AI-generated content are also a significant concern for young people. The creation of humiliating or upsetting images or videos of other young people or teachers is now entirely possible; even just a year or so ago, this would have required extremely sophisticated and expensive software.

3. AI for advice

There is a growing awareness that AI can be used not just to generate content but also to help guide decisions, thanks to its ability to process large amounts of information quickly. One school has even said it will use AI to act as a formal adviser to the head.

However, while an experienced headteacher or professional in another domain is likely to be able to tell if the advice from AI is good or bad, young people will not be as discerning.

What’s more, given its anonymity, it is probable young people will increasingly turn to AI for advice on sex, drugs and other topics they might feel uncomfortable about discussing with a grown-up or even friends. 

Yet this could have consequences ranging from the unhelpful to the dangerous. There are numerous stories of AI not understanding the question being asked of it and then producing a response that does not make sense or that relies on made-up facts.

AI also lacks the human wisdom to accompany the data it has been trained on, and this can be very dangerous, depending on what advice is being sought.

For example, a researcher found that they could pose as a child and use an AI tool to gather advice on how to cover up bruises ahead of a visit from child protection services. In another example, an AI tool advised someone posing as a 13-year-old girl on how to lie to her parents about taking a trip with a 31-year-old man.

Being aware of the risks

The AI future is exciting and scary in equal measure. The DfE’s report on AI will hopefully start to shape a considered focus on how it could be harnessed in schools in a positive way, such as to reduce teacher workload or augment lesson planning.

However, amid the excitement, it is imperative that schools do not take their eye off the ball when considering the potential dangers that AI also presents.

Luke Ramsden is deputy head of an independent school and chair of trustees for the Schools Consent Project
