Ofsted and Ofqual have been asked by ministers to produce an updated plan for how they will approach artificial intelligence (AI), amid concerns about the potential for the technology to be abused in education.
The plan should include the work the organisations are doing to understand, assess and manage the current and emerging risks posed by AI to their sector, the government has said.
The schools inspectorate and the exams watchdog are among a number of government bodies that have been asked to respond to ministers by 30 April.
The letters from ministers to Sir Martyn Oliver, the new chief inspector of Ofsted, and Sir Ian Bauckham, Ofqual’s new interim chief regulator, say that the government would particularly value an update from the organisations, given how “significantly” abuses of AI could impact the sectors that they regulate.
The letters from education secretary Gillian Keegan and science and technology secretary Michelle Donelan tell the bodies to set out their strategic approach for AI and the steps they are taking in line with the expectations in the government’s White Paper, A Pro-innovation Approach to AI Regulation, published last year.
‘Safe, responsible’ AI innovation
The White Paper aims to create a framework governing the use of AI “in order to drive safe, responsible innovation”. It is based on five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
Ministers have asked Ofsted and Ofqual to set out the steps they are already taking to adopt the AI principles set out in the paper, including “concrete examples of the actions they have taken” where possible.
The regulators have been asked to produce a summary of guidance they have issued, or plan to issue, in response to the White Paper, along with the steps the organisations they regulate should take in line with its principles.
They have also been asked to set out the work they are doing “to understand, assess and manage the current and emerging risks posed by AI as relevant to their sector and remit”. Ofsted has said it will respond in due course.
An Ofqual spokesperson said: “Ofqual is open to well-evidenced innovation, including the use of artificial intelligence. Our focus will continue to be on ensuring fairness for students, maintaining standards and protecting the security of assessments.”
The government has put forward a non-statutory framework for overseeing the use of AI, but has said it may later be necessary to introduce a statutory duty requiring regulators to have due regard to the principles.
Last month the Department for Education published a report saying that teachers needed guidance to help them identify and manage pupils’ use of AI and respond to cheating.
The report on generative AI in education also called for funding to be put into researching the development of AI content detectors to support educators as the technology becomes more sophisticated.
The recommendations were published in a report by the DfE’s Open Innovation Team, and were based on interviews with teachers, academics and experts in the edtech industry.