British Technology Firms and Child Protection Officials to Examine AI's Ability to Create Exploitation Content
Technology companies and child safety agencies will receive authority to evaluate whether AI tools can generate child exploitation material under new British legislation.
Substantial Rise in AI-Generated Illegal Content
The declaration coincided with revelations from a protection monitoring body showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the changes, the government will allow designated AI companies and child safety groups to examine AI models – the underlying systems behind chatbots and image generators – and ensure they have adequate safeguards to stop them from creating images of child sexual abuse.
"Fundamentally about preventing exploitation before it happens," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect the danger in AI models early."
Addressing Regulatory Obstacles
The changes have been introduced because producing and possessing CSAM is against the law, which means AI developers and others have been unable to create such images as part of any testing regime. Until now, authorities could act only after AI-generated CSAM had been uploaded online.
The new law aims to prevent that problem by stopping the production of those images at source.
Legal Framework
The government is introducing the amendments as modifications to criminal justice legislation, which also establishes a ban on possessing, creating or sharing AI systems developed to create exploitative content.
Real-World Impact
Recently, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to counsellors featuring an account of AI-based abuse. The call depicted an adolescent seeking help after facing extortion involving an explicit AI-generated image of themselves.
"When I learn about young people experiencing blackmail online, it is a source of intense frustration in me and justified anger amongst families," he said.
Alarming Statistics
A prominent internet monitoring foundation reported that instances of AI-generated abuse content – such as online pages that may include numerous images – had more than doubled so far this year.
Instances of category A content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.
- Female children were predominantly targeted, accounting for 94% of prohibited AI depictions in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a vital step to ensure AI products are secure before they are launched," stated the chief executive of the internet monitoring organization.
"Artificial intelligence systems have made it so victims can be victimised all over again with just a few clicks, providing criminals the capability to make possibly endless quantities of advanced, lifelike child sexual abuse material," she continued. "Content which further exploits survivors' trauma, and makes children, particularly girls, more vulnerable on and off line."
Support Session Information
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:
- Using AI to rate body size and appearance
- Chatbots dissuading young people from talking to trusted guardians about harm
- Being bullied online with AI-generated material
- Online extortion using AI-manipulated images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and associated topics were mentioned, significantly more than in the equivalent timeframe last year.
Fifty percent of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including the use of chatbots for support and AI therapy apps.