British Technology Companies and Child Protection Agencies to Examine AI's Ability to Generate Abuse Content
Technology companies and child safety agencies will be granted permission to assess whether artificial intelligence tools can generate child exploitation images under recently introduced British legislation.
Significant Rise in AI-Generated Harmful Material
The announcement came as figures from a safety monitoring body showed that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, the government will permit approved AI companies and child protection organizations to inspect AI systems (the underlying technology for chatbots and image generators) and ensure they have adequate protective measures to prevent them from creating depictions of child exploitation.
"This is fundamentally about preventing exploitation before it happens," stated Kanishka Narayan, adding: "Experts, under strict protocols, can now identify the danger in AI systems promptly."
Tackling Regulatory Obstacles
The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI creators and others cannot produce such images as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
This law is aimed at preventing that issue by helping to stop the creation of those materials at their origin.
Legal Framework
The amendments are being introduced by the authorities as revisions to the criminal justice legislation, which also establishes a ban on owning, creating or sharing AI systems designed to generate exploitative content.
Real-World Impact
This week, the minister toured the London headquarters of Childline and listened to a mock-up conversation with advisors featuring an account of AI-based exploitation. The interaction portrayed an adolescent requesting help after facing extortion using a sexualised AI-generated image of himself.
"When I learn about young people experiencing blackmail online, it causes extreme anger in me and rightful concern amongst parents," he said.
Concerning Statistics
A leading online safety foundation stated that instances of AI-generated exploitation content, such as webpages that may contain numerous images, had more than doubled so far this year.
Instances of the most severe material, the most serious form of exploitation, rose from 2,621 visual files to 3,086.
- Female children were predominantly victimized, accounting for 94% of prohibited AI images in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Response
The law change could "constitute a vital step to ensure AI products are secure before they are released," commented the head of the internet monitoring organization.
"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few clicks, giving criminals the ability to produce potentially endless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which further exploits survivors' suffering, and renders young people, especially girls, more vulnerable both online and offline."
Support Session Information
The children's helpline also published details of support interactions where AI has been mentioned. AI-related harms mentioned in the sessions include:
- Employing AI to evaluate body size, shape and looks
- Chatbots dissuading young people from talking to trusted adults about harm
- Facing harassment online with AI-generated content
- Digital extortion using AI-faked pictures
Between April and September this year, the helpline delivered 367 support sessions where AI, chatbots and associated topics were mentioned, significantly more than in the same period last year.
Half of the references to AI in the 2025 sessions were connected with mental health and wellbeing, including the use of AI assistants for support and AI therapeutic applications.