UK Technology Companies and Child Protection Officials to Examine AI's Ability to Generate Exploitation Content
Tech firms and child safety organizations will be granted authority to evaluate whether AI tools can produce child exploitation material under recently introduced UK laws.
Significant Rise in AI-Generated Harmful Material
The declaration coincided with findings from a safety monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, the authorities will allow designated AI developers and child safety groups to examine AI models – the underlying technology behind chatbots and image generators – and verify they have adequate protective measures to stop them from producing depictions of child exploitation.
"This is fundamentally about stopping exploitation before it occurs," said Kanishka Narayan, adding: "Experts, under rigorous conditions, can now identify the danger in AI models promptly."
Tackling Regulatory Obstacles
The amendments have been introduced because producing and possessing CSAM is against the law, meaning that AI creators and others cannot generate such images as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was uploaded online before addressing it.
This legislation is aimed at preventing that problem by helping to stop the creation of those images at their origin.
Legal Framework
The changes are being introduced by the authorities as revisions to the criminal justice legislation, which also implements a ban on owning, producing or sharing AI models designed to generate exploitative content.
Real-World Impact
This week, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to counsellors involving an account of AI-based exploitation. The call portrayed an adolescent seeking help after facing extortion using an explicit deepfake of himself, created using AI.
"When I hear about young people experiencing blackmail online, it is a source of intense anger in me and justified concern amongst families," he stated.
Alarming Statistics
A prominent online safety foundation reported that cases of AI-generated abuse material – such as online pages that may contain multiple files – had more than doubled so far this year.
Cases of category A content – the gravest form of abuse – rose from 2,621 visual files to 3,086.
- Female children were predominantly targeted, accounting for 94% of prohibited AI depictions in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "constitute a crucial step to guarantee AI tools are safe before they are launched," stated the chief executive of the online safety organization.
"AI tools have made it so that victims can be victimised all over again with just a few simple actions, giving criminals the capability to create potentially endless quantities of advanced, lifelike exploitative content," she added. "Material which further exploits survivors' suffering, and renders children, especially female children, more vulnerable both online and offline."
Support Interaction Data
The children's helpline also released details of counselling interactions where AI has been mentioned. AI-related risks raised in the conversations include:
- Using AI to assess weight, body shape and appearance
- Chatbots discouraging young people from talking to trusted adults about harm
- Being bullied online with AI-generated material
- Digital extortion using AI-manipulated images
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned – significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.