British Technology Firms and Child Safety Agencies to Test AI's Ability to Generate Exploitation Images

Technology companies and child safety agencies will receive permission to evaluate whether AI tools can generate child exploitation material under new British legislation.

Substantial Increase in AI-Generated Harmful Material

The announcement came as a safety watchdog revealed that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the amendments, the government will allow designated AI companies and child safety groups to inspect AI systems – the foundational technology for chatbots and image generators – and verify they have sufficient safeguards to prevent them from producing depictions of child exploitation.

"This is ultimately about stopping abuse before it occurs," stated the minister for AI and online safety, noting: "Specialists, under strict conditions, can now detect the danger in AI models early."

Addressing Regulatory Challenges

The changes address a legal obstacle: because it is against the law to produce and possess CSAM, AI developers and other parties cannot create such images as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM appeared online before dealing with it.

This legislation is designed to avert that issue by enabling authorised testers to stop the creation of such material at source.

Legislative Structure

The changes are being introduced by the authorities as modifications to the crime and policing bill, which is also establishing a prohibition on owning, producing or sharing AI systems designed to create child sexual abuse material.

Real-World Impact

Recently, the minister visited the London headquarters of Childline and listened to a simulated call to advisors featuring a report of AI-based abuse. The interaction portrayed a teenager seeking help after being extorted with a sexualised AI-generated image of himself.

"When I hear about children being extorted online, it causes intense frustration in me and rightful anger amongst families," he stated.

Alarming Statistics

A prominent online safety foundation stated that instances of AI-generated exploitation material – such as webpages that may contain multiple files – had risen significantly so far this year.

Cases of category A material – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly victimized, accounting for 94% of illegal AI depictions in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Reaction

The law change could "constitute a vital step to guarantee AI tools are secure before they are released," stated the chief executive of the online safety foundation.

"AI tools have made it so victims can be targeted repeatedly with just a few clicks, giving criminals the capability to make potentially limitless amounts of advanced, photorealistic exploitative content," she added. "Material which additionally commodifies survivors' suffering, and makes children, especially female children, less safe both online and offline."

Support Session Information

The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:

  • Using AI to evaluate weight, physique and looks
  • AI assistants dissuading young people from talking to trusted guardians about harm
  • Being bullied online with AI-generated content
  • Online blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 counselling interactions in which AI, chatbots and associated terms were mentioned, significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 interactions were connected with psychological wellbeing and wellness, including using chatbots for support and AI therapy apps.

Derek Juarez