British Technology Companies and Child Protection Agencies to Test AI's Ability to Generate Abuse Images

Tech firms and child protection organizations will be granted authority to evaluate whether artificial intelligence tools can produce child abuse material under recently introduced UK legislation.

Significant Increase in AI-Generated Illegal Content

The announcement came as a safety watchdog published findings showing that cases of AI-generated CSAM have risen dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.

Updated Regulatory Structure

Under the changes, the authorities will allow designated AI companies and child protection organizations to examine AI models – the underlying systems behind conversational AI and image generators – and confirm they have sufficient safeguards to prevent them from producing images of child sexual abuse.

"Fundamentally about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now identify the danger in AI models early."

Addressing Legal Obstacles

The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and other parties cannot generate such images as part of an evaluation regime. Until now, officials have had to wait until AI-generated CSAM was uploaded online before they could address it.

This legislation is designed to avert that problem by making it possible to stop the production of such material at its source.

Legislative Structure

The amendments are being introduced as revisions to the crime and policing bill, which also establishes a prohibition on possessing, creating or distributing AI systems designed to generate exploitative content.

Real-World Impact

Recently, the official toured the London headquarters of a children's helpline and listened to a simulated call with advisers involving an account of AI-based exploitation. The interaction portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of himself.

"When I hear about children experiencing blackmail online, it is a source of extreme frustration in me and rightful concern amongst parents," he said.

Alarming Data

A prominent internet monitoring foundation stated that instances of AI-generated exploitation material – recorded as web pages, each of which may contain multiple images – had risen significantly so far this year.

Instances of category A material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly victimized, accounting for 94% of illegal AI images in 2025
  • Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "represent a vital step to ensure AI tools are safe before they are launched," commented the chief executive of the internet monitoring foundation.

"AI tools have enabled so survivors can be victimised repeatedly with just a simple actions, providing criminals the ability to create possibly limitless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Content which additionally commodifies victims' trauma, and makes children, especially girls, less safe on and off line."

Counselling Session Data

Childline also released information from counselling sessions in which AI has been referenced. AI-related harms mentioned in the conversations include:

  • Using AI to evaluate body size, physique and looks
  • AI assistants dissuading children from consulting trusted adults about abuse
  • Facing harassment online with AI-generated material
  • Digital extortion using AI-faked pictures

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and associated topics were discussed, significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 sessions were connected with mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Kevin Molina

A tech enthusiast and gaming analyst with a passion for exploring cutting-edge digital experiences and sharing actionable insights.