Under recently introduced UK legislation, tech firms and child protection organizations will be granted permission to evaluate whether AI tools can produce child abuse material.
The announcement coincided with findings from a protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the amendments, the government will allow approved AI companies and child safety organizations to examine AI models, the underlying technology behind conversational AI and image-generation tools, and verify that they have adequate safeguards to prevent them from producing depictions of child exploitation.
"Ultimately about stopping exploitation before it occurs," stated the minister for AI and online safety, adding: "Specialists, under strict conditions, can now detect the risk in AI systems early."
The amendments have been introduced because creating and possessing CSAM is against the law, meaning that AI developers and other parties could not generate such images as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM appeared online before addressing it.
This legislation aims to avert that problem by making it possible to stop the creation of those images at source.
The authorities are introducing the amendments as modifications to the criminal justice legislation, which also brings in a prohibition on possessing, producing or sharing AI models developed to create exploitative content.
Recently, the official toured the London headquarters of Childline and listened to a simulated call to counsellors featuring an account of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.
"When I hear about young people experiencing blackmail online, it is a source of intense frustration in me and rightful concern amongst parents," he said.
A leading internet monitoring foundation stated that instances of AI-generated abuse material, such as web pages that may contain numerous images, had significantly increased so far this year.
Cases of category A content, the most serious form of abuse, increased from 2,621 visual files to 3,086.
The legislative amendment could "constitute a crucial step to guarantee AI tools are secure before they are launched," commented the chief executive of the online safety organization.
"Artificial intelligence systems have made it so victims can be victimised all over again with just a simple actions, giving offenders the capability to create possibly endless amounts of advanced, lifelike child sexual abuse material," she continued. "Content which further exploits victims' suffering, and renders young people, especially girls, less safe on and off line."
The children's helpline also released details of support interactions in which AI was referenced and the risks raised in those conversations.
Between April and September this year, the helpline delivered 367 support interactions in which AI, conversational AI and related topics were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.