Paedophiles Used AI To Generate Over 3,000 Child Abuse Videos In 2025, Shocking Report Shows

Published : Jan 17, 2026, 05:26 PM IST

Synopsis

A shocking new report has revealed that paedophiles are exploiting artificial intelligence (AI) to industrialise abuse, with more than 3,000 AI-generated child sexual abuse videos created in 2025 alone.

A shocking new report has revealed that paedophiles exploited artificial intelligence (AI) to industrialise abuse, with more than 3,000 AI-generated child sexual abuse videos created in 2025 alone. An analysis by the Internet Watch Foundation (IWF) shows that last year was the worst on record for AI-generated child sexual abuse material, recording a 26,362 per cent increase in photo-realistic AI abuse videos: 3,440 videos were uncovered in 2025, compared with just 13 in 2024.

Sixty-five per cent of the videos fell under Category A, the most extreme classification of abuse, which includes acts such as penetration, bestiality and sexual torture.

Kerry Smith, Chief Executive of the IWF, warned that the technology is handing unprecedented power to criminals. “Our analysts work tirelessly to get this imagery removed to give victims some hope. But now AI has moved on to such an extent, criminals essentially can have their own child sexual abuse machines to make whatever they want to see," she said.

The IWF says the explosive rise in AI-generated abuse has driven its worst-ever year for online child sexual abuse material. In 2025, analysts took action on 312,030 reports where child sexual abuse material was confirmed, a 7 per cent increase on the 291,730 cases recorded in 2024.

While AI tools for generating abuse content are not new, the past year has seen criminals dramatically refine the technology, producing more extreme material at far greater speed. The IWF now warns that such content can be created at scale by offenders with minimal technical expertise.

Smith added: “The frightening rise in extreme Category A videos of AI-generated child sexual abuse shows the kind of things criminals want. And it is dangerous. Easy availability of this material will only embolden those with a sexual interest in children, fuel its commercialisation, and further endanger children both on and offline.”

Crucially, the IWF stresses that the label “AI-generated” does not mean no children were harmed. In many cases, the material is based on the likenesses of real children known to the abuser. These images may be directly depicted in abuse videos or used to “train” AI systems.

In 2024, Hugh Nelson, then 27, was sentenced to 18 years in prison for using AI to manipulate photographs of real children into sexual abuse images. The court found that the paying customers who supplied the photos were largely fathers, uncles, family friends or neighbours of the victims.

Jamie Hurworth, an Online Safety Act expert and dispute resolution lawyer at Payne Hicks Beach, said the law must leave no room for ambiguity: “The use of generative AI to create child sexual abuse material should not be a legal grey area. It is sexual exploitation, regardless of whether the images are ‘synthetic’.

“What this news shows is the scale at which AI can turbo-charge harm if effective safeguards are not built in and enforced.”

Elon Musk's Grok under fire

The revelations come amid mounting pressure on tech companies to rein in AI misuse. Elon Musk recently moved to restrict his Grok AI after users on X flooded the platform with non-consensual sexualised images, including manipulated images of children and adults altered to appear as children.

Ashley St Clair, the mother of one of Musk’s sons, is now suing X over AI-generated images that court filings say included a depiction of her as a 14-year-old wearing a string bikini. Musk had previously defended the platform, claiming critics “just want to suppress free speech”, and even shared AI-generated images of Prime Minister Sir Keir Starmer in a bikini.

On Wednesday, X said it had “implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing”. While Ofcom described the move as a “welcome development”, it confirmed that its investigation into whether X breached UK law “remains ongoing”. Reports suggest the standalone Grok Imagine app can still generate nude images that may be shared on the platform.

The IWF has warned that current legislation makes it extremely difficult for authorities to test whether AI tools can be misused, as doing so risks creating illegal imagery. Proposed rules unveiled in November would allow designated bodies such as the IWF, alongside AI developers, to scrutinise models to ensure they cannot generate sexual or nude images of children.

In December, the government also announced plans to ban AI “nudify” apps that digitally remove clothing from images.

Technology Secretary Liz Kendall said: “It is utterly abhorrent that AI is being used to target women and girls in this way. We will not tolerate this technology being weaponised to cause harm, which is why I have accelerated our action to bring into force a ban on the creation of non-consensual AI-generated intimate images.”
