UNICEF has declared that AI-generated sexualised images of children constitute child sexual abuse and has urged governments worldwide to criminalise such material. A study across 11 countries found at least 1.2 million children had their images manipulated into sexually explicit deepfakes in the past year, underlining the urgent need for industry and legal action.
Artificial Intelligence (AI)-generated sexualised images depicting children constitute child sexual abuse and must be criminalised, UNICEF said, warning of a rapid and alarming rise in the misuse of AI tools to create abusive content.

In a statement, the UN agency responsible for providing humanitarian and developmental aid to children worldwide said it was increasingly concerned by reports showing a surge in AI-generated sexualised images, including cases where real photographs of children are manipulated and sexualised through deepfake technology.
"Sexualised images of children generated or manipulated using AI tools are child sexual abuse material. Deepfake abuse is abuse, and there is nothing fake about the harm it causes," UNICEF said.
UNICEF said deepfakes (images, videos or audio generated or altered using AI to appear real) are being used to produce sexualised content involving children, including through so-called "nudification", where AI tools digitally remove or alter clothing to fabricate nude or sexual images.
Alarming Study Reveals Widespread Abuse
Citing new evidence, UNICEF said a joint study conducted with ECPAT and INTERPOL across 11 countries found that at least 1.2 million children disclosed that their images had been manipulated into sexually explicit deepfakes over the past year. In some countries, this amounted to one in 25 children, roughly one child in a typical classroom.
The UN agency said children themselves are acutely aware of the threat. In some countries surveyed, up to two-thirds of children reported worrying that AI could be used to create fake sexual images or videos of them, highlighting the urgent need for stronger awareness, prevention and protection measures.
Normalising Exploitation and Direct Victimisation
UNICEF stressed that even when an identifiable victim is not immediately apparent, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and creates major challenges for law enforcement in identifying and protecting victims.
"When a child's image or identity is used, that child is directly victimised," the organisation said.
Urgent Call for Criminalisation and Safeguards
While welcoming steps taken by some AI developers to adopt safety-by-design approaches and implement safeguards to prevent misuse, UNICEF said protections across the industry remain uneven. It warned that risks are amplified when generative AI tools are embedded into social media platforms, enabling manipulated images to spread rapidly.
UNICEF called on governments worldwide to expand legal definitions of child sexual abuse material to explicitly include AI-generated content and to criminalise its creation, possession, procurement and distribution.
The agency also urged AI developers to implement robust safeguards, and called on digital platforms to prevent the circulation of such material rather than removing it only after abuse has occurred.
It said stronger content moderation and investment in detection technologies were essential to ensure immediate removal of abusive material.
"The harm from deepfake abuse is real and urgent," UNICEF said. "Children cannot wait for the law to catch up." (ANI)