On 4 February 2026, the United Nations Children’s Fund (UNICEF) issued a statement on AI-generated sexualised images of children. The statement notes a rapid rise in the volume of such material, including deepfakes produced through nudification, in which artificial intelligence tools strip or alter clothing to create fabricated nude or sexualised images. It cites evidence from a study conducted by UNICEF, End Child Prostitution, Child Pornography and Trafficking of Children for Sexual Purposes (ECPAT), and the International Criminal Police Organization (ICPO–INTERPOL) across 11 countries, which indicates that at least 1.2 million children disclosed that their images had been manipulated into sexually explicit deepfakes in the past year. The statement classifies AI-generated sexualised images of children as child sexual abuse material (CSAM). UNICEF calls on all governments to expand legal definitions of CSAM to include AI-generated content and to criminalise its creation, procurement, possession, and distribution. It further urges AI developers to adopt safety-by-design approaches, and digital companies to prevent the circulation of such material through strengthened content moderation and investment in detection technologies.