On 23 February 2026, 61 Data Protection Authorities, including those from Australia, Spain, Hong Kong, New Zealand, Korea, Singapore, and Switzerland, together with the European Data Protection Board, adopted a joint statement raising concerns about Artificial Intelligence (AI) systems that generate realistic images and videos of identifiable individuals without consent.

The statement is directed at organisations that develop or use AI content-generation systems. It warns that such tools enable non-consensual intimate imagery, defamatory content, and serious harms to children and other vulnerable groups. It reminds organisations that AI systems must comply with existing privacy and data protection laws, and notes that creating non-consensual intimate imagery may constitute a criminal offence in many jurisdictions.

The statement calls for robust safeguards, meaningful transparency, fast and accessible content removal mechanisms, and enhanced protections for children. It also stresses that regulators are committed to coordinated action through enforcement, policy, and education.