Australia: eSafety Commissioner announced investigation into technology company responsible for AI-generated 'nudify' services used to create deepfake pornography

Description


On 8 September 2025, the eSafety Commissioner opened an investigation into a technology company responsible for Artificial Intelligence (AI) generated 'nudify' services used to create deepfake pornography. eSafety stated that it left the company unnamed to avoid publicising it. The company operates AI nude-image platforms on which users can upload photos of real people, including minors; the services have been used to create deepfake sexual images of Australian schoolchildren and attract about 100,000 Australian users each month. eSafety highlighted that the company failed to prevent the creation of child sexual abuse material and may face fines of up to AUD 49.5 million.


Scope

Policy Area
Content moderation
Policy Instrument
Content moderation regulation
Regulated Economic Activity
platform intermediary: user-generated content, ML and AI development
Implementation Level
national
Government Branch
executive
Government Body
consumer protection authority

Complete timeline of this policy change

2025-09-08
under deliberation

On 8 September 2025, the eSafety Commissioner opened an investigation into a technology company responsible for AI-generated 'nudify' services used to create deepfake pornography.

2025-11-27
in force

On 27 November 2025, the eSafety Commissioner issued an enforcement action under the Online Safety …