Description

Office of Communications published discussion paper on attribution measures to combat harmful deepfakes

On 11 July 2025, the Office of Communications published a discussion paper on attribution measures to combat harmful deepfakes. The paper focuses on digital platforms and Artificial Intelligence (AI) developers involved in the creation, distribution, and moderation of synthetic audio-visual content, including deepfakes that pose risks of harm such as fraud, defamation, and disinformation. The attribution toolkit examines four measures designed to tackle harmful deepfakes on digital platforms: watermarking, provenance metadata, AI labels, and context annotations. It highlights how these tools can help users identify misleading content and aid platform moderation, while noting challenges such as removal risks, user misinterpretation, and inconsistent standards. The paper also stresses the importance of combining these measures with other interventions, such as AI classifiers and red teaming of AI models.

Original source

Scope

Policy Area
Content moderation
Policy Instrument
Content moderation regulation
Regulated Economic Activity
platform intermediary: user-generated content, ML and AI development
Implementation Level
national
Government Branch
executive
Government Body
other regulatory body

Complete timeline of this policy change

2025-07-11
adopted

On 11 July 2025, the Office of Communications published a discussion paper on attribution measures …