On 11 July 2025, the Office of Communications (Ofcom) published a discussion paper on attribution measures to combat harmful deepfakes. The paper focuses on digital platforms and Artificial Intelligence (AI) developers involved in the creation, distribution, and moderation of synthetic audio-visual content, including deepfakes that pose risks such as fraud, defamation, and disinformation. The attribution toolkit examines four measures designed to tackle harmful deepfakes on digital platforms: watermarking, provenance metadata, AI labels, and context annotations. The paper explains how these tools can help users identify misleading content and support platform moderation, but it also notes challenges, including the risk that attribution signals are stripped or removed, user misinterpretation, and inconsistent standards. It stresses the importance of combining these measures with other interventions, such as AI classifiers and red teaming of AI models.
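To illustrate one of the measures the paper discusses, the sketch below shows a toy least-significant-bit (LSB) watermark. This is a deliberately simplified, hypothetical example, not Ofcom's proposal or any production scheme: real watermarking systems are designed to survive compression, cropping, and deliberate removal, whereas this only demonstrates the basic embed-and-extract idea on a list of 8-bit pixel values.

```python
def embed(pixels, bits):
    """Hide each watermark bit in the least significant bit of one pixel."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the bit
    return out

def extract(pixels, n):
    """Read the first n least significant bits back out."""
    return [p & 1 for p in pixels[:n]]

if __name__ == "__main__":
    image = [120, 200, 33, 64, 91, 178, 54, 12]  # toy "image" (8-bit values)
    mark = [1, 0, 1, 1]                          # 4-bit watermark
    stamped = embed(image, mark)
    assert extract(stamped, 4) == mark
    # Each pixel changes by at most 1, so the mark is imperceptible to viewers
    assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

The fragility of this toy scheme also illustrates the paper's point about removal risk: re-encoding or even lightly editing the pixels destroys the embedded bits, which is why the paper pairs attribution measures with complementary interventions.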