On 2 June 2024, the US National Institute of Standards and Technology (NIST) closes its public consultation on the report Reducing Risks Posed by Synthetic Content (NIST AI 100-4). The report evaluates existing and emerging science-backed standards, tools, methods, and practices for managing synthetic content, including authentication, provenance tracking, and labelling techniques such as watermarking. It also addresses the detection of synthetic content, the prevention of harmful generative AI outputs such as child sexual abuse material and non-consensual intimate imagery, and the testing and auditing processes necessary to maintain content integrity. In addition, the report describes technical approaches that are commercially available and those still under exploration, highlighting the advantages and challenges of each. It emphasises that while these approaches promise to enhance trust by clearly indicating where AI has been used to generate or modify content, they also have significant limitations, both technical and social. Overall, the report aims to serve as a resource for improving understanding and laying the groundwork for better technical approaches to synthetic content provenance, detection, labelling, and authentication.