AI Transparency and Explainability

Operationalising the OECD AI Principle 1.3

Our analysis compares AI rules across the globe, examining the implementation of OECD AI Principle 1.3 in unprecedented detail.

Authors

Tommaso Giardini, Nora Fischer

Date Published

17 Jul 2024

AI raises several transparency and explainability concerns, two of which are top of mind for governments across the globe. First, interaction with AI systems increasingly mimics human interaction. Second, AI systems are inherently opaque, leaving the humans who interact with them in the dark about the factors behind AI decisions. Despite sharing these concerns, governments choose different regulatory requirements to address them.

A patchwork of regulatory requirements implements OECD AI Principle 1.3

OECD AI Principle 1.3 demands that AI actors commit to transparency and responsible disclosure regarding their AI systems. They should provide meaningful information to foster a general understanding of AI systems, make stakeholders aware of their interactions with AI, and provide information on the factors behind AI output.

In national AI rules, a patchwork of regulatory requirements implements OECD AI Principle 1.3. The heatmap below visualises divergence within a selection of these requirements, grouped into three categories. Watermarking requirements attach directly to AI systems’ output. Disclosure requirements demand that AI actors proactively provide information. Information rights empower users to reactively request information. Our analysis explains each requirement in detail.

The patchwork of regulatory requirements implementing OECD AI Principle 1.3 is only the tip of the iceberg. Granular differences emerge even within jurisdictions that impose the same regulatory requirements.1 To showcase this granular divergence, we proceed with a detailed comparative analysis of the following requirements. Jump directly to the section that interests you:

  • Content watermarking
  • System-in-use disclosure
  • Technical disclosure
  • Information rights
1 To provide a common language for international AI rules, we analyse rulebooks that differ in their legal nature and current lifecycle stage. For China, we analyse the regulations on generative AI (“GAI”), deep synthesis services (“DS”) and recommendation algorithms (“RA”). For the United States, we feature the Blueprint for an AI Bill of Rights (“BoR”), the Executive Order on AI (“EO”), and the NIST AI Risk Management Framework (“NIST RMF”).
