AI Accountability

Operationalising the OECD AI Principle 1.5

Our analysis compares AI rules across the globe, examining the implementation of the OECD AI Principle 1.5 in unprecedented detail.

Authors

Tommaso Giardini, Anna Pagnacco, Philine Jenzer, Nicolà Seeli, Gian-Marc Perren

Date Published

31 Jul 2024

Holding AI actors accountable for the impact of their AI systems is a goal that unites governments across the globe. Regulators pursue accountability out of concern over the risks of AI systems and the unforeseen consequences of AI’s permeation into all economic sectors. The regulatory requirements imposed on AI actors, however, vary significantly across borders.

A patchwork of regulatory requirements implements OECD AI Principle 1.5

The OECD AI Principle 1.5 demands that AI actors be accountable for the proper functioning of AI systems and for respecting the OECD AI Principles. In the 2024 update of the principles, the OECD specified that AI actors should 1) ensure traceability to enable analysis of the AI system’s outputs and 2) apply systematic risk management throughout the AI system lifecycle. In national AI rules, a patchwork of regulatory requirements implements the OECD AI Principle 1.5. The heatmap visualises divergence within a selection of these requirements. Our analysis explains each requirement in detail.

The patchwork of regulatory requirements that implement OECD AI Principle 1.5 is only the tip of the iceberg. Granular differences emerge even among jurisdictions that impose the same regulatory requirements. To showcase this granular divergence, we provide a detailed comparative analysis of the following requirements:

  • Data composition
  • Regulatory cooperation
  • Risk and impact assessment
  • Risk management
  • Performance monitoring

To provide a common language for international AI rules, we analyse rulebooks that differ in their legal nature and current lifecycle stage. For China, we analyse the regulations on generative AI (“GAI”), deep synthesis services (“DS”) and recommendation algorithms (“RA”). For the United States, we feature the Blueprint for an AI Bill of Rights (“BoR”), the Executive Order on AI (“EO”), and the NIST Risk Management Framework (“NIST RMF”). 
