Operationalising OECD AI Principle 1.5
Our analysis compares AI rules across the globe, examining the implementation of OECD AI Principle 1.5 in unprecedented detail.
Tommaso Giardini, Anna Pagnacco, Philine Jenzer, Nicolà Seeli, Gian-Marc Perren
31 Jul 2024
Holding AI actors accountable for the impact of their AI systems is a goal shared by governments across the globe. Regulators pursue accountability out of concern over the risks of AI systems and the unforeseen consequences of AI's permeation into all economic sectors. The regulatory requirements imposed on AI actors, however, vary significantly across borders.
OECD AI Principle 1.5 demands that AI actors be accountable for the proper functioning of AI systems and for the respect of the OECD AI Principles. In the 2024 update of the principles, the OECD specified that AI actors should 1) ensure traceability to enable analysis of the AI system's outputs and 2) apply systematic risk management throughout the AI system lifecycle. In national AI rules, a patchwork of regulatory requirements implements OECD AI Principle 1.5. The heatmap visualises divergence within a selection of these requirements. Our analysis explains each requirement in detail.
The patchwork of regulatory requirements implementing OECD AI Principle 1.5 is only the tip of the iceberg. Granular differences emerge even among jurisdictions that impose the same regulatory requirements. To showcase this granular divergence, we provide a detailed comparative analysis of the following requirements:
To provide a common language for international AI rules, we analyse rulebooks that differ in their legal nature and current lifecycle stage. For China, we analyse the regulations on generative AI (“GAI”), deep synthesis services (“DS”) and recommendation algorithms (“RA”). For the United States, we feature the Blueprint for an AI Bill of Rights (“BoR”), the Executive Order on AI (“EO”), and the NIST AI Risk Management Framework (“NIST AI RMF”).