Operationalising the OECD AI Principle 1.4
Our analysis compares AI rules across the globe, examining the implementation of OECD AI Principle 1.4 in unprecedented detail.
The safety risks posed by AI systems are a salient and shared regulatory concern. In a rare display of international alignment, governments (including China, the EU, and the US) jointly discussed AI safety and issued the Bletchley Declaration in November 2023. At the subsequent AI Seoul Summit, certain countries signed a declaration to address severe AI risks, a declaration for safe, innovative, and inclusive AI, and a statement of intent toward international cooperation on AI safety science. At the national level, however, regulatory approaches to AI safety diverge.
OECD AI Principle 1.4 demands that AI systems be robust, secure, and safe throughout their entire lifecycle. AI actors should establish mechanisms to ensure that AI systems that risk causing undue harm or exhibit undesired behaviour can be overridden, repaired, or decommissioned. In national AI rules, a patchwork of regulatory requirements implements this principle. The heatmap visualises divergence across these requirements, and our analysis explains each requirement in detail.
The patchwork of regulatory requirements implementing OECD AI Principle 1.4 is only the tip of the iceberg: granular differences emerge even among jurisdictions that impose the same regulatory requirements. To showcase this granular divergence, we provide a detailed comparative analysis of the following requirements:
To provide a common language for international AI rules, we analyse rulebooks that differ in their legal nature and current lifecycle stage. For China, we analyse the regulations on generative AI (“GAI”), deep synthesis services (“DS”), and recommendation algorithms (“RA”). For the United States, we feature the Blueprint for an AI Bill of Rights (“BoR”), the Executive Order on AI (“EO”), and the NIST AI Risk Management Framework (“NIST RMF”).