AI Robustness, Security and Safety

Operationalising the OECD AI Principle 1.4

Our analysis compares AI rules across the globe, examining the implementation of the OECD AI Principle 1.4 in unprecedented detail.

Authors

Tommaso Giardini, Philine Jenzer, Anna Pagnacco

Date Published

24 Jul 2024

The safety risks posed by AI systems are a salient and shared regulatory concern. In a rare display of international alignment, governments, including China, the EU, and the US, jointly discussed AI safety and issued the Bletchley Declaration in November 2023. At the subsequent AI Seoul Summit, certain countries signed a declaration to address severe AI risks, a declaration for safe, innovative, and inclusive AI, and a statement of intent toward international cooperation on AI safety science. On the national level, however, regulatory approaches to AI safety diverge.

A patchwork of regulatory requirements implements OECD AI Principle 1.4

The OECD AI Principle 1.4 demands that AI systems be robust, secure, and safe throughout their entire lifecycle. AI actors should establish mechanisms to ensure that AI systems that risk causing undue harm or exhibit undesired behaviour can be overridden, repaired, or decommissioned. In national AI rules, a patchwork of regulatory requirements implements the OECD AI Principle 1.4. The heatmap visualises the divergence within these requirements, and our analysis explains each requirement in detail.

The patchwork of regulatory requirements that implement OECD AI Principle 1.4 is only the tip of the iceberg. Granular differences emerge even within the jurisdictions that impose the same regulatory requirements. To showcase granular divergence, we provide a detailed comparative analysis of the following requirements:

  • System safety
  • Data security
  • Registration, authorisation, and licensing
  • Prohibition
  • Testing

To provide a common language for international AI rules, we analyse rulebooks that differ in their legal nature and current lifecycle stage. For China, we analyse the regulations on generative AI (“GAI”), deep synthesis services (“DS”) and recommendation algorithms (“RA”). For the United States, we feature the Blueprint for an AI Bill of Rights (“BoR”), the Executive Order on AI (“EO”), and the NIST Risk Management Framework (“NIST RMF”). 
