AI, Human Rights, and Democratic Values

Operationalising the OECD AI Principle 1.2

Our analysis compares AI rules across the globe, examining the implementation of OECD AI Principle 1.2 in unprecedented detail.

Authors

Tommaso Giardini, Nora Fischer, Anna Pagnacco

Date Published

10 Jul 2024

As AI permeates all areas of life, governments around the world worry about the erosion of human values. Ceding human agency to AI can both create new problems and exacerbate existing ones – from algorithmic discrimination to privacy breaches to AI-generated misinformation. These regulatory concerns have led governments to make a similar demand: respect for the rule of law, human rights, and democratic values. Governments differ, however, in the regulatory requirements they impose in pursuit of this shared goal.

A patchwork of regulatory requirements implements OECD AI Principle 1.2

OECD AI Principle 1.2 demands that AI actors respect the rule of law, human rights, and democratic and human-centred values throughout the AI system lifecycle. The principle specifically lists non-discrimination, freedom, dignity, autonomy, privacy, diversity, fairness, social justice, and labour rights. In addition, actors should address AI's amplification of misinformation while respecting freedom of expression. To pursue this goal, AI actors should implement safeguards, such as human oversight, and address risks arising from uses outside the intended purpose and from intentional or unintentional misuse.

In national AI rules, a patchwork of regulatory requirements implements OECD AI Principle 1.2. The heatmap visualises divergence across a selection of these requirements. Our analysis explains each requirement in detail.

This patchwork of regulatory requirements is only the tip of the iceberg: granular differences emerge even among jurisdictions that impose the same requirements. To showcase this divergence, we provide a detailed comparative analysis of the following requirements:

  • Non-discrimination
  • Content moderation
  • Data protection
  • Human oversight
  • Interaction rights
To provide a common language for international AI rules, we analyse rulebooks that differ in legal nature and lifecycle stage. For China, we analyse the regulations on generative AI (“GAI”), deep synthesis services (“DS”), and recommendation algorithms (“RA”). For the United States, we feature the Blueprint for an AI Bill of Rights (“BoR”), the Executive Order on AI (“EO”), and the NIST AI Risk Management Framework (“NIST RMF”).
