European Union: Council of Europe’s Committee on Artificial Intelligence adopted HUDERIA Methodology for risk and impact assessment of AI systems under the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law

Description

On 28 November 2024, the Council of Europe’s Committee on Artificial Intelligence (CAI) adopted the HUDERIA Methodology, non-binding guidance for the risk and impact assessment of Artificial Intelligence (AI) systems under the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. The methodology is designed for both public and private actors involved in AI development and deployment. HUDERIA offers a structured approach centred on four elements: context-based risk analysis, stakeholder engagement, risk and impact assessment, and mitigation planning. It aligns international human rights standards with existing technical risk frameworks, while allowing parties to the Framework Convention to adapt its principles to their own legal systems.

Original source

Scope

Policy Area
Design and testing standards
Policy Instrument
Testing requirement
Regulated Economic Activity
ML and AI development
Implementation Level
supranational
Government Branch
executive
Government Body
central government

Complete timeline of this policy change

2024-11-28
under deliberation

On 28 November 2024, the Council of Europe’s Committee on Artificial Intelligence (CAI) adopted the…

2025-06-16
adopted

On 16 June 2025, the Council of Europe launched the HUDERIA Process, introducing the HUDERIA Academ…