Description

The Committee on Artificial Intelligence of the Council of Europe adopts the HUDERIA Methodology

On 28 November 2024, the Committee on Artificial Intelligence (CAI) of the Council of Europe adopted the HUDERIA Methodology. The HUDERIA provides structured, high-level guidance on the risk and impact assessment of artificial intelligence systems from the perspective of human rights, democracy and the rule of law. The Methodology originates from the work of the Ad Hoc Committee on Artificial Intelligence (CAHAI) (2019–2021), whose Policy Development Group mandated the Alan Turing Institute, the UK’s national institute for data science and AI, to prepare a proposal operationalising a human rights, democracy and rule of law impact assessment model. The HUDERIA Methodology consists of four elements: the Context-Based Risk Analysis (COBRA), the Stakeholder Engagement Process (SEP), the Risk and Impact Assessment (RIA), and the Mitigation Plan (MP). The HUDERIA is stand-alone, non-legally binding guidance intended for use by public and private actors throughout the lifecycle of AI systems.

Original source

Scope

Policy Area
Design and testing standards
Policy Instrument
Artificial Intelligence authority governance
Regulated Economic Activity
ML and AI development
Implementation Level
supranational
Government Branch
executive
Government Body
other regulatory body

Complete timeline of this policy change

2024-11-28
adopted

On 28 November 2024, the Committee on Artificial Intelligence (CAI) of the Council of Europe adopte…

2026-02-25
adopted

On 25 February 2026, the Committee of Ministers of the Council of Europe approved the HUDERIA Model…