On 28 November 2024, the Committee on Artificial Intelligence (CAI) of the Council of Europe adopted the HUDERIA Methodology. The HUDERIA provides structured, high-level guidance on the risk and impact assessment of artificial intelligence systems from the perspective of human rights, democracy and the rule of law. The Methodology originates from the work of the Ad Hoc Committee on Artificial Intelligence (CAHAI) (2019–2021), whose Policy Development Group mandated the Alan Turing Institute, the UK's national institute for data science and AI, to prepare a proposal for operationalising a human rights, democracy and rule of law impact assessment model.

The HUDERIA Methodology consists of four elements: the Context-Based Risk Analysis (COBRA), the Stakeholder Engagement Process (SEP), the Risk and Impact Assessment (RIA), and the Mitigation Plan (MP). The HUDERIA is a stand-alone, non-legally binding guidance document with no legal effect. It is intended for use by public and private actors throughout the lifecycle of AI systems.