Description

Adopted Guide to AI Regulation

On 16 October 2024, the Government of the Netherlands adopted a guide on the European Artificial Intelligence (AI) Regulation addressed to entrepreneurs and organisations involved in machine learning and AI development. The guide details the rules for responsible AI development and use intended to safeguard public safety, health, and fundamental rights. It proposes steps for compliance: determining whether the system qualifies as AI under the regulation, establishing whether the organisation acts as an AI provider or user, and carrying out a risk assessment to determine whether the system falls into the prohibited, high-risk, or another category. Prohibited AI covers practices posing unacceptable risks, such as manipulating behaviour, exploiting vulnerabilities, social scoring, and certain uses of biometric identification. High-risk AI covers systems affecting health, safety, or fundamental rights, which must comply with strict criteria before deployment. The guide also outlines the phased application of the regulation, which is set to be fully applicable by mid-2027, with restrictions on certain AI systems taking effect from February 2025.

Scope

Policy Area: Design and testing standards
Policy Instrument: Design requirement
Regulated Economic Activity: ML and AI development
Implementation Level: National
Government Branch: Executive
Government Body: Central government

Complete timeline of this policy change

2024-10-16: adopted
