On 16 October 2024, the Government of the Netherlands adopted a guide on the European Artificial Intelligence (AI) Regulation addressed to entrepreneurs and organisations involved in machine learning and AI development. The guide sets out the rules for responsible AI development and use intended to safeguard public safety, health, and fundamental rights.

The guide proposes steps for compliance with the regulation, including a risk assessment to determine whether an AI system falls into the prohibited, high-risk, or another category. It further advises organisations to establish whether their system qualifies as AI under the regulation and whether they act as an AI provider or user. Prohibited AI covers practices posing unacceptable risks, such as manipulating behaviour, exploiting vulnerabilities, social scoring, and certain uses of biometric identification. High-risk AI covers systems that affect health, safety, or fundamental rights and must meet strict criteria before deployment.

The guide also outlines the phased application of the regulation, which is set to be fully applicable by mid-2027, with certain AI systems facing restrictions from February 2025.