On 22 July 2025, the French National Commission on Informatics and Liberty (CNIL) adopted recommendations on the application of the General Data Protection Regulation (GDPR) to the development of artificial intelligence (AI) systems. The recommendations clarify that AI models trained on personal data are often themselves subject to data protection rules because of their memorisation risks. The guidance applies to AI developers, providers, and deployers across sectors, including health, education, and the workplace. It requires a clearly defined purpose for each processing operation, the assignment of responsibilities as controller or processor, and a valid lawful basis for data processing, such as consent, contract, or legitimate interest. It also emphasises data minimisation, retention limits, and privacy-by-design approaches, including federated learning, homomorphic encryption, and robust re-identification risk testing, especially for web-scraped or sensitive data. The CNIL further urges transparent communication; mechanisms to give effect to the rights of access, rectification, erasure, and objection; secure data handling; compliant annotation; and retraining or output filtering to mitigate memorisation. Data Protection Impact Assessments (DPIAs) must be performed for high-risk processing and must address AI-specific risks such as bias and data leakage.