On 15 April 2026, the Ministry of Digital Development closes the public consultation, opened on 18 March 2026, on the draft Law on Fundamentals of State Regulation of the Fields of Application of Artificial Intelligence Technologies.

The draft would require developers of artificial intelligence models to exclude functional features capable of leading to discrimination on the basis of behaviour or personal characteristics; to document the architecture, functional logic, and limitations of artificial intelligence models; and to model the potential risks associated with the operation of the artificial intelligence technologies being developed.

Operators of artificial intelligence systems would be required to include a safe-operation manual in system documentation, and use of a system to manipulate behaviour or to exploit human vulnerabilities would be prohibited. The draft defines exploitation of human vulnerabilities as the use of the characteristics of a natural person or group of persons to deliberately influence their behaviour or decision-making, or to obtain unauthorised access to information.

The draft establishes a risk-based approach to regulation, requiring assessment of the purpose of artificial intelligence technologies, the probability and scale of risks of harm, the degree of autonomy of artificial intelligence systems, and the degree of their influence on legally significant actions.

The draft defines large foundational models as artificial intelligence models trained to recognise certain types of patterns and applied to perform a large number of different tasks, with minimum parameter thresholds to be set by the authorised body in the field of artificial intelligence. Requirements for computing infrastructure, including data processing centres and supercomputers, would be established by the Government of the Russian Federation.