The National Institute of Standards and Technology (NIST) has closed its consultation on the concept paper for its AI Risk Management Framework (AI RMF). The framework is intended for voluntary use by developers, users, and regulators of AI systems and aims to address risks arising in the development and evaluation of AI systems. It is also structured to be usable by as broad a range of individuals and organizations as possible. NIST plans to publish an initial draft of the framework in early 2022, with the goal of releasing the complete framework in early 2023. The consultation opened on 14 December 2021.
Original source