The National Institute of Standards and Technology (NIST) has opened a consultation on its draft concept paper for the AI Risk Management Framework (AI RMF). The framework is intended for voluntary use by developers, users, and regulators of AI systems, and aims to address the risks that arise in the development and evaluation of AI systems. It is also designed to be usable by the broadest possible range of individuals and organizations. NIST plans to publish an initial draft in early 2022, with the goal of releasing the complete framework in early 2023. The consultation closes on 25 January 2022.
Original source