On 28 February 2025, the Communications Commission adopted the Guidelines for User Protection of Generative AI Services. The guidelines require that AI systems be designed to be transparent, explainable, and aligned with ethical principles. Developers must build systems that users can understand and trust, providing clear explanations of how decisions are made and ensuring that system behaviour is predictable and controllable. This includes applying user-centric design principles, conducting ethical impact assessments, and continuously evaluating system performance to address emerging issues, thereby promoting responsible AI innovation.

The guidelines also note that many AI systems, particularly those based on deep learning, rely on complex mechanisms that obscure how decisions or content are generated, creating the trust and accountability issues known as the "black box" problem. To address this, service providers should inform users when content is AI-generated, provide accessible explanations of the decision-making process, and offer attribution or sources where possible.
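The guidelines do not prescribe any particular implementation, but the disclosure, explanation, and attribution duties they describe could be reflected in how a provider packages model output. The following is a minimal sketch under that assumption; the names `GeneratedResponse`, `SourceAttribution`, and `wrap_with_disclosure` are hypothetical and simply illustrate attaching user-facing transparency metadata to generated content.

```python
# Illustrative sketch only: the guidelines do not define an API. All names here
# are hypothetical, showing one way to attach the disclosure, explanation, and
# attribution information the guidelines call for.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SourceAttribution:
    """A source the generated content draws on, surfaced to the user."""
    title: str
    url: str


@dataclass
class GeneratedResponse:
    """AI output packaged with user-facing transparency metadata."""
    content: str
    ai_generated: bool                      # explicit "generated by AI" flag
    explanation: str                        # plain-language note on how the output was produced
    sources: List[SourceAttribution] = field(default_factory=list)


def wrap_with_disclosure(raw_output: str,
                         sources: List[SourceAttribution]) -> GeneratedResponse:
    """Attach disclosure and attribution before the output reaches the user."""
    return GeneratedResponse(
        content=raw_output,
        ai_generated=True,
        explanation=(
            "This answer was produced by a generative AI model from your prompt "
            "and the sources listed below; it may contain errors."
        ),
        sources=sources,
    )


if __name__ == "__main__":
    response = wrap_with_disclosure(
        "Summary of the adopted guidelines...",
        [SourceAttribution("Guidelines press release", "https://example.org/guidelines")],
    )
    print(response.ai_generated, [s.title for s in response.sources])
```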