On 4 September 2025, the Children Harmed by AI Technology Act (CHAT Act, SB 2714) was introduced in the Senate, establishing protections for minors' accounts. The Act requires covered entities (those that own, operate, or provide companion AI chatbots in the United States) to verify the age of new users through reliable commercial methods. Where the user is a minor, the account must be linked to a verified parental account, parental consent must be obtained before use, parents must be notified immediately if suicidal ideation is detected, and minors must be prevented from accessing chatbots that engage in sexually explicit communication. Chatbot interactions must be monitored for signs of suicidal ideation, and appropriate resources, including contact information for the National Suicide Prevention Lifeline, must be provided to the user and, where relevant, to the parent. Finally, at the beginning of any interaction, and at least every sixty minutes thereafter, users must be clearly reminded that they are communicating with an AI rather than a human.