On 4 September 2025, the Children Harmed by AI Technology Act (CHAT Act, SB 2714) was introduced in the Senate. Covered entities, defined as entities that own, operate, or otherwise make available a companion AI chatbot to individuals in the United States, must verify the age of individuals creating new accounts by requesting age information and confirming it through a reliable commercial method. If the user is a minor, the account must be linked to a verified parental account; parental consent must be obtained before use; parents must be notified immediately if the chatbot detects suicidal ideation; and the minor must be blocked from accessing chatbots that engage in sexually explicit communication. Age verification data may be collected, processed, and stored only to the extent strictly necessary for verification, parental consent, or compliance purposes. Chatbot interactions must be monitored for signs of suicidal ideation, with appropriate resources, including contact information for the National Suicide Prevention Lifeline, provided to the user and, when relevant, to the parent.