On 28 October 2025, the Guidelines for User Age-verification and Responsible Dialogue Act (GUARD Act, S. 3062) was introduced in the United States Senate. The Act requires all AI chatbot providers to verify users’ ages before allowing access. Existing accounts must be frozen until users provide verifiable age data, and new users must complete age verification when creating an account. Providers must regularly re-verify users’ ages and may rely on third parties to perform verification. Data collected for verification must be minimal, encrypted, retained only as long as necessary, and never sold or shared.

Chatbots must clearly disclose that they are artificial intelligence systems and not human, both at the start of each conversation and at regular intervals. They are prohibited from claiming to be professionals or from offering legal, financial, medical, or psychological advice, and must remind users to seek licensed professionals for such services. Minors identified through verification are barred from using AI companions.

The Attorney General may enforce the Act through civil actions, injunctions, and fines of up to USD 100,000 per violation, and State Attorneys General may also bring cases to protect their residents.