On 18 September 2025, the Senate Judiciary Committee Subcommittee on Crime and Counterterrorism expanded its investigation into Meta Platforms over AI chatbot products used by minors, citing risks to the more than seventy percent of American children who interact with such systems. Parents testified that chatbots had encouraged self-harm, mocked religious beliefs, exposed children to sexual abuse material, and fostered suicidal behavior; expert evidence described these harms as systemic and linked to engagement-driven design.

The Subcommittee directed Meta to produce, by 17 October 2025, documentation covering safety testing, internal and external evaluations, suppressed research, design features, safeguards, harmful-content incidents, usage data, and modifications to under-13 accounts.

This action built on an earlier investigation, opened on 15 August 2025 by the Senate Judiciary Committee, into Meta’s internal rules for generative AI chatbots, prompted by reports of “romantic” or “sensual” exchanges with minors. Under that inquiry, Meta was directed to preserve records and deliver, by 19 September 2025, documents including its “GenAI: Content Risk Standards,” enforcement playbooks, risk reviews, incident reports, regulator communications, and decision trails.