On 22 January 2026, the Children Harmed by AI Technology Act (HR 7218), which includes data protection provisions, was introduced in the House of Representatives. The Act applies to any person or organisation (a "covered entity") that owns, operates, or makes available a "companion AI chatbot" to users in the United States. It explicitly excludes bots used for customer service, business operations, video games (within a limited scope), and standard voice-activated virtual assistants.

A covered entity must limit the collection, use, and storage of a user's age verification data strictly to the purposes of verifying age, obtaining parental consent, or maintaining compliance records, and must keep that information confidential.

The Act will be enforced primarily by the Federal Trade Commission (FTC), which must issue compliance guidance within 180 days and may treat violations as unfair or deceptive practices under the FTC Act. State attorneys general may also bring civil enforcement actions on behalf of their residents, subject to notifying the FTC; the FTC retains the right to transfer such a filing to a different court, to be heard on the matter, and to file appeals.

A "safe harbour" provision shields covered entities from liability if they act in good faith by relying on user-provided age information, following FTC guidance, and conforming to accepted industry standards for age verification. The law will take effect one year after its enactment.