Every two months, we provide a roundup of global developments on online safety, structured along our common language. To support policymakers, we document which government authorities are tackling which aspect of online safety, and how.

This roundup provides policymakers working on online safety with a structured overview of international developments. We summarise insights from the Digital Policy Alert’s (DPA) daily tracking of developments worldwide, structured along our common language for online safety. Every finding links to the DPA entry, providing an extensive summary and the official government source. Users can filter and download the full DPA dataset and subscribe to our tailored notification service for free.
We focus on developments between 15 August and 15 October 2025, in four areas: access restrictions, the protection of children’s data, online content rules, and consumer protection rules.
Access restrictions are a strict, but increasingly common, regulatory instrument. We distinguish between bans, restricting access for all users, and age gates, restricting access for users below a certain age.
Governments applied bans both to specific products and to entire categories of services:
The President of the United States signed an executive order confirming that the framework agreement transferring the operation of TikTok’s US application to a new US-based joint venture is in line with the Foreign Adversary Controlled Applications Act. The Act required ByteDance to divest TikTok or face a ban. Separately, the Federal Communications Commission issued an order denying recognition to the second batch of Chinese electronics testing labs, following a previous order.
The President of India signed a law banning online money games involving stakes or deposits and imposing broad restrictions on related activities. The Ministry of Electronics and Information Technology released the draft implementing rules, establishing a game classification framework, registration scheme, and oversight authority.
A bill introduced in the Brazilian Chamber of Deputies would require online betting platforms to restrict access for certain vulnerable individuals.
The Italian Data Protection Authority temporarily restricted the ClothOff “deep nude” service from processing Italian users’ data, citing the company’s failure to provide requested information and insufficient watermarking of manipulated images, which breaches principles of fairness, accountability, and data protection by design and by default.
Vietnam’s Ministry of Science and Technology opened a consultation on the draft Artificial Intelligence Law, which bans AI practices posing unacceptable risk, including harmful manipulation, emotion recognition, and real-time biometric identification.
The Philippines National Privacy Commission banned Tools for Humanity, the entity operating the World App and Orb verification system, from processing biometric data.
Beyond bans, several governments demanded age verification to restrict access for users below a given age threshold, focusing on social media and adult content websites:
The European Commission released the second version of its EU age verification blueprint, building on the solution first introduced in July 2025. It adds support for passports, ID cards, and cross-border verification across all 27 member states, serving as an interim age verification solution until the European Digital Identity Wallets become operational. The European Commission also requested information from Snapchat, YouTube, Apple, and Google on their age verification systems and how they prevent minors from accessing illegal products, such as drugs or vapes, as well as harmful content, including material promoting eating disorders.
France's Regulatory Authority for Audiovisual and Digital Communication confirmed that six designated pornographic sites complied with age verification requirements.
The UK’s Office of Communications (Ofcom) opened investigations under the Online Safety Act, assessing the compliance of 8579 LLC, AVS Group, Cyberitic, Web Prime, Youngtek Solutions, ZD Media, and XGroovy's provider with age assurance requirements to prevent minors’ access to pornographic content.
Italy implemented a resolution requiring pornographic websites and video platforms to deploy age verification systems to block minors’ access. Meanwhile, the Data Protection Authority held a hearing on the updated Senate bill on the protection of minors, which would restrict social media and video-sharing accounts to users aged 15 and above.
In the United States, several measures concerning online age verification and the protection of minors advanced. At the federal level, the Children Harmed by AI Technology Act was introduced in the Senate to require AI chatbot providers to verify users’ ages and block minors from sexually explicit chatbot interactions. At the subnational level, Arizona implemented a law requiring online platforms distributing sexual material to verify that users are at least 18 years old. The Florida Attorney General filed a lawsuit against adult content platforms for non-compliance with age verification requirements. Ohio implemented a law requiring commercial online entities that primarily distribute obscene or harmful material to implement age verification measures to restrict minors’ access. Finally, California’s Governor vetoed the LEAD for Kids Act, which aimed to restrict minors’ access to chatbots that could encourage self-harm, illegal activity, or sexual interactions.
Brazil’s President signed the law on protecting children and adolescents online, requiring providers to implement effective age verification to prevent minors from accessing inappropriate or restricted content, such as pornographic material. The Ministry of Justice opened a consultation on the proposal for a methodology and minimum requirements for age verification in digital services. The Ministry also proposed the establishment of an age rating labelled "not recommended for children under 6 years old". Separately, a bill was introduced to require large digital platforms and search tools to establish specific technical and procedural measures for age verification.
Norway's Ministry of Children and Family Affairs closed the consultation on the law mandating a 15-year minimum age for social media use.
In addition to access restrictions, several governments demanded age verification to establish tailored safeguards for users below age thresholds. Here, we outline one framework with safeguards spanning online content, children’s data, and consumer protection. The sections below focus on each of these policy areas and open with developments related to children.
Australia’s eSafety Commissioner registered six new industry codes under the Online Safety Act, set to take effect in March 2026. These codes include measures to protect children from age-restricted content such as online pornography, extreme violence, and self-harm material.
The messaging services code sets compliance obligations for providers to prohibit sharing online pornography with children, maintain plain-language terms and proportionate enforcement systems, and implement in-service anonymous reporting tools. Providers must train staff, review systems annually, and offer user controls such as message blocking, group exit, and child privacy defaults.
The social media services code establishes a three-tier risk system requiring platforms to assess and mitigate children’s exposure to age-restricted content. Obligations vary by risk level, with higher tiers requiring age assurance, detection and removal systems including AI-based tools, and annual safety reviews. AI companion chatbots face additional risk assessments and additional age control measures.
The designated internet services code requires adult websites and high-risk generative AI platforms to implement effective age verification, while app stores are expected to enforce checks for 18+ apps.
Device manufacturers and operating system providers must offer child accounts with integrated safety tools and continuously enhance protections.
Previously, in June, the eSafety Commissioner approved the industry codes applicable to hosting services, internet carriage services, and internet search engine services.
In addition to safeguards against harmful content, Australia is preparing for the implementation of the Social Media Minimum Age Act in December 2025. The Act prohibits users under 16 from opening accounts on social media platforms that meet age-restricted criteria.
The Minister for Communications issued the Online Safety (Age-Restricted Social Media Platforms) Rules specifying criteria for services that are not considered age-restricted. The Minister noted that the prohibition will apply to platforms including Facebook, Instagram, Snapchat, TikTok, X, and YouTube.
To support its implementation, the eSafety Commissioner adopted social media minimum age regulatory guidance and released a self-assessment tool for platforms to determine whether their services qualify as age-restricted and require implementation of mandatory age verification measures.
The Information Commissioner also issued guidance on the social media minimum age scheme, clarifying how platforms can implement age verification under the Online Safety Act while complying with privacy laws.
The Attorney General released the national identity proofing guidelines. The framework establishes standardised procedures for verifying user identities across digital services and supports the implementation of age verification requirements.
To protect children’s data, governments impose both restrictions on data processing and safeguards during data processing.
Governments have rarely restricted the processing of minors’ data:
The European Union implemented the Regulation on the transparency and targeting of political advertising, which prohibits using minors’ data for such purposes.
Brazil’s President signed a law on online protection for children and adolescents, which prohibits targeting children and adolescents with commercial advertising using profiling, emotional analysis, augmented reality, extended reality, or virtual reality.
Most governments focused on parental approvals and consent before processing minors’ data:
The abovementioned Brazilian law on online protection for children and adolescents requires platforms to obtain parental consent to collect or process children’s personal data.
Nigeria implemented the directive under the Data Protection Act, which requires service providers offering services to children to obtain parental or guardian consent to process their data. It specifies that consent shall not be sought or accepted in contexts that could promote hate, violence, child rights violations, or criminal activity.
Canada’s Privacy Commissioner, alongside provincial privacy commissioners, found TikTok in breach of privacy laws for collecting children’s personal information without obtaining parental consent and using it for targeted advertising. In response to the findings and recommendations, TikTok agreed to strengthen its privacy measures and, during the investigation, stopped allowing advertisers to target users under 18.
Italy implemented the national AI law requiring parental consent for minors under 14 to access AI technologies or for their personal data to be processed in connection with such use.
At the federal level in the United States, the Children Harmed by AI Technology Act was introduced in the Senate to require AI chatbot providers to obtain parental consent for minors to open accounts. The Department of Justice and Federal Trade Commission (FTC) filed a lawsuit against Iconic Hearts for violating the Children’s Online Privacy Protection Rule by collecting children’s data without verifiable parental consent. The FTC also reached a USD 10 million settlement with Disney for collecting children’s data without parental consent. In addition, a civil action was filed against Google alleging the collection of children's personal data without parental consent.
At the subnational level, the Court of Appeals upheld most provisions of California's Protecting Our Kids from Social Media Addiction Act. The Act requires age verification mechanisms and prohibits platforms from providing personalised algorithmic feeds to minors without parental consent. The California Governor also signed a bill to require app stores to verify users’ age and obtain parental consent before children under 16 can download apps. Colorado and Montana implemented laws prohibiting the sale and processing of minors' personal data for targeted advertising without obtaining parental consent.
Other governments focused on safeguards while children’s data is processed:
Austria’s Data Protection Authority found Microsoft in breach of the General Data Protection Regulation for tracking students through Microsoft 365 Education without their valid consent, ordering the company to grant the complainant full data access and clear explanations.
China’s National Cybersecurity Standardisation Technical Committee closed the consultation on a standard distinguishing between three levels of data protection in services accessible to minors: basic, enhanced interactivity, and age-appropriate optimisation. It also approved 28 national standards projects, including guidelines for AI applications accessible to minors and technical measures to protect children’s data across sectors.
The European Data Protection Board opened a consultation on guidelines on the interplay between the Digital Services Act and the General Data Protection Regulation, clarifying the legal basis for the processing of minors’ data. The guidelines also address recommender systems and systemic risk assessments concerning the processing of children’s data.
The UK Information Commissioner’s Office consulted on draft guidance under the Data (Use and Access) Act, requiring online service providers to preserve children’s data post-mortem with safeguards.
Texas implemented a bill mandating data brokers to disclose their status and inform parents of minors’ rights and data protection safeguards.
Online content rules comprise content moderation requirements; related obligations, such as transparency requirements; and user speech rights, which provide redress against content moderation decisions. Content moderation requirements were most frequent, including rules regarding the protection of children, the moderation of specific content types, and general moderation regimes.
Moderation requirements addressing the protection of minors spanned continents:
Brazil’s President signed the law on online protection for children and adolescents, effective March 2026. It includes preventive measures against abuse, exploitation, gambling, predatory advertising, and other online harms. The Ministry of Justice established age ratings for several audiovisual works to protect minors from inappropriate content. Other bills introduced aim to require platforms to implement stronger protective measures against content harmful to minors and to mandate the blocking of content that promotes dangerous challenges or risks to minors’ health or safety. Additionally, Meta received a notice on child safety protection violations on Instagram and Facebook through AI chatbots.
The European Commission consulted on the action plan against cyberbullying, and the European Board for Digital Services announced coordinated action to ensure smaller platforms comply with the Digital Services Act's minor protection obligations.
The Nigerian Communications Commission is consulting on amendments to the Internet Code of Practice. It focuses on updating provisions related to child online protection to prevent exploitation and abuse.
The United States Federal Trade Commission requested information from Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and X.AI on their measures to assess and mitigate risks to minors, including systems to detect and prevent harmful outputs, such as sexually explicit content. Additionally, 44 state Attorneys General sent letters to AI companies requesting details on policies to protect children from sexually inappropriate interactions. The Texas Attorney General also announced the expansion of the investigation into Discord under the Securing Children Online Through Parental Empowerment (SCOPE) Act, amid concerns about addiction, sexual exploitation, and extremist content affecting minors.
Various governments regulated the moderation of child sexual abuse and exploitation material:
Australia’s eSafety Commissioner launched an investigation into an unnamed company offering AI “nudify” services, alleging that it failed to prevent the creation of synthetic child sexual abuse material in violation of the industry standard under the Online Safety Act.
In Brazil, multiple bills on the matter were introduced in the Chamber of Deputies. The first bill establishes technical standards for detecting child sexual abuse material and improving transparency mechanisms. The second criminalises the adultisation, sexualisation, exposure, or exploitation of children’s images, voices, or representations. A third proposal prohibits the creation or dissemination of content depicting minors as adults in sexual or erotic contexts.
The United Kingdom's Secretary of State repealed two regulations that would have required user-to-user services, under the Online Safety Act, to report child sexual exploitation and abuse material to the National Crime Agency. The Office of Communications (Ofcom) advanced investigations into compliance with obligations to prevent access to or sharing of child sexual abuse material. Proceedings were launched against Im.ge, Nippybox, Yolobit, and DStorage, while cases involving Krakenfiles, Nippydrive, Nippyshare, Nippyspace, and Wojtek/Gofile were closed.
At the federal level, the United States Federal Trade Commission and Utah reached a USD 15 million settlement with Aylo/Pornhub for allegedly claiming zero tolerance for child sexual abuse while failing to remove such material. Meanwhile, the Senate Judiciary Committee expanded its investigation into Meta following testimonies that chatbots encouraged self-harm and exposed children to sexual abuse material. At the subnational level, Montana enacted a law prohibiting entities that distribute or store visual content from allowing access to child sexual abuse material.
The Criminal Law (Miscellaneous Amendments) Bill was introduced in the Parliament of Singapore. The Bill expands the definition of “intimate image” to cover AI-generated material, criminalises non-consensual image production, and clarifies that computer-generated child abuse material is prohibited.
Measures regarding content relevant to political discourse, including news, were also widespread:
The Cyberspace Administration of China (CAC) took action against over 1,200 accounts for impersonating news organisations and spreading misleading content. CAC also released a second batch of cases addressing illegal and harmful content that undermines the online business environment and political discourse.
The European Union’s Regulation on the transparency and targeting of political advertising entered into force. It requires platforms to label paid political ads and to provide a mechanism for users to flag non-compliant content, and bans political ads from third-country sponsors in the three months before elections or referendums. The European Commission also adopted guidelines to support its implementation.
Singapore’s Protection from Online Falsehoods and Manipulation Act Office ordered Meta and X to issue corrections for false claims about the Infocomm Media Development Authority’s licensing rules, requiring them to inform users and link to the official correction.
The United States House Oversight and Government Reform Committee announced an investigation into Wikipedia over alleged organised efforts to inject bias into its entries.
Other content moderation requirements covered several types of content:
Australia’s eSafety Commissioner ordered X and Meta to remove violent footage of recent US killings, classified as Refused Classification material, allowing geo-blocking for compliance while excluding news or political content.
The Cyberspace Administration of China issued findings and orders against Xiaohongshu, UC, and Toutiao for failing to manage user-generated content responsibly, citing issues such as the prominence of trivial or sensitive topics and online violence. The CAC also launched nationwide campaigns, including initiatives addressing content related to veterans and a campaign against the malicious instigation of negative emotions.
The Ministry of Internal Affairs and Communications of Japan designated Pinterest, CyberAgent, Shonan Seibu Home, and Dwango as large-scale telecommunications service providers, imposing content moderation obligations under the Information Distribution Platform Regulation Act. The Act, in force since April 2025, demands both content moderation and transparency from designated platforms.
A bill to amend the Network Act was introduced to the National Assembly of the Republic of Korea to establish procedures for removing online content that violates privacy or is defamatory, including user notifications and dispute resolution options.
France implemented a decree requiring user-generated content platforms to display warnings on pornographic content simulating rape or incest, in line with the Law on Confidence in the Digital Economy. French authorities initiated legal actions against the streaming platform Kick following the death of a French user during a livestream. The Ministry for Artificial Intelligence and Digital Affairs filed a lawsuit alleging that Kick failed to prevent the broadcast of harmful content. Simultaneously, the Paris Prosecutor's Office opened an investigation under the Penal Code to examine whether Kick knowingly disseminated videos that deliberately attacked personal integrity.
The United Kingdom’s Department for Science, Innovation, and Technology announced that the Online Safety Act will be amended to designate content encouraging or assisting serious self-harm as a priority offence. Ofcom launched a consultation on super-complaints and fined 4chan GBP 20,000 for failing to comply with statutory information requests related to its obligation to assess the risks of users encountering illegal content on its platform.
At the subnational level, Montana implemented a law to criminalise threats involving real or digitally fabricated sexually explicit images on user-generated content platforms. Also, State Attorneys General issued letters to technology companies emphasising their responsibility to prevent the spread of non-consensual intimate imagery.
Russia implemented a law banning advertising on platforms deemed “undesirable,” including Facebook and Instagram, and restricting the use of VPNs to bypass access blocks. The amended code of administrative offences also prohibits distributing VPNs, creating or searching for extremist content, and advertising bypass tools.
Obligations related to content moderation focused on liability and transparency:
A bill was introduced in Brazil’s Chamber of Deputies to require social media platforms to moderate verified disinformation, enable user reporting, and cooperate with fact-checkers. Separately, the Court of Justice upheld a ruling requiring Obvious Software and Services to pay damages after misattributed complaints harmed a company. The Court found that the platform’s repeated omissions constituted a systemic failure, removing its liability exemption under the Brazilian Civil Rights Framework for the Internet.
The Cyberspace Administration (CAC) implemented rules on methods for identifying AI-generated synthetic content, requiring both explicit and implicit content identifiers. To support implementation, the National Information Security Standardisation Technical Committee (TC260) issued a standard establishing uniform marking requirements. The TC260 also adopted guidelines setting methods for implicit metadata identification of synthetic text, images, audio, and video, along with metadata protection and detection frameworks.
The European Commission consulted on draft guidelines for reporting serious AI incidents and transparency guidelines for AI systems that interact with users. The European AI Office also invited expressions of interest for the development of the voluntary Code of Practice on Transparent Generative AI Systems.
The Aligning Incentives for Leadership, Excellence, and Advancement in Development Act was introduced in the United States Senate. It expands the definition of harm to include reputational, financial, psychological, and physical injuries, including derivative harms, ensuring liability can extend to speech-related impacts, such as reputational damage from AI outputs.
The Online Safety (Relief and Accountability) Bill was introduced in the Parliament of Singapore to strengthen protection against online harms. Platforms must take reasonable measures to address specified online harms upon receiving notice, with larger platforms subject to additional requirements, such as shorter timelines for responding to user reports.
Finally, user speech rights were addressed in the United States. The Federal Trade Commission (FTC) sent a letter to Alphabet about Gmail’s spam filtering of messages from Republican sources, raising concerns about user speech and compliance with the prohibition on unfair practices. The FTC also issued letters to other technology companies, noting their responsibility to maintain consumer privacy and data security, including when facing requests from foreign governments to modify content or security measures.
We distinguish between four types of consumer protection rules: age-based safeguards, fair marketing and advertising obligations, user rights, and quality of service requirements.
Age-based safeguards were rare but covered both the design of services and parental controls:
The President of Brazil signed the law on online protection for children and adolescents, requiring digital products and services for or accessible to children to integrate safety and well-being measures in their design.
China’s Cyberspace Administration proposed criteria to identify online platforms with many minor users and significant influence, requiring them to provide additional protections, such as usage limits. Meanwhile, the Ministry of Industry and Information Technology concluded its consultation on national safety standards for children’s smartwatches, covering content filtering, location privacy, and communication restrictions to improve child safety.
The Netherlands Authority for Consumers and Markets opened an investigation into Snapchat over alleged violations of the Digital Services Act concerning the illegal sale of vapes to minors.
The Children Harmed by AI Technology Act was introduced in the United States Senate. It requires AI chatbot providers to enable linking minors’ accounts to a verified parental account and notify parents of suicidal ideation. The Senate Judiciary Subcommittee on Crime and Counterterrorism also expanded investigations into Meta, Snap, OpenAI, Google, and Character.AI over child safety risks from AI chatbots' engagement-driven design. At the subnational level, the New York Attorney General’s Office opened the consultation on the rules implementing the Stop Addictive Feeds Exploitation for Kids Act. Additionally, the Minnesota Attorney General filed a complaint against TikTok over alleged coercive design features, child safety violations, and deceptive practices.
Regarding fair marketing and advertising practices, governments addressed misleading information, the disclosure of advertisements, the prevention of deceptive interfaces, as well as hidden fees and fake reviews. The majority of developments focused on misleading information:
The Australian Competition and Consumer Commission filed a lawsuit against JustAnswer over false, misleading, and deceptive conduct concerning its online advice service.
The State Administration for Market Regulation of China (SAMR) opened an investigation into an e-commerce subsidiary of a short-video platform following reports of false advertising and the sale of counterfeit goods. The SAMR also released a report on recurring violations in the live e-commerce sector, and imposed penalties ranging from CNY 61,000 to CNY 450,000 over false advertising, fabricated testimonials, and misleading claims.
The Hungarian Competition Authority opened an investigation into Duolingo over potentially misleading claims about its learning method on its platform.
At the federal level in the United States, the Western District Court of Washington approved a USD 2.5 billion FTC settlement against Amazon for deceptive subscription practices. The FTC also issued proposed orders against Click Profit for misrepresenting earnings from AI-driven online retail and barred the company from misleading marketing on e-commerce platforms, while ruling against Workado over unsubstantiated advertising claims related to AI content detection tools. Additionally, the FTC filed a lawsuit against Air AI for deceptive claims on business growth, earnings, and refund guarantees, targeting small businesses using AI systems. At the subnational level, the Texas Attorney General opened an investigation into Meta and Character.AI for allegedly presenting AI chatbots as licensed mental health providers.
Another point of emphasis was the disclosure of advertisements, especially by influencers:
The Chamber of Deputies of Brazil passed a bill requiring digital influencers to label sponsored or AI-edited content and ensure accuracy under consumer protection laws.
The Monetary Authority of Singapore and Advertising Standards Authority of Singapore adopted rules and guidelines for responsible financial content creation by influencers and proper advertising disclosures.
Rules regarding deceptive interface designs, or dark patterns, were less frequent:
The European Commission consulted on the Digital Fairness Act, including rules to address dark patterns and establish fair marketing requirements for digital platforms.
The Amsterdam District Court in the Netherlands ruled that Meta’s automatic reversion to profiling-based recommendations on Facebook and Instagram constitutes an illegal "dark pattern" under the Digital Services Act. The court ordered Meta Ireland to make users' choices for a non-profiling recommendation system "persistent," meaning it must be retained until the user actively changes it.
The Korea Fair Trade Commission concluded consultations on draft amendments to e-commerce guidelines, setting standards and recommendations for regulating dark patterns. It also identified dark pattern practices on 36 online platforms and ordered corrections for hidden renewals, misleading layouts, and partial price disclosures.
Additionally, some governments focused their fair marketing rules on hidden fees and fake reviews:
The European Commission endorsed a code of conduct for online reviews and ratings of tourism accommodation to improve transparency, reliability, and consumer trust in platform intermediaries.
The Office of Competition and Consumer Protection in Poland opened an investigation into Netflix over allegedly treating users’ silence as consent to fee increases, violating consumer protection rules.
The Federal Trade Commission (FTC), together with several Attorneys General, filed a lawsuit against Live Nation and Ticketmaster over hidden fees and deceptive ticketing practices. At the subnational level, a state Attorney General’s regulation requiring businesses to clearly disclose total prices and optional charges entered into force.
In terms of user rights, governments focused on information and cancellation rights:
The European Commission opened a consultation on the evaluation of the Geo-Blocking Regulation to assess its implementation, enforcement, and impact on user rights across user-generated content and e-commerce platforms.
A bill to prohibit user-generated content platforms from conditioning access to content on creating accounts, profiles, or providing personal data was introduced to the Chamber of Deputies of Brazil.
Regarding quality of service, several governments addressed quality standards and the removal of unsafe products:
The State Administration for Market Regulation (SAMR) in China concluded consultations on draft measures on food safety responsibilities for live e-commerce, setting quality assurance requirements for platforms. Additionally, SAMR launched a pilot project on quality and safety coding verification for online products, applying to major e-commerce platforms and establishing product safety verification standards.
The European Commission requested information from Apple, Booking, Google, and Microsoft on the detection and mitigation of financial scam risks under the Digital Services Act. The requests focus on the platforms’ methods for identifying fraudulent content, conducting risk assessments, and protecting users from financial scams. The Commission also opened a consultation on the EU Delivery Act, which modernises postal regulations and sets quality of service requirements for e-commerce platforms. Meanwhile, the Council of the European Union adopted the Regulation on the safety of toys, requiring products to carry a Digital Product Passport with essential safety information.
Lastly, governments focused on the identification of sellers:
The Korea Fair Trade Commission issued a ruling against AliExpress and related entities, imposing corrective orders and KRW 1 million fines each for violations of seller identification requirements under the Act on Consumer Protection in Electronic Commerce.
The Federal Trade Commission reached a USD 2 million settlement with Temu for failing to provide a clear reporting system for suspicious activity and omitting required seller identification information, violating the INFORM Consumers Act of 2023.