A Guide for Governments
In April 2025, the US government will outline the foreign digital policies it deems discriminatory against US companies and how it plans to counter them. Online content regulation is on the radar. This piece helps foreign governments understand the fundamentals and the cause of tensions.
Note: This analysis is part of our series on geopolitical tensions in digital policy. The series starts by dissecting a recent US memorandum that scrutinises different types of foreign digital policy. Topical pieces, including this one, then distill the global state of affairs and explain the cause of geopolitical tensions in one type of digital policy.
The regulation of online content is at the forefront of geopolitical tensions in digital policy. Several high-profile figures of the US administration have publicly opposed foreign regulation of online content, including Vice President Vance, Federal Communications Commission Chairman Carr, and House Judiciary Committee Chairman Jordan. Their focus has been the EU Digital Services Act. The recent US memorandum expands the scope of scrutiny to the United Kingdom (UK), presumably targeting its Online Safety Act. The stated concern is that these policies require or incentivise US companies to undermine freedom of speech. Below, we analyse how governments are regulating online content and how this contributes to geopolitical tensions, based on the Digital Policy Alert dataset.
Governments impose both obligations to moderate content and restrictions on content moderation that aim to protect users’ freedom of speech. The rise of user-generated content platforms has facilitated the dissemination of illegal content. At first, governments reacted by requiring platforms to moderate certain content and suspend certain accounts. Over time, concerns grew that platforms exerted excessive influence on online discourse – a typical allegation being “anti-conservative bias.” These concerns escalated after platforms, including Meta, Twitter (now X) and YouTube, suspended Donald Trump’s accounts in the aftermath of 6 January 2021. Subsequently, some governments also started restricting how platforms can moderate content.
Content moderation obligations are widespread across the world, totalling 778 developments at the national and EU level, as of 7 April 2025:
The most active jurisdictions were the US (95), Russia (74), China (64), the EU (61), and the UK (58).
Most of these developments were binding laws and orders (419) or enforcement cases (252). A minority comprised non-binding outlines (88) and inquiries (24).
A majority of the developments were adopted or in force (463), while a minority were under deliberation (273), and fewer still were rejected or revoked (42).
The developments primarily targeted user-generated content platforms (488), followed by other platform intermediaries (120), messaging service providers (67), search service providers (67), and online advertising providers (66). In addition, 99 developments were cross-cutting, meaning they applied to the digital economy as a whole.
Restrictions on content moderation are less prevalent, totalling 57 developments, as of 7 April 2025:
The most active jurisdictions were the US (28), Russia (18), and Brazil (2).
These developments were mainly enforcement cases (41) or binding laws and orders (16).
A majority of the developments were under consideration (37), while a minority were adopted or in force (13), and fewer still were rejected or revoked (7).
These developments mostly applied to user-generated content platforms (37), while some focused on AI providers (13).
To explain how governments impose both obligations and restrictions on content moderation, we focus on the EU and the UK, the explicit targets of the memorandum.
The EU Digital Services Act (DSA) establishes obligations for different “online intermediaries”:
Regarding content moderation obligations, the DSA explicitly states that intermediaries do not have an obligation to actively monitor content. When government authorities issue an order to act against illegal content, intermediaries must act on it and inform the issuing authority of the effect given to the order.
“Hosting providers” must further implement a “notice and action” mechanism regarding illegal content. When illegal content is notified, hosting providers are presumed to have knowledge and must remove it to avoid liability.
“Online platforms” must further establish systems to prioritise notices by “trusted flaggers” and, after prior warning and for a limited time period, suspend services for users that frequently provide manifestly illegal content.
“Very large” online platforms and search engines must further assess risks, including the dissemination of illegal content, and consider the adaptation of their content moderation systems as part of their risk mitigation measures.
Regarding restrictions on content moderation, the DSA requires intermediaries to have due regard to fundamental rights, including freedom of speech, and to act diligently, objectively, and proportionately when moderating content.
Hosting providers must further provide a “statement of reasons” to users affected by the removal or demotion of content, as well as the suspension or termination of the service, the user account, or monetisation.
Online platforms must additionally implement a two-step redress mechanism. First, an internal complaint-handling system to enable users to challenge content moderation decisions, for free and within six months. Second, a certified out-of-court dispute settlement body to resolve remaining disputes.
“Very large” online platforms and search engines must, as part of their risk assessment, consider whether their services negatively impact fundamental rights, including freedom of speech, as well as civic discourse and electoral processes.
For both obligations and restrictions, the DSA demands transparency. All intermediaries must include information on their content moderation practices in yearly transparency reports and in their terms and conditions. In addition, hosting providers must provide information on the notice and action mechanism, online platforms must provide information on the redress mechanisms, and very large online platforms and search engines must regularly publish transparency reports and share audit results with authorities.
The UK Online Safety Act (OSA) aims to protect both children and adults online by establishing obligations for user-to-user and search service providers. Additional obligations apply to services likely to be accessed by children and “categorised services” that exceed certain thresholds:
Regarding content moderation obligations, the OSA imposes “safety duties” to protect users from illegal content, including measures to prevent exposure to illegal content or minimise its duration. Furthermore, providers must establish a mechanism for users to report illegal content and a complaints system. For services likely to be accessed by children, equivalent duties apply concerning content that is harmful to children. In addition, providers must empower adult users to control their exposure to different kinds of content, conduct an illegal content risk assessment, and specify protections against illegal content in their terms of service.
Regarding restrictions on content moderation, the OSA requires providers to consider freedom of speech when moderating content. Providers must establish a complaints system for users to challenge content moderation and access restrictions, as well as the use of proactive technology for these purposes. Furthermore, providers must “protect” certain content, including content of democratic importance, journalistic content, and content by news publishers.
The OSA also covers transparency, including a yearly reporting mechanism and obligations to include information on content moderation practices in terms of service. Notably, the terms of service must inform users about their right to bring a claim for breach of contract if content is moderated or accounts are suspended in breach of the terms of service.
The US is also considering both content moderation obligations and restrictions:
Regarding content moderation obligations, the US President and First Lady recently expressed their support for the TAKE IT DOWN Act. The bipartisan bill, which passed the Senate in February, would require platforms to remove non-consensual intimate deepfakes under the oversight of the Federal Trade Commission. Critics promptly raised concerns that the Act could be misused to remove lawful speech and censor political opponents. Similar discussions occurred when the previous Congress was debating the Kids Online Safety and Privacy Act, the Children and Teens' Online Privacy Protection Act, the DEFIANCE Act, and the EARN IT Act, among other proposals that were not adopted.
Regarding restrictions on content moderation, an ongoing inquiry by the Federal Trade Commission is scrutinising alleged “censorship” by social media platforms. In addition, the House Judiciary Committee is inquiring whether the Biden administration coerced or colluded with AI companies to censor lawful speech. Furthermore, two state-level laws in Florida and Texas restricting platforms’ content moderation powers were temporarily halted by the US Supreme Court, partly on the grounds that they restrict platforms’ own freedom of speech.
Until recently, tensions regarding online content regulation were internal, not international. Content regulation is intertwined with public morals, an area in which governments have different traditions and insist on their sovereignty. Hence, even fierce internal debates did not spill across borders.
Two reasons can explain why tensions over content regulation have expanded to the international sphere. First, industry pushback against content moderation has grown considerably. Several companies have recently relaxed their content moderation practices, a prominent example being Meta. In parallel, companies increasingly oppose foreign content regulation. In Brazil, strong industry pushback against the Fake News Bill even led to investigations into Google and Telegram. The Federal Supreme Court analysed whether the companies used their platforms to influence public opposition, but closed the investigations without consequences.
Second, the new US administration is supporting this pushback, increasing the pressure on foreign governments. In its first two months, the administration repeatedly emphasised both the importance of free speech and its discontent with foreign regulation of US companies. This emboldened US companies. Meta CEO Zuckerberg, for instance, announced that the company will “work with President Trump to push back on governments around the world.” He named the EU, China, and unspecified countries in Latin America as examples of governments that impose censorship. Notably, he stated that the only way to push back is with “the support of the US government.”
The memorandum instructs the Treasury Secretary, the Commerce Secretary, and the US Trade Representative to scrutinise online content rules in the EU and the UK. In particular, they will investigate whether any act, policy, or practice has the effect of “requiring or incentivizing the use or development of US companies' products or services in ways that undermine freedom of speech and political engagement or otherwise moderate content.” The results will be presented in a report by the US Trade Representative, scheduled for April 2025, which will include recommendations for the US.
Foreign governments preparing for these next steps should consider two perspectives.
The memorandum scrutinises content moderation obligations without acknowledging restrictions on content moderation. The EU and the UK both impose such restrictions – with the explicit aim to protect freedom of speech – which could alleviate tensions. Other governments, whose content moderation obligations may well be scrutinised in the future, should highlight any restrictions included in their regulatory frameworks.
In the US, some content moderation obligations, especially concerning children’s online safety, are gaining support from both parties and industry. The TAKE IT DOWN Act, for instance, could become a rare example of federal legislation on online content. There is a considerable caveat: many similar proposals have gained momentum in the past without being adopted. Still, the current momentum provides an opportunity for foreign governments to carefully frame their content moderation obligations in line with current US developments.
Note that each development can target one or multiple sectors of the digital economy, or be cross-cutting.