A New Chapter in Content Moderation: Unpacking the UK Online Safety Bill

On September 19, the British Parliament passed its sweeping Online Safety Bill with the stated intent to make the United Kingdom the “safest place in the world to be online.” After it receives royal assent in the coming weeks, it will place substantial burdens on all online platforms that host user-generated content in the United Kingdom to remove and prevent illegal content, including posts relating to terrorism, child exploitation, hate crimes, or fraud. The bill puts particular emphasis on protecting children online, with additional requirements to prevent children from accessing “harmful and age-inappropriate” content. Platforms found to be out of compliance could be fined up to £18 million or 10 percent of global annual revenue (whichever is larger) by Britain’s Office of Communications (Ofcom).

The Online Safety Bill is not without controversy. While children’s advocacy groups such as the National Society for the Prevention of Cruelty to Children hailed the bill’s passage as “a momentous day for children,” there has been strong pushback from civil liberties groups as well as tech companies. They protest that the bill’s provisions on encryption and content moderation will limit privacy and freedom of expression online—and some may not even be enforceable. As other major democracies such as the United States and the European Union consider their own approaches to harmful online content, the world will closely watch implementation in the United Kingdom.

Q1: How does the Online Safety Bill address child safety?

A1: The bill differentiates between two types of content subject to age-specific protections: “primary priority” and “priority” content that is harmful to children. It requires digital platforms to prevent children from accessing primary priority content altogether, including pornography and the promotion of self-harm, eating disorders, and suicide. Meanwhile, companies must ensure that priority content, such as harassment, health and vaccine misinformation, or violence, is “age appropriate” for children. The bill also requires digital platforms to maintain a complaint system through which parents can report violations of these provisions that they encounter online.

Many websites and apps will need to verify the ages of their users in order to comply with these provisions, which raises questions about both digital privacy and technical feasibility. Most current verification techniques—such as credit cards, government-issued identification, analysis of online search history, and facial recognition—require companies to process additional personal information from all users, not only those under the age of 18. In turn, greater data collection could heighten the risk of cyberattacks, an ongoing challenge in the United Kingdom: approximately one-third of UK businesses identified breaches or attacks in the past year, and Ofcom itself suffered a data breach in June that compromised sensitive information from hundreds of employees. Many age verification methods are also imperfect; children can easily borrow physical devices or identification cards from adults, and automated analysis of facial features or online activity relies on a range of potentially biased or inaccurate assumptions. The bill directs Ofcom to publish further guidance on age verification techniques, but it is not yet clear how regulators will navigate the trade-offs involving data privacy, security, algorithmic bias, and accessibility for both children and adults online.

Q2: Will the bill affect end-to-end encrypted services?

A2: End-to-end encryption (E2EE) is the digital communication process that underlies secure messaging services like Signal, WhatsApp, and Apple’s iMessage, preventing third parties, and even the platforms themselves, from accessing users’ data. It provides crucial protection against surveillance, particularly for vulnerable groups such as journalists, human rights activists, and diplomats. However, governments have often protested that E2EE can also shield criminals and terrorists from the reach of law enforcement.

The Electronic Frontier Foundation (EFF) summed up the views of many privacy groups and technology companies when it declared on September 8 that “the [Online Safety Bill] is incompatible with end-to-end encryption.” This alleged incompatibility stems from a provision granting Ofcom the power to issue notices requiring companies to scan their data in order to proactively identify and prevent content that promotes terrorism or child exploitation. There is no exception for E2EE within the text of the bill. Encrypted services have argued that it is technically infeasible to build a “back door” into their networks without undermining the privacy that their users rely on. They warn that consumers would be exposed to unwarranted surveillance by both overzealous state actors and malicious hackers. Some, including Signal and WhatsApp, have said they would end operations in the United Kingdom if forced to compromise their security regimes. Amid this controversy, Meta has continued rolling out E2EE in Facebook Messenger and Instagram direct messages over objections from the Home Office.

In the face of this strong criticism, the UK government has somewhat walked back its plans to scan E2EE data. On September 6, Stephen Parkinson, a member of the House of Lords, stated that Ofcom cannot compel scanning unless “appropriate technology” exists, seemingly acknowledging that no adequately privacy-protective E2EE scanning software is currently ready for deployment. However, Parkinson also pointed to language in the bill that gives Ofcom the right to compel companies to make “best endeavours” to develop such technology. Supporters cite findings from the government’s Safety Tech Challenge Fund, which concluded that client-side scanning could be used to detect illicit content in E2EE services while still maintaining user privacy (these findings have been disputed by academics and a government-commissioned team that evaluated the project). Secretary of State for Science, Innovation and Technology Michelle Donelan asserted that the government’s ability to compel the development of scanning technology would only be used as a “last resort.” Despite these verbal assurances from government officials, Parliament did not formally remove Ofcom’s authority to compel E2EE scanning from the final text of the statute, which continues to worry privacy advocates and tech companies.

Q3: What considerations does the Online Safety Bill give to free speech and expression?

A3: The Online Safety Bill is one of the most sweeping sets of content moderation rules to emerge from a major democratic government, which has raised debate over its potential overreach on free speech and expression in the United Kingdom. Because the bill contains relatively serious penalties for technology companies—including criminal liability for executives who fail to comply with child safety rules or withhold information during investigations—it could create incentives for them to err on the side of over-moderating online environments. Due to the large quantity of content that users upload online, companies often rely on automated systems to detect keywords or images that could indicate harm. However, such systems may struggle to interpret irony, satire, cultural nuance, and political dissent, which could be mistaken for hate speech or other illegal or harmful material and needlessly removed. The bill also raises questions about the subjectivity of defining “harm” to children, the merits of paternalistic policies for young people, and the creation of separate standards for minors, who are more commonly exposed to adult content offline through peers than by searching the internet on their own.

Some privacy researchers warn that wall-to-wall content monitoring also threatens to curb anonymous online speech, which in turn could have a chilling effect that discourages users from speaking up online. Because the bill could lead companies to verify the identities of all their users—and even scan their private communications—some individuals may be less likely to engage in political activism, religious activities, or other sensitive communications online due to fear of surveillance. This may disproportionately affect communities that have historically experienced discrimination, such as LGBTQ+ individuals, who face outsized risks of being outed in the event of a privacy violation. Depending on how the bill’s provisions are interpreted by tech companies and regulators, it could also cut off access to information or digital communications that would otherwise be helpful for children (and could even restrict adults who lack government identification, digital literacy skills, or other resources necessary for age verification). How the bill affects privacy and freedom of speech will remain an ongoing question as it is implemented.

Q4: How are other governments addressing content moderation and children’s digital safety?

A4: Outside the United Kingdom, numerous governments have introduced frameworks to address children’s digital safety. In 2021, Australia passed the Online Safety Act, which mandates that qualifying digital platforms take “reasonable steps” to delete material that either violates existing law or is considered age-inappropriate for children. In August 2023, China proposed draft rules that would limit the amount of time that children spend on the internet, raising concerns about curbing access to information online. Meanwhile, the U.S. Congress proposed several high-profile bills like the Kids Online Safety Act and EARN IT Act to regulate online content pertaining to minors, and state legislatures introduced at least 144 measures in 2023 alone—many of which have sparked debate over their interpretations of “harm” and potential for censorship.

In addition, many technology platforms are navigating the rollout of the European Union’s new Digital Services Act (DSA), which aims to empower users and increase corporate responsibility across the digital ecosystem. Like the Online Safety Bill, the DSA establishes a basic framework that requires digital companies to conduct risk assessments, publish transparency reports, and maintain user controls, with tiered provisions based on their size or perceived risks. While the DSA contains some age-specific provisions, including a ban on behavioral advertisements targeted at minors, it limits liability to cases where platforms are “aware [of a child’s age] with reasonable certainty,” sidestepping some of the concerns over user monitoring and censorship raised in the United Kingdom. Still, end-to-end encryption is a subject of ongoing debate within the European Union as well; the DSA clarifies that it does not prohibit “encrypted transmissions or any other system that makes the identification of the user impossible,” but legislators excluded specific protections for anonymous online speech from the final version of the legislation.

As more governments unilaterally enact laws related to content moderation or encryption, digital platforms will likely find themselves navigating an increasingly complex patchwork of regulations, some of which may apply extraterritorially or even conflict with one another. Recognizing that the global nature of the internet requires a more cohesive international effort, some governments have already begun to engage in multilateral dialogues. In April 2022, approximately 70 countries signed the Declaration for the Future of the Internet, a set of voluntary principles that included safeguarding young people in a more digitized society. Moreover, G7 member states agreed upon internet safety principles in April 2021 that urged companies to address “both illegal and harmful” content directed toward children and supported further civil society and academic engagement on this topic. Shortly after, the Organization for Economic Cooperation and Development amended its Recommendation on Children in the Digital Environment, which called for a balance between mitigating online harms to children and maintaining the rights to free speech and online access for all. Going further, any harmonization or interoperability between legal standards could benefit both individuals and platforms, not only by mitigating the spread of harmful content online but also by promoting consistent standards for information access, free expression, and privacy across borders.

Caitlin Chin-Rothmann is a fellow with the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS) in Washington, D.C. Taylar Rajic is a research associate with the CSIS Strategic Technologies Program. Evan Brown is a research intern with the CSIS Strategic Technologies Program.
