The United Kingdom’s Online Safety Bill Exposes a Disinformation Divide
A major effort to curb misinformation and disinformation in the United Kingdom recently hit a snag: the turmoil surrounding Prime Minister Boris Johnson’s resignation and the upcoming Conservative Party leadership election. Prior to Johnson’s announcement, the House of Commons had been expected to advance the proposed Online Safety Bill in mid-July. But due to new scheduling constraints, the legislation has been postponed at least until the fall.
Some critics have seized on the postponement as an opportunity to call for a “total rethink” of the bill, and the viability of any current or future content moderation proposal will likely depend on the stance of the next prime minister. Misinformation and disinformation can be harmful, but they are not always illegal in the United Kingdom, and the two frontrunners to succeed Johnson, Rishi Sunak and Liz Truss, have both stated that any government regulation to curb harmful content must also protect freedom of speech. To illustrate the challenging trade-offs involved, below is a summary of three significant points of controversy facing the Online Safety Bill, along with some considerations from other governments’ concurrent efforts to tackle false content.
Challenge 1: Disinformation has harmful consequences, but regulating legal content can create tension with free expression.
The Online Safety Bill would generally establish requirements for search engines and “user-to-user” websites and apps to curb the circulation of illegal material online. But more controversially, it would also compel larger platforms to create and enforce terms of service agreements to address certain categories of “legal but harmful” content that the Secretary of State for Digital, Culture, Media and Sport (DCMS), in consultation with Ofcom and Parliament, would determine through secondary legislation. DCMS has described such categories as “likely to include disinformation,” and a recently proposed amendment would add foreign state-sponsored disinformation to a list of “priority offences” that Ofcom would be able to enforce prior to the enactment of secondary legislation.
There is no doubt that false online content, including content sponsored by foreign governments, poses a growing threat to national security, public health, and election integrity. Over the past decade, the Russian government has reportedly deployed automated bots and paid individuals to spread false or polarizing messages on social media channels, including to influence narratives related to UK elections, Brexit, Covid-19 vaccines, Ukraine, and more. But not all digital disinformation campaigns are sponsored by foreign entities; inaccurate messaging can also be shared by local politicians, celebrities, influencers, and ordinary internet users. Both the Conservative and Labour Parties were found to have spread incorrect claims during the 2019 general election, most notably when the Conservative Party’s official press account, @CCHQPress, temporarily rebranded itself as “factcheckUK” and misleadingly claimed to fact-check Labour leader Jeremy Corbyn.
Despite the recognized harms of online disinformation, Article 10 of the European Convention on Human Rights, incorporated into UK law through the Human Rights Act 1998, provides that “everyone has the right to freedom of expression.” While this right is not absolute (some types of expression, such as hate speech and defamation, are outlawed in certain contexts), it is currently legal for individuals to express inaccurate or subjective views, opinions, and satire in many cases. As such, critics have contended that the “legal but harmful” standard in the Online Safety Bill could break with a tradition of free expression in the United Kingdom, and even risk creating different sets of rules for online and offline speech.
Challenge 2: Misinformation and disinformation can spread on a wide range of services, where online users may have varying expectations for privacy and free expression.
False or misleading information can emerge in a variety of forms, including photos, videos, and encrypted messages, and can spread through both public and private channels. To account for such a diverse communications landscape, the Online Safety Bill broadly applies requirements to remove illegal content to all online search engines and companies that allow users to upload content or communicate with other individuals (e.g., social media platforms, messaging services, online forums, interactive games).
However, the Online Safety Bill would only require “Category 1” services, which would be defined in secondary legislation but would generally consist of larger digital platforms, to address future categories of “legal but harmful” content and to “protect content of democratic importance.” There are valid reasons to consider a platform’s size in content moderation standards; services with more users typically have greater reach and could spread harmful messaging to a wider audience. For example, an analysis by ProPublica and The Washington Post found more than 650,000 posts in public Facebook groups between November 3, 2020 and January 6, 2021 that cast doubt on the legitimacy of the 2020 U.S. presidential election results, posts that were likely seen by tens of millions of Facebook users. But as Shadow Culture Minister Alex Davies-Jones has pointed out, size is not the only factor that determines risk; prior to the January 6 insurrection, for instance, far-right groups used smaller apps and websites such as Parler, 4chan, and Gab to mobilize rioters through false or extremist content.
That is why, in addition to a platform’s size or number of users, the Online Safety Bill will also need to account for factors such as the context, audience, and potential harm of a false claim. An inaccurate statement on Twitter or Instagram, for example, may carry greater potential for harm when shared through a public account than through a private account, message, or channel. In addition, users may have different expectations of privacy based on the type of communication; a person using a temporary messaging service (e.g., Snapchat) or an end-to-end encrypted platform (e.g., Signal or WhatsApp) might not expect a government entity or a digital platform operator to read their messages to moderate for disinformation.
Challenge 3: There is no universal definition of false or harmful content, and government entities can face conflicts of interest in regulating content that may relate to them.
The Online Safety Bill will also need to confront a challenge that has permeated other governments’ efforts to curb disinformation: accuracy can be subjective, and it is difficult to ensure neutrality or independence in the enforcement and oversight of misinformation policies, especially in the context of political speech. These questions have been raised around Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA), which allows government ministers to independently issue either a “disabling direction” or a “correction direction” requiring digital platforms to remove or correct specific online statements that the ministers deem to be false. Individual ministers have invoked POFMA dozens of times since its enactment in 2019, including in response to political criticism and Covid-19 misinformation, raising concerns that efforts to prevent false messages could lead to censorship of content that is subjective, vague, or directly or indirectly pertains to the ministers themselves. In the United States, some government officials have recently attempted to control how digital platforms host content related to their political party or viewpoints; for example, Texas Attorney General Ken Paxton opened an investigation into Twitter after it banned Donald Trump, stating an intent to “ensure that conservative voices are protected from [Big Tech],” despite First Amendment protections for private companies’ editorial decisions.
The U.S. Department of Homeland Security’s Disinformation Governance Board, announced in April, has demonstrated that government entities cannot effectively tackle disinformation if they lack public legitimacy or acceptance. Although Secretary Alejandro Mayorkas stated that the board was intended to be an internal working group exploring best practices without “any operational authority or capacity” to remove online content, the agency suspended it after just three weeks amid online criticism and an initial lack of public clarity about the board’s mandate. Protect Democracy, the Electronic Frontier Foundation, and the Knight First Amendment Institute at Columbia University, for example, wrote that “any government board ostensibly tasked with monitoring and ‘govern[ing]’ disinformation is a frightening prospect; in the wrong hands, such a board would be a potent tool for government censorship and retaliation.”
The United Kingdom’s Online Safety Bill attempts to avoid conflicts such as those in the United States and Singapore by putting the responsibility on digital platforms, not a government enforcer, to label or take down specific pieces of content. It calls for secondary legislation to stipulate categories of “legal but harmful” speech but not any specific user-generated posts; Category 1 platforms could create their own terms of service agreements in compliance with these categories and would only need to moderate legal content according to their own policies. In theory, terms of service create a buffer between the government and user speech, but in practice it is not yet clear how Ofcom would enforce them. Because content moderation involves case-by-case decisions across very different types of expression and contexts, it will likely be difficult for Ofcom to objectively determine whether companies apply their terms of service consistently. And when it comes to misinformation and disinformation, both government enforcers and digital platforms would inevitably face at least some subjective judgments about which communications are false.
Conclusion
In addition to the United Kingdom, Singapore, and the United States, many other governments have proposed policy frameworks and actions to suppress online disinformation in recent years. For example, the European Union’s upcoming Digital Services Act will require larger platforms to assess the “systemic” and “societal or economic” risks of their services, in conjunction with industry self-regulatory standards set out in the Strengthened Code of Practice on Disinformation. The Australian government initiated an educational campaign during its 2019 election cycle to raise public awareness of online disinformation. If enacted, the proposed Brazilian Law on Freedom, Responsibility and Transparency on the Internet would require social media platforms to maintain records of certain mass communication chains and could prevent individuals from creating pseudonymized or automated social media accounts.
These preliminary strategies from governments and private companies could provide valuable insights into the possible outcomes of content moderation. But internet-based communications often have a global reach, and the unilateral authorization or restriction of online speech in one country will inherently impact others. In the long term, global collaboration to harmonize content moderation standards is likely necessary to both reduce misinformation and disinformation and protect free expression.
Caitlin Chin is a fellow with the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C.
Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).
© 2022 by the Center for Strategic and International Studies. All rights reserved.