By Eugenia Lostri
Last week, interest in Section 230 of the Communications Decency Act (CDA) peaked after President Trump issued an Executive Order proposing limits on the protections the law provides to social media platforms. The responses to the EO show how large social media companies understand political content moderation and the enforcement of their own terms of use.
Sec. 230 (c) of the CDA states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Additionally, it protects providers from civil liability for “Good Samaritan” access restriction to material considered to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
Some see Section 230 of the CDA as a protection for freedom of expression online, as it allows platforms to host third-party content at high volume without being held legally responsible for what their users say and do. It also shields them when they make decisions to remove objectionable content.
The 28 May Executive Order says social media companies engage in “selective censorship,” calling for them to instead be considered content creators. Section 2 of the Executive Order, “Protections Against Online Censorship,” calls for the FCC to issue clarifying regulations on the interaction between subparagraphs (c)(1) and (c)(2) of Section 230 and on the conditions under which removal or restriction of content that is not “taken in good faith” would qualify as editorial conduct. This change would expose platforms to liability. The text mentions several actions by social media companies that it deems examples of censorship and points to Twitter’s recent fact-checking of President Trump’s tweets, saying “Twitter now selectively decides to place a warning label on certain tweets in a manner that clearly reflects political bias. As has been reported, Twitter seems never to have placed such a label on another politician’s tweet.”[1]
The EO’s interpretation of Section 230 and its protections has been subject to criticism. Regardless of the Executive Order and subsequent calls to “repeal Section 230,” it appears highly unlikely that the protections in place will be revoked.
Prior to issuing the Executive Order, President Trump made a series of statements on Twitter alleging election fraud associated with mail-in ballots after California announced the expansion of mail-in voting during the COVID-19 pandemic. Shortly after, Twitter tagged the tweets in question with links providing context about the issue and information from fact-checkers. Twitter cited its civic integrity policy as justification for the action, explaining that the tweets had the potential to “confuse voters about what they need to do to receive a ballot and participate in the election process.”
This was the first time Twitter flagged a politician’s statements, and the decision was met with outrage from the President, who considered it interference by the social media company in the 2020 election and a stifling of free speech. On Friday, President Trump accused Twitter of allowing misinformation by the Chinese or his political opponents to flourish while simultaneously targeting and censoring “Conservatives & the President of the United States.” He called on Congress to revoke Section 230.
The latest event in this saga was Twitter’s decision to place a “public interest notice” on a 29 May tweet that accused protesters of being “thugs” and warned that “when the looting starts, the shooting starts,” referring to the clashes between protesters and police in Minneapolis. Twitter argued that this tweet constituted a glorification of violence and a violation of its Rules. Although Twitter’s decision means the tweet is hidden from public view, it has not been removed—it remains accessible on the grounds of public interest.
Twitter CEO Jack Dorsey clarified in a tweet of his own that the decision to flag President Trump’s tweets on voting fraud was intended to “connect the dots of conflicting statements and show the information in dispute so people can judge for themselves.” This practice follows Twitter’s civic integrity policy, which prohibits the use of the platform “for the purpose of manipulating or interfering in elections.” The terms specify three categories of behavior and content that are not allowed: 1) misleading information about participation, 2) suppression and intimidation, and 3) false or misleading affiliation.
In contrast, Facebook CEO Mark Zuckerberg said during an interview with Fox News that Facebook “shouldn’t be the arbiter of truth of everything that people say online.” This follows a September 2019 decision not to fact-check speech by politicians, including political ads, because Facebook considers that politicians’ statements already receive sufficient scrutiny and that it is not the platform’s role “to intervene.” Zuckerberg did comment that violations of Facebook’s policies by anyone on the platform—including content promoting violence or content that could cause imminent physical harm—would result in removal. Language that Twitter deemed a glorification of violence remains on Facebook. Zuckerberg
explained that the decision not to remove the language serves people’s need to know whether force is going to be deployed, regardless of the “troubling historical reference.” Dozens of Facebook employees reportedly criticized the decision and staged a “virtual walkout” on Monday.
The events of last week offer an example of how social media companies understand their responsibilities in this area. Twitter and Facebook have outlined different views on their roles concerning fact-checking and content moderation, and on how those policies apply to ordinary users versus politicians. The position that companies such as Facebook or Twitter take on content moderation has far-reaching effects. Striking the right balance between removing harmful content and infringing on freedom of speech is hard, and different models across the world have encountered obstacles. There is no global solution, because the balancing of rights and interests varies from country to country.
In the United States, especially as the 2020 election approaches, platforms like Facebook and Twitter will likely face this problem more often. If political discourse continues to grow more violent and the spread of disinformation plagues the campaign, it remains to be seen how far both companies are willing to go in enforcing their respective policies.
[1] On 28 May, Twitter added fact-checking tags to March tweets from Chinese Foreign Ministry spokesman Lijian Zhao asserting that COVID-19 had originated in the United States.
Eugenia Lostri is a program coordinator and research assistant with the Technology Policy Program at the Center for Strategic and International Studies in Washington, DC.
The Technology Policy Blog is produced by the Technology Policy Program at the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).