Why Deplatforming Just Isn’t Enough

By Brad Honigberg
Deplatforming — shutting down controversial speakers or speech — has proven to be an effective short-term solution to online extremism. The ‘Big Lie’ of election fraud pushed by former President Trump, which culminated in the January 6 insurrection at the U.S. Capitol, has profoundly reshaped the social media landscape. With @RealDonaldTrump and other prominent conspiracists no longer able to reach wide swaths of the public, misinformation about election fraud has already fallen dramatically.
Though the comparison is imperfect, major technology corporations have begun treating white supremacists and Qanon conspiracists the same way they treated the Islamic State and al-Qaeda. After being deplatformed in 2015, Islamic extremists migrated to Telegram and marginal social networking platforms. Then, in November 2019, Telegram and the EU’s main law enforcement body collaborated to clean up the remnants of the movement online. By forcing these groups into smaller and more clandestine digital locales, such efforts significantly diminished their ability to recruit, coordinate, and organize real-world violence.
Deplatforming also carries economic repercussions. Propelled by algorithms that favor sensationalism over truth, the online dissemination of rumors, conspiracies, and apocalyptic beliefs has become a lucrative staple of the consumer Internet. Digital “attention vendors” tap into human informational behaviors and cognitive habits to monetize disinformation. Whether the target is the Islamic State or Alex Jones, disrupting the disinformationists’ revenue model has had a significant impact on their visibility, the maintenance of their fan bases, and the flow of their income streams.
Despite positive short-term trends, it would be naïve to think that the assortment of groups that took part in the January 6 insurrection no longer threatens U.S. national security. Rather, these groups are increasingly feeding off one another in the darkest corners of the Internet. Deplatforming is akin to a therapeutic in America’s long-term fight against its information disorder: the ailment may feel as though it has passed, but it continues to mutate and metastasize. A broader strategy is necessary in the long-term fight against disinformation and extremism.
Shutting down accounts helps prevent average unsuspecting users from being exposed to dangerous content, but it doesn’t necessarily stop those who already endorse that content. When analyzing data from r/The_Donald and r/Incels, two forums that were removed from Reddit last year and later became standalone websites, researchers found a significant drop in posting activity and newcomers. Still, among those who continued to post on the relocated forums, researchers also noted an increase in signals associated with toxicity and radicalization.
The recent migration to “alt-tech” platforms by far-right groups has been a scattershot affair. Even groups that were not officially removed from mainstream platforms have moved to smaller networks with little to no content moderation. Parler was poised to benefit from the far-right’s deplatforming, had Google and Apple not removed the app from their stores and had Amazon not withdrawn its cloud-hosting services. Self-termed “Parler refugees” have since been courted by more obscure social platforms. Gab has long been a breeding ground for white supremacist terrorism, but the encrypted messaging app Telegram has seen an especially sharp increase in activity from Qanon influencers, who are monetizing their channels through donations and subscriptions.
For the hundreds of thousands of Qanon followers who continue to “trust the plan,” the community’s messianic ideology has become increasingly action-oriented. On these “alt-tech” platforms, far-right extremists have increasingly been targeting despondent Qanon followers struggling to rationalize why the outlandish prophecies of their movement have failed. For example, Qanon influencers have begun promoting anti-government concepts and adopting language from the Sovereign Citizens movement, arguing that Donald Trump will become president again on March 4. The movement’s adherents argue that the current U.S. government is illegitimate because the United States became a corporation after Franklin Roosevelt ended the gold standard in 1933. Beyond general disdain for government authority, anti-semitism has been one of the strongest binding agents uniting Qanon believers with more established far-right extremist groups. Paradoxically, deplatforming could force the decentralized extremist landscape into more hierarchically structured groups that enable leaders to control how violence is orchestrated and how finances are secured and managed.
Historically, corporations have often risked creating a “tragedy of the commons” when they put their short-term self-interests ahead of the good of the consuming public. In the long term, their pursuit of greater profit can destroy the environment that made them successful in the first place. The majority of deplatforming actions taken by Big Tech to date have come in reaction to national tragedies and negative media attention. Because the platforms failed to act proactively, the scourge of digital disinformation and online extremism can no longer be solved through deplatforming alone. Reversing the erosion of institutional trust to foster a free and open Internet will require a whole-of-society effort.
The best way forward is a multi-stakeholder approach that fosters increased collaboration among technology corporations, the government, and civil society to balance the benefits of free expression with the need to protect citizens and democratic institutions. Self-regulation — steps organizations take to preempt or supplement governmental rules and guidelines — beyond deplatforming will be essential. Efforts by the platforms to increase transparency around their algorithms and advertising policies would be a positive next step.
Still, given their unique grip on power in the United States and around the world, it is unclear whether major technology corporations have the will to self-regulate. Thoughtful government efforts that alter the attention-driven economic models employed by U.S. tech giants are critical to achieving lasting change. With Democratic majorities in both Congressional chambers, the Biden-Harris administration is likely to pursue a slate of reforms related to privacy, market competition, and algorithmic transparency. Bipartisan legislative reforms to Section 230 protections are already under consideration. Congress could also propose carveouts from Section 230 liability protections, similar to the recent FOSTA-SESTA legislative package, so that social media companies could be held liable for user-generated disinformation or hateful content. Civil society actors — working with the government and technology corporations — must focus on bolstering the digital literacy of the citizenry and elevating civic education as a national security imperative. With democracy on the line, how Big Tech and regulators act today to detoxify the proverbial “marketplace of ideas” will determine the future of public discourse in the United States.


Brad Honigberg is pursuing a master’s degree in Security Studies with a concentration in technology at the Georgetown University Walsh School of Foreign Service. He previously served as Social Media and Outreach Coordinator at the Center for Strategic and International Studies.

The Technology Policy Blog is produced by the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).