Countering Misinformation with Lessons from Public Health

By Claire Felten and Arthur Nelson - October 1, 2019

The internet is often praised as a tool for freedom of speech, democracy, and truth. Increasingly, however, it has become polluted by misinformation—the inadvertent spread of misleading and false information—and disinformation—the deliberate and coordinated spread of misleading and false information. Individuals online knowingly and unknowingly spread dangerous rumors and propaganda at an alarming rate, misleading or manipulating the worldviews of those who encounter them. False information online can influence people’s opinions and behaviors with profound consequences—like the impersonation of political groups during elections, or outbreaks of measles in unvaccinated children.i,ii

As the internet becomes ubiquitous, policymakers should develop smart strategies to prepare society for the realities of misinformation. The first step is to identify and counter groups deliberately spreading disinformation, but what does society do with the false information left behind? More concerningly, how does a democracy built on the principle of free speech counter misinformation inadvertently spread by its own citizens, without engaging in censorship?

The tension between free speech and the negative effects of misinformation raises questions about who is responsible for addressing it. No community has taken leadership of the mission of countering misinformation, as social media platforms, governments, and civil society all try to hand off responsibility. Social media giants like Twitter and Facebook are uniquely positioned to address misinformation because they control the data on their platforms, but they continue to underreport misinformation while trying to implement in-house solutions. Researchers and civil society organizations are doing an admirable job of revealing the extent of misinformation, but their analyses are often incomplete because they lack unfettered access to data controlled by social media companies. Government should be more involved, but concerns about excessive surveillance and First Amendment violations make misinformation a legal and political third rail. Society is working in uncharted waters—controlling the spread of information online has never been a reasonable goal in a democracy, and this is a new problem.

Online misinformation may be new, but the way information spreads across online networks is remarkably similar to the way disease diffuses across person-to-person networks. In the early 1960s, epidemiologists proposed that mathematical models of how disease spreads could also be applied to the spread of a rumor through society, and since then scholars have shown how epidemiological models can be closely applied to the spread of information across social media networks online.iii It makes sense: both disease and information spread via a contact event between two people, one carrying the agent and one naïve. Both can be traced back to a source—a “patient zero.” Like viruses, information transcends borders as it spreads. If the disease or idea is particularly virulent, the rate of spread can be explosive.
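
To make the analogy concrete, the sketch below applies the classic SIR (susceptible–infected–recovered) framing to a rumor rather than a pathogen: “sharing” users play the role of the infected, and “stopped” users the recovered. The population size, transmission rate, and recovery rate are illustrative assumptions, not empirical estimates.

```python
# A minimal SIR-style simulation of rumor spread. "Sharing" users play the role
# of the infected; "stopped" users are the recovered. All parameters below are
# illustrative assumptions, not empirical estimates.

def simulate_rumor(population=10_000, beta=0.3, gamma=0.1, days=60):
    """Discrete-time SIR model: beta is the per-day transmission rate and
    gamma is the per-day rate at which sharers lose interest."""
    s, i, r = population - 1, 1, 0  # a single "patient zero" starts the rumor
    history = [(s, i, r)]
    for _ in range(days):
        new_shares = beta * s * i / population   # contacts that pass the rumor on
        new_stops = gamma * i                    # sharers who stop spreading it
        s -= new_shares
        i += new_shares - new_stops
        r += new_stops
        history.append((round(s), round(i), round(r)))
    return history

if __name__ == "__main__":
    for day, (susceptible, sharing, stopped) in enumerate(simulate_rumor()):
        if day % 10 == 0:
            print(f"day {day:2d}: susceptible={susceptible:5d} "
                  f"sharing={sharing:5d} stopped={stopped:5d}")
```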

The many parallels between misinformation and disease offer policymakers an opportunity to look to the field of global health for lessons in how to battle the spread of infectious agents. The global health policy community has iterated through many approaches to epidemic preparedness and response; one high-profile example is the deadly Ebola virus, which has made headlines in the past year as an outbreak tears through the Democratic Republic of the Congo (DRC). The international response has come under scrutiny as officials apply lessons learned during the largest-ever Ebola outbreak, which famously took place in West Africa in 2014, to this new context.

One of the most central lessons the global health community learned in West Africa was the importance of building resilient health systems that are well-resourced and prepared to stop outbreaks before they begin. In 2014, Ebola was able to spread through the region with such ferocity precisely because local health systems were weak and unprepared. Today’s outbreak is also happening in a country with a developing health system—but unlike in 2014, the countries bordering the DRC have invested in building strong health systems, so the few cases of Ebola that have crossed over into other countries in the region have been quickly contained.

This principle of building a resilient system to prevent outbreaks provides a useful model for combating misinformation. Rather than engaging in censorship to counter actors or ideas individually, policymakers and industry should focus on building a resilient information system that is ready to mitigate the effectiveness of misinformation without restricting the free marketplace of ideas. This resilient information system should be modeled on the attributes of a resilient health system: reinforced by an engaged and informed public, designed to protect vulnerable and at-risk groups, and well-resourced with all the tools needed to prevent, detect, and respond to emerging threats. Social media platforms, civil society, and government all must work together to build a resilient information system that can mitigate the spread of harmful misinformation without restricting the free exchange of ideas.

Public Education

Building a resilient system means engaging all members of a community in a comprehensive effort to fight the spread of a pathogen. Ebola outbreak responders have learned that educating the general public on how to prevent illness is just as important to the response effort as healing the sick. When people know how to recognize the signs of infection, what to do if they encounter a sick person, and how to protect themselves against transmission, the community health system is empowered to halt an outbreak in its tracks.


Billboards such as this were effective in West Africa at advertising the symptoms of Ebola and instructing people to bring the sick directly to an Ebola Treatment Center. Photo source: ZOOM DOSSO/AFP/GETTY IMAGES via Boston Globe iv

Similarly, campaigns to build media literacy can teach the public how to protect themselves against misinformation. There is a limit to how effectively technological interventions can protect the public, so education must do part of the work: teaching people to critically assess everything they see online, ensuring that they know as much as possible about existing misinformation before they encounter it in their social media feeds, and designing peer networks that provide people with relevant sources of social reinforcement are all measures that build resiliency online.

In any outbreak, some groups are more vulnerable to infection than others, and a key attribute of a strong health system is its ability to protect those vulnerable groups. For instance, when an unusual number of children were being infected with Ebola in the DRC, the Ministry of Health held a soccer tournament called “Ebola not in my house” to engage local youth in a conversation about how to protect themselves from the disease.v

Similarly, resilient information systems must be designed to protect the most vulnerable. Media illiteracy is particularly high in certain demographic groups, making them especially susceptible to misleading and false information online. For example, older generations are more likely to be fooled by online misinformation—one study found that those over 65 were three to four times more likely to share junk news online than those aged 18–29.vi Policymakers should strategically partner with existing advocacy groups and coalitions, such as the AARP, to roll out media literacy programs for at-risk groups.

Targeted Intervention

Educating the general public is key to reducing the spread of disease, but epidemiologists also analyze case and transmission data to identify geographic hotspots of infection within a community where outreach will be most effective. For example, in October 2018, epidemiologists in the DRC were able to analyze chains of Ebola transmission and identify local healers as hotspots for infection. Mistrust of the World Health Organization-run Ebola treatment centers meant that sick people tended to visit traditional healers, who spread Ebola by not washing their hands between patients. International aid workers reached out to these local healers to educate them on sanitary practices and recognizing the symptoms of Ebola—recruiting them as officers of the response effort.vii

Just as epidemiologists used transmission data to identify hotspots of disease, misinformation workers can use social network analysis—the process of mapping the flow of information across the structure of communities—to identify hotspots of misinformation online. Tracing misinformation back to a hub can help identify isolated online communities that are exposed to, and in some cases responsible for, misinformation. These communities range from isolated networks of users on Twitter and subcommunities on Reddit to entire fringe websites like Gab or 4chan.
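
As an illustration of what such an analysis might look like in practice, the sketch below builds a small retweet network with the networkx library and ranks accounts by centrality; the edge list is a hypothetical stand-in for real platform data.

```python
# A small sketch of social network analysis on a retweet graph. Each edge points
# from the retweeting account to the account being amplified; the edge list is a
# hypothetical stand-in for real platform data.
import networkx as nx

retweets = [
    ("user_a", "influencer_1"), ("user_b", "influencer_1"),
    ("user_c", "influencer_1"), ("user_c", "influencer_2"),
    ("user_d", "influencer_2"), ("influencer_2", "influencer_1"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

# Accounts retweeted by many others score highly on in-degree centrality;
# PageRank additionally rewards being amplified by other well-connected hubs.
in_degree = nx.in_degree_centrality(G)
pagerank = nx.pagerank(G)

hubs = sorted(G.nodes, key=lambda n: (pagerank[n], in_degree[n]), reverse=True)
for account in hubs[:3]:
    print(f"{account}: pagerank={pagerank[account]:.3f}, "
          f"in-degree={in_degree[account]:.3f}")
```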

These online communities are often insulated and distrustful of authority. In the ongoing DRC Ebola outbreak, responders have reported that widespread community mistrust and suspicion are severely undermining response efforts. Trust in authority has been shattered by years of brutal conflict in the DRC, and many local people believe that Ebola is a Western hoax designed to oppress the Congolese. This mistrust has caused many people to refuse treatment or vaccination, contributing to the spread of disease.

Similarly, insular online communities are particularly vulnerable to misinformation because they are resistant to and dismissive of counterfactual information from sources of authority. In a closed network, only a small amount of information is spread and consistently reinforced by peers. Social media algorithms, which tend to feed users content they like and agree with, can exacerbate these feedback loops and reinforce entrenched ideas. Misinformation workers need to preemptively pierce echo chambers by providing new information and new sources that challenge the community consensus. Research shows that people respond better when they are exposed to multiple sources of information outside of their echo chamber—which increases the likelihood they will accurately assess the credibility of information—rather than being directly told that they are wrong.viii

Instead of censoring information through fact-checking and debunking, online intervention should focus on providing users with multiple sources of information and letting them come to their own conclusions. For example, Moonshot, a startup dedicated to countering violent extremism online, focuses on “off-ramping” vulnerable individuals by connecting at-risk users with new sources of information and resources that they might not otherwise have encountered online. Social media companies also could contribute by adjusting their algorithms to ensure that people are exposed to content slightly outside of their ideological comfort zone or by suggesting that people follow other users outside of their usual networks. These ideological nudges could significantly lessen the effect of echo chambers.
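
A minimal sketch of such a nudge appears below: a fixed share of feed slots is reserved for content outside the user’s usual ideological range. The relevance and ideology scores, the comfort-zone width, and the out-of-bubble share are hypothetical parameters, not a description of any platform’s actual ranking system.

```python
# Sketch of an "ideological nudge": reserve a share of feed slots for items
# outside a user's usual ideological range. Ideology scores run from -1 to 1
# and are hypothetical placeholders for whatever scoring a platform might use.

def nudged_feed(candidates, user_center, feed_size=5, outside_share=0.4,
                comfort_width=0.3):
    """candidates: list of (item_id, relevance, ideology_score) tuples."""
    inside, outside = [], []
    for item in sorted(candidates, key=lambda c: c[1], reverse=True):
        close = abs(item[2] - user_center) <= comfort_width
        (inside if close else outside).append(item)
    n_outside = int(feed_size * outside_share)  # slots reserved for new views
    return inside[:feed_size - n_outside] + outside[:n_outside]

candidates = [("a1", 0.9, 0.10), ("a2", 0.8, 0.15), ("a3", 0.7, 0.90),
              ("a4", 0.6, 0.20), ("a5", 0.5, -0.70)]
print(nudged_feed(candidates, user_center=0.1))
```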

However, online networks that are “hotspots” of misinformation are usually too large for misinformation workers to off-ramp each user individually. Instead, workers can use social network analysis to precisely identify two groups of users who are particularly likely to spread misinformation in the future: online leaders and their engaged followers, meaning users who are engaging positively with—but not yet spreading—misinformation.

Leaders are influential accounts that play a central role in shaping the flow of information across online communities. This is particularly pronounced on Twitter: for example, the Twitter user @jackposobiec played a central role in spreading the #Pizzagate conspiracy, a theory alleging that prominent Democratic leaders were running a pedophile ring in the basement of a Washington, D.C. pizza restaurant.


This figure is a retweet network graph showing the wide influence of user @jackposobiec in the spread of the Pizzagate story on Twitter.
Each node represents a Twitter account, and each edge represents a retweet. Larger names indicate more influence.
Photo credit: Nicolas Vanderbiest.

Reaching out to these “superspreaders”—usually moderators or influential accounts—in the same way public health officers reached out to traditional healers in the DRC is an effective way to influence the spread of conspiracies online. Online moderators enforce community rules, remove inappropriate content, and police behavior, so they are well positioned to promote a healthy flow of information through the online system. Often, online leaders do not realize the extent or implications of their reach; if shown the impact of their influence, they may be more likely to self-moderate.

The second group misinformation workers should target are engaged followers who are interacting—through “likes,” follows, or comments—with misinformation, and therefore are at the greatest risk of becoming spreaders. For example, researchers identified a group of at-risk users in an AIDS-denialist community on the popular Russian social media platform VK.com who followed influential denialist accounts and were thus more likely to internalize the misinformation.ix As the researchers argued, identifying these followers will “significantly reduce the target audience for possible intervention campaigns.”x
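
The sketch below illustrates the idea with hypothetical engagement records: users who like or comment on flagged posts, but have not yet shared them, are separated from users already spreading them and treated as the narrowed audience for outreach.

```python
# Sketch: separate "engaged followers" (interacting with flagged posts but not
# yet sharing them) from active spreaders. The interaction records below are
# hypothetical stand-ins for platform engagement data.

interactions = [
    {"user": "alice", "post": "p1", "action": "like"},
    {"user": "alice", "post": "p2", "action": "comment"},
    {"user": "bob",   "post": "p1", "action": "share"},
    {"user": "carol", "post": "p3", "action": "like"},
]
flagged_posts = {"p1", "p2"}  # posts already labeled as misinformation

engaged, spreaders = set(), set()
for record in interactions:
    if record["post"] not in flagged_posts:
        continue
    if record["action"] == "share":
        spreaders.add(record["user"])
    else:
        engaged.add(record["user"])

# Engaged-but-not-spreading users are the narrowed audience for outreach.
at_risk = engaged - spreaders
print(f"spreaders: {sorted(spreaders)}, at-risk followers: {sorted(at_risk)}")
```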

Countering Emerging Threats with Pro-active Investment in R&D

When it comes to countering emerging threats like Ebola, a resilient health system requires a host of tools, including vaccines, medicines, and diagnostics, to fight the disease. One of the biggest lessons from the Ebola outbreak of 2014 was that the global health community had not invested enough in preparing these tools in advance because there had been no market incentive for pharmaceutical companies to invest in a disease of poverty like Ebola.

In much the same way that the flu vaccine is developed in advance of flu season to get ahead of the virus, today there are extensive efforts across the global health community to ensure that vital tools to fight emerging pathogens are invested in and developed before an outbreak. For example, the U.S. government recently began developing a vaccine for Marburg virus, a deadlier cousin of Ebola which could cause a devastating future outbreak. Vaccine development takes years, so even though there have been only 16 cases of the disease worldwide in the past decade, the United States is investing now, rather than after a Marburg outbreak has begun.

In the same way, misinformation will rapidly evolve beyond our current ability to counter it, and a resilient online information system will need to continually develop new tools to address evolutions of misinformation and disinformation. New technologies will allow harmful misinformation to diffuse rapidly, even as government or social media platforms roll out changes to combat it. Machine-learning algorithms will be able to create video forgeries—known as “deep fakes”—that “will be able to fool the untrained ear and eye.”xi Artificial intelligence (AI)-enabled botnets will be able to target and converse with vulnerable people online without revealing that they are not human.xii People deserve to know if the images they are seeing or people they are talking to have been falsified.

Fortunately, emerging technology also offers new ways to combat misinformation and disinformation. Researchers are leveraging AI to identify automated accounts and inauthentic content at scale. The same natural language processing models used to automate the mass production of misinformation can be repurposed to detect it in the wild.xiii
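
As a toy illustration of this detection pattern (not a description of any platform’s production system), the sketch below trains a simple supervised text classifier on a handful of hypothetical labeled posts and scores a new post.

```python
# Toy sketch of misinformation detection as supervised text classification.
# A real deployment would need a large labeled corpus and a stronger model;
# the training posts and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "miracle cure suppressed by doctors, share before it is deleted",
    "vaccines cause the disease they claim to prevent",
    "health ministry publishes updated vaccination schedule",
    "peer-reviewed study finds the vaccine safe and effective in trials",
]
train_labels = [1, 1, 0, 0]  # 1 = likely misinformation, 0 = credible

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

new_post = "secret cure they don't want you to know about"
probability = model.predict_proba([new_post])[0][1]
print(f"estimated probability of misinformation: {probability:.2f}")
```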

AI-enabled chatbots also can join online conversations to detect shared misinformation and intervene by debunking it or providing corrections. However, it is unclear how effective it is for chatbots to debunk in real time. In Taiwan, a chatbot named Meiyu has received pushback for making some conversations more awkward when it contradicts family elders.xiv Bots that are too confrontational may be adopted less frequently by users. Rather than debunking, chatbots could chime in with other sources of information when they detect misinformation. Such tools could take inspiration from the browser extension Balancer, a tool that measures the ideological tilt of a user’s news consumption and offers suggestions that may challenge their views.
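
In the spirit of Balancer, though not a reproduction of its actual implementation, the sketch below scores a user’s news diet against hypothetical per-outlet lean ratings and suggests outlets from the other side of the spectrum.

```python
# Sketch of a Balancer-style tilt meter: score a user's news diet against
# per-outlet lean ratings and suggest outlets from the opposite side. The lean
# scores (-1.0 = strongly left, +1.0 = strongly right) are hypothetical
# placeholders, not real ratings of any outlet.

OUTLET_LEAN = {
    "outlet_left": -0.8, "outlet_center_left": -0.3, "outlet_center": 0.0,
    "outlet_center_right": 0.3, "outlet_right": 0.8,
}

def diet_tilt(visits):
    """visits: mapping of outlet name -> number of articles read."""
    total = sum(visits.values())
    if total == 0:
        return 0.0
    return sum(OUTLET_LEAN.get(o, 0.0) * n for o, n in visits.items()) / total

def suggestions(visits, count=2):
    """Suggest outlets whose lean sits opposite the user's average tilt."""
    tilt = diet_tilt(visits)
    opposite = [o for o, lean in OUTLET_LEAN.items() if lean * tilt < 0]
    return sorted(opposite, key=lambda o: abs(OUTLET_LEAN[o]), reverse=True)[:count]

reading_history = {"outlet_left": 12, "outlet_center_left": 5, "outlet_center": 2}
print(f"diet tilt: {diet_tilt(reading_history):+.2f}")
print("consider adding:", suggestions(reading_history))
```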

Yet, just as pharmaceutical companies have insufficient incentives to develop vaccines before an outbreak, social media companies have insufficient market incentives to develop tools to address future misinformation challenges. Users are unlikely to hold social media platforms accountable, or to pay for solutions themselves, because they are often unaware when they have been affected by misinformation.

Despite the importance of such tools in building online resilience, no organization yet exists with the central mandate of developing tools to combat harmful misinformation. If social media companies have no incentive to develop tools, who will? Multilateral organizations like the Coalition for Epidemic Preparedness Innovations (CEPI) have been instrumental in working to push vaccines through the expensive and lengthy development pipeline before an outbreak of disease.xv The creation of a similar organization for misinformation could fill a gap by providing thought leadership on the cutting edge of innovation for countering misinformation. One candidate is the Defense Advanced Research Projects Agency (DARPA), which has already committed $68 million over the last two years to developing scalable ways to detect deep fakes.xvi

Policymakers should recognize the consequences of not addressing emerging technologies that enable the spread of misinformation and disinformation and must pro-actively invest in technical solutions to counter new threats. Tools to counter misinformation need to be easy to use, scalable, and distributed to at-risk online communities. Pro-active investment in R&D will prepare society’s information ecosystem to prevent, detect, and respond to misinformation.

Recommendations for a Resilient Information System

Coordinate media literacy and awareness programs to educate society about the risks of misinformation, subliminal advertising, and polarization. Curricula should remain politically neutral but encourage best practices like vetting and comparing sources. Social media companies also could adjust algorithms to de-weight automated activity and expose users to content outside of their ideological sphere. Specific programs should be delivered to the most vulnerable demographics by trusted advocacy groups such as the AARP.

Establish an international coalition that can bring together stakeholders from across democracies, modeled after the multilateral Global Health Security Agenda (GHSA) forum that works to build countries’ capacity to prevent, detect, and respond to infectious disease threats. GHSA members split into task forces, called “action packages,” which ensure countries are adhering to internationally agreed-upon capabilities to respond to outbreaks. The misinformation coalition could form these task forces as a forcing function to bring together governments, social media companies, and civil society to align efforts and facilitate information and data sharing. A large-scale multilateral effort also could serve as a platform for society to establish goalposts for levels of intervention consistent with democratic norms.

Institute an NGO to intervene in at-risk communities online. Functioning like an online Red Cross or Doctors Without Borders, this NGO would be best positioned to provide intervention at the individual level while remaining politically neutral. Social media platforms could provide data and network analysis without intervening at the individual level themselves. Government could provide funding but remain at arm’s length to avoid undue surveillance or censorship. Intervention should include both personal outreach to online leaders and increasing all individuals’ exposure to new information so that followers can come to their own conclusions.

Invest in R&D to address evolving threats. Government should publicly fund public research into countering evolving misinformation and disinformation threats. Government also should provide financial incentives for social media platforms to develop counter-disinformation tools that can be built into platforms or distributed to individual users at scale.

Ultimately, tackling the challenges of disinformation will require a multidisciplinary approach. Drawing parallels between countering disinformation and public health will not provide a foolproof resolution; the analogy is not perfect, because whereas everyone agrees that Ebola is bad, ideas and opinions are not so easily condemned. However, it expands our imagination and provides a new framework for analyzing disinformation challenges. Democracies must commit to a whole-of-society approach that protects the resiliency of their information systems as the democratic alternative to restricting free speech.

Claire Felten is a former research intern with the CSIS Global Health Policy Center and Arthur Nelson is a program coordinator and research assistant with the CSIS Technology Policy Program.


Full citations available in PDF