Can Digital Trade Law Curb Disinformation?

Malicious actors spread disinformation to influence politics and society, fomenting unrest and disillusionment and threatening democracy at home and abroad. The United States and its democratic allies are vulnerable to disinformation spread by actors within autocratic nations such as Russia and China. Russia’s extensive disinformation machine is well documented within its own borders, throughout Europe, and even in the United States. Through troll farms and malicious internet bots, Russia has demonstrated an exceptionally effective disinformation model, stoking antidemocratic, anti-international sentiment into popular movements. Russian disinformation campaigns demonstrably influenced the Brexit vote in the United Kingdom and the rise of the National Front in the 2017 French presidential election, and they continue to influence fringes of the United States’ Republican Party.

Digital trade rules have the potential to curb disinformation. Binding international agreements, such as the former EU-U.S. Privacy Shield and its successor, the Trans-Atlantic Data Privacy Framework, demonstrated the potential to keep misinformation and disinformation in check by protecting user data. More commonly, however, different regions and nations maintain their own regulations, which often do not align. The European Union’s 2022 Digital Services Act could serve as a template for an international regime governing data privacy and curbing the spread of disinformation. While excellent for consumer privacy and safety, however, the act could impose heavy burdens on U.S. and European digital businesses and digital services providers.

Q1: What’s the difference between misinformation and disinformation?

A1: The crucial difference between misinformation and disinformation lies in intent. Misinformation refers to false, misleading, or inaccurate content posted without the intention to deceive. Disinformation is purposefully misleading, false, or inaccurate content shared precisely to deceive others. Both draw in unsuspecting users through loaded statements, appeals to emotion and bias, extraordinary claims, or flashy graphics.

Cross-border disinformation threatens to destabilize nations and erode public faith in national institutions. Disinformation can scare internet users with a threat that does not exist; alternatively, it can embellish the past to advance a political message or cause. These incidents are often not isolated: the same purveyors of disinformation sustain a narrative over time, most notably in the case of QAnon. In the United States, some politicians echo extremist internet panic over replacement theory. Meanwhile, the Marcos family in the Philippines returned to power in 2022 by manufacturing nostalgia for a golden age that never existed.

Disinformation comes at high financial and institutional costs. According to the European Centre of Studies and Initiatives (CESIE), the proliferation of fake news in finance, politics, and healthcare causes an estimated $78 billion in damages each year. As more people come to trust personalized social media feeds over traditional media outlets, this damage will only grow. The costs of disinformation also take different forms: fake news about healthcare, for instance, can strain healthcare systems and cause personal harm to its victims.

The widespread nature of political disinformation also comes at a steep cost to democratic institutions. In extreme cases, coordinated campaigns result in political violence, exemplified in recent memory by the January 6, 2021, Capitol insurrection. Former president Trump’s disinformation narrative of electoral fraud and false claims of victory prompted some 2,000 to 2,500 internet-radicalized rioters to descend upon the U.S. Capitol, leaving aftershocks that politicians and voters alike continue to grapple with. Trust in traditional media outlets is falling even as more people trust the information they get from social media. Well-informed voters are gradually being replaced by voters radicalized and polarized within algorithmic echo chambers. The resulting damage to democracies worldwide could be severe.

Q2: How are misinformation, disinformation, and privacy related?

A2: The proliferation of misinformation and disinformation on the internet is directly related to the lack of comprehensive digital privacy law. In its early days, the internet fostered an environment of anonymity; now, an average user cannot easily hide. Virtual private networks (VPNs) are effective tools for protecting one’s digital footprint, but they require extra configuration, and the privacy they offer does not come standard for internet users. User-anonymizing browsers like Tor, frequently used to access the dark web, are often too inconvenient for the average user: while Tor offers near-total anonymity, it slows the browsing experience and takes extra steps to install. Moreover, total privacy limits algorithms’ ability to customize online services. Users must choose between customized, accurate services and privacy.

E-commerce and social media rely heavily on algorithms designed to measure user engagement and interest in topics. Interactions such as Facebook likes, retweets, and reposts now allow companies to accurately infer attributes such as sexual orientation, ethnicity, political views, and age, among other characteristics. Frequently searched information becomes part of a profile assembled on a user, and this information is not limited to data gathered on a specific website: cookies track user activity even after users leave a social media or e-commerce site.
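
To make the mechanism concrete, the minimal sketch below (in Python) models how a third-party tracker embedded across many websites can stitch separate visits into a single cross-site profile. The site names, pages, and cookie ID are hypothetical, and real trackers are far more elaborate.

```python
# Minimal sketch of third-party cookie tracking. Site names, pages, and
# the cookie ID are hypothetical; real trackers are far more elaborate.
from collections import defaultdict

# cookie_id -> list of (site, page) visits observed by the tracker
profiles: dict[str, list[tuple[str, str]]] = defaultdict(list)

def tracker_request(cookie_id: str, site: str, page: str) -> None:
    """Called whenever a page embedding the tracker's script loads."""
    profiles[cookie_id].append((site, page))

# The same browser (cookie "abc123") visits three unrelated sites, and the
# tracker stitches the visits into one cross-site behavioral profile.
tracker_request("abc123", "news.example", "/politics/election-coverage")
tracker_request("abc123", "shop.example", "/baby/formula")
tracker_request("abc123", "forum.example", "/threads/parenting-advice")

print(profiles["abc123"])
```

Because the tracker sees the same cookie ID on every participating site, the resulting profile spans a user’s habits across the web rather than any single service.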

Based on information gathered on their own sites and across the user’s wider internet activity, digital platforms offer suggestions that cater to that user’s profile. Since users are monitorable wherever they go online, their entire digital life (habits, wants, identity, and interests) is exploitable. The implications range from harmless to harmful. For example, if a user searches for baby formula, the algorithm will assume that the user is a parent, and e-commerce ads will display baby-related content: diapers, formula, strollers, and more. This is one of the more benign ways personal information makes the user targetable by algorithms.

These user-targeted algorithms become more dangerous when they focus on political or religious beliefs. If a user recently searched for a political candidate, a political party, or a community based on a political ideology, social media algorithms will suggest similar content for the user to follow. The user’s timeline or homepage becomes inundated with similar content, creating a political echo chamber in which the user sees fewer opposing views. This immersive, echo chamber-like environment, perfected by social media platforms, fosters radicalization. With opposing views inconvenient to access, users have little motivation to fact-check what they consume, and the feedback loop of confirmation bias becomes difficult to escape.
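
The dynamic is easy to see in a toy model. The sketch below (Python) shows a stylized engagement-maximizing recommender in which every click slightly increases how often similar content is served, so a mild initial preference compounds into a near-total echo chamber. The topics, click probabilities, and update rule are all illustrative assumptions, not any platform’s actual algorithm.

```python
# Stylized sketch of an engagement-driven recommender producing an echo
# chamber. Topics, click probabilities, and the update rule are all
# illustrative assumptions, not any platform's actual algorithm.
import random

topics = ["party_a", "party_b", "sports", "cooking"]
interest = {t: 1.0 for t in topics}  # the platform's model of the user

def recommend() -> str:
    """Serve topics in proportion to modeled interest (engagement-weighted)."""
    return random.choices(topics, weights=[interest[t] for t in topics])[0]

def user_clicks(topic: str) -> bool:
    """A user already leaning toward party_a mostly clicks party_a content."""
    return topic == "party_a" and random.random() < 0.9

for _ in range(1_000):
    shown = recommend()
    if user_clicks(shown):
        interest[shown] *= 1.05  # each click amplifies future exposure

share = interest["party_a"] / sum(interest.values())
print(f"Share of feed devoted to party_a: {share:.0%}")
```

Run to completion, the modeled feed ends up almost entirely devoted to the topic the user already favored, which is the confirmation-bias feedback loop described above.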

Q3: How are privacy and digital trade related?

A3: Different trade agreements, countries, and international bodies define privacy differently. Privacy is described here as an internet user’s right to control data that reveals information about them. Some definitions even treat personal data as the user’s property, giving them the right to modify, update, or delete it at their discretion. Digital privacy greatly benefits the end-user, giving them control over their digital footprint. However, it presents challenges for international corporations and organizations that rely on user data. Data makes digital trade financially lucrative; without it, companies cannot accurately predict consumer interest, personalize their algorithms, or respond adequately to prevailing consumer trends. As such, digital trade and privacy are deeply intertwined, and privacy regulations can hinder the acquisition of valuable user data.

Data has been an increasingly important component of international trade since the early 1990s. It is infinitely reusable, recyclable, and salvageable and has even been likened to the “new oil.” User data, or personal data, is often governed distinctly from commercial data, which concerns companies or legal entities; personal data is the more lucrative of the two for businesses. Privacy regulations would give consumers greater control over their personal data; however, they threaten to stymie the flow of the 150,000 GB of data that zips across the global internet every second, cutting corporations off from one of their most precious resources.

Data’s murky status complicates national action. Though many businesses treat data as a commodity, its classification remains unclear both legally and philosophically. Some argue that data is property, like a house or a car; others argue that personal data is an integral part of one’s personhood. Regardless of the debate, the commodification of personal data carries serious implications. Should data formally become a commodity, taxation could cause problems of its own: companies taxed for utilizing users’ data might pass the additional costs of data collection taxes on to the end-user.

Q4: What data governance regimes already exist?

A4: The EU Digital Services Act is entering a crowded field of disparate data governance regimes. The EU-U.S. Privacy Shield, which the Schrems II ruling invalidated, and its agreed-in-principle successor, the Trans-Atlantic Data Privacy Framework, are prominent international data privacy regimes. If both sides finalize the agreement, the Trans-Atlantic Data Privacy Framework would reestablish legal mechanisms for transferring EU personal data to the United States.

The European Union’s General Data Protection Regulation (GDPR) is the most comprehensive data privacy scheme to date and has become the model for many other data protection laws worldwide. However, its user-focused regulations increased costs for companies doing business in the European Union. Even countries outside the European Union have worked to achieve parity and maintain adequacy under the GDPR. The EU-Japan mutual adequacy decision, which bridges EU and Japanese data protection laws, further demonstrates that international cooperation on data privacy is possible.

The GDPR presents challenges to businesses operating in Europe. Under its regulations, companies such as Facebook, which rely on user data for advertising revenue and algorithm refinement, face fines of up to 4 percent of their annual global turnover for data breaches. Consequently, some companies, such as U.S.-based advertising firm Verve, along with various tech startups and video game makers, opted to leave the European market. To avoid this outcome, many nations have sought a balanced approach that respects user privacy while keeping businesses in the country.
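
To put that 4 percent ceiling in perspective, the back-of-envelope calculation below (Python) assumes a hypothetical $100 billion in annual global turnover; the revenue figure is an illustration only, not any company’s actual financials.

```python
# Back-of-envelope GDPR fine exposure. The revenue figure is hypothetical;
# the GDPR caps its highest tier of fines at 20 million euros or 4 percent
# of annual global turnover, whichever is higher.
annual_global_turnover = 100_000_000_000  # hypothetical: $100 billion
max_fine = 0.04 * annual_global_turnover
print(f"Maximum fine exposure: ${max_fine:,.0f}")  # $4,000,000,000
```

For a firm of that scale, a single maximum penalty could reach $4 billion, which helps explain why some companies chose to exit the European market rather than absorb the compliance risk.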

As such, many countries employ approaches that are less stringent than the European Union’s. This is best exemplified by the United States’ digital trade chapter in the United States-Mexico-Canada Agreement (USMCA), which leaves the responsibility for data protection to each member country. Within USMCA, countries can implement new data privacy laws, including data transfer restrictions, so long as they are not applied in a discriminatory fashion. Moreover, USMCA countries must continue to recognize the Asia-Pacific Economic Cooperation (APEC) Cross-Border Privacy Rules (CBPR) as a valid data transfer mechanism.

In the Indo-Pacific region, the CBPR, the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), and the Digital Economy Partnership Agreement (DEPA) shape the contours of the region’s data privacy regulation. The CBPR system, developed by the 21 APEC economies and endorsed by APEC leaders in 2011, is a government-backed data privacy certification that implements the APEC Privacy Framework. True to APEC’s consensus-based, non-binding model, the CBPR does not supplant a country’s domestic privacy laws and regulations.

The CPTPP digital chapter includes a section on personal information protection, wherein participating countries are asked to “adopt or maintain a legal framework that provides for the protection of the personal information of the users of electronic commerce,” allowing for different legal approaches and mechanisms to promote compatibility. 

Q5: How can trade law govern digital privacy and disinformation moving forward?

A5: Individual action by independent nations is insufficient to address the international problem of disinformation. Recognizing the growing role online platforms play in the lives of Europeans, the Digital Services Act (DSA) aims to create a “safer online experience for citizens” by strengthening consumer protections and safeguarding users’ rights. Notably, the act emphasizes transparency in online advertising and levies additional responsibilities on “very large platforms” due to their “systemic impact in facilitating public debate, economic transactions, and the dissemination of information, opinions, and ideas.”

Under the DSA, very large online platforms such as Facebook, Twitter, and YouTube will have to conduct a mandatory annual risk assessment on “systemic issues such as disinformation, hoaxes, and manipulation.” These companies must adopt risk mitigation measures that stop short of restricting freedom of expression, and those measures must be independently audited. The DSA is complemented by a revamped Code of Practice on Disinformation, which aims to “[demonetize] the dissemination of disinformation [and ensure] the transparency of political advertising” while empowering users through cooperation with fact-checkers.

Japhet Quitzon is a program manager and research associate with the Scholl Chair in International Business at the Center for Strategic and International Studies (CSIS) in Washington, D.C.