A Year of Action: Evaluating U.S. Technology Policy ahead of the Second Summit for Democracy
When President Biden opened the first Summit for Democracy in December 2021, he issued a call to action to the 111 nations in attendance: “We stand at an inflection point . . . the choices we make . . . in this moment are going to fundamentally determine the direction our world is going to take in the coming decades.” This statement is especially true when it comes to mitigating any harmful effects of technology on society, which is one of the summit’s primary focus areas. But while it is relatively straightforward to make the case that authoritarian governments should not be allowed to abuse technology, the more complex and nuanced challenge will be to place guardrails around its deployment at home.
When the United States reconvenes the second Summit for Democracy on March 29 and 30, along with cohosts Costa Rica, the Netherlands, South Korea, and Zambia, rising global tensions with China and Russia will likely shape the agenda. Fulfilling a joint “Year of Action” commitment from the first summit, the Biden administration is reportedly preparing to release nonbinding export control principles for democratic governments to restrict the sale of some surveillance technologies to authoritarian governments. This follows previous U.S. actions to limit authoritarian access to domestic technology, such as the Bureau of Industry and Security’s addition of dozens of Chinese companies to the Entity List in July 2021 due to their targeting of the Uyghur and Kazakh communities, and the Department of State’s publication of resources in September 2020 to assist private businesses in conducting human rights assessments before selling certain tools to foreign governments.
Much of the global dialogue has centered around spyware, which allows users to remotely hack encrypted mobile devices and computers and covertly track individuals’ internet usage and private communications. Governments can buy this software to monitor suspected terrorist or criminal groups—along with journalists, politicians, academics, and critics. The Carnegie Endowment for International Peace has recorded almost 200 cases of spyware usage worldwide between 2011 and 2022. These include at least eight documented examples by the Chinese government, as well as several cases by summit invitees. For example, Mexico and India reportedly deployed the spyware maker NSO Group’s Pegasus tool to monitor journalists, researchers, human rights activists, and government critics, and Greece recently targeted a Meta security professional using Predator.
The problem is not just that authoritarian governments can abuse spyware; insufficient domestic oversight of digital development or deployment could raise human rights concerns, too. There have been at least some historical examples of spyware usage by U.S. government agencies, although the full extent is not publicly known. In 2018, the Central Intelligence Agency (CIA) reportedly funded Djibouti’s deployment of Pegasus for anti-terrorism purposes. The Federal Bureau of Investigation (FBI) reportedly tested Pegasus in 2020 and 2021, but abandoned plans to use it. The administration placed the NSO Group and Candiru on the Entity List in November 2021—but, as of January 2023, the Drug Enforcement Administration (DEA) reportedly continued to use Paragon’s Graphite. Several states including Delaware, Arizona, and Illinois reportedly used spyware at some point; back in 2016, an Iowa government official disclosed that the Department of Public Safety used Cellebrite for “any and all crimes.”
In a November 2022 letter to Congress, the administration revealed plans to issue an executive order to stop “U.S. Government operational use of commercial spyware that poses counterintelligence or security risks to the United States or risks of being used improperly.” It is not yet clear which spyware vendors will be affected by the forthcoming executive order or what “counterintelligence or security risks” might encompass. Depending on which path the executive order takes, this language could either align with the administration’s traditional efforts to address external national security threats or offer an opportunity to meaningfully prevent any domestic use of spyware, too.
But despite the international focus on commercial spyware, the more difficult challenge will be to address other surveillance methods that are more commonly used across U.S. and other democratic governments, like facial recognition. As of 2021, approximately 20 out of 42 U.S. federal law enforcement agencies employed facial recognition, and the Transportation Security Administration is reportedly planning to expand its use to airports around the country. In addition, over 3,000 U.S. federal and state law enforcement agencies have reportedly purchased access to Clearview AI’s database, including the CIA and FBI. However, facial recognition systems have higher error rates for individuals with darker skin and can reinforce the over-policing of communities of color. Despite widespread deployment and privacy concerns, there are no direct federal restrictions on law enforcement use of facial recognition.
As part of its Year of Action commitments, the White House released a Blueprint for an AI Bill of Rights in October 2022, which acknowledged that “facial recognition technology . . . can contribute to wrongful and discriminatory arrests” and that “communities should be free from unchecked surveillance.” Yet, it also included a specific disclaimer that these nonbinding principles would not affect existing law enforcement or intelligence activities. At least 17 U.S. local governments, such as Boston and San Francisco, have already imposed some type of ban on facial recognition, and a few technology companies have pledged to at least temporarily pause sales to law enforcement. But while some members of Congress have pushed for a nationwide moratorium for years, their proposed measures have not advanced. The European Union has made progress on a draft Artificial Intelligence Act, which if enacted, could ban real-time facial recognition deployment by law enforcement agencies in public areas with limited exceptions. Compared to spyware, however, there is less consensus on whether—and with what guardrails—democratic governments should allow limited use of facial recognition for national security reasons.
The U.S. domestic Year of Action commitments did not directly address government transactions with commercial data brokers. But since democratic governments are statistically more likely to contract with data brokers than with spyware vendors, their ability to buy personal information on the open market merits greater global attention as well. In recent years, several U.S. agencies including the FBI, Department of Homeland Security (DHS), and Defense Intelligence Agency (DIA) have reportedly purchased personal information relating to millions of Americans to conduct investigations without a warrant. President Biden issued an executive order in June 2021 that called for federal agency recommendations to prevent sales of sensitive U.S. personal information to foreign adversaries—but largely sidestepped the civil liberties risks that could stem from U.S. agencies working with data brokers.
Any U.S.-led global forum will inevitably draw attention to domestic affairs. As such, the second Summit for Democracy could provide an opportunity for the United States to emphasize a willingness to strengthen internal accountability, especially at a time when it has been historically slower than partner nations to regulate technology platforms. As one example: even as over 100 countries have updated their digital privacy laws in recent years, the United States still lacks a comprehensive federal data privacy framework, making it an outlier among most democratic nations invited to the summit.
In addition to surveillance and privacy, the U.S. domestic Year of Action commitments identify a broad set of technological challenges such as digital harassment, disinformation, algorithmic bias, and market consolidation—but, compared to some democratic partners, the United States has taken smaller-scale steps to address them. For example, over the past year the European Union enacted the Digital Services Act (DSA), which imposes sweeping new requirements for internet platforms to create user flagging systems, assess the “societal or economic” effects of their services, and improve transparency in content moderation. These actions will directly affect how online platforms respond to unsafe user-generated content. In contrast, the United States formed an interagency working group to study the information ecosystem, issued a public advisory that advocated for a “whole-of-society” approach to address health misinformation, and announced a global partnership to examine online gender-based abuse. These open up much-needed dialogue but do not fix a core problem: that outdated U.S. privacy laws fail to protect individuals from nonconsensual intimate imagery leaks, gendered deepfakes, and online stalking.
There is a reason for the relatively soft scope of actions: the executive branch can only take limited unilateral measures without buy-in from Congress and the judicial branch. In July 2021, the Biden administration issued an executive order that, among other provisions, called for “greater scrutiny of mergers, especially by dominant internet platforms, with particular attention to the acquisition of nascent competitors.” This executive order aligned with its Year of Action goal to address how technology market concentration places significant economic and political power in the hands of a few large platforms. Yet, a federal judge recently denied the Federal Trade Commission’s (FTC) ambitious challenge to Meta’s acquisition of virtual-reality startup Within, which the agency considered a potential future competitor. Shortly after, FTC commissioner Christine Wilson resigned in mid-February, alluding to partisan divisions in several agency leadership decisions, including the one to challenge the Meta-Within acquisition—leaving the statutorily bipartisan agency with only three out of five commissioners, all from the same party.
Although Biden has called on the U.S. Congress to enact more forceful data privacy and antitrust reforms, most recently in his State of the Union address, political partisanship poses structural barriers to any major legislative change. Despite numerous governments devoting historic attention to digital market concentration in recent years, including the European Union, South Korea, Australia, Japan, and the United Kingdom, the 117th Congress declined to enact relevant bills such as the American Innovation and Choice Online Act and the Open App Markets Act, which, among other provisions, aimed to prevent large technology platforms from downgrading their competitors’ services and to require consumer choice in mobile app transactions. Nor did the previous Congress advance comprehensive federal privacy measures such as the American Data Privacy and Protection Act, which proposed to tackle automated discrimination by requiring private platforms to audit algorithmic systems for disparate outcomes.
Ahead of the first summit, many commentators pointed out that it would be difficult for the United States to “lead by example” when it faces a significant number of problems at home. But as political science professors James Goldgeier and Bruce Jentleson note, scrutiny over U.S. global leadership or foreign policy decisions has inspired the United States to achieve important domestic reforms throughout history. For example, during the Cold War, some politicians believed that progress on racial equity and civil rights could improve the U.S. image abroad. Social equity and ethics alone are enough to justify stronger rules against privacy violations, harmful content, algorithmic discrimination, and anticompetitive activity—but if the upcoming summit places additional international pressure on Congress to enact critical changes, that would be helpful, too.
The Summit for Democracy could thus serve an important agenda-setting function, and its primary audience should be internal to the United States just as much as external. The administration has framed the summit’s goals in multiple ways, both as an introspection on strengthening U.S. domestic policies from within (“not to assert that any one of our democracies is perfect . . . [but] to make our democracies better”) and as a national security imperative against external threats (“authoritarian leaders are reaching across borders to undermine democracies . . . all while sowing disinformation to claim their model is better”). The administration’s October 2022 National Security Strategy similarly put forward both narratives, acknowledging the need to improve domestic governance as well as to counter authoritarian use of spyware and other technologies.
This latter language is reminiscent of another U.S.-driven global collaboration, the Declaration for the Future of the Internet. When the United States led 60 partner countries in signing this set of nonbinding principles on technology and democracy in April 2022, it called out “some authoritarian governments” for restricting internet access, sponsoring malware attacks, and weaponizing disinformation. Signatories also pledged to refrain from using social scoring systems, a seeming reference to the Chinese Communist Party. Although the declaration does not name specific governments, several U.S. officials reportedly characterized it as a strategic response to Russia and China, which have been widely criticized for their engagement with online propaganda, censorship, and surveillance.
Democratic governments have frequently described technology as a beneficial tool that bad actors and authoritarian governments can exploit. But digital tools are not inherently democratic or antidemocratic; rather, they reinforce or accelerate existing trends and biases in society. For this reason, the Summit for Democracy presents an opportunity to move away from traditional protectionist narratives and shift towards more holistic regulatory processes that mitigate risks no matter the user. It is still possible for the United States to demonstrate that democracy is working in the technology space, but it must be willing to hold domestic as well as foreign institutions accountable for their impact on society.
Caitlin Chin is a fellow with the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C.