AI Regulation: Europe’s Latest Proposal is a Wake-Up Call for the United States

Although President Joe Biden has made clear he wants to work with Europe and Asia to “defend our shared values and advance our prosperity,” he has not prioritized reciprocal trade agreements in pursuit of that objective. Too bad. In the meantime, however, there are other strategic efforts the administration can undertake with bipartisan congressional backing to support U.S. jobs and innovation. These initiatives will require putting some flesh on the bones of the president’s rhetorical goal of working more closely with allies and partners. Two recent CSIS studies, one on the EU Digital Services Act and Digital Markets Act and another on artificial intelligence, suggest that the administration center transatlantic discussions on the high-tech regulatory matters crucial to the competitiveness of U.S. tech companies in Europe and also relevant to strategic competition with China.

The European Commission’s April 21 proposal for a regulatory framework governing AI is a watershed in tech regulation and makes this an opportune moment for the United States and Europe to open a dialogue on their respective approaches to regulating the technology.

AI technologies can improve healthcare, reduce carbon emissions, increase sustainable crop yields, and advance economic growth, among other benefits. But AI also has the potential to be used as a tool for repression, surveillance, violation of privacy, and institutionalized bias. Because of the opportunities and risks, both Europe and the United States are grappling with how best to monitor and regulate the development of AI technology and its applications. Policymakers will need to enact appropriate safeguards to protect the public against harm without stifling innovation that will enable a myriad of future public benefits. The challenge—particularly for the European Union, which is home to only six of the top 100 AI startups worldwide—will be to develop lighter-touch, adaptable regulations that facilitate rather than impede positive innovation and the uptake of AI.

AI technology is a fundamental battleground in the geopolitical competition with China to “win the twenty-first century.” The United States and Europe have a shared interest in holding the line against China, which seeks to export its intrusive model of data governance and AI regulation—a model anchored in state control of all information and communication, draconian surveillance, data localization, and other protectionist and autocratic practices. To succeed, Europe and the United States should agree on a basic framework of topline, democratic, regulatory principles for AI that can be promoted with trading partners in the Asia-Pacific, where China is proselytizing its model as an element of the Belt and Road Initiative. China’s aggressive outreach on behalf of its model is evident in the narrow scope of the digital trade provisions in the recent Regional Comprehensive Economic Partnership trade deal, which stand in contrast to those of the Comprehensive and Progressive Agreement for Trans-Pacific Partnership. If Europe and the United States work together to promote a harmonized alternative to China’s authoritarian model of tech regulation, the result will be greater innovation, economic growth, and personal freedom.

Europe’s AI Approach: Setting the Rules of the Future

Five years after the European Union adopted the General Data Protection Regulation (GDPR), its landmark data protection and privacy law, and three years after outlining a common European approach to AI, the European Commission on April 21, 2021, transmitted to the European Parliament a draft proposal, the Artificial Intelligence Act (AIA), for an EU-wide legal regime governing AI. While there will be lengthy debate over several years in the Parliament and among member governments, the document itself is a significant milestone toward a European template to govern AI.

Developing the AIA was an exhaustive process, with stakeholder consultations including more than 1,200 questionnaire responses and 133 written comments. Commission deliberations were more open to input from interested parties, including business, than many expected. The Commission seems to have taken on board, to varying degrees, some concerns expressed by stakeholders to the Commission’s white paper on AI, released in February 2020. It is evident that other strongly expressed views were set aside after much consideration.

Recognizing that ex ante regulation is burdensome and difficult to implement and would likely suppress the success of Europe’s digital startups, the proposal adopts a proportionate, risk-based approach that imposes ex ante regulatory procedures only on AI systems classified as “high risk.” Furthermore, total prohibitions apply only to a narrow set of specific applications of “AI incompatible with EU values,” such as real-time, remote biometric identification in public spaces for the purpose of law enforcement. Even this ban lists a few exceptions in which such use could be considered “strictly necessary” and allowed. While declining the request of digital rights groups to include a moratorium on all facial recognition technology, the proposal does classify as high risk any “AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons.”

One of the most prevalent concerns from stakeholders in response to the white paper was the lack of clarity in the definition of high-risk AI applications, which would be subject to more stringent regulations. The white paper suggested that the determination of whether an AI application would be considered high risk should be based on two cumulative criteria: if it is in a sector where significant risks are expected to occur, and if the AI application is used in such a manner that significant risks are likely to arise. Moreover, the white paper suggested that there could be “exceptional circumstances” in which even more applications would be subject to high-risk rules. Stakeholders expressed concern that the ambiguity of the classification of high-risk sectors and applications would lead to uncertainty among businesses over whether they would be subject to regulations. This could ultimately stifle innovation in the European Union, as entrepreneurs and investors might choose to ramp up in other jurisdictions to avoid the uncertainties and extra threats to viability posed by EU regulation, including the future prohibition of certain applications.

Taking note of these concerns, the Commission established a framework for determining whether an AI application poses a significant risk, which would subject it to additional obligations, including a conformity assessment, auditing requirements, and post-market monitoring. First, any AI system that is either a product or a safety component of a product already subject to an EU conformity assessment, such as products in financial services, medical devices, machinery, and toys, will be considered high risk. Second, the proposal lists eight stand-alone sectors “with fundamental rights implications” in which any AI system would also automatically be considered high risk; a schematic sketch of this two-route test follows the list. These eight sectors are:

1. Biometric identification and categorization of natural persons;

2. Management and operation of critical infrastructure;

3. Education and vocational training;

4. Employment, workers management, and access to self-employment;

5. Access to and enjoyment of essential private services and public services and benefits;

6. Law enforcement;

7. Migration, asylum, and border control management; and

8. Administration of justice and democratic processes.
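To make the structure of this test concrete, the following is a minimal sketch, not drawn from the proposal’s legal text: the function name, data structure, and sector labels are illustrative stand-ins for the provisions described above.

```python
# Illustrative sketch of the AIA's two routes to "high risk" status.
# All names here are hypothetical stand-ins, not terms from the proposal.
from dataclasses import dataclass

# The eight stand-alone sectors listed above, as shorthand labels.
HIGH_RISK_SECTORS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "essential_private_and_public_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

@dataclass
class AISystem:
    sector: str
    # Route 1: the system is a product, or a safety component of a product,
    # already subject to an EU conformity assessment (e.g., a medical device).
    subject_to_eu_conformity_assessment: bool

def is_high_risk(system: AISystem) -> bool:
    """Either route suffices to trigger the high-risk obligations:
    conformity assessment, auditing, and post-market monitoring."""
    return (system.subject_to_eu_conformity_assessment
            or system.sector in HIGH_RISK_SECTORS)

# Example: a resume-screening tool falls in the employment sector, so it
# is high risk even without a prior conformity-assessment regime.
print(is_high_risk(AISystem("employment_and_worker_management", False)))  # True
```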

Article 71 of the proposal directs EU member states to establish rules for penalties, including administrative fines of up to 6 percent of a company’s total worldwide annual turnover for the preceding financial year, for infringements of the regulation. To ensure that the regulations can adapt to emerging applications, the proposal would empower the Commission to expand this list of high-risk areas without going through a legislative process, according to specified criteria and a risk assessment methodology set forth in Article 7 of the draft proposal.

The Commission’s draft proposal also lists required practices for data governance and data management. It states that training, validation, and testing data sets should be “sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system.” In addition, the proposal aims to address discrimination within AI systems, requiring providers of high-risk AI systems to describe the persons or groups of persons on which the AI systems are intended to be used. Training, validation, and testing data sets should take into account “the features, characteristics or elements that are particular to the specific geographical, behavioral or functional setting or context within which the AI system is intended to be used.”

To a U.S. observer, it is hard to imagine the practical application of this last sentence. Additionally, it remains unclear whether requirements for storing personal data conflict with GDPR requirements to protect users’ data by deleting as much data as possible. In an attempt to address this concern, Article 10(5) states:

The providers of [high-risk AI] systems may process special categories of personal data referred to in [the GDPR], subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.

Pseudonymization, a GDPR term of art for processing personal data so that it can no longer be attributed to a specific person without separately held additional information, is one concept that will be key to whether interactions between obligations under GDPR and other EU regulations will slow the successful uptake of AI in Europe. Providers of high-risk applications will be required to maintain copious logs of data inputs and outputs for disclosure to regulators on demand, but the particular circumstances under which disclosure is required seem vague. These two provisions are examples of elements of the proposal where regulatory experts in Europe and the United States would clearly benefit from an exchange of views and perspectives on whether the AIA, as drafted, will actually achieve its intended objectives.
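For readers unfamiliar with the technique, the following is a minimal sketch of one common way pseudonymization is implemented in practice: replacing a direct identifier with a keyed hash, with the key stored separately. Neither the GDPR nor the AIA prescribes this particular method, and the key name and record fields here are hypothetical.

```python
# One common pseudonymization technique: replace a direct identifier with a
# keyed hash (HMAC). Re-identification then requires the secret key, which
# must be held separately from the data set. This is an illustrative sketch,
# not a method mandated by the GDPR or the AIA.
import hashlib
import hmac

SECRET_KEY = b"hypothetical-key-stored-separately"  # kept apart from the data

def pseudonymize(identifier: str) -> str:
    """Map a direct identifier (e.g., an email address) to a stable pseudonym.
    The same input always yields the same pseudonym, so records can still be
    linked for training and auditing without exposing the identity itself."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# A training record keeps its linkability but not its direct identifier.
record = {"subject": pseudonymize("alice@example.com"), "outcome": "loan_denied"}
print(record)
```

Because anyone holding the key can reverse the mapping, pseudonymized data remains personal data under the GDPR, which is precisely why its interplay with the AIA’s data and logging requirements matters.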

Three other European regulatory proposals—the Data Governance Act (DGA), released by the European Commission on November 24, 2020, and the Digital Markets Act (DMA) and Digital Services Act (DSA), both released on December 15, 2020—fill out the picture of an extraordinarily expansionist regulatory agenda, powered by Europe’s aspirations for “digital sovereignty,” that, once implemented, will require fundamental changes to the business practices of U.S. digital champions and other firms serving the European market.

The EU Commission aims to use the size of the European market as leverage to propagate its approach to the regulation of AI in other markets. With momentum gained from the successful extraterritorial application of GDPR protections, one can see the clear outlines of the next “Brussels Effect.” AI regulation, combined with other initiatives like the DGA, DMA, and DSA, promises to be a much broader assertion of extraterritorial EU control over how businesses engage with consumers and deploy digital technology.

The U.S. AI Approach: Convergence with Europe?

While Europe is moving quickly to craft concrete proposals for the EU-wide regulation of data, digital services, and AI, the United States has followed a slower and more fragmented approach. Various government offices—including the White House’s Office of Science and Technology Policy, the National Institute of Standards and Technology, and the Defense Innovation Board—have outlined positions and principles for a national framework on AI. But the only laws putting guardrails on AI in the United States are at the state level. At the federal level, the Federal Trade Commission (FTC) issued guidance last year emphasizing the transparent, explainable, and fair use of AI tools, and it issued further guidance in April 2021 warning companies against biased, discriminatory, deceptive, or unfair practices in AI algorithms. The National Security Commission on Artificial Intelligence’s March 2021 Final Report urged the adoption of a cohesive and comprehensive federal AI strategy.

In December 2020, as part of a new transatlantic agenda, EU leaders proposed a joint U.S.-EU Trade and Technology Council to engage with the Biden administration in a dialogue on, among other topics, “cooperation on regulation and standards.” The Biden administration has yet to respond formally to that invitation, although a response is expected when the president meets with European leaders in June.

For its part, Congress, largely inactive on the legislative front during the China trade war years of the Trump administration, is preparing a lengthy piece of bipartisan legislation aimed at countering China. S. 1169, the Strategic Competition Act, directs the U.S. government to actively compete with China for influence around the globe and in international fora, including technical bodies. A primary objective of the bill is “to ensure that the United States leads in the innovation of critical and emerging technologies, such as next-generation telecommunications, artificial intelligence, quantum computing, semiconductors, and biotechnology.” It outlines the need for allies and partners to align “with the United States in setting global rules, norms, and standards” in order to counter what it calls “digital authoritarianism,” or “the expanding use of information and communications technology products and services to surveil, repress, and manipulate populations.” In both its 200-page length and its tone, the bipartisan bill is a telling illustration of how much China’s image has suffered in the eyes of Congress and the U.S. public. The bill is also a sign of the increasing recognition in Congress that “harmonizing technology governance regimes with partners” is of strategic economic and security importance to sustaining U.S. international leadership in the face of Chinese competition.

National Security Advisor Jake Sullivan echoed a theme compatible with the Strategic Competition Act on Twitter: “We will work with our friends and allies to foster trustworthy AI that reflects our shared values and commitment to protecting the rights and dignity of all our citizens.” Sullivan said he welcomes the draft EU regulatory document on AI. This should help empower European and U.S. AI regulators, both of which are struggling with the same fundamental issues and tradeoffs, to engage energetically in transatlantic discussions on AI regulation that result in concrete convergence.

A meeting of minds will not be easy. As momentum for digital regulation steams ahead in Europe, the United States must ask itself how much longer it can afford to sit on the sidelines and leave Europe to become the preeminent global standards-setter in this realm. For AI in particular, the consequences of inaction will be felt by both businesses and technology users if China and Europe continue unchallenged along their dual paths of externalizing their respective authoritarian and regulatory governance models. Importantly, the administration and Congress both see the value in working closely with the European Union on broad principles for governing the safety of AI. Bipartisan unity in Congress to counter China will better position the United States to take a role in shaping an AI regulatory framework that, for better or worse, will govern U.S. cutting-edge industries doing business in Europe, and perhaps globally.

Meredith Broadbent is a senior adviser (non-resident) with the Scholl Chair in International Business at the Center for Strategic and International Studies in Washington, D.C. Sean Arrieta-Kenna is an intern with the CSIS Scholl Chair.

Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2021 by the Center for Strategic and International Studies. All rights reserved.