Two Western Roads to Regulating Artificial Intelligence

Artificial intelligence (AI), as defined in a concurrent resolution introduced recently by Representatives Will Hurd (R-TX) and Robin Kelly (D-IL), is the ability of a computer system to solve problems and perform tasks that would otherwise require human intelligence. Based on several years of solid research and analysis, the resolution lays out principles that should guide the United States’ AI strategy.

Technology leaders in many fields agree: looming global challenges, such as reducing plastic pollution in the oceans, developing a vaccine against Covid-19, stemming the emissions that cause climate change, and ensuring safe navigation for self-driving cars, will be tackled through new, innovative AI and machine learning tools. These urgent challenges will be solved best with the benefit of multilateral perspectives, collaborative research, and less fragmentation of technology regulation along national lines.

When contemplating the future of AI, U.S. regulators in both the Obama and Trump administrations have shown a preference for using existing regulatory frameworks where possible and for developing voluntary standards of responsibility drafted through public engagement, with the professed goal of improving quality of life in an ethical manner that guards against bias, promotes fairness, and protects privacy. Compared with the energetic, tech-skeptic trends in Europe, U.S. government officials display a certain humility and restraint, declining to regulate new technologies before they are understood or even imagined. The U.S. view is that innovation will flourish in a transparent and predictable regulatory environment, with benefits weighed against costs as rules are developed. A small number of catastrophic risks, such as a breach of the security of self-driving car networks or the electrical grid, could require ex ante regulation, but these cases are relatively few and are better addressed through narrow, specific regulations than through sweeping, generalized rules.

The Hurd/Kelly resolution calls for leveraging national security alliances to promote democratic principles, foster research collaboration, and develop common standards with respect to AI. It urges the administration to promote interoperability of AI for the purpose of strengthening alliances and for the United States to lead in global standards-setting.

In tandem with the debate over the future of AI in the United States is a similar, although more frenetic, debate in Europe. With striking confidence in its ability to regulate, the European Union has put in place a more aggressive regime of ex ante regulation, requiring government permission before innovative technologies are deployed. Earlier this year, the European Commission released a voluminous series of documents anticipating Europe’s regulation of artificial intelligence. These documents communicate distrust of new AI technologies and an urgent desire to insert government control to stem imagined and yet-to-be-imagined dangers. Concern is widespread that the Commission’s approach is overly prescriptive and too generalized, and that AI has too many applications and forms for a one-size-fits-all regulation.

Following the model of the 2016 General Data Protection Regulation (GDPR), moving first with comprehensive, extraterritorial regulation, the Commission now aims to be the preeminent global standards-setter in AI. The lessons of GDPR implementation should persuade Europe to take a more cautious approach, given the practical challenges of effectively regulating modern technology. Although the GDPR is arguably much less complex than the AI regulation currently being drafted by the Commission, its implementation has not been smooth.

Enforcement of the GDPR is carried out at the member-state level by the 27 national data protection authorities (DPAs), augmented by the European Data Protection Board in cross-border cases. Differing approaches to the national legislation necessary to implement certain aspects of the GDPR have generated fragmentation, harming innovation and cross-border growth in the technology space.

At present there is a recognized bottleneck in GDPR enforcement. Most large cases are diverted to Ireland based on the country of corporate domicile. The experience of the GDPR has also led to suggestions that the regulation has disadvantaged small and medium-sized enterprises, which have much less capacity to ensure compliance with the GDPR compared to larger companies.

The capacity and expertise necessary to enforce the GDPR remain a fundamental issue, made worse by the budget-draining effects of the Covid-19 pandemic, and this shortfall has generated uncertainty about the application of the GDPR, its effectiveness, and its efficiency. Member states do not have enough resources, in terms of both money and technical expertise, to devote to their DPAs. According to a report from internet browser company and privacy advocate Brave, only six of Europe’s 28 DPAs have more than 10 tech specialists, most have budgets under 5 million euros, and one-third of the European Union’s tech specialists work for German authorities. The Irish authorities tasked with overseeing Google and Facebook face increasing GDPR complaints alongside a shrinking headcount and budget.

The approach taken by the Commission in its AI White Paper risks repeating some of the GDPR’s deficiencies. The Commission’s plans for new AI regulation lack detail and have raised thousands of questions in the public comment phase. For example, the private sector has bristled at some ex ante requirements envisioned by the Commission, such as the need to turn over training data, algorithms, and programming history for audit. There are serious concerns about how data protection requirements under the GDPR will work in tandem with AI applications that require flexible access to a wide variety of data sets. Many organizations that submitted comments said that the new legislation, by increasing the cost and legal difficulty of using AI at an early stage, will reduce the capacity of EU firms to keep pace with the United States and China in innovating applications of AI.

The experience with the GDPR suggests that some deference to member states is inevitable, even though it will create fragmentation in implementation and uneven enforcement, with certain member states assuming more of the regulatory burden. Fragmentation is more likely in the case of AI given its complexity and ability to take on various forms and functions throughout the economy. The Commission should prioritize harmonization among member states in their implementation of new AI rules, including by establishing principles in areas that require member state action.

In sum, criticism surrounding the lack of capacity to enforce the GDPR, the scope of regulation envisioned in the White Paper, the complexity of AI applications, and the ambiguity surrounding the mechanics of enforcement, particularly for “high-risk” (i.e., still-undefined) AI applications, raises serious questions about Europe’s capacity for timely and responsible enforcement of new AI regulations. Such regulations and any accompanying member-state legislation should be coupled with the resources necessary to ensure that enforcement is even-handed, timely, and responsible.

Capacity is of particular importance given the possibility that ex ante conformity assessments will be required for “high-risk” AI applications to enter the European market. Lack of financial resources and technical expertise could drastically slow innovation and lead tech entrepreneurs and investors to consider the European Union as a secondary market, not a primary location to scale and build new applications. Public perception of a lack of expertise could undermine the European Union’s ambition to build an AI ecosystem that will earn the trust of its citizens.

It is unclear whether the House of Representatives will take up the Hurd/Kelly resolution or whether the resolution reflects an approximate consensus on how to approach the complicated area of regulating new technologies. Approval of the resolution would be a useful first step in determining whether the United States and Europe can productively exchange views on the best ways to approach an area of regulation that is truly a global challenge. Right now, the differing roads the United States and Europe are on do not reflect the benefit of a dialogue on best practices for regulating the new frontier of AI.

Meredith Broadbent is a senior adviser (non-resident) with the Scholl Chair in International Business at the Center for Strategic and International Studies in Washington, D.C.

Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2020 by the Center for Strategic and International Studies. All rights reserved.
