What’s Ahead for a Cooperative Regulatory Agenda on Artificial Intelligence?

In her first major speech to a U.S. audience after the U.S. presidential election, European Commission President Ursula von der Leyen laid out priority areas for transatlantic cooperation. She proposed building a new relationship between Europe and the United States, one that would encompass transatlantic coordination on digital technology issues, including working together on global standards for regulating artificial intelligence (AI) aligned with EU values. A reference to cooperation on standards for AI was included in the New Transatlantic Agenda for Global Change issued by the Commission on December 2, 2020. In remarks to Parliament on January 22, 2021, President von der Leyen called for “creating a digital economy rule book” with the United States that is “valid worldwide.” Some would say Europe’s new outreach on issues of tech governance and the suggestion of establishing an “EU-U.S. Trade and Technology Council” are incongruous with the current regulatory war being waged against U.S. firms in the name of unilateral European tech sovereignty.

On November 17, 2020, just one day before President von der Leyen’s first speech on the topic, the White House Office of Management and Budget (OMB) released a much-awaited set of government-wide policy principles for regulating AI, which includes a call for engaging in the development of regulatory approaches through international cooperation. OMB directed agencies to initiate “dialogues” to promote compatible regulatory approaches to AI and to promote U.S. AI innovation, while protecting privacy, civil rights, civil liberties, and U.S. values. Such discussions, the guidance says, “can provide valuable opportunities to share best practices, data, and lessons learned, and ensure that the United States remains at the forefront of AI development. They can also minimize the risk of unnecessary regulatory divergences from risk-based approaches implemented by key U.S. trading partners.”

When contemplating the future of AI, U.S. regulators in both Democratic and Republican administrations have shown deference to using existing regulatory frameworks where possible and to developing voluntary standards of responsibility drafted through public engagement, with the professed goal of improving quality of life in an ethical manner that guards against bias, promotes fairness, and protects privacy. Compared with energetic tech-skeptic trends in Europe, U.S. government officials display a certain humility and restraint, wary of regulating new technologies before they are understood or even imagined. The U.S. view is that innovation will flourish in a transparent and predictable regulatory environment in which benefits are weighed against costs as rules are developed. A small number of catastrophic risks, such as maintaining the security of self-driving car networks or the electrical grid, could require ex-ante regulation, but such cases are relatively few and are better addressed through narrow, specific regulations than through sweeping generalized rules.

Technology leaders in many fields agree: looming global challenges, such as reducing plastic in the oceans, developing vaccines against the next pandemic, stemming the emissions that cause climate change, and finding safe navigation methods for self-driving cars, will be tackled with new, innovative AI and machine learning tools. Solving these urgent challenges will be done best with the benefit of multilateral perspectives, collaborative research, and less fragmentation of technology regulation along national lines.

The United States and Europe, which both face the same threat of growing authoritarian dominance of the internet by China, could build a valuable strategic partnership that would support the scaling of AI capabilities of Western economies. Unfortunately, the array of tech sovereignty legislation working its way through the European Commission appears to be pivoting Europe away from the United States in a dramatic manner, making this outreach by President von der Leyen all the more timely for U.S. policymakers to consider. Both sides of the Atlantic will need to work hard to forge a more compatible center of regulatory ambition for AI if the innovation ecosystem in Europe is to continue to grow dynamically and the West is to maintain its strategic leadership in AI vis-à-vis China.

Overview of the EU White Paper on AI

The European Union has embarked on putting in place an aggressive regulatory regime for AI, through ex-ante regulatory procedures that require government permission upfront before innovative technologies are deployed. There is widespread concern that the Commission’s approach is overly prescriptive and too generalized and that AI has too many applications and forms for a one-size-fits-all regulation.

On February 19, 2020, the European Commission published a white paper, On Artificial Intelligence - A European approach to excellence and trust. The white paper provides a foundation for legislative proposals expected in the first quarter of 2021. This white paper and related documents communicate a sense of distrust of new AI technologies and an urgent desire to insert government control in an effort to stem anticipated and unforeseen dangers.

Following the General Data Protection Regulation (GDPR) model of moving first with comprehensive regulation, the Commission is taking aim at being the preeminent “global-standards setter” in the area of artificial intelligence.

The Commission is convinced that international cooperation on AI matters must be based on an approach that promotes the respect of fundamental rights, including human dignity, pluralism, inclusion, nondiscrimination, and protection of privacy and personal data, and it will strive to export its values across the world.

— From On Artificial Intelligence white paper

The documents make clear that the European Union views privacy and user rights as fundamental to AI regulation, that it seeks to lead global regulatory efforts in the AI space, and that it sees regulation as a way to capitalize on its strong competitive position in B2B industries while shoring up Europe’s weakness in developing large online digital platforms. The fundamental conviction that a regulatory framework will boost Europe as an innovation leader in the data economy—while unsubstantiated—clearly pervades the Commission’s consideration of new legislation.

In light of the AI white paper, it appears that the Commission is moving forward with broad, horizontal, and relatively intrusive regulation of AI applications. Ex-ante conformity assessments to control access to the EU market for AI applications originating outside of the EU are proposed. The Commission is also considering data quality and traceability requirements that would require non-EU firms to train AI applications on GDPR compliant data as a condition of market access in the European Union. Sectors that will likely be impacted by EU regulation of AI are healthcare, transportation/autos and parts producers, energy, services that rely on consumer data, the public sector, and more.

To complement the AI white paper, the European Commission released a set of documents on the regulation of AI, including:

  1. Communication on a European Strategy for Data, in which the Commission outlined the importance of data for economic development and the decision to invest in High Impact Projects to fund “AI ecosystems” related to the development of data spaces and cloud services;

  2. Shaping Europe’s Digital Future, in which the Commission itemized all key actions to be undertaken by the European Union in order to ease data flow across the Union while enacting proper regulation to maintain the strength of democratic institutions and free-market competition, especially in developing sectors such as AI and cryptocurrency; and

  3. Report on the Safety and Liability Implications of AI, the Internet of Things and Robotics, in which the Commission reviews gaps in product safety legislation that do not adequately address risks such as cyberattacks due to connectivity, autonomous behaviors of products, faulty data, opacity of algorithmic systems, software updates, and complex safety management and value chains.

Of the four major documents, the AI white paper has attracted the most interest from stakeholders. Many are concerned that the Commission’s regulatory vision will be burdensome and unworkable, suppressing the success of European digital start-ups and the innovation ecosystem in Europe. Digital rights groups, for their part, have raised alarm over the Commission’s decision not to include a proposal for a three- to five-year moratorium on facial recognition technology in the white paper.

Goals of the Commission as Set Out in the AI White Paper

In the white paper, the Commission supports a regulatory and investment-oriented approach to promote the uptake of AI and to address the risks associated with certain uses of it. To achieve these objectives, the Commission proposes an “Ecosystem of Excellence” and an “Ecosystem of Trust” that emphasize the importance of working with member states to invest in research to develop AI and improve EU competitiveness. The Commission lays out a future EU regulatory framework for dealing with the developing technology of AI. This framework takes a multifaceted approach to the emerging technology, proposing ways to make AI accessible to small and medium-sized enterprises (SMEs) and discussing proper regulation of high-risk applications of the technology and of liability in instances of physical or material harm. Having missed the business-to-consumer tech boom, the Commission sees particular promise in the next wave of data generation and AI applications that will occur in business and industrial spaces.

Europe’s current and future sustainable economic growth and societal well-being increasingly draws on value created by data. AI is one of the most important applications of the data economy. Today most data are related to consumers and are stored and processed on central cloud-based infrastructure. By contrast a large share of tomorrow’s far more abundant data will come from industry, business, and the public sector, and will be stored on a variety of systems, notably on computing devices working at the edge of the network. This opens up new opportunities for Europe, which has a strong position in digitised industry and business-to-business applications, but a relatively weak position in consumer platforms.

— From On Artificial Intelligence white paper

Core Components of the Proposed Ecosystem of Excellence and Ecosystem of Trust

An Ecosystem of Excellence

The European Commission proposes an Ecosystem of Excellence framework to align AI R&D efforts across European, national, and regional levels and to mobilize resources across all points of the value chain. The framework aims to increase the adoption of AI, especially among SMEs and within the healthcare and transport services sectors. It also encourages investment in data access and computing infrastructure and increased international cooperation based on EU standards to create a level playing field. The Ecosystem of Excellence offers seven key recommendations:

  1. Working with Member States

In December 2018, the European Commission presented a Coordinated Plan on Artificial Intelligence that was prepared with member states and offers over 70 actions to increase coordinated efforts in key areas, such as research, investment, market uptake, skills and talent, data, and international cooperation. While the plan is scheduled to run until 2027, the Commission has assessed the public consultation on the white paper and is set to propose a revision of the Coordinated Plan to be adopted by 2021.

To maximize investment in R&D, the revisions aim to attract over €20 billion of total AI-related investment in the European Union per year for the next decade. To encourage investment, the European Union will use resources from the Digital Europe Programme, Horizon Europe, and the European Structural and Investment Funds to support less developed and rural regions. The Commission will also assess how to promote AI solutions that incorporate societal and environmental well-being.

  2. Focusing the Efforts of the Research and Innovation Community

Through several efforts, the Commission will encourage alignment across European research centers on AI to increase synergies. First, centers will focus on sectors where Europe has competitive potential, including industry, health, transport, finance, agrifood value chains, energy/environment, forestry, earth observation, and space. Second, the Commission will create testing and experimentation sites to help develop and deploy AI applications while combining European, national, and private investments. A new legal instrument may be created for this purpose, but no additional information has been provided to date. Finally, the Commission has proposed funding to support world reference testing centers in Europe under the Digital Europe Programme; research and innovation actions of Horizon Europe could complement these efforts.

  3. Skills

The Commission is expected to present a reinforcement of the European Skills Agenda. Worker training funds will train sectoral regulators to better implement new AI rules. An updated Digital Education Action Plan will better use data and AI-based technologies to improve education and training systems for the digital economy, and the plan will increase public awareness of AI to prepare citizens for informed decisions that are increasingly AI-affected. Additionally, through the advanced skills pillar of the Digital Europe Programme, the Commission will create networks of leading universities and academic institutions that attract top professors and scientists and provide top master’s programs in AI in Europe. The revised Coordinated Plan will focus on developing the skills necessary to work in AI, especially increasing the number of women trained and employed in this sector.

  4. Focus on SMEs

The Commission proposes strengthening the Digital Innovation Hubs and the AI-on-demand platform to increase SMEs’ access to and use of AI. By working with member states, the Commission aims to ensure that at least one digital innovation hub per member state has a high degree of specialization on AI. The Commission will also increase funding for SMEs’ AI efforts. For example, the Commission and the European Investment Fund launched a pilot scheme of €100 million in 2020 to invest in venture capital funds that target AI and blockchain technology activities. As of October 2020, around 60 percent of the pilot’s funding had been committed to back European tech equity funds, which have raised €700 million.

  5. Partnerships with the Private Sector

The Commission, through Horizon Europe, will set up a new public-private partnership (PPP) in AI, data, and robotics to increase coordination of AI R&D and collaboration with other PPPs in Horizon Europe. This partnership will also work with the testing centers and innovation hubs established in 2) and 4).

  6. Promoting the Adoption of AI by the Public Sector

The Commission will encourage dialogue between the public and private sectors, prioritizing healthcare, rural administrations, and public service operators, to develop an “Adopt AI programme” action plan that supports public sector adoption of AI systems.

  7. Securing Access to Data and Computing Infrastructures

Under the Digital Europe Programme, the Commission proposed over €4 billion in funding to support high-performance and quantum computing, including edge computing and AI, data, and cloud infrastructure. The European Data Strategy further develops these priorities.

An Ecosystem of Trust Defined by Heavy Ex-Ante Regulation

The European Commission proposes an Ecosystem of Trust regulatory framework to protect fundamental rights, with a focus on high-risk AI systems. The Commission hopes that the proposed framework will give citizens the confidence to use AI applications while giving companies and public organizations the legal certainty to innovate using AI. Through increased confidence and legal certainty, the framework is expected to increase the speed at which AI is adopted.

The framework must align with other EU actions to promote innovation capacity and competitiveness. Because certain aspects of AI, such as its opacity, can make oversight difficult, the Commission should examine whether current legislation adequately addresses the risks of AI and is effectively enforceable, whether existing legislation should be adapted, or whether new legislation is needed.

In terms of AI-related risks, the Commission identifies several unaddressed situations that warrant improved legislation: 1) effective application and enforcement of existing EU and national legislation; 2) limitations of scope of existing EU legislation; 3) changing functionality of AI systems; 4) uncertainty regarding the allocation of responsibilities between different economic operators in the supply chain; and 5) changes to the concept of safety.

Regarding the scope of regulation, the new framework should effectively achieve its goals without being excessively burdensome, especially for applications that are not high-risk and for SMEs. To determine whether an application is high-risk, the Commission suggests looking at both the sector in which the application is deployed and the use of the application. Sectors could include healthcare, transportation, energy, and parts of the public sector. The Commission notes that not every use of AI in a high-risk sector is inherently high-risk. As a result, it recommends looking within high-risk sectors at uses of AI that are themselves high-risk. These include applications that produce legal effects, impact the rights of an individual or company, or risk physical harm. Certain uses, regardless of the sector, should be considered high-risk, including those dealing with employment equality, recruitment, worker rights, remote biometric identification, and other “intrusive surveillance technologies.”

High-risk AI applications would be subject to requirements in the following areas: training data, data and record-keeping, information to be provided, robustness and accuracy, human oversight, and specific requirements for particular applications such as surveillance. The Commission foresees ex-ante conformity assessments for high-risk AI applications that operate in the European market regardless of their place of establishment. The Commission also acknowledges that not all requirements for AI will be suitable for prior conformity assessments and that as AI systems evolve, further assessments may be necessary. EU and national authorities would oversee enforcement.

Requirements for AI (High and Low Risk)

The white paper lists several requirements that apply to high-risk AI applications. First, training data used to train AI systems should meet applicable EU safety standards. For example, data sets should be sufficiently representative to avoid discrimination regarding gender, ethnicity, and other grounds of “prohibited” discrimination, and personal data should be protected. Second, high-risk AI applications often lack transparency, so data and record-keeping are needed to ensure tracing can occur if necessary. Records should describe the main characteristics of data sets used and how they were selected. When necessary, confidential information, such as trade secrets, should be protected. Third, information provided for transparency should include clear outlines of “capabilities and limitations,” such as the purpose of the systems and the expected conditions in which the system will properly function. It should also be transparent when citizens are interacting with a human or an AI system. Fourth, to ensure that high-risk applications are “robust and accurate,” ex-ante review and proper consideration of risks must be included. Throughout all stages of their lifecycles, AI systems should be replicable, reflect the same level of accuracy, successfully manage errors, and be able to withstand attacks. Fifth, human oversight must be incorporated throughout AI systems’ entire lifecycles. For example, output must be validated by a human, and humans should be able to deactivate a system while in operation.

Finally, the white paper states that the use of biometric identification, such as facial recognition, poses specific risks for fundamental rights, and EU data protection rules permit the processing of biometric data to identify someone only for very specific reasons. For example, under the GDPR, the main exemption is “reasons of substantial public interest.” Under the Law Enforcement Directive, there must be an authorization by the European Union or national law, coupled with appropriate safeguards. Because such use of biometric identification is exceptional under EU law, it must comply with the Charter of Fundamental Rights of the European Union. To address additional concerns about biometric identification, the Commission intends to initiate a broad debate about its uses and safeguards. An earlier draft of the white paper suggested that companies using facial recognition technology in public spaces should be required to get citizens’ consent when personal legal rights are affected. The draft also suggested a three- to five-year ban on the use of facial recognition AI by private and public actors so that the implications of this technology could be better researched. However, the proposed temporary ban on facial recognition technology was not included in the final paper.

Operators of low-risk applications could commit to voluntary labeling, under which their low-risk systems would comply with the legal requirements applicable to high-risk systems. The operators would receive a quality label signaling that their systems are trustworthy.

In terms of the geographic scope of the legislation, the Commission believes that the requirements apply to all “relevant economic operators providing AI-enabled products or services in the European Union, regardless of whether they are established in the European Union or not.” To ensure compliance with these regulations, the Commission proposes an “objective, prior conformity assessment,” which could include procedures for testing, inspection, or certification and oversight of the algorithms and data sets used.

Existing AI Legislation and the GDPR

Concerns have emerged regarding how existing and proposed EU regulation will negatively impact the development of AI. It may be that Europe will see the need to update the GDPR to incorporate new insights into the essential link between data availability and the uptake of innovations in AI capabilities.

The GDPR reduces incentives for data sharing and limits the use of data. Article 5 of the GDPR restricts companies from collecting new data before they understand its potential value and from reusing existing data for novel purposes. With much innovation and value stemming from companies’ ability to combine datasets without necessarily predicting the future value of the merged set and from companies’ access to large datasets, these restrictions limit the ability of European companies to use data for any purposes other than what was initially established. In contrast, U.S. and Chinese companies have access to very large datasets and can use those datasets experimentally without initially knowing exactly the result and the potential insight. The February 2020 European Data Strategy addresses part of this limitation by committing to pool European data in key sectors and creating interoperable data spaces.

Regarding personal data flows, an October 2020 Clingendael report states that the GDPR prioritizes individual privacy at the expense of data-gathering by companies, which hampers innovation and corporate growth. As the EU-U.S. Privacy Shield came under scrutiny and was invalidated by European courts, the European Union finalized an adequacy agreement with Japan and began negotiations for an adequacy determination with South Korea. Regarding non-personal data flows, the European Union supports the global initiative toward the Data Free Flow with Trust (DFFT) to facilitate the free flow of non-personal data beyond EU borders. This would create a free zone for the flow of “medical, industrial, traffic and other most useful, non-personal, anonymous data.” The Clingendael report stresses the need for international cooperation through these agreements, as data localization requirements exact a cost on cross-border business. The report suggests the European Union should combine regulatory power with Japan to push for a comprehensive approach to global non-personal data regulation.

In addition to data restrictions, the GDPR restricts automated decision-making in two ways. First, Article 22 gives data subjects the right to human review of an automated decision-making process. Thus, each automated process must have a redundant, manual process for individuals who opt out of the automated one. This provision makes it more difficult for companies to scale up their AI efforts, as the costs of maintaining a manual process rise as AI becomes more complex.

Second, Articles 13–15 require transparency regarding the logic involved in automated decisions, and companies must be able to explain the automated decision-making process to individuals. However, it is not always possible to explain some AI systems, especially those involving neural networks (computing systems modeled on the biological neural networks that constitute brains). These provisions could incentivize EU companies to stick with human decision-making even if the manual processes are less effective or efficient.

Legislative Changes Outlined in the AI White Paper

Given how quickly and constantly AI systems change, the Commission believes that existing legislation should be updated to address the following circumstances. First is the “effective application and enforcement of existing EU and national legislation,” such as by increasing transparency and clarifying liability laws. In its safety and liability report, the Commission explains the current limitations of the Product Liability Directive. The directive states that a manufacturer is liable for damage caused by a defective product. This raises uncertainty for AI systems because: 1) the allocation of responsibilities between developers and producers in the supply chain is unclear; 2) it may be difficult to prove that an AI-based system, such as an autonomous car, has a defect and that the damage is caused because of the alleged defect; 3) it is unclear whether and to what extent the Product Liability Directive applies to such AI-related defects; and 4) individuals harmed by AI-based systems may not have access to information or evidence necessary to build a court case and thus have access to fewer redress options.

Next is the “limitations of scope of existing EU legislation,” especially EU product safety legislation. For example, it is unclear whether stand-alone software is covered under EU product safety legislation, and general EU safety legislation currently applies only to products and not to services—even if those services are based on AI technology. Third is the “changing functionality of AI systems,” as AI software can modify products and systems during their lifecycles and introduce new risks. Fourth is “uncertainty as regards the allocation of responsibilities between different economic operators in the supply chain.” Fifth is “changes to the concept of safety,” as new risks associated with the use of AI could emerge at any point throughout an AI system’s lifecycle.

While member states are already considering various pieces of national legislation to address some of these challenges, the Commission fears that this would further fragment the single market. Instead, the Commission hopes to establish an EU-wide framework to allow companies to benefit from the single market. While these are laudable goals, the Commission does not include any plans in the white paper to address provisions within the GDPR that negatively affect AI development and innovation.

Stakeholder Comments on the AI White Paper

Responding to Commission plans to enact stricter rules for high-risk AI technologies, such as compliance tests and controls, 14 EU member states published a position paper urging the Commission to adopt a “soft law approach” consisting of “self-regulation, voluntary labeling and other voluntary practices, as well as a robust standardization process as a supplement to existing legislation that ensures that essential safety and security standards are met.”1 The countries recommend a European approach to AI regulation in which innovation and trustworthiness work together in a “coherent and borderless single AI market.”

For several reasons, AI giants such as Google and Facebook, as well as startups, tech companies, and associations, have raised concerns that meeting regulatory requirements considered in the AI white paper will stifle innovation and may require actions that breach EU data privacy rules.

First, technology corporations are waiting anxiously to see how the Commission will define “high-risk” AI, as that definition will have a foundational impact on how AI regulation is implemented. A chief concern is that the Commission’s definition of “high-risk” AI and its approach to the concept are too broad and would result in one-size-fits-all regulation for all AI applications in “high-risk” sectors, regardless of differences between the sectors and the type of AI application. Instead, Google, Facebook, and Digital Future for Europe, which represents associations, tech companies, and startups from the Digital Nine,2 propose a sector-, technology-, or application-specific regulatory approach that takes into account the unique risks posed by individual sectors as well as by the application itself. For example, the risks posed by AI applications in the healthcare sector differ markedly from those in the transportation sector. One-size-fits-all regulation for AI applications within sectors would also be suboptimal: an AI chatbot developed for a healthcare company to better understand patient needs presents a different level and type of risk than an AI application involved in medical decision-making.

Although expressing agreement with several aspects of the white paper, IBM, one of the largest technology employers in the European Union, argues that existing sector-specific governance structures would be better suited to implement regulation to avoid unnecessarily impeding AI development. For example, the Commission’s propositions on human oversight requirements may be inappropriately restrictive in the context of automated driving, since “it is impossible to oversee every single decision taken by an automated car, due to most decisions being taken in real time.”

Stakeholders have also raised concerns over vague exceptions included in the white paper that could make it difficult to determine whether an AI application is subject to “high-risk” rules. The inclusion of AI applications that could cause “immaterial damage” as a factor in determining risk was characterized as nebulous by many stakeholders. The white paper also suggests that there may be “exceptional circumstances” in which an AI application could be considered “high-risk” regardless of its sector. Booking.com, an online travel agency headquartered in the Netherlands, has questioned the white paper’s unspecific notion that “applications affecting consumer rights” could also be subject to “high-risk” rules. These exceptions and ambiguities inject uncertainty into which AI applications will be considered “high-risk” and increase expectations that the Commission will adopt an overly broad view of “high-risk” AI applications, subjecting applications to onerous regulations that are not justified. In comments on the white paper, the European Digital SME Alliance, a network of associations representing more than 20,000 European small and medium ICT enterprises, added, “[s]mall companies are the first to suffer from rules that are up to interpretation.”

Digital Future for Europe maintains that the “risk assessment process would stifle many AI innovations before they have developed,” and through ex-ante regulation “the EU threatens to skew the whole development of European AI.” Additionally, IBM has cast doubt upon the white paper’s proposal of an explicit and exhaustive list of sectors in which “high-risk” applications could emerge, due to the rapid evolution of AI technology and its diverse applications.

Second, companies have raised concerns regarding the requirement that AI developers would need to retrain AI systems in the European Union to meet data set requirements. Facebook and Google note that restricting the data available to train AI to EU data alone is likely to reduce the quality of the AI product and contribute to biases in the systems. Digital Future for Europe suggests the Commission focus on increasing the pool of data acceptable to regulators and available to AI developers in order to improve AI applications and encourage innovation. IBM opposes input-specific requirements, arguing instead that “high-risk” applications should be required to ensure a specific outcome, such as the absence of discrimination. The Commission’s approach to training data either wrongly assumes that AI applications employ a static data set or raises the prospect that AI systems would need to be re-audited on a regular basis, which would drastically slow the ability to update and improve AI tools. This is particularly concerning if an AI application is producing undesirable or suboptimal outcomes based on a previously audited data set. Facebook has pointed out that it is unclear what data needs to be “EU data” and how the service being provided by an AI application changes that consideration. Google and the European Association of Automotive Suppliers (CLEPA) worry that it is difficult to separate out the provenance of some parts of training data sets in certain fields, which often rely on third-party and open-source data. Leaving out these data sets in training could seriously hurt the quality of AI systems subsequently released in the European Union.

Third, Google and Facebook have raised concerns that compliance with rules for “high-risk” AI could run counter to EU privacy rules, including GDPR. Facebook worries that the proposed requirement to store training data sets could “create a direct tension” with policies to protect users’ data, such as those related to data minimization and data retention. The social media platform also worries that the proposed requirement could prohibit federated learning and other AI approaches intended to protect privacy.3 In Facebook’s view, the requirement would thus undermine the effectiveness of federated learning in protecting users’ privacy.

Requirements around algorithmic fairness and the need to share source code also appear to be in tension with EU privacy rules. Google argues that the requirement to ensure that data sets are “sufficiently representative” conflicts with GDPR obligations: “developers should not be able to access attributes such as ethnicity and therefore could not test for ethic [sic] representation in a dataset.” Google stresses that “sufficiently representative” should be more clearly defined.

As an alternative to ex-ante conformity assessments, which can be burdensome and unpredictable and could expose companies to breaches of privacy rules, Facebook and Google propose benchmark tests tailored to specific high-risk applications to determine whether an AI system will behave as expected. This approach would hold industry players accountable for mitigating bias in AI, allow internal testing of AI systems, and support a self-certification process. Digital Future for Europe suggests an overall regulatory approach that encourages responsible adoption of AI before introducing onerous regulation. To do so, European governments should open public data sets, provide for data interoperability between government data sets, refit existing European legislation to account for AI, and make sure that any new AI rules are agile and flexible. Google proposes a “large central dataset” that developers could access, which would guard against developers overfitting models simply to comply with EU rules.

The Uncertain Future of Europe’s Relative Position in the Global Digital Economy

In addition to practical gains for improving human health and the environment and a myriad of other applications, the global uptake of AI technologies will offer huge benefits in terms of increased productivity and economic growth. AI can also boost incomes, especially for countries that establish standards promoting the steady adoption of AI technologies, as the regulatory environments in the United States and Asia are designed to do.

According to the McKinsey Global Institute (MGI), Europe has a strong foundation for technological innovation, including an increasing number of thriving digital hubs and world-class research institutions, the potential for the largest single digital market in the world, and an ever-growing pool of skilled professional developers.

It is encouraging that the pace of investment in European technology companies has increased over the past five years. Total investment in 2015 was just $15.3 billion. By 2019, 174 European technology companies had a valuation of over $1 billion, compared to just 13 companies in 2010. Of those 174 companies, 99 are backed by venture capital. Fintech, enterprise software, health, energy, transport, food, and marketing received the most investment in 2019; in all of these sectors save enterprise software, European companies have long been globally competitive. Clearly investors see significant promise in European technology companies. This is the good news.

What is striking is the degree to which European firms have fallen steadily behind international competitors in the overall global digital services market during the last 15 years. While nurturing a vibrant startup sector, Europe lags behind Asia and the United States in terms of overall investment in technology. In 2019, European technology startups received $34.3 billion in capital investment, compared to $62.5 billion invested in Asian technology companies and $116.7 billion invested in U.S. technology companies. In 2020, with a market of over 500 million consumers and a sophisticated workforce, Europe continued to lack homegrown global digital platforms of significant size, in part due to the fragmentation of the EU Digital Single Market. Today Europe is home to only three technology companies in the Fortune Global 500, compared to 12 from the United States, five from China, and six from Taiwan and Japan.4

Specifically pertaining to AI, Europe obtained 11 percent of global venture capital and corporate funding in 2016, with 50 percent going to the United States and the rest going mostly to China. China now attracts nearly 50 percent of global investment in AI startups, while the United States attracts roughly 38 percent. Only a small subset of European companies have meaningfully adopted and absorbed AI technology into their businesses, with the majority focused only on a narrow set of AI and automation technologies for limited functions. One reason is that European AI startups have struggled to scale and succeed. For example, in 2017, only four European companies were in the top 100 global AI startups,5 while three-quarters were in the United States. Another reason is that EU AI initiatives are fragmented and lack sufficient investment.

If Europe is to make progress toward its goal of being a global leader in the digital economy on the scale of the United States and China, Europe must focus on addressing its digital gap, or its lag in the supply, adoption, and diffusion of digital technologies compared to the United States and China. This “take-up capability,” which is preventing Europe from leveraging the full potential of AI across its economy, should be kept squarely in mind as the Commission drafts its AI regulations. If Europe successfully overcomes its challenges and fully scales up its AI innovation, McKinsey estimates that Europe could add €2.7 trillion of economic output to its economy by 2030.

Europe’s overall relative decline in innovative digital services can be characterized by several trends. First, European startups have struggled to scale up into major companies, especially in the digital and health sectors. Europe’s success in turning startups into “unicorns” (private startups valued at $1 billion or more) has occurred at about half the rate of the United States. Investors and capital are scarce in Europe relative to Asia and the United States. Second, with digital technologies being the new driver of performance for global firms, Europe has become less central to international digital trade flows due to the virtual absence of European companies in the digital platform space. Third, Europe has a declining share of global R&D, particularly in the digital sector. European firms have reached only two-thirds of the digital potential of their U.S. counterparts; in other words, European firms are far less digitized overall than U.S. firms.6 R&D spending by European software and computer services firms was roughly 8 percent of the global total, compared to 11 percent for Chinese companies and 77 percent for U.S. companies.

Survey data suggest that Europe’s tech ecosystem has seen limited recent success despite increasing regulation from the Commission, and that new heavy-handed regulation will not make Europe a more attractive investment location. Most founders, technology-sector employees, and investors in Europe see startups as the group most affected by regulatory burdens. Only 20 percent of European technology company founders believe European policymakers take on board the concerns and views of startups, and over 40 percent believe regulation makes it more difficult to found a company in Europe and scale it up. Just 32 percent believe that the “direction” of European technology regulation is positive for the European technology ecosystem.

Benefits of AI

The benefits of AI adoption for the European industrial economy include: 1) the automation of daily operations; 2) a reduction in human error impact; 3) an increase in productivity and work efficiency; 4) the ability to better identify sales opportunities; and 5) improved customer support.

AI uptake in highly industrialized countries like Germany will benefit a variety of sectors, according to a November 2019 Global Manufacturing and Industrialization Summit report. In manufacturing, AI can play a key role in demand prediction, predictive maintenance, bottleneck identification, and quality control. In customer support, AI provides quick ways of solving problems, discovering customer issues, offering 24/7 support, and developing a personalized experience within an industry. In retail customer communication, businesses can use AI to learn and anticipate what appeals to customers and personalize marketing packages accordingly; AI allows hyper-personalization in marketing. In software development, AI can speed up error detection and improve software performance. Financial and security agencies, governmental agencies, and military branches stand to benefit from AI-enabled image recognition. Finally, computer vision can enable machines to mark and store objects, benefiting a variety of industries. For example, in healthcare, computer vision can help doctors distinguish between illnesses and detect medical conditions such as cancer.

For Europe to obtain these benefits, several risks associated with AI adoption should be acknowledged. First, regarding complexity barriers, AI-developed self-learning algorithms could become so intricate that a developer no longer understands the algorithm. Second, regarding data sets for narrow AI technology development, the lack of organized data sets and limited data size reduce the number of viable AI use cases and increase software errors. Third, as in the United States and China, skills gaps exist. The share of people employed in machine learning, data, and informatics is small relative to the overall population; according to a European Data Market study in 2015, there were 396,000 unfilled data worker jobs in the European Union.

Role of Data in the Uptake of AI

Recently, more companies have started focusing on big data and AI and are seeing tangible benefits, but the rate of investment in AI is slowing down. NewVantage Partners released its Big Data and AI Executive Survey 2020, having canvassed more than 70 leading global firms. The survey found that spending on big data and AI across participants continues to increase, but less rapidly. Over 60 percent of the surveyed companies are investing $50 million or more in these technologies, up from just 39.7 percent in 2019. However, the number of firms increasing their rate of investment has decreased: only 51.9 percent of firms in 2020 are accelerating their rate of investment, compared to 91.6 percent of firms in 2019. The report states that big data and AI investments are being driven by “offensive” factors, such as transformation, innovation, and competitive advantage, to propel revenue growth and business advantage. Investment decisions made for “defensive” objectives, such as cost savings and regulation, account for a small share. The survey also found that firms are “realizing measurable results” from their big data and AI investments, as firms have established business use cases that are demonstrating value, although it is unclear from the study what these results are.

The Importance of Big Data

Free data flows will be essential to the successful development of AI in Europe on par with the AI digital transformation in the United States and China. Monica Rogati, a prominent data scientist, coined the “AI Hierarchy of Needs,” a pyramid-shaped framework for understanding the development of AI. At the bottom of the pyramid is data collection, followed by the essential infrastructure to store data, access and analyze it, and explore and transform it. The framework illustrates that, without access to the right data and the tools to use that data, AI-related innovation cannot emerge.

According to MIT Sloan Management Review, the “convergence of big data with AI has emerged as the single most important development that is shaping the future of how firms drive business value from their data and analytics capabilities.” Quicker access to large volumes of data allows data scientists to access real, detailed, and nuanced data rather than relying on representative data samples. This allows companies to be driven by “data first” rather than hypotheses. Data is the fuel that powers AI, and the abundance of data collected supplies AI programs with the necessary examples to identify differences, increase pattern recognition capabilities, and see the fine details within patterns. AI programs learn through access to more data. In its national AI strategy, the federal government of Germany acknowledged the importance of access to data with plans to increase the amount of available, high-quality data.

Further reiterating the importance of data, the Organization for Economic Cooperation and Development (OECD) states that “the highest-value uses of AI often combine diverse data types, such as audio, text, and video. In many uses, training data must be refreshed monthly or even daily . . . Without large volumes of training data, many AI models are inaccurate.” Data scientists usually cite data quality as the main barrier to successful AI implementation, as industrial data can be incorrectly formatted, incomplete, inconsistent, or lack metadata. Data scientists will often spend 80 percent of their time cleaning, shaping, and labeling data before AI systems can be put to work.

The OECD recommends that government agencies coordinate and steward data-sharing agreements for AI purposes, as in some cases “all data holders would benefit from data sharing.” For example, AI-based prediction of potentially costly accidents on oil rigs would be improved if this statistically small number of data holders were to share their data. The report states that restricting data flows could lead to lost trade and investment opportunities, higher costs of cloud and other information technology services, and lower economic productivity and gross domestic product growth. Restricting such flows either directly or indirectly can raise firms’ costs and increase the complexity of doing business, especially for small and medium-sized enterprises (SMEs).

The Negative Effects of Data Localization Measures

Recent estimates from McKinsey find that cross-border data flows increased world GDP by 10.1 percent from the mid-2000s to the mid-2010s. Despite this, some countries still pursue data localization policies. These countries not only forego the benefits gained from accessing data but also absorb additional negative costs, such as increased prices, decreased competitiveness, and lower productivity, relative to their non-protectionist counterparts.

A 2017 ITIF report finds that such data policies have negative firm-level and country-level economic impacts on the protectionist countries. At the firm level, data flow barriers reduce firms’ competitiveness, and companies will be forced to spend additional money on IT services, data-storage services, duplicative services (because data cannot be easily transferred), and compliance activities such as hiring a data-protection officer. As these data flow barriers affect data processing and internet services, the impacts are widespread across an economy. A related study found that if Brazil had enacted a 2014 proposed data localization plan, companies would have been forced to pay around 54 percent more for cloud-computing services, on average.

At the country level, enacting barriers to data flows cuts off domestic companies’ exposure to and benefits from the ideas, research, technology, and best practices that accompany data flows and data-driven innovation. As a result, domestic firms in digital-protectionist countries are less competitive and innovative than foreign, non-restricted firms operating in global markets. Domestic firms also face delays and higher costs when developing and innovating goods, as protectionist policies may force companies to use second-choice research partners.

Three additional reports corroborate the ITIF’s conclusion that data localization has negative economic impacts. First, a 2016 Center for International Governance Innovation (CIGI) and Chatham House study finds that restrictive data regulations increase prices and decrease productivity across a range of economies.7 Data localization and other common data flow barriers resulted in decreased total factor productivity (TFP), and the authors simulated that the lost TFP in downstream sectors, especially in the services sector, would reduce GDP by 0.10 percent in Brazil, 0.55 percent for China, 0.48 percent in the European Union, and 0.58 percent in South Korea in “the medium- to long-term.”

Second, a 2014 European Centre for International Political Economy (ECIPE) study estimates the economic costs related to proposed or enacted data localization requirements and related data privacy and security laws in Brazil, China, the European Union, India, Indonesia, South Korea, and Vietnam. The study found that: 1) the impact of proposed or enacted data restrictions on GDP is substantial in all seven countries;8 2) if these countries also introduced economy-wide data localization requirements, GDP losses would be even higher;9 3) the impact on domestic investment is considerable,10 and if these countries also introduced economy-wide data localization, the impact increases for most countries;11 and 4) if these countries enacted economy-wide data localization, higher prices and displaced domestic demand would lead to consumer welfare losses.12

Third, looking at Europe specifically, a 2016 ECIPE study identified 22 data localization measures restricting the transfer of data between member states, including measures targeting company records, accounting data, banking, telecommunications, gambling, and government data, and identified at least 35 restrictions on data usage that could indirectly result in data localization. Comparing the economic impact of the European Union removing data localization measures against scaling them up across member states, the study finds that data localization reduces productivity and that the losses outweigh potential marginal gains within the domestic ICT sector.13 The removal of existing data localization policies is estimated to increase the annual GDP of individual member states’ economies by 0.05 percent in the United Kingdom and Sweden, 0.06 percent in Finland, 0.07 percent in Germany, 0.18 percent in Belgium, and 1.1 percent in Luxembourg. In the worst-case scenario, in which all cross-border data flows within the European Union are restricted, the study estimates that full data localization policies would shave 0.4 percent off the EU economy each year, with the impact varying across individual countries.

Emerging AI Cooperation Efforts

Benefits of Transatlantic Cooperation

It will be important for the Commission to recognize that transatlantic cooperation can provide Europe with the means necessary to achieve its AI innovation and adoption goals in several key areas. First, considering the United States and the European Union together make up roughly 50 percent of global GDP and around 770 million people, establishing agreed-upon standards for making public data sets available can significantly increase European businesses’ and researchers’ access to more data. It is important that this cooperation includes clarifying where GDPR allows data sets to be shared, an area of uncertainty that carries many risks for U.S. companies. Second, reviewing laws that govern copying data sets would increase clarity about when a company is permitted to retrain and share data.

Third, collaboration will promote the scaling up of skills and education, as both the United States and the European Union can tap into a larger pool of talent, resources, academia, and institutions. While some U.S.-EU AI-related research currently exists, the projects are relatively “ad hoc and materialize within existing scientific and technological research agreements and roadmaps.” For example, the United States remained the leading non-EU participant in Horizon 2020, the European Union’s seven-year research framework to implement and fund high-level EU policy initiatives, but there is significant room for increased collaboration. Right now, U.S. collaborative links with Horizon 2020 projects are found in only 2 percent of AI-related projects, 4 percent of machine learning-related projects, and 12 percent of deep learning projects.

AI features “winner-takes-most” dynamics in many industries, with companies that adopt AI technology at scale often being the quickest to reap the biggest benefits. The 10 percent of European companies that are the most extensive users of AI to date are likely to grow three times faster than the average firm over the next 15 years. MGI observes that this “winner-takes-most” dynamic applies across many countries.

In the white paper, the Commission expresses the intention to cooperate with like-minded countries, but also with “global players,” on AI, based on an approach grounded in EU rules and values (e.g., supporting upward regulatory convergence, accessing key resources including data, and creating a level playing field). The Commission states it will closely monitor the policies of third countries that limit data flows and will address undue restrictions in bilateral trade negotiations and through action in the World Trade Organization (WTO).

Possible Approaches to Transatlantic Cooperation

A reset in U.S.-European relations as envisioned by President von der Leyen offers the occasion, if not a realistic opportunity, to pursue the European Union’s suggestion of setting up the U.S.-EU Trade and Technology Council as a venue to discuss new areas of cooperation. The Commission lays out the even more daunting goal of negotiating a robust U.S.-EU agreement on AI. Guided by the wisdom that a journey of a thousand miles begins with a single step, it is nevertheless difficult to know exactly where to begin the process of negotiating greater compatibility between emerging AI regulatory initiatives in Europe and the United States. What is known is that this process will only grow more challenging as policies and regulatory choices on both sides of the Atlantic become more detailed and entrenched. Vehicle safety standards, which developed independently on both sides of the Atlantic despite sharing a common goal, offer a cautionary example of starkly contrasting regulatory frameworks that add significant costs and frictions for auto producers selling in the two markets.

Informative as to possible avenues of collaboration between the United States and Europe, the United States and the United Kingdom recently signed a bilateral declaration to establish a government-to-government dialogue on AI and further cooperation on AI R&D. The bilateral declaration builds on the 2017 U.S.-UK Science and Technology Agreement and is aimed at setting up new public-private partnerships with nationals from both countries to drive advances in a common AI R&D ecosystem. Under this bilateral understanding, the United States and the United Kingdom will focus on “priorities for future cooperation, particularly in R&D areas where each partner shares strong common interests (e.g., interdisciplinary research and intelligent systems) and brings complementary challenges, regulatory or cultural considerations, or expertise to the partnerships.” The two countries have agreed to join forces on “solving challenging technical issues, and protecting against efforts to adopt and apply these technologies in the service of authoritarianism and repression.” Injecting similar collaborative AI R&D initiatives into the U.S.-Europe bilateral relationship may be a way to engender more trust and a mutual stake in the success of AI innovation in Europe.

Multilateral Cooperation

Transatlantic cooperation on AI regulation, if it occurs, will take place in the broader context of tentative multilateral efforts to explore new disciplines for digital trade in the WTO and OECD, with varying degrees of connection to AI. On January 25, 2019, 76 members of the WTO announced that they would commence WTO negotiations on trade-related aspects of electronic commerce. The declaration does not mention AI but aims “for the multilateralization of new WTO disciplines and commitments relating to e-commerce.” Negotiators from many of the 76 countries are considering commitments governing free data flows, which, as discussed above, affect the ability to pursue innovative solutions through AI applications. WTO plenary meetings occurred in October and November 2020, with Australia, Japan, and Singapore serving as key leaders. During the November 17 meeting, Ambassador Kazuyuki Yamazaki of Japan stated that a clean text is within reach in some leading areas, such as online consumer protection, electronic signatures and authentication, and spam, and that the initiative could show good progress the following month. Even as proposed data provisions in the text are controversial, Ambassador Hung Seng Tan of Singapore believes the consolidated text will help advance e-commerce negotiations in 2021.

The OECD is collaborating with G20 countries that are taking steps to advance trustworthy and democratic AI principles. In 2019, the G20 trade ministers and digital economy ministers met and discussed ways to implement productive digital policies. The ministers expressed a commitment to a “human-centered approach to artificial intelligence, guided by the G20 AI Principles drawn from the OECD Recommendation on AI.” A 2020 OECD report finds that G20 countries have recently engaged in a range of AI initiatives that address multiple G20 AI Principles at once, with many of the initiatives focused on AI R&D.

In June 2020, the OECD announced it will house the Secretariat of the Global Partnership on Artificial Intelligence (GPAI), a group of stakeholder experts from 16 countries originally spearheaded by France and Canada. The first annual GPAI Multi-stakeholder Experts Group Plenary was hosted virtually by Canada on December 3–4, 2020. A press release from the French foreign ministry following this summit described the GPAI:

The purpose of this international, multi-stakeholder initiative is to promote a responsible use of AI, with due regard for human rights, inclusion, diversity, innovation, and economic growth. To achieve these goals, member countries are working, through GPAI, on bridging the gap between theory and practice, supporting research activities, and the application of reliable artificial intelligence. Democratic values, human rights, responsibility, transparency, diversity, inclusion, and protection of privacy are at the heart of GPAI’s values and work.

Because of heightened political interest in the United States and Europe in addressing the Covid-19 pandemic through technological innovation and other approaches, transatlantic cooperation on health applications for AI could also offer a possible starting point for sharing regulatory principles and best practices during emergency health situations. An urgent short-term goal for GPAI experts will be to investigate how to leverage AI for a better response to and recovery from Covid-19. The OECD has already identified areas where AI technologies and tools can assist Covid-19 responses, along with key recommendations for governments and stakeholders. The OECD anticipates cross-fertilization between the OECD and GPAI in working toward trustworthy AI.

Other Possible Bilateral Initiatives

It will be important for the United States and Europe to explore a shared understanding and description of which AI applications should be defined as “high-risk,” since this concept pervades European thinking on AI regulation and, so far, has not been clearly defined. A joint understanding, even if subject to different frameworks on either side of the Atlantic, would provide more clarity for companies that want to operate in both the United States and the European Union and would lay the foundation for cooperation on regulatory obligations: a joint approach to regulating “high-risk” AI once both Europe and the United States have resolved what “high-risk” means.

It is not known whether the European Union will adopt a scheme that requires AI applications to be trained exclusively with GDPR-compliant data in order to gain entry into the European market. If the European Union does adopt this restriction, it may be necessary for the United States to pursue an agreement that would permit U.S. companies to train their AI applications on data that is not exclusively EU data, so long as the application is certified to meet certain benchmark requirements when tested, such as non-bias and non-discrimination. This sort of “equivalency” agreement, in the vein of a mutual recognition framework, if achievable, could help shore up a more predictable environment for businesses that are active in AI and rely on transatlantic data flows.

The U.S. government should work in parallel to develop industry-standard benchmark tests to determine the risk of bias, discrimination, and other potential hazards of AI. This approach could function as an alternative to the European Union’s ex-ante conformity assessment approach and would seem to be in sync with Europe’s declared intent to open up large data sets of non-personal data to widespread public use. A better bilateral understanding of different approaches to AI regulation could also help address the threat that Europe will move to require U.S. firms to locate data in Europe. Open data sets could be used by U.S. companies, along with qualifying non-EU data, to train high-risk AI applications to meet benchmark tests agreed to by U.S. and EU regulators.

It will be important to analyze the type of European regulatory authority that will be set up to administer ex-ante assessments and what access this agency would have to proprietary data and algorithms. The United States and European Union should establish a scheme in which U.S. companies can self-certify as meeting EU high-risk AI regulatory requirements, just as companies were once able to self-certify under the now-defunct Privacy Shield. A second-best option would be to ensure that ex-ante conformity assessments are carried out by EU regulators who operate with strict confidentiality. Contracting this responsibility to third parties that may have a commercial interest in the technology they are reviewing would be risky and difficult for many companies to accept.

This model should be used in negotiating new free trade agreements (FTAs), such as the U.S.-UK FTA, and should be updated to remain consistent with developments in U.S. and UK regulatory regimes and with agreed areas of bilateral scientific collaboration under the new bilateral technology agreement mentioned above.


The European Commission is scheduled to release draft AI legislation during the first quarter of 2021. By that time, the United States will have had a chance to assess Europe’s new Digital Services Act (DSA) and Digital Markets Act (DMA) proposals, multifaceted pieces of draft legislation aimed squarely at reining in the business models of large online platforms, most of which are U.S.-based. Unless the Commission moves away from its current approach of heavy-handed regulation, it is probably safe to assume these regulatory moves will inject further friction into the U.S.-EU tech relationship and possibly threaten President von der Leyen’s vision of a reset in the transatlantic relationship.

New administrations are a time for new beginnings, but there is a distressingly long list of persistently intractable transatlantic trade disputes that includes Boeing-Airbus, digital services taxes, U.S. national security tariffs on steel and aluminum, and the EU courts’ invalidation of the Privacy Shield framework for data flows. Without progress on some of these long-standing trade issues, it is uncertain whether U.S. government leaders would see it worthwhile to pursue another frustrating initiative in the form of establishing a U.S.-EU Technology Council. That being said, bilateral discussions on U.S. and EU AI capabilities, aimed at creating global convergence around emerging U.S.-EU AI regulations and standards, offer the promise of helping the United States and the European Union prevail in the geopolitical competition between China’s illiberal model of AI regulation and democratic states’ values-based model. Collaboration in these areas could help both Europe and the United States grow their economies through innovation. Europe has offered an outstretched hand seemingly in the direction of convergence, but it remains to be seen whether the laudable gesture will be accompanied by sufficient negotiating flexibility to interest weary negotiators.

Meredith Broadbent is a senior adviser (non-resident) with the Scholl Chair in International Business at the Center for Strategic and International Studies in Washington, D.C.

The author would like to thank William Reinsch, Jack Caporal, Jasmine Lim, Seán Arrieta-Kenna, and Will O’Neil for constructive comments and research support, and all participants in CSIS’ roundtable on Artificial Intelligence.

This report is made possible through the generous support of the Computer & Communications Industry Association (CCIA).

This report is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2021 by the Center for Strategic and International Studies. All rights reserved.

Please consult the PDF for references.

Meredith Broadbent
Senior Adviser (Non-resident), Scholl Chair in International Business