France’s AI Action Summit

The Paris AI Action Summit took place on February 10–11, following the UK AI Safety Summit (November 1–2, 2023) and the AI Seoul Summit (May 21–22, 2024). It was the year's most anticipated international event on artificial intelligence.
Q1: How is the AI Action Summit different from the UK and Seoul summits?
A1: The AI Action Summit, which was hosted by France in Paris on February 10 and 11, marks a significant narrative shift from its two predecessors, the UK AI Safety Summit and the AI Seoul Summit, held in November 2023 and May 2024, respectively. The shift was already clear from the name change: moving away from an exclusive focus on safety to one on “action,” which is to say, AI adoption. The perspective that French President Emmanuel Macron wanted to convey was one of optimism and opportunity around AI rather than one of safety and risk management. The overarching themes of the summit were innovation and investment in AI, its impacts on culture and creativity, environmental sustainability, and the need to make AI inclusive and accessible to all, including in the Global South.
The French government also decided to “go big.” Whereas the UK summit was capped at 100 individuals representing 30 countries—a smaller group of countries with more advanced AI capabilities—the French AI summit included more than 1,000 participants, including several dozen heads of state, from more than 100 countries. This made reaching a consensus much more difficult but was helpful for France’s goal of attracting interest and investment in its AI sector. France ultimately announced that it had secured commitments to invest more than €109 billion over the next few years.
Q2: What was agreed upon at the Paris AI Action Summit, and who signed the final declaration?
A2: A total of 61 countries and regional blocs signed the final declaration “on inclusive and sustainable artificial intelligence for people and the planet.” However, the news was dominated by those who did not sign, specifically the United States and the United Kingdom. The two countries decided not to sign for very different reasons. A UK spokesperson told the Financial Times that the United Kingdom’s refusal was because the statement “didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security.”
Conversely, the United States reportedly chafed at the declaration’s excessive focus on multilateralism and the references to inclusivity, diversity, and environmental challenges. Notably, the United States’ main AI rival—China—did sign the declaration.
Overall, this declaration was the most inclusive of the three summits’: Bletchley’s had 29 signatories and Seoul’s only 11 (including the European Union), mostly from the West. The declaration is a very short statement on the need to promote AI accessibility to reduce the digital divide, ensure a positive impact on labor markets, make AI sustainable, and enhance international cooperation on AI governance. The concrete actions proposed to secure these objectives are comparatively few. In particular, the signatories agreed to launch a public interest AI platform and incubator to address the digital divide, step up engagement on the environmental impact of AI (without substantial commitments), and create a network of observatories to study the impact of AI on job markets. The next AI summit is currently expected to be hosted by India. However, given the lack of tangible results, some European government officials have told CSIS that they are considering calling for an end to the summit series, focusing instead on existing multilateral commitments under the UN Global Digital Compact, the Global Partnership on AI, and the Independent International Scientific Panel on AI.
Q3: How did stakeholders respond?
A3: Reactions were unsurprisingly mixed. The official activities of the summit were accompanied by some 100 side events spanning roughly 10 days, some organized by the French government and others independently of it. The whole community of civil society, academia, industry, standardization bodies, AI safety institutes, and international organizations was incredibly active: making connections, launching initiatives, and discussing the current and future challenges and opportunities of the AI ecosystem. While industry and investors were eager to showcase their innovations and find business opportunities, the rest of the stakeholders seemed to be having an entirely separate, although very lively, summit. Some of these side events focused on the novel risks posed by frontier AI, with a particular focus on loss of human control.
This is the main reason that many civil society organizations that were enthusiastic after the UK AI Safety Summit expressed extreme disappointment at the final French summit declaration and the summit’s focus on advancing the AI race. In the year since the UK summit, the capabilities of frontier AI models have increased enormously, and so have the risks and the need for guardrails. Instead, leaders preferred to talk about action, the need to accelerate AI progress, and an optimistic vision of innovation. The Future of Life Institute went so far as to call on countries not to sign the declaration, deeming it extremely vague and lacking in ambition, and Anthropic CEO Dario Amodei called the summit a “missed opportunity.”
Q4: What does the summit mean for France?
A4: The summit was characterized by major power plays, a sign that AI has increasingly become a source of geopolitical competition instead of a reason for international cooperation and multilateral governance. France’s President Emmanuel Macron clearly used the summit as an opportunity to portray France as the AI leader in Europe and the star host of the summit.
In his speech at the Grand Palais on February 10, Macron emphasized his positive view of AI as a force of progress for humanity and France’s intention to invest massively in it. Macron outlined his own vision of the European model for AI: one that protects intellectual property, enhances creativity, and protects children and teenagers. Notably, he chose these to represent the European model instead of fundamental rights, safety, trustworthiness, and human-centric AI, the key concepts that have underpinned the European Union’s approach ever since the European Commission’s 2020 white paper on AI, which led to the EU AI Act and its related policy initiatives. In talking about AI’s energy needs, he emphasized how well placed France is to face this challenge, thanks to its nuclear power plants. “We don’t need to ‘drill baby, drill,’” he said, in an explicit reference to President Trump’s inaugural speech, “here we just ‘plug baby, plug!’”
Q5: What did Vice President JD Vance say?
A5: The Trump administration, represented at the summit by Vice President JD Vance, clearly agrees with Macron’s proposed shifts to deemphasize rights and risks while emphasizing opportunity, limited regulation, and investment in AI and related sectors such as energy. Vance’s speech at the leaders’ summit, which marked his first foreign address, presented the United States as committed to preserving its leading position in AI and willing to work with partners (but not with authoritarian regimes that “weaponize AI for censorship, surveillance and propaganda,” a hint at China), though only on its own terms. In this context, Vance underlined that hasty legislation could kill or block AI, calling on Europe to look at this revolution with optimism rather than regulation.
More specifically, Vance pointed the finger at Europe’s Digital Services Act for pushing U.S. companies toward censorship and the policing of “so-called misinformation,” and he criticized the European Union’s General Data Protection Regulation for damaging small U.S. and European companies by making it so complicated to place their products on the EU market that they are discouraged from doing so. In other words, he made it clear that the United States is willing to work with international partners but will forcefully resist any attempt to put obstacles in its path to harnessing every possible opportunity of the AI revolution.
Q6: What does the summit mean for Europe?
A6: President of the European Commission Ursula von der Leyen’s speech on the European Union’s behalf came right after Vance’s, and he did not stay to hear her remarks. The move clearly symbolized how at odds the European Union appears with the United States on AI policy.
In some ways, this was odd since von der Leyen also chose to focus on action and optimism. She affirmed the European Union’s will to have a clear place in the global AI race and recalled that since the AI revolution is still at its beginning, its leadership is “up for grabs.” She underlined the European Union’s strengths in the sector: science and tech mastery, collaborative science and talent, the ability to leverage the power of open source, and a booming startup ecosystem. She listed the European Union’s initiatives to “supercharge” AI uptake, including AI gigafactories, supercomputers, massive public investment that will boost private investment, and the project of establishing an equivalent of the European Organization for Nuclear Research (CERN) for AI. She concluded by highlighting the AI Champions initiative, a private undertaking of over 60 European providers, investors, and industry players pledging €150 billion for AI, and announced the European Commission’s InvestAI initiative, which will top these figures up by another €50 billion for a total of €200 billion.
Q7: Has the concept of AI safety really disappeared from the summit?
A7: The concept of AI safety seems to have disappeared from the official summit documents and to have been substantially downplayed in the leaders’ speeches. Indeed, days after the summit, the UK AI Safety Institute changed its name to the UK AI Security Institute, dropping the word “safety.” During the summit, rumors were widespread that the U.S. institute would soon do the same.
However, the issue was still very present among the various stakeholders during the side events. Professor Yoshua Bengio, a Turing Award–winning AI academic and the most cited computer scientist in the world, coordinated the publication of the first International AI Safety Report, which was released just one week ahead of the summit and saw the participation of over a hundred stakeholders in assessing AI capabilities and risks. Following a busy schedule at Davos, Bengio was omnipresent at the summit’s side events, discussing the new challenges and systemic risks around the most powerful frontier AI models.
At least some appear to be listening. Notably, China announced that it had established a new body as its answer to calls for a Chinese AI safety institute, though the organization does not function in quite the same way as the U.S. or UK institutes.
At the same time, many companies continue to take steps focused on combating at least some of the risks from AI. Many AI companies that signed frontier AI safety commitments during the Seoul AI Summit have complied with them. However, none of these firms presented their frameworks at the event; rather, they used the stage at the summit to talk about innovation and to showcase new AI tools and products they were launching. All in all, the discourse on AI safety seemed to continue among very motivated international stakeholders, but much more in the background and far from center stage.
Laura Caroli is the senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies in Washington, D.C.