Inside Europe’s AI Strategy with EU AI Office Director Lucilla Sioli

This transcript is from a CSIS event hosted on August 28, 2025.

Laura Caroli: In an era of geopolitical competition on AI and of a race between the United States and China, Europe is often perceived as lagging behind. And its voice seems increasingly silenced. But is it really the case? Is Europe only about regulation?

My name is Laura Caroli, senior fellow at the Wadhwani AI Center here at CSIS. Today I have a very special guest joining us at CSIS to answer this question: Lucilla Sioli, director of the EU AI Office, connecting from Brussels to help us better understand Europe’s AI strategy and Europe’s place in the AI race. 

Prior to becoming head of the AI Office, Dr. Sioli was director for AI and digital industry within the European Commission. She holds a Ph.D. in economics from the University of Southampton and one from the Catholic University of Milan, Italy. And she has been a civil servant with the European Commission since 1997. Dr. Sioli, welcome to CSIS. And thank you so much for agreeing to speak to us today.

Lucilla Sioli: Thank you all for inviting me.

Dr. Caroli: Let me start with a personal question: What is the path that led you to the role that you have today?

Dr. Sioli: Well, the – (laughs) – as you know well, because you were also involved at the time in some of these inter-institutional discussions in the European Union, the AI Act, which was agreed a couple of years ago by the European co-legislators, foresaw the possibility to set up an AI Office in the European Commission. And so about a year ago, this office was set up. And the objective of the office is to make sure that Europe has the means and the capability to develop artificial intelligence from the point of view of research and innovation, and across, let’s say, the supply chain, or the AI stack, that is necessary for the development of AI.

And at the same time, the office is responsible for the implementation of the AI Act. We look at its rules as rules for trustworthy artificial intelligence, and we think that it is imperative for innovation to make sure that there is trust in the technology. So we look at the innovation policy and the trust policy as two sides of the same coin. So we engage in both the development of innovation and the implementation of the AI Act. We’re also responsible for the international engagement. And we pay particular attention also to the development of technology for the Global South. So this is, in essence, the task of the European AI Office, which is, as I said, set up in the European Commission in a department called DG Connect.

Dr. Caroli: So it has a broader structure than what people normally think. It’s not only about overseeing rules, but also about innovation, right?

Dr. Sioli: Yes. I would say that, you know, we have at least three or four units out of six which engage in the development of policy and in funding research and innovation at the European level in AI. We put in place the policy and the strategy of the AI Continent Action Plan, and maybe I’ll have the possibility later to tell you about the next steps in that sense as well. So it’s very important to remember that the AI Office has a general objective of stimulating the development and the use of AI in the European Union, because we think AI is very important for our economy and society.

But at the same time, we think that in order to make sure that businesses and citizens use AI we have to make sure that people trust the technology. And this is the reason why we look at our rules on artificial intelligence as an integral component of our innovation policy.

Dr. Caroli: OK. Thank you for that.

Let’s talk, indeed, about the innovation side of it, because here in the U.S. everybody is focusing now on the very important document – policy document, the AI Action Plan that the U.S. administration released one month ago. But Europe, indeed, has also its own AI Continent Action Plan that was released in April. And comparing these two documents, one can see some differences, some similarities. Can you walk us through the AI Continent Action Plan?

Dr. Sioli: Yes. Let me – yeah, let me first, indeed, tell you about the main priorities of the European AI Continent Action Plan.

In this action plan we, first of all, look at the infrastructure that is necessary to develop artificial intelligence. So we proposed the development of what we call AI factories and, in the future, gigafactories, which are basically certain services that are offered leveraging the computing power that we have available in the European Union. You have to think that, differently from the United States, in the European Union we can benefit from a public network of supercomputers, which are really leading supercomputers. Four of them, for example, are among the top 10 in the world. And we upgrade these supercomputers to AI capabilities through more GPUs, which are these special chips for artificial intelligence. And we create what we call AI factories, which are basically supercomputing centers that also offer certain services in terms of training, in terms of engineering services, and so on.

And then we propose also, for the future, the setup of what we call gigafactories. These are very advanced supercomputers integrated with datacenters. We want to see at least four or five of them in the European Union. And these represent, you know, the massive supercomputing power that is needed to train the very advanced artificial intelligence models.

Now, the AI factories will be public, so they are basically available. The supercomputing capacity and these services are available for free to universities and researchers, but also to startups and SMEs; while the gigafactories will be a bit different, because the public sector will only support a part of them, and so most of them will be managed by private actors. We had a call recently to see which private actors in the European Union would be interested in investing in gigafactories, and we were really surprised and even overwhelmed by the replies we had. We had more than 76 expressions of interest for the gigafactories, which is a very big number. Member states of the European Union are directly interested themselves. And this really shows that there is a lot of interest in investing in artificial intelligence in the European Union.

But then the other step, of course, once the models are trained, then we also have to think about the adoption and the use of AI in our economy. And so we announced an Apply AI Strategy, which will be published next month and which will really look at removing barriers to the adoption in the strategic sectors of the European economy like health care, for example, which is a sector which would be greatly, greatly impacted by artificial intelligence; but also robotics, manufacturing, and, you know, the main strategic sectors of the European Union.

We also look at skills. And we look at facilitating access to data. Now, this is – many of these areas are not very different from what you can find in the U.S. action plan. For example, we – both in the European Union and in the United States, we encourage the development of open-source artificial intelligence. 

We want, as I said earlier, to stimulate the adoption of AI by the public sector as well. And we want to create centers of excellence, regulatory sandboxes. We do that through the AI Act, but this is very much – you know, in every member state of the European Union there will be one – at least one regulatory sandbox on AI. And we both pay a lot of attention to the possibility of testing and experiments in AI before it is put on our markets.

On both sides we look at workers, we look at skills, and we support education programs. And we are very interested in working on AI in science and insights for AI. I already said that we fund research in AI, but we will also be publishing a strategy on AI in science together with the strategy on adopting AI in the economy. So there are many, many similarities in our actions. And it is very – indeed, very interesting to see and to compare, because I think at the end – at the end we all have the same objective, which is to make our economies grow and be more competitive and be more sovereign, including through artificial intelligence.

Dr. Caroli: Thank you. It’s very interesting to hear your opinion about the many similarities in the two plans.

But then let’s get into the real question. The eternal debate: Is regulation what really blocks Europe from being a leader in innovation? What is your opinion on this debate?

Dr. Sioli: Well, I think that the regulation of artificial intelligence – well, I don’t see that piece of regulation as an impediment for the European Union, or as the reason investment in AI has been delayed relative to other parts of the world. I think that the issue is probably a historical issue of our markets. It took some time, of course, for the European markets to become a single market. And, you know, the European Union is made of 27 member states. In some areas, for example employment, but also in capital markets, there is still a lot of fragmentation in the European Union, which is an issue for being able to scale up innovation in general.

And so basically the big platforms are mostly in the United States. We do have platforms in the European Union, because we do have Spotify, Booking.com, and others in the European Union as well. But the biggest platforms are in the United States. And I think that the delay has taken place more because of the fragmentation of the European markets than anything else. Now, the development of AI obviously also benefits from the existence, for example, of platforms, because it means the existence of cloud computing, of large amounts of data, and of other resources being already available. And these are the key resources for training artificial intelligence models. And that’s why the European Union has experienced a delay as well.

And then, last but not least, I think that in the European Union we need to become stronger at creating ecosystems. You know, there is more of a digital ecosystem in Silicon Valley, in the area of Seattle, in certain parts of the United States, than we have in the European Union. And this is certainly an area we’re working on, and which we hope to be able to strengthen, including through our initiatives on the AI factories.

Dr. Caroli: Thank you for that.

And indeed, even though ecosystems are more granular, more spread around Europe, there are some notable initiatives happening on AI. Can you give us some examples that you see from industry?

Dr. Sioli: Well, certainly the number of expressions of interest received for the gigafactories is a huge example of the wish to invest in artificial intelligence in Europe. And these have come from single actors, but also groups of actors, which means that ecosystems are forming between energy players, telecom players, players in artificial intelligence, and players in other sectors. I mean, in Europe you have organizations in the retail sector which are very strong in digital and are even among our strongest digital players. So I think there are many examples.

I can tell you that the reason why we also push for gigafactories is that the supercomputing capacity of our AI factories is already completely booked by demand for the development of artificial intelligence. So we don’t have a demand issue here, which really shows the interest and the amount of development that is taking place. So, you know, I don’t want to point to one or the other company, but there are very interesting developments taking place in the European Union. And I hope that we will be able, with our policies, to support them even further and make sure they remain, and that we retain the talent that we have in the European Union.

Dr. Caroli: Thank you.

Do you – how long do you think it will take to set up the gigafactories? Because it’s a very interesting project, but we want to know more.

Dr. Sioli: Yes. So we had a call for expressions of interest, and in June this was finalized. Now we are discussing with the main actors and member states, because the gigafactories will be partly supported by the public sector, including by the member states, and we will have a real call by the end of the year. And then, you know, in the course of ’26, we will be able to make the selection. And GPUs will need to be acquired; we will see how long it takes to receive the GPUs. And then the gigafactories will be able to start operating.

Dr. Caroli: OK. So it will still take some time. 

Dr. Sioli: It will take some time, but as I said, we have the supercomputers already in place. We have the AI factories, which are already being upgraded. So we have exascale supercomputers. And so we do have an infrastructure that is quite good to bridge this time until the gigafactories are actually in place.

Dr. Caroli: Great. Thank you so much. Very informative.

Now let’s get to a point that many in our audience will be looking for in this interview: regulation. The European Commission just one month ago released a major policy document to implement the EU AI Act, the Code of Practice. Can you walk us through what the Code of Practice is, and what the response to the publication of the final draft was from industry and civil society?

Dr. Sioli: So the Code of Practice is a voluntary mechanism for companies to sign up to, to implement a certain part of the AI Act, in particular the part which relates to obligations for providers of general-purpose artificial intelligence models. There are transparency obligations that are foreseen, as well as obligations for the most advanced models, which we call general-purpose models with systemic risk. We invited international experts; we identified and appointed 17 experts to come up with a Code of Practice which would detail how, in practice, providers of artificial intelligence models can respect the obligations in the AI Act.

So the Code of Practice is a very practical way, as the term says itself, of making sure that the models are compliant with the Artificial Intelligence Act. As I said, it’s not a document written by the European Commission but it’s a document written by independent experts that the European Commission and the member states of the European Union have recently endorsed. So they have recognized that it’s adequate as a mechanism for demonstrating compliance with the AI Act. And it was published at the end of July.

Now, we had more than 25 companies signing up to express their intention to follow the Code of Practice, which means that when they put models on the market of the European Union they will have to follow certain steps which are indicated in the Code of Practice, which will be an indication also for us regulators that they are compliant with the rules of the AI Act. These are not rules that apply yet to all the models on the market; for the moment, they apply to the models that are placed on the European market after the 2nd of August 2025. For all the models which are already on the European market, companies have more time to show compliance with the AI Act. They will have time, actually, until the summer of 2027.

Dr. Caroli: Yeah. Thank you for specifying this because there has been a lot of confusion over this area of AI.

In particular, I would like now to address another element of confusion: What happens to a company that doesn’t sign the Code of Practice? Because we’ve read conflicting arguments there.

Dr. Sioli: OK. So those who don’t sign the Code of Practice still have to comply, obviously, with the rules and the obligations of the AI Act, and they will have to demonstrate their compliance in other ways. And we will be asking them how they are complying with the AI Act. So the expectation is that they have in mind different ways for compliance.

There is no obligation to follow the Code of Practice. Instead, it’s a voluntary mechanism. It just simplifies things for everybody. And in fact, I have to say that the leading providers – like OpenAI, Anthropic, Mistral, and others – have all signed up to the Code of Practice. Those who didn’t sign up will have to find alternative ways to show compliance.

Dr. Caroli: We also heard that companies which signed will benefit from an increasing bond of trust by the Commission and maybe they will be more free to implement these measures internally. Is it – is it accurate?

Dr. Sioli: Well, let’s say that those who sign are demonstrating willingness to show compliance and to respect, of course, the rules of the European Union. And for us, it means that there is a certain trust that goes in both directions. I mean, all these companies have also participated in the drafting of the Code of Practice, and they participated proactively, so they know its content very well.

And so there is a certain amount of trust, and it’s much simpler for us sitting in Brussels to be able to check, because the Code of Practice is basically like a checklist, whether companies are complying with the obligations or not. If a company decides to comply in other ways, it’s also more complicated, so we will have to ask more questions to understand how these obligations are respected and implemented. And so it’s not anything particularly unfriendly – (laughter) – because there is no reason to be unfriendly; it’s just that there is a different kind of dialogue and conversation, because there will be more information that our staff will need to process.

Dr. Caroli: Thank you so much for clarifying that, because I really feel it was necessary. There was a lot of confusion among stakeholders.

Another – (clears throat) – sorry – tricky question about the AI Act concerns the rumors about possible delays in its implementation, or a possible simplification of the rules. What can you tell us in this regard?

Dr. Sioli: So, you know, the AI Act is characterized by the entry into application of different sets of rules at different points in time, which may appear a little bit complicated. So, in February, the provisions on prohibitions entered into application: we do have a very limited number of uses of artificial intelligence that we actually prohibit, and those prohibitions entered into application in February. On the 2nd of August, the rules on general-purpose artificial intelligence models entered into application.

The next step, for 2026, is the entry into application of the rules related to high-risk systems, as well as the transparency rules. Now, the ones on high-risk systems are rules which are based on the use of standards. So, basically, if companies develop, or intend to put on the European market, artificial intelligence systems which may bring certain risks from the point of view of violations of fundamental rights and safety, then before they place them on the market they have to show that they have taken certain steps to make sure that these systems are compliant, in terms of having had a certain data governance or human oversight, the system being robust, and so on.

And to facilitate this kind of assessment, which most of the time is actually self-assessment, the European Union has decided to put in place standards. These standards are developed by standardization organizations like CEN and CENELEC, which carry out standardization activities that are also open to the participation of companies from other parts of the world. I would like to underline this, because there have been companies from all over the world participating in the discussions in CEN and CENELEC. And they have until the end of August to let us know how much progress they have made in the development of the standards. So we are now analyzing where they are with the development of the standards, and we need to make an assessment to see whether the standards will be formally ready for companies to be able to implement them, with a view to putting their systems on the market in the summer of next year.

So we are still making this assessment, and there are companies asking to postpone these rules because they don’t think that the standards are going to be ready. As I said, we are making this assessment and we will come back very, very soon to this issue to let everybody know what the Commission has decided and what the Commission will propose. Because let’s not forget that this date is part of the legislation, so the Commission will need to make a proposal to the co-legislators, and together they will have to decide if they want to postpone the date or not. So this discussion is ongoing, but the proposal of the Commission should come relatively soon.

Dr. Caroli: Thank you for this very comprehensive explanation. I think it’s very important to underline that U.S. companies, even, are participating in standardization in Europe, so they are also active in Europe in that area.

And thank you for explaining the process. I think many of us will be watching very closely in the upcoming weeks and months about your assessment.

And for the audience, when we talk about co-legislators, we mean a procedure involving the European Parliament and the governments of the member states in the Council, which will need to negotiate together to eventually decide on a possible postponement of the implementation dates, right? Yeah.

Dr. Sioli: Indeed.

Dr. Caroli: Let us now move to trans-Atlantic relations. Here in the U.S., Europe is often described by the administration as a continent of fear and overregulation. How would you react to this perception on AI specifically?

Dr. Sioli: Well, as I said from the very beginning, I continue to believe that there is no tradeoff between innovation and regulation, and I believe that trust in AI is an important element of the innovation process of artificial intelligence. So we – you know, we have – when we made a proposal for the Artificial Intelligence Act, we had very much in mind that we were actually regulating the use of a technology and paid a lot of attention that these rules would not hinder innovation.

First of all, the approach is based on risk. So we only address our rules to those applications of artificial intelligence that bring certain risks. And there is a list of use cases in the annex of the AI Act. So I would say that 90 percent of the artificial intelligence market is not regulated, and this is very important to keep in mind. The rules are really only about those applications that, in our opinion, can bring about risks that are particularly impactful for our society.

Secondly, we don’t regulate research and development, for example. This is exempted from the regulation of artificial intelligence.

Third, we have one set of rules in the European Union. You have to think that the alternative to having the AI Act would maybe be to have 27 AI Acts. So it’s very important to be able to have one set of rules instead of fragmented rules across different states.

And last but not least, again, I want to go back to the replies – (laughs) – that we received for the gigafactories, which really show that the AI Act is understood. Its impact is understood and there is a lot of willingness to invest in AI.

Dr. Caroli: Yes. Thank you for that. Indeed, we see here in the U.S. now the risk of fragmentation – of having states regulating, each in a different way. So it’s a bit – (laughs) – a part of the debate on AI here, on AI policy, and a tricky issue in the U.S.

But talking still about U.S. relations, what is the state of cooperation on AI right now?

Dr. Sioli: Well, we continue to exchange with the United States. We are allies, you know, and we – you know, we continue to collaborate and have a dialogue in our trans-Atlantic relationship.

Maybe one area that would be interesting to underline is the fact that we continue to collaborate, and this is maybe where the collaboration has been the closest most recently, through what we used to call the Network of AI Safety Institutes. I say “used to call” because, as you know, in the United States and in the United Kingdom the name was changed from safety to security, but it was not changed in other places, like the European Union and Asian countries. So we are reflecting collectively on what new name we could give to the network.

But just to say that we met recently in Canada, and we continue to have joint testing, for example, of models. It doesn’t mean that each one of us is doing the same thing. I mean, we all have very different profiles and very different tasks and objectives. But we are all interested in moving forward the science of evaluation. It’s important to be able to evaluate these models that are coming onto the market. I think it’s in everybody’s interest, including for the development of standards in relation to these models. And therefore, we have common objectives. And our collaboration is very important because we exchange methodologies and we push the science forward. And, as I said, we also participate in joint testing exercises when we want to, you know, understand the behavior of models, maybe in different languages or in different settings.

Dr. Caroli: Thank you. It’s very informative and also reassuring to hear that the work of the AI Safety Network is still continuing.

Still talking about the global scene, China has recently taken a more proactive stance in publishing, just three days after the U.S. AI Action Plan, a Global AI Governance Action Plan. How do you perceive this renewed stance of China as an inclusive and benevolent leader in global AI cooperation?

Dr. Sioli: Well, frankly, I do not know yet how these initiatives by China will actually shape up and develop. Any kind of international collaboration, or proposal for international collaboration, needs to be seen positively, because it’s a dialogue, and a dialogue has to be inclusive if it is to be effective. So from this point of view, it’s a very good idea.

On the other hand, however, being responsible in the area of AI doesn’t only mean making sure that a certain model is safe from the point of view of, I don’t know, biological or nuclear impacts. It is also about the way we deal with data, with personal data. So for me, having data protection policies is very important. And copyright is very important because, as you know, data are used for the training of models, and copyright rules should be respected. So I would like to see these kinds of elements brought into the international discussion as well, although I understand that that could be uncomfortable, maybe, for certain parts of the world.

Dr. Caroli: Yeah. There are still, I think, many differences among us when it comes to details.

What about the Global South? The Global South often is left out of AI global conversations, but more recently is really stepping up to be more involved in AI policy discussions. What do you think about this?

Dr. Sioli: Well, as you say, most recently this issue has become very, very important. And this is obvious in the work that is taking place at the United Nations with the Global Digital Compact, and the scientific panel, and the various steps that are being taken at the United Nations. In the European Union, we try to be active on the Global South. So we have an initiative that we call Artificial Intelligence for the Public Good, which is about developing algorithms in certain areas of public interest: for example, cancer recognition for healthcare, or algorithms for disaster management and preparedness.

And the idea that we have is that these algorithms could be given to the Global South, so they could be retrained with data from the Global South, and therefore they could be used locally. And so there is a need to develop, of course, the skills and capacity in the Global South, but there is also a need to share the development of certain algorithms that can have a very important impact, given the climate events that we are living through and the healthcare disruptions that we may be living through. I think this is going to be extremely important for the future. And the European Union is very active on this, not only in the United Nations but also in the G-7 and in other fora.

Dr. Caroli: Thank you. And with the India summit coming very soon, I think this role of the Global South, and its global impact, will be felt even more in the policy debate on AI.

But now we are almost over our time limit, so I would like to ask you: What message do you want to convey to our audience today?

Dr. Sioli: Well, I would like to tell the audience that Europe is gearing up for a lot of investment in artificial intelligence, which is going to bring about important developments. We have the talent, because we have excellent researchers. We have probably more engineers per capita than any other continent in the world. We have about 7,000 startups in AI. We have the network of public supercomputers I mentioned earlier, and we are improving, of course, that computing power. So I think that we are creating an ecosystem and an infrastructure that is very attractive. Very often people talk about the race between China and the United States. I’m not sure the development of AI should always be seen as a race, but I want to flag the fact that the European Union is gearing up to participate and to be an important player, and one of the global leaders in this area.

Dr. Caroli: So Europe wants to be in the AI race too?

Dr. Sioli: As I said, it doesn’t necessarily need to be a race. For me, what is important is that the European Union has the capability, and the talent, and the strength that its economic performance deserves. And it’s an important market. You know, we are allied with the United States. Maybe we have a different relationship with China, but we are allied with the United States. And I don’t think we should compete with each other. I think we should be in a relationship, in a dialogue. We may have differences in certain views, and that’s perfectly normal, but we can invest together, and we can bring forward these developments together. It will be essential for the future, I think, on both continents.

Dr. Caroli: Thank you so much, especially for this message of unity and, you know, of hope. (Laughs.) Thank you so much, Director Sioli, for talking to us today and for connecting from Brussels. 

This concludes our event for today. Thank you to the audience for watching. Please stay connected with CSIS and our Wadhwani AI Center content. And have a nice rest of the day.

(END.)