The European Approach to Regulating Artificial Intelligence with MEP Dragos Tudorache, Co-Rapporteur of the EU AI Act
Gregory C. Allen: Good afternoon. I’m Greg Allen, the director of the AI Governance Project here at CSIS.
And today we’ve got a terrific guest, a member of the European Parliament, Mr. Dragos Tudorache, who is the co-rapporteur of the European Union’s AI Act, a landmark piece of legislation still in negotiation at the European Union, but that would be the first of its kind as a horizontal AI regulation.
Member Tudorache, thank you for joining us here today at CSIS.
Dragos Tudorache: Thank you very much for having me.
Mr. Allen: So, before we launch into the meat of your work as a parliamentarian, I wanted to ask you just about your own background. How did you come to be working at the European Parliament on AI issues?
Mr. Tudorache: Well, I’m a lawyer by profession. And certainly, I’m not a – I don’t know that much about the technicalities of AI. But I remember a couple of years ago, way before I came to Parliament, I was working for the Romanian government as a minister of interior. And I read an article about AI and about how AI is going to change our societies, our economies, and also, very interestingly, how it was going to change government. And how, in fact, our states, our governments will have to learn to actually become themselves some sort of platform states, because of the way they’re going to have to deliver services to a society that will expect that kind of services to come in that form from their government.
And I thought, that’s an interesting – an interesting thought. And I think that this is something that is going to shape a lot of the work, politically that we will do in the future. Then, of course, came the shifts in terms of geopolitics in the world, and the increasing importance of technology in the global conversations. And all of this led me, at the start of the mandate two and a half years ago, to setting as an objective AI and also the crossroad between, again, geopolitics and our interests as a union, and the way we will interact with likeminded partners like the U.S., and technology.
So, the work started, in fact, with the establishment of a special committee on artificial intelligence. And I very much pushed for the establishment of that committee, and then I chaired it. It was called AIDA, and it was a good place to educate ourselves as legislators on what AI was, what AI is, and to actually understand the deep effects of AI, but also how it interacts in different sectors with the current way of doing business, let’s say. And from that work then came the moment when the commission put forward a proposal for the AI Act. And then I pitched to be the rapporteur for it. And that is where we are right now, in the middle of the negotiations for what, as you rightly said, will be the first piece of horizontal legislation regulating AI in the world.
Mr. Allen: Great. So here in the U.S., AI regulation in the previous White House administration – the actual direction that came from the White House was to approach AI regulation with a light touch. And then on top of that, that regulatory approach has been entirely within the executive branch; there hasn’t been a major regulatory initiative at the legislative branch. And so that means that our regulations are more sector specific. So, the Department of Transportation is involved in regulating autonomous vehicles. The Department of the Treasury might be involved in regulating the use of AI in finance. But the EU is going after a horizontal approach. So, what does that mean, and how is it your hope that it’ll play out in the EU AI Act?
Mr. Tudorache: In fact, I think the moment that the commission started preparing itself mentally for regulating AI, I think initially they also looked at the tradition that we had in having product safety sectorial legislation and they kind of started taking a similar approach. But I think the way also the discussions in other multilateral fora were evolving in terms of producing general principles on AI, I think that’s when the commission realized that the better approach would be this sort of horizontal risk-based approach. And I happen to believe that this is the right way of regulating at this stage.
And then right now, what you have is a bit of a blend. In fact, the text is a bit of a hybrid between the approach the commission has taken so far in dealing with product safety and this sort of horizontal, risk-based setting of rules – of generic rules to apply across the board, but for specific use cases in different domains. So, it’s certainly an innovation, which of course makes the negotiations even more of a challenge. But I am confident the final product will be something that would help set a course and hopefully, if we work well – and if we also work well with our likeminded partners – would also be a good model for how we write rules of AI or so at the world stage.
Mr. Allen: So, the current draft of the EU AI Act is, you know, undergoing amendments and a negotiation process, and then you’ve got an entire set of contentious debates that we’ll dive into a little bit later, but what is the sort of basic mechanism of the EU AI Act? Can you break it down for us? You talked about a risk-based approach.
Mr. Tudorache: Well, I think the best way to imagine it is, imagine a pyramid. Imagine a pyramid of risk, basically. You have at the bottom about, let’s say, 80 to 90 percent of all AI, which in fact escapes regulation. So –
Mr. Allen: Is not regulated.
Mr. Tudorache: It’s not regulated. So there, most of the applications will continue to exist as they exist right now, even once the legislation is in place.
Mr. Allen: And what would qualify something to go into this low risk, not regulated category?
Mr. Tudorache: So as long as your AI does not affect the interests of individuals. I think the human-centric approach is basically the key word and the key logic the commission has applied. So, it’s all about the interests of individuals, human rights, and the values that more or less underpin our union. And as long as you’re not touching those interests, you should not be regulated. That’s why the commission has taken this sort of risk-based approach.
Mr. Allen: So you’ve got a pyramid of risks. And it’s risks to human rights and human safety.
Mr. Tudorache: Exactly.
Mr. Allen: And low-risk items are excluded. What would go into the moderate risk, high risk, unacceptable risk categories?
Mr. Tudorache: Yeah. So, let’s start with the tip of the pyramid. So, at the tip of the pyramid, you have that AI that touches so fundamentally upon the rights of individuals that we should simply not have it at all. At least, that’s how Europeans think. And that’s where you have AI that uses subliminal techniques, as the commission puts it, to manipulate human behavior. You also have – and that’s a long and very difficult conversation that we will have – the use of biometric technology in public spaces, with some exceptions for law enforcement –
Mr. Allen: So something like real-time facial recognition or something like that.
Mr. Tudorache: In public spaces.
Mr. Allen: In public spaces by public agencies, correct?
Mr. Tudorache: Yes.
Mr. Allen: Yeah.
Mr. Tudorache: So, these are the – and now we’re adding also predictive policing, which is also going to be quite an interesting ideological debate on whether it should be part of the prohibited practices or not. So, in this tip of the pyramid, you have these applications that we should simply not have at all. So, if you want to develop such applications and put them on the market in the European Union, it should not be possible.
Then on the second floor, you have the high-risk applications. And the commission is proposing, in an annex, in fact – and I’ll explain why in an annex – to identify several sectors where, if you develop AI in those sectors, again, because the likelihood of touching upon the interests of individuals is very high, then it could qualify as high risk. And as a result of that, you’ll have to go through certain compliance requirements.
And that goes into certain documentation; it will have to meet certain obligations of transparency; you’ll have to put it in a European-wide database; you have to explain the underlying elements of that algorithm to the user. So these are requirements that come with being high risk – it’s not outside the law. So, it’s not bad AI. But again, because of its impact or potential impact on the rights of individuals, it will have to be better explained, it will have to be more transparent in the way it works, in the way the algorithm actually plays out its effects.
Then you have a third smaller floor in the pyramid, which are not high risk, but they are applications that do require a certain level of transparency. For example, deep fakes. That’s the example that actually is being given as a use case by the commission in its proposal. Again, the requirements are lower than for the high risk, but still, it requires certain explainability, certain transparency to comply with. And then you have, as I said, the vast majority of applications that will go in the lower part that will not be regulated at all.
Mr. Allen: Great. So this approach really goes not sector specific but application specific and category of application specific. Now you’ve got these regulations, it has different requirements for different types of applications. Who would be enforcing these regulations, and how?
Mr. Tudorache: In the proposal of the commission, the enforcement is left to national authorities. So, each member state should establish or designate an existing authority that will be responsible for the application, for the enforcement of the text. I happen to believe that this is a model that should not – (laughs) – happen, for the simple reason that it will fragment application and enforcement, it will inject incoherence in the way these rules will be applied. And also, it will bring a lot of insecurity into the market for those developing AI.
And that’s the one thing and the one big objective that at least I have as a liberal politician, which is that we don’t stifle innovation, that we encourage innovation, and that we also allow companies to play out the effects of the digital single market. And you cannot have that if you end up with 27 different regulators applying, interpreting the same rules.
Mr. Allen: You don’t want some company being advised by a German regulator: oh, if you do it this way it’s completely fine, and then discovering from a French regulator, actually, you’re now subject to a bunch of liabilities, even though they’re both enforcing the same law.
Mr. Tudorache: Exactly.
Mr. Allen: So that’s your goal for centralization.
Mr. Tudorache: Exactly.
Mr. Allen: And then you’ve also talked about a body that would be created to work on this. Can you elaborate a bit on that approach?
Mr. Tudorache: Yeah. My proposal in my amendments to the text was to establish what would be called a European AI Center, which can – the name is less – (laughs) – important. The competencies of such an office are important. And there, the key element of such governance is that, again, it would from there ensure uniform application of the law, uniform enforcement of the law, with the presence, of course, on a board of all the member states. Because it’s important to have, of course, the member states represented there. But also, very importantly, with the presence of stakeholders.
I think that even the governance of a technology such as AI, which is going to constantly evolve, will need to be able to keep legislation as close to the reality of that technology as possible. And the only way to do that is if you have at the table – when you make decisions – those stakeholders that are either working in developing that technology, as well as those that represent the users, that feel the effects of the technology.
So for me, it’s about having the stakeholders around the table when decisions are made, when decisions about the amendment of the legislation are made. Because as I was explaining earlier, the whole logic of keeping the use cases for high-risk AI in an annex is that it allows – and it’s meant to be that way – it allows for a constant adaptation of the legislation to the realities of the market or the evolution of technology. The only way you can actually properly keep that annex in sync with reality is if you have, again, those that work with the technology at the decision-making table.
So that’s the kind of governance that I’d like to see. Not only for the AI Act, but I think it’s also the kind of governance that we need in Europe for everything else that is right now being produced in terms of digital legislation because, again, if you have different boards for the different – for the Digital Services Act, for the Digital Markets Act, for the AI Act, again, it’s not going to help companies. It’s not going to help users. It’s not going to help authorities either.
And then there’s one last element which I think is important, which is expertise. It’s not simple for the public sector to attract the right level of talent and retain the right level of talent to actually have the ability to understand what happens out there, to be able to interact properly with the stakeholders, and speak the same language. If you, again, would leave this to the 27 different governments to fight for their expertise and secure it at the national level, I don’t think it’s going to work out. So only by creating a sort of centralized governance at the European level can you actually compete with the private sector in attracting the right level of talent to this body.
Mr. Allen: Yes. I mean, in my own experience working for the United States government, we found it was incredibly hard to attract and retain, you know, the sort of sufficient critical mass of talent that you require in this extremely complicated field that is evolving so rapidly. So, the fact that you’re working on regulatory mechanisms that are not set in stone and then revisited, you know, a decade and a half later – you’ve got something that is actually built into the design so that it can be updated in real time based on the outcomes.
Now, I think this is – sort of relates to the lessons learned that you’ve taken away from GDPR, the sort of landmark European privacy and data legislation, which is now six years old. So can you talk a little bit about how the experience of GDPR informs the way that you designed the AI Act?
Mr. Tudorache: Well, I think governance probably is the place to start, because if we look at the practical implications of GDPR and how it worked for the companies out there, you actually had very different experiences from one member state to another, because it’s linked to the administrative culture in each member state, to how each authority understands its role with regard to, let’s say, the subjects of the law. And therefore, again, very different experiences from one member state to another.
Second, if you were a company that was playing, again, out in several member states at the same time, you were ending up with very different interpretations of what you do right and what you do wrong. And again, that’s not something that helps the whole cross-border element of what the digital market is all about, in fact.
Mr. Allen: Yeah, because Europe has for a long time really been looking for its own technology giants, its own really successful corporations. And it’s really hard to grow up in the European market and achieve economies of scale if there’s actually no commonality across all these different national markets. You know, the economies of scale depend upon a common regulatory framework.
Mr. Tudorache: Exactly.
Mr. Allen: So, you’ve built a lot into this legislation that is designed to not stifle innovation, which is the classic criticism, you know, that gets made here in the United States against Europe – that they only know how to regulate, they don’t know how to innovate. But you’ve built mechanisms into this legislation that are designed not to hamper innovation. We’ve talked about the common approach as opposed to the fragmented approach. We’ve talked about how 90 percent of applications are excluded. Is there anything else that you would highlight in the design of the legislation that’s designed to continue fostering innovation and ensure that regulation doesn’t hamper it?
Mr. Tudorache: I will start with the precision of the norm, because I think that’s important. I think the more precisely we manage to write the definitions, the carve-outs, the applications, the exceptions, the more we’ll actually help industry to innovate in those areas where it will be clear to them that they’re not covered by the legislation. But probably the most important element is sandboxes. That’s a concept that already existed in the initial commission proposal, but it was a bit of an afterthought in the text.
And in Parliament, we’re trying now – and I’m very much pushing for that – we’re trying to elevate the concept so that it really represents the kind of needed test bed to allow innovation to happen, to encourage innovation – to encourage companies to come forward and actually test out their ideas in an environment that is free of the risk of error, in a way. Not because you cannot make mistakes, but because mistakes would be allowed, since you’re interacting with the regulator. You can test out your ideas and make sure that at the end of the process the application that you develop is compliant. And I think –
Mr. Allen: Can you – can you just give an example of, like, what is a company, what kind of sandbox environment would they go into, and how does the company benefit, how does the government benefit from this sandbox approach?
Mr. Tudorache: Yeah. So, in the current logic that we’re trying to apply to the way we’ll draft the definitions and the roles of the sandbox, we want the sandboxes to be, number one, a place to test. Number two, a place to achieve compliance. And, number three, a place that will elevate the lessons learned from that experience and that interaction between the company and the regulator up to the governance that I was mentioning earlier, so that the governance can actually be informed of the realities that companies are faced with when actually trying to comply. And then when they adapt the rules, they adapt them in a way that makes sense.
So, what happens? I am Company A. I’m Company A and I want to develop AI, right? In an area that I feel – either because I know that I’m targeting a domain that is right now in the list of high risk, or because I think that, hm, my application, and the kind of data that I’m going to put in, and the kind of scope that I want to give to this application, might actually be in one of the risk categories in the legislation.
And maybe I can afford to do compliance exercises inside my own company, because I’m a big company and I have a whole army of lawyers or compliance officers – or I’m not. But even if I am, I would still feel safer if I go and interact with the regulator and actually ask questions there, play out my scenarios there. And, again, in a direct interaction with the regulator, in an environment that, again, is free for trial and error, I can actually arrive at a product that gives me the confidence that I can put it on the market and I’m compliant with the rules, and I’m not breaking any of – or, I’m not materializing any of the risks.
So that’s the kind of environment that I see for a sandbox. And for that to work – and that’s also something that we are pushing for in the negotiations on the text – you will need to have the sandboxes pretty much everywhere. So we are advocating for a rollout of these sandboxes at the national level, but also at the regional or municipal level. We want cross-border sandboxes, in a way that gives accessibility to these sorts of testing environments to big and small alike – to the young engineer with a bright mind who wants to go and play out his or her idea in a safe environment, to, again, a company that otherwise may be good at compliance but would feel safer having that, you know, direct interaction with the regulator.
So that’s the kind of environment that we see fit for encouraging innovation and achieving that objective that we otherwise always say we want, which is to actually have our own tech companies thrive in Europe and develop such technologies.
Mr. Allen: So in the United States, regulatory sandboxes have been a big driver of technology innovation, but usually it’s starting from a high threshold of regulation, and then the sandbox is a place to experiment with a reduced level of regulation. So for example, autonomous cars are banned – or, at least, were banned – on many roads in the United States. But states like Arizona or municipalities like San Francisco adopted sort of unique regulatory sandboxes whereby autonomous vehicles were allowed, as long as sort of agreed upon safeguards were in place. So, you know, can you take us a little bit deeper into the European sandbox? What is an industry that might benefit from a sandbox, and how would they use it?
Mr. Tudorache: Well, I think pretty much any industry could find uses for actually interacting in a sandbox. And if I am to take your example, it could work both for scaling up and scaling down. So you can be in a sector that is already identified as high risk, so almost by default anything you would want to produce there would be high risk. But by going and playing out your scenarios and testing your application in the sandbox, you might discover that you can actually scale down the need for compliance and the requirements that you have to abide by, simply because when you test it and when you interact with the regulator you realize that actually what you do is not reaching that threshold of risk as the regulation imagines it.
Or, scaling up. So, you come in with your idea, whether you’re in the human resources management sector, or whether you’re in the, I don’t know, critical infrastructure management sector, or whether you’re in the education sector. Doesn’t matter which sector you are. Again, you can also scale up. You go in thinking, hmm, maybe there is a bit of a risk in what I’m doing because I’m doing a software algorithm that is going to help, you know, banks to analyze credit risks for clients. So maybe there is something – a risk element there. But I want to see. I want to, again, put the scenario forward to the regulator and then test a bit and see how this application would work. And then you realize that, in fact, you get to the kind of effects that would require you to go through the compliance mechanism, and therefore you have to scale up what you do.
So I think in both ways – scaling up and scaling down – the sandbox can help every industry in every sector feel better about coming out to the market with these products in terms of being compliant with the future regulation.
Mr. Allen: Great. And the AI value chain is incredibly complicated. You’ve got, all the way on the right, the people who are actually providing the service to the end user, which might be an application developer or a computer software provider or something more industrial in nature. And then all the way over here, you know, we’ve got the people who are making the raw silicon chips, right, that are driving all of this AI. And then you’ve got everybody in between, who might be producing training data sets, or might be producing AI models that another company will then customize. So, who in the AI Act gets regulated, and why?
Mr. Tudorache: I think it’s very important to keep in mind this aspect, this whole risk-based approach. That means it’s only by producing that potentiality of risk that you come under the scope of the regulation – only by allocating a certain purpose to your AI do you come under the scope of this regulation. So I’m allocating the purpose. And the purpose is prone to risk. And that is when I come into the tip of the pyramid, into one of the three floors.
Which means that, along the value chain, as long as what I do is, number one, not actually hitting the market yet – because I’m only doing the beginning, a code that needs to be taken and trained with data to then become something. So, I am only the producer of that code. Is that code on the market? Is it producing any effects? No. Then I’m regulation free. The moment that code is taken and then fed with data by someone else, and then assigned to a particular purpose, and that purpose is producing a risk that is listed in the regulation, it is that entity that has fed the data that has made – and it’s a concept that we are introducing now in the text – a significant change. You are basically introducing a significant change to that algorithm, to that initial machine. And it’s therefore there that the liability in that chain is going to move.
Mr. Allen: Okay, so just to walk through, you know, how you’re thinking about this, imagine a computer vision AI application – so this is something that basically starts with video camera data and outputs some kind of understanding of what’s going on in that video. You could imagine using this for an industrial control application. You want to look at parts and automatically determine whether or not they are too rusted to continue being used, or something like that.
So, you’ve got the company that sort of makes – I just make generic computer vision software, and then the – another company, you’re envisioning, will take that sort of generic AI computer vision software, and add to it this sort of unique understanding of industrial component rust. And then they will sell that to the end user, which might be a factory operator or a power plant operator, or somebody who cares about this rust problem.
So, sort of three actors in the system – the more general provider of AI, the deployer or the customizer of the AI system who is actually selling it to the end user, who is actually deriving value from using the AI application. And in your mind, the sort of correct person to be regulated in this scenario is that middle party, the person who is making it application specific and bringing it to market. Is that correct?
Mr. Tudorache: Yes. Only that in your particular example, in fact, all three escape regulation, simply because what they do is play with industrial AI – what I call industrial AI. I’m actually fighting to introduce such a filter in the text, so that industrial AI – i.e., in your case, it is looking at shells, or it is looking at a rusted piece, and it’s going to help me optimize the way I’m, whatever, arranging my production flows in my company – there, you’re not impacting in any way the interests of individuals. You are not playing with personal data. And therefore, there is AI for industrial use, and that, in fact, should escape regulation.
Mr. Allen: And it’s not because what industry does doesn’t matter for safety or –
Mr. Tudorache: No –
Mr. Allen: Rights, it’s because there’s an existing regulatory regime…
Mr. Tudorache: Absolutely.
Mr. Allen: …for industrial production, which you believe should be the starting point for that. You’re mostly focused on consumer-facing AI, is that fair to say?
Mr. Tudorache: Exactly.
Mr. Allen: Terrific. So now the AI Act, being a landmark piece of legislation around the world, has attracted a great deal of attention. Some of the debate is quite heated. So, can you just sort of help us understand, you know, what are the sort of stages of this process and what are the major ongoing debates about the EU AI Act?
Mr. Tudorache: So the way EU works –
Mr. Allen: Yeah. (Laughs.)
Mr. Tudorache: You have two co-legislators.
Mr. Allen: You know, we’re in Washington, D.C., so some people don’t understand anything outside Congress, the Supreme Court, and the White House.
Mr. Tudorache: (Laughs.) Let’s walk through the process. You have the commission, which has the right of initiative. So, they come with this proposal. That’s why I keep making reference to the commission proposal. So the commission puts forward a text and says: You, legislators, here it is. I have initiated this because I believe that this is a piece of legislation that is important for the EU. Then the two legislators: the Council, which represents the governments of the member states, and the Parliament, which is the only directly elected body of the union – they are the legislators. And they need to basically work together on every piece of legislation that comes out of the EU. So at the initial stage, the two institutions work independently of each other. And we are at that stage right now.
Mr. Allen: It’s vaguely analogous to the House and the Senate in the U.S. side, although plenty of differences.
Mr. Tudorache: To a certain extent.
Mr. Allen: Yeah.
Mr. Tudorache: So right now, the Council is working in its corner with the governments of the member states to arrive at what is called a common position of the Council. So, the diplomats of the member states with knowledge of AI meet – under the lead of the rotating presidency of the Council, right now the Czech Republic – and they arrive, hopefully by the end of this year, at a final common position of the Council. We in Parliament, we are a political body. We have political groups, represented from left to right. And at the start of such a process, we assign a rapporteur – so, someone who leads the work on the Parliament side. In this case, it’s myself and my co-rapporteur Brando Benifei from the social democrats.
We have to work with all the political groups and arrive, at our level, at a common position of Parliament. Which means basically taking the text and drafting it according to our own political vision of what this text should or should not be. So, we have carte blanche. We can play with the text as we see fit, as long as we have a majority of the political groups that support that vision. And then at the end we have a vote in the committees responsible and then a vote in the plenary of the Parliament. And once the vote is through, then you have the common position of Parliament.
Once you have these positions, then the Parliament, represented by the rapporteur, and the Council, represented at the political level by the presidency that holds the rotating presidency of the Council, then they start meeting in what is called a trialogue. And that’s when the negotiations between Parliament and Council start. And then it’s anyone’s guess how long it takes. (Laughter.) In this particular case, I hope that we will be wrapping it up by the end of 2023. And that’s how European legislation is made.
At this moment, with this text, we are, as I said, at the phase where in Parliament we are – we have started already the political consultations and the political negotiations between the various different groups. We’re making good progress. The ambition is to finish ourselves by the end of the year.
Mr. Allen: Finish the Parliament’s work by the end of the year.
Mr. Tudorache: Finish the Parliament’s work. The Council has a similar agenda, so they also wish – and I think it’s possible – that they wrap up by the end of the year as well. Which means that by January ’23, we’ll actually start that trialogue work that I was mentioning earlier, which, again, I hope will take about a year, given the complexity of the text and also the need to constantly go back to our constituencies. Because everyone – even if you’re negotiating on behalf of the whole institution – you constantly need to go back and then readjust. And, like every negotiation, it’s going to be a complex process.
Mr. Allen: So, there’s a ton of momentum behind this legislation, not least because of the leadership that you’ve shown in driving it. So, I think it’s extremely likely that some version of this is going to pass, but the exact text of the legislation is still an ongoing debate. So, can you help us understand, you know, what are some of the major debates taking place right now around the EU AI Act?
Mr. Tudorache: I think it’s important to start with the first, which is the definition. The definition itself –
Mr. Allen: Yeah, what is AI? (Laughter.)
Mr. Tudorache: What is AI? That, in itself, has stirred quite a lot of debate, and is still quite open to changes in the coming weeks and months.
Mr. Allen: Yeah, because, you know, a company – the marketing department wants everything to be AI, but the regulatory department, when they see AI is regulated, will say nothing is AI.
Mr. Tudorache: They will declare nothing as AI. (Laughter.) Of course. No, and the commission itself, in its first proposal, produced a very generic definition of AI, which some critics said was too generic. It was more or less like trying to define software in general. And that was not serving the purpose of giving the text a precise scope on what actually is AI. What differentiates AI from regular software?
Mr. Allen: The EU does not have a software regulatory agency.
Mr. Tudorache: Exactly.
Mr. Allen: Yeah.
Mr. Tudorache: So, you have now two different views in the two different parts of the house, politically, ideologically. One part that supports the idea of a broader definition – a kind of catchall definition – with the logic that what is important, in fact, is how you define those precise use cases, those precise applications that fall under the category of prohibited applications, or high risk, or those requiring transparency; and, therefore, they say you can afford to have a very broad definition. And others who believe that in fact it should be a narrower definition, more precise in scope, because that will also help innovation, will give companies predictability in knowing what does and does not come under the scope of the regulation.
I think what is important here – and that’s something we haven’t discussed but maybe we touch upon – the issue of what actually happens on the global stage. I think what we need to do with this definition is get as close as we can to those definitions that are already used by other likeminded partners and in other multilateral fora, where work was done on AI. Such as, for example, the OECD, which has its own definition, which is widely accepted as a good definition of AI. And I think that is going to serve the purpose of convergence.
And as a union, even if we’d like to think that, well, we’re setting models, we’re bestowing models upon the world of how we do legislation – we did it with GDPR, so why not do it with AI? – I think this time around we have to look at it differently. We have to look at the Brussels effect differently. And understand that it is important that, from the way we design this legislation, we be closer to our partners, closer to what should be the prevailing global vision of how this technology needs to be regulated and what the rules need to look like, if we want, indeed, this model to be a model that inspires others.
So I think the way we design this definition is something that is going to send a signal. Then other big political debates are going to be on the definitions of the prohibited practices. Which prohibited practices? We’re pointing them out. I mentioned predictive policing. It was not initially in the proposal of the commission; it was added later by us in Parliament, for example. And I think – I feel like most political groups will want to keep that in. There’s, of course, a discussion on what will end up in the annex of high-risk applications. Understandably, there are a lot of interests that want to carve themselves out – (laughs) – of the list, so that they escape regulation. And again, there’s going to be quite the debate on which will make the final cut, and which won’t.
And then there’s a lot of discussion on the kind of conformity assessments that you will have. Right now, the text proposes self-assessment and I believe in that. I’m going to continue to advocate for self-assessment.
Mr. Allen: Can you expand on what these assessments are and how they relate to the regulation?
Mr. Tudorache: That means – that means that I am responsible with checking compliance with the rules myself. I don’t have to go outside and bring a certification body to – because that means an additional burden. And with the logic that we want this legislation to be as burden-free as possible for companies because we want to encourage innovation, because we want AI to be powering up our economies and not actually be a drag.
So, in that logic, self-assessment – something that the companies take upon themselves to do – is something that I believe serves both the purpose of having rules for AI and also, again, allowing companies to do it themselves. There are others in Parliament who believe actually that you need third-party assessment, because that gives additional guarantees that compliance actually happens. So again, my role will be to find the balance between these.
Mr. Allen: Right. I mean, in many industries third-party assessment is a big deal, and an important deal. I think about airplane manufacturing. But if you’re thinking about, you know, a small AI startup with three employees, regulating them the same way that you would regulate an airplane manufacturer, with third-party assessments, might not be a good fit, right, depending on the nature of the application.
Mr. Tudorache: Yeah.
Mr. Allen: So, we’re running low on time. And I did want to get back to what you said about the international dimensions of this. You know, with GDPR, there were many in the European Union who were really excited about the so-called Brussels effect, whereby European Union legislation would sort of have a ripple effect across legislative approaches throughout the world. You mentioned that you want a different approach for this one. And I just want to commend you and thank you for coming to Washington, D.C., and for all of your prior engagements and cooperation with the United States and other likeminded partners.
Which sort of brings me to my question. It seems extremely likely that the European Union is going to be first in terms of major horizontal AI regulatory legislation. The U.S. will probably have a different approach. Other countries, like Japan, might have a different approach. But what do you believe is worth having in common? What are the aspects of AI regulation where it really is important to cooperate and collaborate between likeminded partners?
Mr. Tudorache: I think the fundamentals need to be aligned. I always say that we operate with the same values. And I think that’s what differentiates us from those that understand, in a way, the role of technology in society differently. And I think that if we have these values aligned, if we have the fundamentals aligned, already the first step is there. And I am confident that it happens already. And I think it was affirmed already as part of the first legs of the transatlantic TTC, the Trade and Technology Council. So, we’ve made sure that, again, the bedrock on which we build rules, standards, future work on AI, is a shared bedrock.
Then the second thing – it’s almost like the other extreme – is the standards. So, at one level you get the principles aligned, you get the values aligned. The next thing that we can do in common is standards, because we understand that what is in between cannot converge, because we simply have different traditions in how we write norms. We, in Europe, we write these norms. We rush to legislation – some say it’s good, some say it’s bad. But that’s our tradition. Here, that impulse is not the same. It’s different.
But I think that even if the form differs, even if normatively we’re not going to have the same products, as long as the values are shared and as long as we’ll work together on standards so that our companies, whether on the left or the right side of the Atlantic, will actually develop their applications of AI working with similar standards, I think the overall objectives are going to be met. And then, of course, we will have to replicate that work with other likeminded partners around the world, because it’s not only Europe and the U.S. in this game.
We have Japan. We have Australia. We have India. We have South Korea. We have all those other democracies that, again, share the values. They share the bedrock. And with them we’re going to have to work together on standards. And we’ll have to figure out a way to open up this common work that we do right now under the TTC on standards also to them – to create the frameworks, the governance for it. Because I think that if we manage to do that, then we’re going to have a model of how the rules for AI in the world should be, a model that will be befitting for all of the democracies out there in the world.
Mr. Allen: That’s great. So, I just have one final question for you, which is: Imagine it’s sort of 10 years from now and you’re asking yourself – let’s assume the EU AI Act has passed in some form. In 10 years, what would you be looking for as the evidence of success or failure? What would tell you whether or not it worked as intended?
Mr. Tudorache: I think it’s going to be how comfortable we will all be as individuals with the world around us. Because the world around us is going to be pretty much powered by AI. That’s inevitable. But I think, again, the level of comfort that we will have and the adaptation that we will have to that new society, to that new digital society around us, with its rules in terms of human interaction, in terms of interacting with the public sector, with our governments, with our local authorities, in the way that we’re going to interact with the companies, with the private sector. So all that new world in which we’re going to live is going to be a place that fits with our interests, and our desires, and our aspirations, if we get these rules right.
So if, in a way, we don’t allow technology to grow in directions that are not, again, responding to those expectations we have. And this is what I think sets us apart from other models – we have to be aware of that – from other models of using technology which are not in the interests of citizens. AI that is being used to control society, to control citizens, to put them in different boxes according to the interests of the government and not according to the interests of the individuals. So because we have these very different, opposing models, again, we have an interest to set the course for how this technology evolves. A course that, again, fits with our vision of the world. And I think if in 10 years’ time we’re there, and if we are satisfied with the world we live in, it means that with these rules and these standards, we did a good job writing them up.
Mr. Allen: That’s terrific. Mr. Tudorache, thank you so much for coming to Washington, D.C. and spending the afternoon here at CSIS. I wish you good luck in the trialogue and the remaining negotiations in the Parliament.
Mr. Tudorache: Thank you very much. And thank you for having me.
Mr. Allen: And that concludes our event here at CSIS. Thank you all for joining us and enjoy the rest of your day.
(END)