EU’s Perspective on Digital Regulations with the Lead Negotiator of the Digital Services Act
This transcript is from a CSIS event hosted on February 26, 2025. Watch the full video here.
Laura Caroli: The Trump administration has been putting European legislation increasingly under scrutiny, especially laws on digital services, claiming they almost amount to a form of digital censorship. This not only puts transatlantic relations under strain, but also makes enforcement of European laws increasingly difficult. Now the question is: Is Europe really backtracking on freedom of expression? To answer that question we have a very special guest here today, and I’m very excited to have her here.
First of all, my name is Laura Caroli. I am senior fellow at the Wadhwani AI Center here at CSIS. And I have had the opportunity in my previous life at the European Parliament in Brussels to experience the law that we will talk about today from the outside. We are talking about the Digital Services Act, the law that is under the highest scrutiny from the Trump administration.
Today I am very proud to have the opportunity and to give you all the opportunity to explore this law from the inside with the very woman who is behind it, behind this law. And this is Christel Schaldemose, vice president of the European Parliament, and a long-standing member of the parliament for almost 20 years. Congratulations.
Christel Schaldemose: Thank you.
Dr. Caroli: And from the Danish Social Democrats. You have been the lead negotiator of this law, the Digital Services Act. Can you explain to us what your role was during this negotiation?
Ms. Schaldemose: Yeah. Creating laws in Europe can be seen as a little bit, you know, difficult to understand. But we have three main institutions – the one who proposes the law, that is the EU Commission, and then the two institutions who negotiate it. One is the Council, where all the ministers representing the EU’s 27 member states sit. And then there is the European Parliament, whose 720 members are directly elected from the member states. So once the EU Commission has proposed a law, a text, it is sent to the two institutions. Each institution then works out its position, its mandate. And then we sit together and negotiate the final piece of legislation.
This process lasts a year when it is fast, and sometimes it lasts longer. We did it in record time with the Digital Services Act because there was this sense of urgency and a willingness to try to end the wild west, as we saw it at that time. We have, you know, a lot of digital companies, and we benefit very much from their services. But we also wanted to have fair competition, and we wanted to protect users of the platforms. And that’s why we made the Digital Services Act.
Dr. Caroli: So that’s also exactly why you decided to – so, the purpose behind the Digital Services Act is to ensure fair competition, a level playing field, if I understand what you’re saying, and user empowerment to choose what they can do with their content online, right?
Ms. Schaldemose: Exactly. I mean, it comes out of the fact that we had another regulation in the EU called the E-Commerce Directive. And that was adopted back in the year 2000. And, as everybody knows, that was before Facebook was established; Amazon was still just a very small company that couldn’t even earn money. And, you know, none of us knew what would come out of these fantastic platforms. So at that time we had no rules for them, and decided that they were exempted from liability, you know, like what we have in the United States with Section 230.
But we saw more and more a need to do something to create a level playing field for companies, platforms, but also to protect users, and citizens, and society. And therefore, we decided to not just revise the E-Commerce Directive, but make a completely new law. And that law is named the Digital Services Act.
Dr. Caroli: Thank you. So, indeed, the Digital Services Act is just a continuation, if I understand correctly, of what was already in place in Europe, this E-Commerce Directive of 20 years ago. But how much would you say the Digital Services Act is about transparency, as opposed to imposing practices on platforms?
Ms. Schaldemose: Well, it’s a combination of many things. What we’re doing is, in fact, regulating the level playing field for intermediaries. It could be Amazon, you know, online marketplaces, it could be search engines, or it could be social media, et cetera. So what we tried to do was to say that the bigger a platform is, the bigger the obligations we put on it. And the law contains a lot of obligations on transparency, but we have also said that there are certain practices that we will not allow, or at least that they have responsibility for. So recommender systems – we need transparency around them. Algorithms – well, now let me start another place.
We kept on saying, and we still believe, that the post – now we’re talking about social media, for instance – the post you and I are putting on Facebook, or Instagram, or X, or whatever we use, is not something the platform is liable for. But the way they use what we post they are responsible or accountable for. And that is an attempt to make sure that, with all this content, we would, on the one hand, ensure freedom of speech. That’s a fundamental right in the EU. It is established in the treaty, so it is very important for us.
But on the other hand we also want to protect society, democracy, et cetera. So we wanted to have a level playing field and, therefore, we said that they need to be responsible and accountable for their algorithms.
So they have to make impact assessments on a yearly basis, and if those impact assessments show that there is a systemic risk towards, for instance, public health or democracy, then the platforms will be obliged to risk mitigate.
So we’re not saying what they should do. We’re, for instance, not asking them to fact-check, but we’re saying that if there is a risk of disinformation, or a risk to public health, or whatever, then they have to risk mitigate. And we think that this kind of arm’s-length principle is very important, because that is also a way to make sure that we are not suppressing freedom of speech.
But on the other hand, we also need to make them accountable. They earn a lot of money. There are risks for people and society and, therefore, we need to balance as we always do in Europe.
Dr. Caroli: Thank you. This was a very dense answer.
So you said no fact-checking per se, but mitigation – so, more or less, be aware of the risks that can come from having such a big audience, because now we are talking about the very large online platforms and very large search engines, right?
Ms. Schaldemose: Yes.
Dr. Caroli: How does this work exactly?
Ms. Schaldemose: So, for instance, to be designated as a very large online platform you need to have more than 45 million monthly users, and we have approximately 23 very large online platforms at the moment that are designated as VLOPs.
Dr. Caroli: Can I ask one question? Are they all American?
Ms. Schaldemose: No, they are not. It’s also TikTok, for instance. So no, they are not.
Dr. Caroli: This is very interesting for our audience, I think – it’s not only American companies.
Ms. Schaldemose: No, and I know – and I’ve heard also this week here in Washington that there is this tendency to believe that these European digital laws are made in order to keep American companies out of EU or damage the competition. But that’s not the case. Not at all.
We are very happy with a lot of new technologies and new services and we use them quite a lot. But because we use them a lot they also have an impact on society and that’s why we wanted to have a kind of rulebook to make sure that it works in a proper manner.
And no, it’s not just American companies. It’s also Chinese and Middle Eastern companies, et cetera. So you have several – and Europeans too.
Dr. Caroli: Oh, yeah, indeed.
So, indeed, now we have this framework. But what about what we are hearing increasingly from the Trump administration? For example, we had this very recent speech from Vice President J.D. Vance at the Munich Security Conference, who was voicing concerns that Europe might be backtracking on freedom of expression and pursuing digital censorship. He literally used these two words, “digital censorship.” These are very strong words. So how would you react to this? Is Europe really backtracking on freedom of expression?
Ms. Schaldemose: No, no, no. I don’t know whether it is a misunderstanding or a deliberate way of, you know, talking down to Europeans. I thought it was, you know, completely off.
The freedom of speech in Europe is very good, lively. We have many types of debates, and I want to say very clearly that the DSA is not about the content. So we are not harmonizing what is illegal and what is legal in the member states.
So, you know, the EU can be a little bit complicated. We have 27 different member states, and they can make laws. So, for instance, in Germany it is illegal to deny that the Holocaust took place, for very good –
Dr. Caroli: Reasons.
Ms. Schaldemose: – historical reasons. But in my home country, which is Denmark, it is not illegal. So, for the platforms, if somebody posts something like that, in Denmark we would not react. We might go into a debate with the people and try to come up with evidence about it or discuss it. But in Germany, if it is notified, for instance, to X, X would be obliged to take it down because it’s illegal.
So the EU is not saying what is legal or illegal, but we are stating in the DSA that in case a platform is notified, then they have to take action and take it down. But, you know, there are also things in the United States that are illegal, and as I recall the U.S. even decided to ban TikTok, at least for some hours.
Dr. Caroli: That is true.
Ms. Schaldemose: So, I mean, we haven’t banned any platform in Europe.
Dr. Caroli: Yes. So that’s also very helpful for our conversation, I think.
Indeed, I remember during the debates that we were having at the time, which lasted a long time – (laughs) – so I remember them quite vividly, one of the mottos we were using all the time was that what is illegal offline should also be illegal online. I think this is more or less the message you are also explaining to us today.
Ms. Schaldemose: Correct.
Dr. Caroli: And I also wanted to flag another thing. The Federal Trade Commission recently launched an inquiry into big tech platforms, like, just a few days ago, and basically they claim that they are investigating obscure practices of these platforms that take users’ content down without the users being able to appeal the decision. Now, again, remembering what we were saying at the time, this looked really familiar to me, because we were also talking a bit about this user empowerment. So do you think this is all like a big misunderstanding? Like, do you think there is more in common?
Ms. Schaldemose: Well, I think at least there are some misunderstandings around the DSA and the content of the DSA. It is correct that we wanted to empower users of the platforms, so that, in case, for instance, Meta wanted to take down a post, they should argue why and you would have the right to complain in case you didn’t agree. But maybe I could start another place, because I think that there might also be some misunderstandings about how much is taken down.
We also have a lot of discussions in Europe about freedom of speech, and some citizens are claiming that freedom of speech is under pressure because of European laws. But honestly, I think that the pressure comes not from European laws but maybe from the community standards of the social media platforms, because the DSA allows, you know, very, very broad content, and it’s only with systemic risk that the platforms have to act, and they might not necessarily take things down unless they’re illegal. They could downgrade them. But the thing is that the community standards of many of the platforms were very narrow. So the DSA is there, but – it’s not to name and shame, but, you know, some of the social media platforms, Facebook and others, have taken down a lot of content. So maybe people believe that that content was taken down because of EU law, but that was not the case.
Dr. Caroli: That’s very interesting. It could actually be the fault of the platforms themselves that freedom of speech is somehow limited.
Ms. Schaldemose: But, Laura, I just want to say that if you look at history – I’m educated as a historian – if you look at the history of Western societies, at least, you have never, ever had a possibility for so many people, with such a small effort, to reach such a broad audience as you can today. And you can say almost whatever you want.
So freedom of speech has never, ever been more lively. It has never, ever been easier to come out and express your points of view. So I think we also need to remember that in the past, if you wanted to come with a comment, you had to go through an editor at a newspaper or in television, and it was very difficult to get a chance to say something. Or you could stand in the streets and demonstrate, but, again, the audience would be much smaller than what it is online. So I honestly believe that we should not be afraid for freedom of speech. It is very lively. And people are taking part in democracy.
If I have a concern, it is not so much about, you know, platforms restricting people, because they are not. And the laws, at least, are not doing it. It might more be that some groups of people tend not to really want to participate in the political debates taking place online, because of, you know, the way they take place. You know, the tone, for instance, for some people is not nice. I know from Denmark, at least, we have surveys showing that a lot of women don’t want to take part in the democratic debates online because they are accused of a lot of things and, you know, called different unpleasant things.
So I think that is also part of a discussion about freedom of speech. So one thing is what you can say, but if you don’t create a framework where it’s possible for people to actively participate –
Dr. Caroli: And to feel safe.
Ms. Schaldemose: And to feel safe, then it is also a threat. So we need to balance. But, honestly, I don’t think that we, in the modern history, have had better opportunities to speak freely than we can now, at least in the Western part of the world.
Dr. Caroli: Thank you for this. It’s actually, I think, very visual, the picture you have painted. Like, freedom of expression is not only about censorship and what you can say and what you cannot say, but also about how many people you can reach, and whether or not you feel safe to say something. So these are very, very interesting elements. And this –
Ms. Schaldemose: But maybe – because, I think, why we had these discussions, and why we wanted to do something in Europe, was also because we talked about the business models of the platforms. And maybe also here we have different approaches, depending on whether you’re an American or a European citizen, because the business model for many of the big social media platforms, and not just them, is about earning money. And I think that’s great. We need businesses to do a lot of things. However, some have a business model that, you know, uses the algorithms so that the more rude you are, or the more aggressive words you use, the more your posts will be distributed, amplified.
That is, of course, also something where we have to consider how it works. I’m not saying that we should stop people from talking like that. I mean, it’s up to people themselves. But I think we also need to respect that, in case there is a systemic risk that this business model destroys democracy, then the platforms have to risk mitigate.
Dr. Caroli: So, for example, one example of this would be, like, if someone is spreading a very false rumor about someone in a very violent way, if it gets amplified too much it can even have systemic effects on a very large number of people, right? Is this the real problem? Like –
Ms. Schaldemose: Yes. Or, maybe I could use another example, which is not directly related to freedom of speech but also to what we are trying to regulate. We’re saying that if there is a systemic risk to public health, then we need to do something. And that came out of the debate when we had this whistleblower from Facebook, Frances Haugen. She told us a lot about how things worked at Meta at that time. And we saw this risk that, you know, young people searched for healthy food and then ended up in a rabbit hole only hearing about, you know, anorexia, and self-injuries, et cetera.
And if that’s the case for all young women, or at least there is a systemic risk for many of them when they search, then the platforms have to risk mitigate; then they have to do something with the algorithms so that you don’t end up in this rabbit hole that can cause, you know, mental health problems. So we are also trying to take care of people. You could say, shouldn’t people have a right to look at anorexia or self-injuries? Well, it depends on who you are. And I want, and we want, to protect at least kids and young people. So if there is this systemic risk because of the algorithms, then they have to risk mitigate. So we are not saying that they should get rid of, you know, the content, but we are saying that they need to be sure not to, you know, promote it and recommend it to young people.
Dr. Caroli: Yeah, so not amplify it over and over.
Ms. Schaldemose: Not amplify it to young people, for instance, young women.
Dr. Caroli: But a bit less, maybe.
Ms. Schaldemose: Yeah. So with a systemic risk, they should risk mitigate.
And I think that’s also part of creating a safe environment. So we are not saying that the platforms should take this content down unless it is illegal, but we’re saying that they need to be very much aware of what kind of recommender systems they have and who they recommend posts to. And I think we all know this: looking at social media, within a few seconds the system, their algorithm, knows what you’re interested in, and you get recommended a lot of these things. And most of it is completely safe and only a question of your individual interests, which is fine – as long as you do not end up in a situation where you risk public health, for instance, or create problems for democracy, et cetera. We also had these discussions about COVID and recommendations about avoiding vaccines, et cetera. So, I mean, we all know these examples.
Dr. Caroli: Yes. So, indeed, we have talked about so much, like disinformation, mental health. These are very big systemic risks.
Talking about disinformation, first of all, do you think that now with AI the risks of disinformation have been even more amplified, and how? Like, are you working on this in parliament at the moment?
Ms. Schaldemose: Yeah. Well, I do. We have created a new committee in the European Parliament. We call it the Committee for the Democracy Shield. The president of the European Commission suggested it to protect our democracy, among other things against foreign interference in electoral processes and in democracy. And we are looking into how we can make sure that our democracy in Europe is stronger than it might be today. I mean, we don’t have huge problems, but we have some problems we need to look into. Do we have enough media so that we have real media freedom? Do we have media literacy? Do we have resilient electoral processes? Et cetera. And I think what we’re going to work on there is very interesting. So we’re going to look into how this works.
And disinformation, as you ask about, or foreign interference – yes, I am a bit worried, maybe mostly because of AI, because it’s so easy and the AI systems, the foundation models, et cetera, become so good that, you know, you can make things that people cannot tell are not true. And most of it is not harmful at all. It can even be used for good purposes. But if you want to send out some fake news, then there is a risk, because in a democracy trust is one of the key elements. And if you cannot trust what you’re seeing and reading online, then there is a fear. So, therefore, we need to make sure that we have systems in place so that we can keep on trusting what we are reading, because that’s so important for democracy.
Dr. Caroli: Yes, indeed. And this trust is really fundamental, and I think deepfakes even make it possible to deny facts that are true by claiming they are fake, generated by AI – it’s what some researchers have labeled the liar’s dividend. So, I don’t know, some politician can see a real video of himself saying something and say, no, no, no, this is a deepfake; I never said that. And nobody can actually contradict this person, you know, because of this. This is what’s scary. But –
Ms. Schaldemose: Exactly. And there is some risk that you’re completely confused and you end up not trusting anything, not even the right things – you know, not even the things that are not fake – because you don’t know whether it’s fake or not. So in that respect, I am really worried. It is important that we have measures in place to make sure that the social media platforms do their part to help us navigate in this. So labeling, for instance, deepfakes, and making an effort to get rid of posts that are real foreign interference and real disinformation, you know, made to harm. But I know that there is a discussion about whether disinformation exists, and whether it will not just be used to get rid of your political opponents, and things like that. And that’s not what I’m talking about here. I’m talking about real disinformation that is dangerous. And it’s not up to me to judge, but that’s why we try to keep this arm’s-length principle in place, so that we ask the platforms to make an effort. They know, because they have systems in place; they even use AI to help. So they can also do it.
Just to give you a very small example: we have made a system in the European Parliament to help us politicians so we can check a photo.
Dr. Caroli: That’s very interesting.
Ms. Schaldemose: Is this a fake photo or is it a real photo? I don’t mind AI-generated photos, but people need to know, so that they know what is a real photo and what is AI-generated. So we try to help ourselves so that we can see. We have seen some examples of politicians being misused with fake photos. So we will try to help. And that’s what we can do. And that’s why I also ask the platforms to take on their responsibility for this.
Dr. Caroli: Yes. And, indeed, we also have a provision in the EU AI Act about this, about making AI-generated content labeled as such. And this will apply from August of next year.
Ms. Schaldemose: Twenty-six, yeah.
Dr. Caroli: Yes, because there is still no established way to do it. So there are different methods going on, and there is no standard yet. But this, I hope, will help the situation a bit. But talking about platform responsibilities – so they have to be aware of these methods, and check, and take an active role – I wanted to talk a bit about Meta. So Meta has signed the EU Code of Practice on Disinformation, which is now – it’s a code of conduct, maybe you can explain better, that has now been integrated in the Digital Services Act, right? So they seem committed to making this effort that you were talking about in Europe, to take an active role in mitigating these risks. But at the same time, we’ve heard Zuckerberg in January saying that Meta will abandon fact-checking. So how do these two things – (laughs) – match together?
Ms. Schaldemose: This code is still a voluntary code. And it is mentioned as one possible tool in the DSA, but the platforms do have the right to choose the measures – we’re trying not to tell them what kind of measures they need to use when they have to risk mitigate. So they have the freedom of choosing the methods. But fact-checking is, of course, a very good tool. So if they don’t use that, then they have to use other things that have the same kind of impact. But it is very important for us in the DSA, or it was important, to say that the ways to risk mitigate were not something we decided. We gave them different options. And this code is a voluntary code. They have signed it. So in that respect, they are kind of obliged. But they can withdraw that signature if they want to.
But before we say that it is a huge problem, I want to see the impact of the other measures they put in place. And there is a dialogue between the European Commission and Meta on these questions. Since we are saying that we will not directly interfere in how they do it, as long as they are doing something to risk mitigate, I think we also need to give them this, you know, freedom of finding out how it works. But we will follow it closely, because we need to make sure that they then do other things instead. And now they have suggested using the same method as X, these community notes. And, well, yeah, we will see how that will work.
Dr. Caroli: Yes, because, indeed, I mean, it’s very interesting now to have it clear that companies – platforms – are not obliged to fact-check anything. This is just one of the measures that they can adopt.
This is very helpful to understand, and with this code I know that once you sign, normally you are committed to it. There is also a code of practice that is being drafted now for the AI Act. So this debate is very much alive in these current months.
But at the same time, Elon Musk, when he took over Twitter – now X – he withdrew from the code of practice on disinformation. So –
Ms. Schaldemose: But he’s still obliged to make impact assessments, and he still has to follow the situation when it comes to disinformation, because as long as X is a very large online platform, and it is, then he will have to make impact assessments.
He will have to measure and look at whether there is a systemic risk of disinformation, and in case there is – at least in Europe; I don’t know, we are not regulating the United States – then in Europe he will have to risk mitigate.
So he will also have to find measures, and if these community notes are not enough, then he will have to do other things. And this is overseen by the Commission – the EU Commission is the enforcer of the Digital Services Act for the very large online platforms – and they are looking into this, so they are making sure that the rules are enforced.
Dr. Caroli: So, indeed, now we come to the crux of the problem. So, first of all, thank you for clarifying that companies can do whatever they want in the United States. They just need to apply these rules when they operate in Europe, right? Because nobody can regulate from another jurisdiction.
But what triggers a fine under the Digital Services Act? And, like, I hear some people are concerned that companies can even be banned from operating in the EU due to the Digital Services Act. Is this true?
Ms. Schaldemose: Well, no, not really. It is a very theoretical option, a last, last, last resort, and only if the courts say yes to it. You know, no one really believes that that will ever, ever happen.
But it’s interesting that American companies are so afraid of that theoretical risk, since U.S. politicians decided to ban TikTok. So, yeah, we are really trying to avoid this. But, in fact, that’s interesting, because what we did with the DSA and the DMA was that we really wanted to have heavy fines.
Dr. Caroli: Sorry. Sorry. Sorry. Let me explain. The DMA is the Digital Markets Act, right? Which is the one regulating the competition aspects of big companies.
Ms. Schaldemose: Yeah. Sorry, I shouldn’t have mentioned that because –
Dr. Caroli: No. No. Sorry.
Ms. Schaldemose: But in a couple of the bigger digital laws in the EU lately we have, in fact, tried to learn from the United States, so that, you know, the fines should be so big that it would be better for business to try to comply with the law – I can’t remember the English word for this, but, you know, the fines should prevent them from not following the law, which is what is done also here in the United States and –
Dr. Caroli: It should be dissuasive.
Ms. Schaldemose: Yeah, exactly the – exactly. That was – thank you very much.
And we did that on purpose, because we think that it works here in the United States, so we tried to do the same. Unfortunately, it has not had that impact when we look at the GDPR, which is the data privacy law in the EU. We have seen some huge fines there, but we hope that it will prevent bad things from happening under the Digital Services Act.
So big fines are, in fact, our best tool. We will not ban platforms in the EU, at least not American platforms, I would say.
Dr. Caroli: Thank you for clarifying this. I think a lot of companies here will be relieved that there is no such risk, at least not in the short term.
Ms. Schaldemose: But maybe, Laura, I could also say that there is no risk – not even of a fine – as long as you’re complying with the rules. If you make services available in Europe – and I’m really happy that they do, because Europeans love American services. We use them all. We buy a lot. I mean, we are rather dependent on American technology because it’s good, and therefore we’re happy that they make services available in the EU. But these companies, as long as they’re complying with the rules of the EU, should not be afraid. So it’s very easy to avoid getting a fine: Comply with the rules. Easy.
Dr. Caroli: And it’s also interesting that you say that we learned this from the U.S., so it’s very interesting that you are taking the example of the United States in drafting the legislation. So this, I think, is a very powerful message in this space.
Ms. Schaldemose: And I also think that many people don’t really know this, but we have had, in fact, really good transatlantic legislative dialogues. We have had a lot of meetings and discussions, transatlantic, on legislation, and we have really tried to learn from each other. And as I understand it, even in the Congress here in the United States they were very close to making a law that was kind of inspired by the Digital Services Act – of course not a complete copy of it – where there was a wish to repeal Section 230, for instance. And that had large support in the Congress, but then the election came and that ended.
So, I mean, we have a dialogue. We try to work together. We learn from each other. And I definitely think that we can learn a lot of things from the United States’ way of doing things. And we’re looking at simplifying our rules a little bit and making it easier to do business in Europe, and here I think we can be inspired by the United States. And hopefully the United States can be inspired by Europe in how we protect users and societies.
Dr. Caroli: So the picture you’re painting so far doesn’t look like Europe is a monster –
Ms. Schaldemose: (Laughs.)
Dr. Caroli: of censorship and obscure, very hard laws. It looks rather like a safe, free space where anybody can say whatever they want and just comply with the law, which is common sense, as the popular expression right now says, just as they would here. Like, if you comply with the law, nothing will happen to you, right? Is –
Ms. Schaldemose: That’s the same in Europe. Nothing will happen to you if you comply with the law. And honestly, I will not say that there couldn’t be one or two or three rules that could be simplified. Probably we could because we have a tendency to make big laws.
Dr. Caroli: Yes.
Ms. Schaldemose: And on that, we will probably also have to look into that in the future and do it better. But still, the DSA – the Digital Services Act – for instance, is not a very complicated law, and I think we have not created a monster. We haven’t made restrictions on freedom of speech. We have made a rulebook so that we can have a fair, level playing field, and in that we are protecting consumers as well, as we always do. So I hope that, you know, this understanding maybe could spread a little. Maybe there are also some who have an interest in giving an opposite picture of what happens in Europe at the moment, which is a little bit sad.
Dr. Caroli: It is sad, indeed.
And coming to another recent event from the U.S. administration, there has been this White House memorandum on “Defending American Companies and Innovators From Overseas Extortion and Unfair Fines and Penalties,” saying that the administration will impose retaliatory measures such as tariffs on foreign countries that levy fines on American companies, impeding them from doing their business. So you’re here in D.C. this week to meet several members of government – Congress and several other officials. What is the message that you want to convey in light of these events?
Ms. Schaldemose: Well, I still believe that the EU and the U.S. share a lot of values on how to have a society – rule-based, democratic, open. And I really hope that we can keep on having a dialogue with each other without shouting at each other, without threatening each other. And there are definitely things we could do better in Europe, like here in the United States. But instead of threatening with tariffs and whatever, we should talk to each other and see how we can solve potential conflicts. And that’s my message. I think we should do that instead of threatening.
But there’s no doubt that if President Trump imposes tariffs on the EU, we will answer back and put tariffs on American products. But it’s sad and it’s unnecessary. And in the end the only ones who pay for that are the consumers. So it becomes more costly at a time when people really need us to make sure that products are not becoming more expensive. So I find it stupid and unreasonable. And why should ordinary citizens pay for that? So I hope we can find solutions.
Dr. Caroli: Well, thank you for reminding us that we actually share so much between Europe and the United States, and that there is still so much we could do in terms of continuing our dialogue and learning from each other, as we have seen with the DSA itself. And also how, in the end, threatening tariffs only has repercussions on consumers – on individual citizens who abide by the law and are forced to pay the price of this tension. So I really hope that we will be able to solve these issues, but I think that today’s conversation helps a lot, because you helped us really clarify a lot of the elements of this law that, until now, has been seen almost as an obscure body of rules threatening the U.S., or something. Thank you so much.
Ms. Schaldemose: Can I – can I add one thing?
Dr. Caroli: Please. Please.
Ms. Schaldemose: Because – (coughs) – sorry. When we adopted the law – and the whole process, as I told you in the beginning, lasted almost a year – I had many opportunities to exchange views also with American companies and platforms. And honestly, at that time they all welcomed the DSA. So, therefore, it’s a bit interesting to see the different positions now. Sorry. (Coughs.)
Dr. Caroli: Well, that’s very – (laughs) – a very clear statement.
So with this, I think we can conclude our event. I want really to thank you so much, Ms. Schaldemose, for coming to speak to us at CSIS. Again, this concludes our event. Thank you for watching. And please keep in touch by checking our website at CSIS.org, and subscribe to the AI Policy podcast.
(END.)