Alessandro Accorsi: Disinformation Warfare in the Middle East

This transcript is from a CSIS podcast published on February 13, 2025.
Jon Alterman: Who is actively promoting disinformation in the Middle East, and how are they doing it?
Alessandro Accorsi: In terms of actors, there are several. There are regional actors. There are international actors.
For decades, the information ecosystem in the region has been closed and tightly controlled, either by countries internally or by external powers that had regional aspirations—Nasser first with the radio, the Saudi channels, other regional channels, et cetera. Social media proved to be a very cheap and useful way of achieving that goal. Different countries had different capacities, and they also had different goals. Egypt, for example, never really developed the capacity to use influence operations outside of the Egyptian ecosystem. For them, the goal was always to try to control the domestic space, while other countries, most notably the Gulf countries, used influence operations to extend their regional hegemony.
You had the Qatar rift back in 2017, which some dubbed “the first social media cold war,” in the sense that no shots were fired; it was fought only online and through the media. You have the Libyan civil war, or at least the Tripoli offensive, where Turkey, Saudi Arabia, the UAE, and partly Egypt used influence operations on a very tactical level during the offensive. Then, obviously, you have more recent examples like the wars in Gaza, both in 2021 and 2023.
Jon Alterman: You said that it's partly state actors, but also non-state actors are in this space?
Alessandro Accorsi: It's mostly state actors. Then, sometimes, you have a few non-state actors. The best example of a non-state actor is ISIS, its use of social media for propaganda and for recruitment. You also have other recent examples where we suspect that there are state actors behind certain operations, such as those conducted within the context of Morocco-Algeria tensions.
At one point, weirdly enough, in the fringes of the internet and social media of the two countries, there was this emergence of very small accounts that dubbed themselves “the Moorish movement.” They were actually very much inspired by the American alt-right. They used the same tropes, the same memes of Pepe the Frog and other things that were really taken from the American alt-right, and they were repurposed in the Moroccan context first, and then we saw a reaction from Algeria. They were trying to push a very right-wing, hyper-nationalist, even ethnic supremacist approach where they would really try to troll other users—try to instigate a climate of hyper-nationalism in the country.
It started in Morocco, and the response came from Algeria. It was very short-lived, but somehow it created a little bit of additional tension in the relationship between the two countries.
Jon Alterman: Do you see disinformation and misinformation as very distinct things?
Alessandro Accorsi: They are distinct things. The difference is how deliberate it is. The way I usually explain it is by saying misinformation is your aunt sending a WhatsApp message with something that might be fake or misleading. Disinformation is writing something that we know to be fake and misleading and circulating it on purpose with the intent of deceiving. The two obviously coexist, and disinformation feeds off misinformation. Usually, narratives introduced into an information environment are able to circulate because we have a general misinformation problem.
Jon Alterman: What kinds of impacts do you see disinformation having—especially in terms of discrete impacts, and how often do you see them being the intended impacts?
Alessandro Accorsi: Disinformation has had different goals and different uses over the years. As I mentioned before, at one point, Gulf countries used it to widen their regional hegemony and regional power. It was very interesting because, by tracking networks of trolls and electronic armies linked to Saudi Arabia or the UAE, we could see where the policy line was going. The official discourse coming from the states would go in one direction, but online we saw that different narratives were being circulated and were preparing the ground for a shift in the policy direction.
To give you an example, when OPEC decided to raise the price of oil, the official explanation from Saudi Arabia was that this had nothing to do with tensions between Saudi Arabia and the United States, and that it was done only for economic purposes. Online, the trolls writing in Arabic were pushing a very anti-United States line, saying that this was done in opposition to the Biden administration. We've seen this kind of game played several times—for example, around the Abraham Accords, and also in relation to normalization with Israel in general and intra-Palestinian politics.
Jon Alterman: Why does this matter in non-democratic societies where public opinion doesn't shape government action?
Alessandro Accorsi: It doesn't matter in terms of public consent, but it does matter in terms of stability, especially in countries that are either in conflict or at risk of conflict. Especially in countries that are experiencing an active conflict, this matters a lot because civilians rely on information to make life-or-death decisions. If we take Sudan, for example, or Gaza, people fleeing violence rely on sources of information to decide where to flee, where to go, where to receive aid, where to gather, but also, they rely on technology, not only on social media, to receive aid.
We push for the digitalization of aid in humanitarian planning and delivery. We collect far more data now, everything is digital, and this has consequences for civilians. Information operations can aim to confuse people so that they flee in the wrong direction, putting them in harm's way, or to slow down, for example, the advance of an armed force. We saw that a lot during the Tripoli offensive. We've seen it in Sudan. It can also affect policy responses and humanitarian aid responses, including emergency responses.
We've seen this in Gaza. If you have campaigns targeting UNRWA, campaigns targeting NGOs and targeting civilian facilities, it becomes, first, less safe for emergency responders to deliver aid. It becomes much harder to gather information on the needs of the population and what is needed and how to deliver it, and it becomes much harder for civilians to receive that kind of aid.
Jon Alterman: I can imagine why governments would be interested in maybe passively whipping up some nationalist support in the face of a shift in oil production. I have a harder time understanding why governments would want to put more civilians at risk in the midst of domestic conflicts.
Alessandro Accorsi: It's not always the intention to put more civilians at risk. Sometimes it can be. What's clear, and it's becoming clearer, is that information operations are a very cheap tool in war and conflict. At best, they can be an insurance policy to undermine the stability of a country, to prolong conflict, to undermine peace resolution and mediation efforts. It's also a cheap way to prop up the image of one party in the conflict compared to the other, to create areas of influence, to prop up new players. Most of the time there is an unintended consequence: civilians are put in harm's way. Information operations, information warfare, electronic warfare, and internet and communication shutdowns are increasingly used as tools of war.
Usually, it is thought that they help on the military battlefield and that they're used for military reasons. In reality, their impact on military operations is insignificant, but they have huge consequences on civilian lives. Then, obviously, in the post-conflict situation, as I said, it's an insurance policy. It's a way of undermining the stability of a country, of exerting some influence to control the space.
Jon Alterman: Russians are very active in this space. They've been active in this space for a long time. One of the places they've been active is Syria. As you look at it, were the Russians successful in their information operations despite having a failed Syria policy, or was the failure of their information operations part of a broader failure of policy, or were the information operations just too marginal to actually tip things decisively one way or the other in a conflict like Syria?
Alessandro Accorsi: Information operations alone cannot win conflicts. If you have a bad policy, or you don't have enough resources, they're not going to win a conflict. They can be very helpful for tactical uses in trying to gain a little bit of advantage. Information operations were instrumental for Russia in Syria in the period when Russia made Syria a priority in their foreign policy, where they intervened in support of the Assad regime.
They created enough space to use chemical attacks, not only once but possibly multiple times, to bomb civilians in Aleppo, to commit atrocities, and also to keep some space for the Syrian regime to survive longer than it probably could have. Sixty days ago, before the Assad regime fell, we were talking about normalization with Syria. Everyone was taking for granted that Syria was a safe place for refugees to return—part of one of the Russian narratives—that the regime was no worse than the alternatives, and that it was done: that Assad could just waltz back into meetings with foreign powers, and that the regime and Russia had won the war.
They were successful in hiding prolonged instances of resistance throughout the years, even in regime-held areas. When people protested, for example, against the cut of subsidies on bread, in several instances when there was dissent inside regime-held areas, they were very successful in playing those concerns down and projecting a certain image. We see now what happened. Syria is facing a transition where the fall of the Assad regime did not only leave a vacuum of power, security, and governance, it also left an information vacuum.
The Assad regime, together with Russia and Iran, had created an information “iron dome” that shielded the population under its control from narratives coming from outside. Over 14 years, the Syrian opposition created a strong civil society and strong independent media, but all in the diaspora, or in Idlib and the northeast. The population inside regime-held areas was bombarded with propaganda every day. They were told that the opposition were just terrorists. That created sectarian tensions, and the fragmentation of the information iron dome left a large part of the Syrian population primed for misinformation and disinformation. No real old-regime media survives and operates today.
The fracturing of that ecosystem allowed many domestic and international actors, including Russia, to run their own influence operations to fuel hatred, incitement, and sectarian tensions—at best, using them as an insurance policy to make sure that the political transition is unstable, and, at worst, actually fomenting a catalytic event that could plunge the country into sectarian violence and a new civil war.
Jon Alterman: As somebody who first went to Syria in 1991, I think the iron dome long predated the uprising of 2011. It may have been fine-tuned, but certainly the distrust that Syrians had for any information source was cultivated by the government for many, many decades.
Alessandro Accorsi: Oh, yes. The regime had been in place since the '50s. This information iron dome has long been there. I went to Syria in 2008, and I remember one night a friend of mine finally decided to lock himself in my room with me. He asked me to take all the blankets we had in the house, cover our heads, and hide under them, and he started whispering—he started talking politics. The first thing he told me was, "Now, next time you see all the friends you meet here, check the shoes. If they're polished, they're Mukhabarat"—state intelligence agents. They were everywhere; they were controlling everything.
Jon Alterman: Is there a difference between the kinds of information operations in chronic cases versus acute cases? I'm thinking of the immediate uproar in the Arab world in October 2023 when there was a claim that Israel had bombed al-Ahli Hospital in Gaza. Shortly thereafter, it came out that it was unlikely to be an Israeli bomb. By then, the narrative had already taken hold, and it really flipped the Arab world and suddenly got Arab publics engaged. Does acute disinformation or misinformation work differently than the chronic, multi-decade drip-drip that we've just been talking about?
Alessandro Accorsi: Definitely. Partly because of how our attention span works, partly because of how social media works. There's a short news cycle. We are fed with trending information, so the news of the day or the news of the week dominates public conversation and can often trickle down into different echo chambers and bubbles that reach people that are not even on social media. The ongoing Gaza war has been very different from previous wars in Gaza.
In previous wars, starting from 2014, when social media first started playing a role, what happened was that Israel started massive bombing campaigns, applying their “Dahiya doctrine,” and Palestinians on the ground in Gaza turned to social media to document what was happening and share it with the rest of the world. After nearly two months of war, President Obama decided that it was enough, and the war had to stop. From then on, we saw changes in the Israeli state's strategic communications and influence operations. They understood that, from a military point of view, they had to contest that space. The IDF became more active on its YouTube channel, which it had opened earlier, and started posting the videos we've grown very familiar with, such as drone footage of a hospital or mosque being bombed with an explanation like: “This is a terrorist base, or a base for Hamas, or where they store weapons.”
At the same time, they also started doing what has been called “digital militarism” inside Israel, trying to spread militarism and nationalism using social media, especially Instagram, to create a critical mass of people who could then be mobilized online to counter and contest this space.
We saw this again in 2021, to the point that the failure of platforms to adequately moderate content and conversations online led, on one side, to over-enforcement, to the point that the hashtag “Sheikh Jarrah” around the protests, that preceded the war in Gaza, was effectively banned from Facebook and Instagram, and millions of Palestinians and people in the Arab world lost trust in the platform.
On the other side, the lack of moderation of content in Hebrew led to the circulation of all sorts of misinformation and incitement, especially among WhatsApp groups but also private Facebook groups, that contributed to escalating tensions inside Israel-proper and mob violence in mixed communities against Palestinian citizens of Israel.
In the first three months of the ongoing war, Israel did not try to contest the center. They did not go for moderate international public opinion, despite the horrors of October 7. Instead, they tried to create two different echo chambers, and everyone was forced to take sides. You were either on the side of the Palestinians, hence a supporter of Hamas and all of that, or you were standing with Israel.
Incidents like al-Ahli Hospital became a battle of narratives, and for every single incident that happened afterward, it was very hard to conduct proper investigations on the ground because access was restricted. For every fact, there was an alternative version of events. Take the so-called “Flour Massacre” in February 2024: Palestinians approaching a humanitarian convoy to get flour were attacked. Israel says that the Palestinians were trying to attack the soldiers protecting the convoy, that shots were fired in the air, and that people died in the resulting stampede.
Palestinians and human rights organizations claimed that the people who died were not killed in a stampede but were shot, and that Israeli soldiers opened fire. The moment the incident happened and news started circulating, accounts online were already planting the seeds of an alternative version of events. They used the same look and feel as open-source intelligence investigations—drone footage, edited footage, analysis, geolocation, all that kind of stuff—to present a different version of events.
What happened by separating people into two echo chambers is that, when people are bombarded with contrasting information on topics that are extremely complex to make sense of, they retreat to their own confirmation biases. That's what happened: everyone went back to believing the facts they wanted to believe. That created a complete separation in narratives, which created enough space, at least until February, for the continuation of the war.
Jon Alterman: Do you see Hamas and Hamas supporters also engaging in misinformation, and do you think, in doing so, they fell into an Israeli trap?
Alessandro Accorsi: Yes, although I would make a distinction. First, Hamas used influence operations from the early morning of October 7. The first images of October 7 and the attacks Hamas conducted were circulated in private Telegram groups that reached Israeli citizens before their own media could report on them. That was a psychological effect Hamas had produced and prepared by setting up these groups: a shock-and-awe operation amplified via social media and GoPro videos, really meant to delay any Israeli counter-operation and incite a strong response from Israel.
When it comes to pro-Palestinians, I do think that they fell into a trap, because we saw, especially at the beginning, a lot of accelerationist groups, antisemitic groups, and right-wing groups trying to establish a narrative that October 7 was a false flag attack, and all that kind of stuff. They also fell into a very easy trap, at least at the beginning, because this was an asymmetric conflict, not only on the ground but also in the information space. You don't have the same level of organization: it's pro-Palestinian supporters, not a Palestinian state. Second, there's an asymmetry in diplomatic and political power. Israel can do “platform diplomacy.” It can actively lobby platforms like YouTube, Facebook, and Twitter to take content down. Israel launched more than a thousand requests for the takedown of content in the first month of the war, whereas Palestinians couldn't do that.
Then there was another problem. The over-enforcement of moderation, in some cases, led to the censoring of Palestinian speech. Again, it happened on Instagram, and it happened on Facebook. It led many pro-Palestinians to think that they were outnumbered and outplayed, and that they were not playing on a level field or by the same rules. That led them to fall into the trap of using, at times, misinformation and disinformation to try to fight back or steer the narrative back.
Jon Alterman: What kinds of trends do you see developing in the information space in the Middle East?
Alessandro Accorsi: We've seen a shift from state-linked accounts to accounts that are run by small networks, sometimes linked to PR companies. It's harder to attribute their actions to a specific actor. We saw this, for example, at the beginning of the war in Sudan, with some weird, small coordinated inauthentic behavior trying to prop up the profile of the RSF internationally. It seems that these operations were run by a PR company based in Dubai. We don't know enough to attribute it to any government. We often see these kinds of influence operations run by these sorts of networks.
In Syria right now, there's a lot of misinformation and disinformation. Part of it, we suspect, is backed by regional and international powers—Russia, Iran, China, and others. A lot of the networks are domestic, though. It's the result of the fragmentation of the previous networks that made up the iron dome. At first, they went quiet and disappeared, and then they started reemerging. We imagine that's partly the remnants of the old regime. Some of these operations seem to involve bots and inauthentic accounts linked either to countries in Southeast Asia known to have bot and click farms, like Vietnam and Indonesia, or to accounts that appear to be from Israel and the UAE. Those accounts aren't necessarily linked to the government of Israel or the government of the UAE. They might be linked to PR companies operating the networks for money. That is one big transformation we're seeing: these operations are becoming smaller and more deceptive, and harder to pin down to a specific actor.
AI is having an impact on these influence operations, in this gun-for-hire disinformation industry. Everyone felt that generative AI, and the broader AI revolution, was going to lead to deepfakes everywhere and imposter content. We don't actually see that that much, especially because there is no lack of content.
The way AI is having an impact is by allowing this industry to create avatars, to create fake personas that can have an online account and that can be controlled by a single person.
A single person can control thousands of these accounts, and each of them has specific characteristics—a tone, a style, certain things it might be doing—generating speech and written content on its own and amplifying certain narratives or otherwise assisting influence operations.
Jon Alterman: I certainly see that the quality of the grammar in the phishing emails I get has improved. These are targeted phishing emails—spear phishing attempts that come from hostile actors in the Middle East. I get several a year, and they're getting better.
Alessandro Accorsi: That is a huge problem because there is a wealth of personal data that can be mined and that can be used by malign actors to target people: from big operations using AI to target civilians in military operations, to even just stealing personal data for doxing and for exposing people online—to the point where small groups of people can organize to kill or threaten someone. We saw that in Iraq in 2017. There was a huge rise in targeted assassinations with personal data that was circulated online. We've also seen it in other places.
Jon Alterman: What's the policy response to all the things that we've been talking about? Clearly, it's not one thing, because we've talked about a complicated problem, but ICG is known for having a lot of recommendations for a lot of different actors. For somebody listening to this and saying, "Oh my God, we have to do something." What do you think are the most important things to do, especially from an outside perspective? What are the things that really affect Western policy, Western security, and what are the appropriate Western responses to these phenomena?
Alessandro Accorsi: From a Western perspective, we only consider the information space a critical security liability or threat when it attacks our own interests. We only pay attention when Russia is conducting information operations in Africa, and then everyone freaks out, or against the United States or against Ukraine or against Europe, but we tend not to consider the information space a security priority.
To me, the information space is a security priority because a lot of the stability of countries, especially countries that are not democratic, is based on the stability of their information space. If you have a culture of independent media, professional media, that will be reflected in the quality of your institutions overall. Information is key, not only to win conflicts, but also to build stability, to govern transitions, to secure countries. It is a security interest that the United States and the EU should have.
Unfortunately, we see that the tide is going in a different direction. Platforms are disengaging; platforms are increasingly abdicating their social responsibilities. X has completely destroyed everything that Twitter was in terms of content moderation. Meta, for now, is only applying changes in the United States. We'll see what happens in the rest of the world, but it's in the process of de-institutionalizing a culture of safety and content moderation that was built over the past 10 years—not even eight years after the deadly failures in Myanmar, Ethiopia, Cameroon, and other places where, for instance, Facebook's platform contributed to the genocide of the Rohingya people.
TikTok is trying but is not very transparent. Western countries—usually Western—give some money for independent media, for media support. However, they never view it as a policy priority.
There are a lot of things that can be done. It's not only moderating content on social media and doing it better; it's not only giving money to independent media. It's also applying better standards for the protection of data and better infrastructure standards—ensuring that internet access is guaranteed, and that Gaza will be rebuilt.
Obviously, from a donor's perspective, this is not the main priority for Palestinians today, but Gaza should be rebuilt not with a 2G network but with at least a functioning 4G network—one that can be independent of Israeli control, that can continue operating at all times, and that can foster a digital economy that makes people safer and helps reliable information circulate.
Jon Alterman: Alessandro Accorsi, thank you very much for joining us on Babel.
Alessandro Accorsi: Thanks to you.