Social Media Platforms Were Not Ready for Hamas Misinformation

Millions of internet users flocked to X (formerly known as Twitter), Facebook, Instagram, and TikTok on October 7 to monitor real-time developments related to Hamas’s attack on Israel. What they found—especially on X—looked noticeably different from previous global crises. One video, which claimed to depict a Hamas fighter shooting down an Israeli helicopter, was actually taken from the video game Arma 3. A second viral clip, which purported to show Israeli airstrikes in Gaza, more likely displayed a fireworks celebration following a soccer match in Algeria. Numerous accounts—some using pseudonyms, others posing as news agencies—passed off imagery from past months or from different geographic locations as current footage of the violence in either Israel or Gaza. Falsified reports of U.S. policy actions—evacuating the U.S. embassy in Lebanon, allocating $8 billion to aid Israel’s defense—circulated as well.

Although many journalists attempted to post links to reliable, fact-checked reporting throughout the weekend, X readers had to sift through a significant amount of misinformation, conspiracy theories, and nonsensical spam to find them. The influx of bots hurt journalists too, making it harder to distinguish legitimate on-the-ground sources from unsubstantiated or debunked ones. But the worst content was actually real: Hamas reportedly took advantage of the chaos to plant violent and graphic images on X and Telegram, following a long pattern of extremist organizations exploiting international attention to further their cause or spread a message. Although X claimed to block some Hamas-affiliated accounts, many internet users reposted those images, sometimes accompanied by out-of-context or inflammatory captions, allowing graphic or harmful material to spread in a more decentralized fashion.

False or violent online content is not a new problem, but it reached extreme heights on X this past weekend. X—once a popular destination for real-time news—functions very differently today than it did 20 months ago, when Russia invaded Ukraine. When Elon Musk assumed leadership in late 2022, he enacted numerous policy changes that amplified the spread of false, harmful, and inflammatory content. Almost immediately, Musk laid off a large share of the company’s content moderation staff, shut down the advisory Trust and Safety Council, and reinstated a number of accounts that had previously been banned for spreading hate speech. He then revamped the verification system in ways that allow bad actors to appear legitimate, enabling fraud, impersonation, and extremism. In addition, he stopped labeling accounts affiliated with Iranian, Russian, and Chinese state media—many of which weighed in on Hamas and Israel this past weekend—and removed headlines from news article links shared on the platform. Going further, Musk withdrew X from the European Union’s voluntary Code of Practice on Disinformation, reneging on the company’s previous pledge—made under former CEO Jack Dorsey—to uphold transparency standards, demonetize disinformation, and improve media literacy across the bloc.

Meanwhile, the internal policies of Facebook, Instagram, TikTok, and YouTube ban Hamas-affiliated accounts on paper, although these platforms have generally struggled to handle real-time crises in practice. Due to the large volume of content that appears online—likely tens of millions of posts per day—platforms often rely on automated systems to flag harmful words or phrases. However, current algorithms are notoriously imperfect; they lack a human understanding of the cultural contexts, nuances, and meanings behind word patterns or associations. Their accuracy rates decline further when faced with photos or videos, real-time livestreams, and non-English languages. Despite these known technical challenges, Meta and YouTube both laid off large numbers of their trust and safety workers earlier this year, deepening their reliance on algorithmic methods. Although Meta stated on October 10 that it is employing Hebrew and Arabic content reviewers amid the ongoing conflict, the exact scale of this investment is unclear—as of 2020, 60 percent of Arabic-language content remained a black box to the company. In 2021, whistleblower Frances Haugen revealed that Meta spent 87 percent of its misinformation resources on English-language content, even though only 9 percent of its user base primarily speaks English.

Social media platforms have built their business models around targeting advertisements to users, and many have designed their algorithms to maximize user engagement. For this reason, extremely shocking or polarizing content can go viral within a matter of minutes or hours, whether or not a platform eventually detects and removes it. TikTok, for example, automatically suggests videos based on users’ predicted interests—allowing those with extremist viewpoints or political affiliations to remain within their filter bubbles. This algorithmic design also allows sensationalist content to gain visibility easily, driven by viewership instead of accuracy. Facebook, in contrast, modified its news feed algorithm in 2018 to stop prioritizing verified news organizations and instead boost content from friends and family in individuals’ networks. However, this approach had unintended consequences as well: people are more likely to follow and believe those who share similar viewpoints, which reinforces polarization and engagement with misinformation. These decisions caught up with social media platforms this weekend, when countless users interacted with false or harmful videos while information shared by legitimate news or government organizations was obscured.

The rise in misinformation on Israel and Palestine comes as social media is becoming a less welcoming environment for news organizations—even as journalists play a more critical role than ever. In particular, Meta is undergoing a major shift in its relationship with the news media amid recent laws like Canada’s Online News Act, Australia’s News Media Bargaining Code, and the European Union’s Copyright Directive, which aim to require qualifying technology platforms to pay news publishers for hosting their articles. Facebook and Instagram began blocking news links and content in Canada in August 2023, stating—controversially—that they do not receive material benefits from hosting them. Even after Threads (Meta’s new text-based app, widely seen as a challenger to X) saw a spike in users amid this weekend’s online chaos, company executives doubled down on statements that the app would not actively recommend news articles to users.

Hamas’s invasion occurred roughly six weeks after the European Union’s Digital Services Act (DSA) came into effect on August 25 for very large online platforms (VLOPs), making the conflict a major early test for technology regulators. Shortly after the October 7 attack, Commissioner Thierry Breton sent public letters to X, Meta, and TikTok demanding answers on their compliance with the DSA’s transparency, notice-and-action, and public safety mandates. In particular, the DSA requires VLOPs to publicly explain how their content moderation algorithms work, act on user complaints of illegal content, and mitigate “societal and economic” risks to fundamental rights in their design. If the European Commission determines that a digital platform has violated these rules, it can issue fines of up to 6 percent of the platform’s global annual turnover. Meanwhile, the UK Parliament approved the landmark Online Safety Bill on September 19; once the bill receives royal assent, it will require user-to-user platforms to proactively detect and remove content that is illegal or harmful to children, including any promotion of terrorism.

It is too early to tell how the DSA and Online Safety Bill might affect the online discourse surrounding Israel and Palestine or future conflicts. While these laws aim to prevent illegal activities on the internet, that category does not always encompass mis- or disinformation. Due to free speech concerns, the UK Parliament chose to exclude disinformation from the final version of the Online Safety Bill, and the DSA largely leaves false content to the European Union’s voluntary Code of Practice on Disinformation. Even if platforms had more advanced technologies and unlimited resources to root out specific messages or posts, there is often no universal consensus on when or how to classify online expression as “misinformation,” “disinformation,” or “extremism,” especially in a rapidly developing crisis. While many types of content clearly violate either laws or companies’ terms of service, others may fall into a more subjective space between political speech and harm. For example, social media executives have previously split over whether to allow political leaders to express harmful sentiments “if it is newsworthy and in the public interest.” They face difficult trade-offs between mitigating the spread of untrue content and allowing information access and political expression to flourish—particularly on a mass scale and under time pressure.

Social media platforms were not prepared to handle the flood of false and harmful content surrounding the Hamas attack. To prevent a further descent into chaos, technology companies need to significantly upgrade their content moderation algorithms, scale up user flagging systems, expand cultural and language competency, and ramp up overall staffing levels. But these technical investments could take years, and even then, there will always be unresolved questions over the legal or ethical boundaries of content moderation. In the short term, the best option for seeking reliable information about current events might be the low-tech one: turn off social media and pick up a local newspaper.

Caitlin Chin-Rothmann is a fellow with the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C.
