Left to Shoulder the Worries of AI

Redefining Battlefields in Society

The human body and gender identity continue to be crucial sites of contestation, a fact unlikely to change with the rise of artificial intelligence (AI), machine learning–curated images, and deepfakes. The reality is that our bodies are becoming a new battleground online as a mix of state and nonstate actors seek to undermine trust in society. This assault falls heaviest on women and requires the U.S. body politic to meet the challenge by extending legal protections, through the bipartisan Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (DEFIANCE Act), against artificially created, nonconsensual explicit material targeting women and children.

Stratified Struggles: Economic Inequities and Digital Threats against Women

Time and again, women bear the brunt of unpaid labor and societal pressures. From child-rearing and adhering to arbitrary beauty standards to facing systemic economic disadvantages, women navigate a labyrinth of challenges. Despite constituting 51 percent of the U.S. workforce, women's earnings stagnate at 82 cents to a man's dollar—a meager increase of two pennies over two decades. This disparity widens when viewed through the lens of racial and ethnic inequity—Black women earn 70 percent and Latina women earn 65 percent of what white men earn. While income represents a significant aspect of socioeconomic inequality, the advent of the internet and social media has intensified challenges related to beauty standards and self-esteem, both of which deeply impact personal identity and well-being. Moreover, these new means of connectivity have also created new avenues for political coercion.

The rise of social media platforms paradoxically increased global interconnectedness during the mid-to-late 2000s while amplifying societal pressures on young women. Specifically, research has shown that social media platforms like Facebook and Instagram impact young women’s mood, satisfaction with their bodies, and perceptions of weight and shape. Social media thus threatens self-esteem by increasing body image dissatisfaction. One study tested posting selfies and found that this practice led young women to feel more anxious, less confident, and less physically attractive. This damage to self-esteem is compounded by those who post nonconsensual pornographic material, dubbed “revenge pornography,” to further destroy women’s reputations and violate their rights. A survey conducted by the Cyber Civil Rights Initiative found that of revenge porn victims, 90 percent are women, 82 percent suffered significant impairment in social and occupational areas, and 55 percent fear that the professional reputation they have built up could be tarnished even decades into the future.

The Gendered Impact of Digital Deception: Deepfakes

Recently, revenge porn has taken a new form. Known as deepfake pornography or “AI porn,” it is prompt driven and artificially created. This type of digitally altered and generated material gained notoriety in 2017 when actress Gal Gadot’s face was superimposed onto the body of an adult film performer. This new vector of attack follows an increase in mis-, dis-, and malinformation (MDM) generated by malicious actors. These campaigns aim to distort reality, exert influence over individuals, and destroy reputations and trust in both people and institutions.

For example, one campaign fabricated a “secret meeting” between Nancy Pelosi, Alexandria Ocasio-Cortez, and the president-elect during the January 6 insurrection. The deepfake includes a conversation between officials stating, “The public doesn’t fear us anymore and you better do something,” to which one official responds, “this was a mostly peaceful protest,” before ending with “we can’t allow those who showed up to get away with it.” This deepfake is a clear-cut example of undermining trust in institutions and officials by targeting the president-elect, the first woman speaker of the U.S. House of Representatives, and a progressive woman of color. This pattern is also international; in one example related to the ongoing Israel-Hamas War, users shared a deepfake video of Shani Louk, a murdered festival-goer, reanimated to highlight the atrocities committed against her and others. The video was shared without her consent and with the purpose of pursuing political gain or sympathy. This reaffirms the point that AI deepfake technology now poses another immense challenge for women to shoulder.

Contributing further evidence-based findings to this topic, the Center for Strategic and International Studies recently unveiled a study shedding more light on the deepfake conundrum. This research, which surveyed public opinion on cybersecurity vulnerabilities within the federal government, revealed a significant gender divide: men are about 27 percent less likely than women to view deepfakes as a serious threat. This statistic not only reinforces the existence of a gender divide in perceptions of cybersecurity threats but also underscores the need for tailored awareness and educational initiatives that address and bridge this gap for men.

The findings show a return to countervalue targeting and underscore the significance of attacks on noncombatants and critical infrastructure. Simply put, “cyber war” in the eyes of the public surveyed was not a realm of knockout blows to military command networks. The survey experiments and games showed that modern political warfare through cyberspace finds greater utility in sowing discord by disrupting basic-needs services associated with food and medical assistance. During discussions, participants went further and discussed the role of deepfakes in polarizing society; examples included the manipulation of federal labor statistics, statements by leaders in civil service, and even the disruption of payment systems. As 2024 ushers in more than 50 elections across the globe, these findings foreshadowed what civil servants are experiencing in Mexico, India, and the European Union.

Legal Protections against Digital Exploitation

The study reveals what is already apparent: women often find themselves isolated in their efforts to safeguard their images and reputations against digital manipulation. However, the landscape of legal protection is not barren. By 2024, a legislative framework had emerged, with 48 states enacting laws to criminalize revenge pornography. And, following the momentum generated by the “Swift Army,” four states have introduced legislation specifically targeting AI-generated pornography. This legislative progress is timely as the misuse of deepfake technology now extends to child exploitation. Recent revelations that school boards have avoided discussing the spread of these images, from middle school through high school, spotlight the dual threats of cyberbullying and child exploitation. This evolving crisis calls for a fortified legal and social response to protect the most vulnerable from the perils of AI misuse.

The Shortcomings of Current Responses

Historically, the method for addressing nonconsensual videos has been through straightforward, submitted requests for takedowns—tackling the problem one case at a time. This approach, however, merely acts as a temporary solution, failing to address the systemic, core issue. Real change requires a stable, dedicated, and concerted movement rather than a spontaneous one like that mobilized by Taylor Swift’s fans, the Swifties, who rallied to protect Swift’s image, reputation, and mental well-being against the onslaught of AI porn. In January 2024, an AI challenge on an online media platform led to the creation and spread of deepfake pornography featuring Taylor Swift. These images, disseminated across social media platforms like Telegram and X (previously Twitter), prompted Swift’s fanbase to initiate a digital counteroffensive. They flooded X with authentic photos to overshadow the deepfakes, prompting the platform to implement a blanket ban on searches related to Taylor Swift, which achieved partial success in stemming the spread. Taken together, the Swifties’ response and X’s intervention highlight a makeshift and unsustainable strategy in the broader battle against AI-generated pornography targeting women. Acknowledging the limitations of such short-term fixes paves the way to confronting a wider societal apprehension on this matter, one that echoes the broader population’s varying perceptions of digital authenticity and the safety of women and children.

Forging Protections for Gender Dignity in the Digital Age

The call to action from policymakers is not just overdue—it's a pressing necessity as women continue to be victimized by this technology. Ensuring the safety of women from AI-generated harm shouldn't necessitate the mobilization of a superstar's fanbase; it ought to be a fundamental action, undertaken because safeguarding the rights and dignity of women—acknowledged globally as a vulnerable group—is the right thing to do. This battle, distinct from the ongoing struggles for reproductive rights or to close the gender pay gap, zeroes in on the basic human right of women to live free from abuse. It's a fight not without precedent: in 2003, the U.S. Congress enacted the PROTECT Act, aimed at shielding children from computer-generated pornography. More recently, the Federal Bureau of Investigation (FBI) has been investigating and arresting individuals for prosecution under this law. In fact, the Department of Justice (DOJ) has successfully prosecuted individuals responsible for creating AI-generated explicit content of children, securing convictions with penalties up to 40 years in prison. There is debate among legal scholars about the need to update legislative language regarding the depiction of unclothed minors, similar to a new law in the state of Washington. Nonetheless, these recent convictions and warnings by the FBI demonstrate a firm legislative and enforcement foundation on which to expand protections against digital abuses targeting women.

The next legislative step involves enabling the DOJ and the FBI to comprehensively address violations against all women, irrespective of their background, occupation, or status. To achieve this, legislators have introduced amendments to 15 U.S.C. § 6851 through the enactment of the bipartisan DEFIANCE Act. This act expands upon the foundations laid by the PROTECT Act, redefining legal protections for women by classifying what were formerly known as “computer-generated images” as “digital forgeries.” The DEFIANCE Act aims to provide victims with clearer legal recourse and improved definitions for enforcement. Moreover, it establishes the potential for victims to recover damages in civil suits up to $150,000. The enactment of this legislation would affirm Congress’s dedication to safeguarding all citizens against these burgeoning threats, establishing a robust legal framework that not only deters such malicious activities but also upholds the dignity and rights of women in the digital era.

Jose M. Macias III is a research associate in the Futures Lab within the International Security Program at the Center for Strategic and International Studies (CSIS) in Washington, D.C. Audrey Aldisert is a research assistant in the International Security Program at CSIS. Benjamin Jensen is a senior fellow in the Futures Lab at CSIS.