
Taylor Swift Deepfakes: Depicting Misogyny in the New AI World

In the past few months, my social media feed, like many others’, has been filled with Taylor Swift. From football to the Grammys, Swift has been plastered across headlines and has garnered enormous amounts of both positive and negative attention. However, alongside pictures of her at award shows and football games, deepfakes of her have begun to circulate on the internet. A deepfake is an image or video manipulated, often with AI, to look real. Sadly, many of these images tend to be pornographic and created without the consent of the subject. Sexually explicit deepfakes of Taylor Swift surfaced across internet platforms in late January. Social media platform X (formerly known as Twitter) was the most affected, with specific images reaching over 47 million views before being taken down. Outraged fans fought the images using the hashtag #ProtectTaylorSwift.

The attention these images received prompted Congress to make a statement regarding deepfakes, referencing Swift as a recent victim in a press conference. Currently, there is no federal law that applies to deepfakes; however, now that popstar Swift has been a victim of AI violence, the dearth of legislation has come under scrutiny. This awareness led Senators Dick Durbin, Lindsey Graham, and Josh Hawley to introduce the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act. The DEFIANCE Act would give victims of non-consensual “digital forgeries” a federal civil remedy, with a ten-year statute of limitations. Although bipartisan support shows promise for new laws, constantly evolving technology demands legislation that protects victims now. As Durbin’s office stated in a news release, “The laws have not kept up with the spread of this abusive content.”

The circumstances surrounding the proposed bill are unique in that it was Taylor Swift’s fans who prompted the push for legislation. Mass-reporting of the images and a public outcry pushed X to remove them from its platform; X went so far as to block searches for Swift’s name entirely. It is disappointing that legislators only seem to care about non-consensual digital pornography when it affects Taylor Swift rather than the hundreds of other women this technology targets. Legislators, it seems, only take notice when Swift’s fanbase, made up mostly of women, draws attention to the issue. This reality shows that women are always tasked with protecting other women before others will take action. Noelle Martin, a survivor of image-based abuse, says, “Everyday women like me will not have millions of people working to protect us and to help take down the content, and we won’t have the benefit of big tech companies, where this is facilitated, responding to the abuse.” Even seventeen-year-old Marvel actor Xochitl Gomez saw X fail to remove pornographic deepfakes of her. It is important to recognize that the privilege of having an audience capable of such action applies mainly to white celebrities.

This privilege is important to keep in mind when analyzing the immense dangers of AI deepfakes. The degrading images have the potential to affect victims’ jobs, relationships, and reputations, even though the images are not real. Laura Bates, author of Men Who Hate Women, was herself a victim of edited explicit images. She states, “There’s something really visceral about seeing an incredibly hyper-realistic image of yourself in somebody’s extreme misogynistic fantasy of you.” Deeptrace Labs conducted a study in which “96 percent of all deepfake videos were pornographic and nonconsensual. The top four websites dedicated to hosting deepfakes received a combined 134 million views on such videos, and on such websites, a full 100 percent of the videos’ subjects were women.” These images also have the potential to harm news and politics, but the more immediate problem is the hundreds of women being degraded by the technology.

Much of the media coverage warning about AI deepfakes focuses on potential political effects and the spread of false news rather than on the harm to women. In fact, an analysis by LinkedIn showed that “only 22% of AI professionals globally are female.” The lack of women’s voices in the AI space limits the protection women will have from technology used for degradation and misogyny. Although it should not fall to women alone to put protections in place, the current situation makes change vital. The lack of women in the tech industry silences the minority of women who are there.

The statistics bear this out: a 2019 report found that “non-Western female subjects featured in almost a third of deepfake pornography websites, with South Korean K-pop singers making up a quarter of the subjects targeted.” Although hundreds of women are being targeted, Taylor Swift, a white woman, is the singular name that has generated enough awareness to mobilize Congress. Although Taylor Swift can be a catalyst, we need to make sure her publicity does not minimize the effects of deepfakes on others.
