Picture this: you’re scrolling through social media and come across a video of a celebrity. They’re sitting and talking to the camera, a pretty normal occurrence, yet something seems off. You get a nagging feeling at the back of your head that you just can’t brush off, a sense that the video isn’t quite right. Not only does what they’re saying seem wrong, but they look slightly off, slightly mechanical. Welcome to the world of deepfakes. A deepfake is a video in which artificial intelligence and various algorithms make a person appear to say or do something they never did. If you’re like me, chances are you’ve come across deepfakes of celebrities or politicians saying funny things they never said. A quick laugh, a share with your friends, and you gloss over it as just a funny video. And while some deepfakes do appear humorous and light-hearted on the surface, their darker uses are spreading far too quickly for anyone to catch up.
Deepfake technology is advancing rapidly and becoming increasingly difficult to spot, with the line between real and fake growing blurrier and blurrier. This is a major concern wherever video or audio is relied on as trustworthy evidence. General distrust of certain media types is nothing new, but this emerging uncertainty about video evidence can have detrimental effects. If we continue on our current path, soon no video, whether of a politician, a celebrity, or an ordinary citizen, will be trusted as truth. There will have to be constant scrutiny over what counts as factual, which is damaging not only in criminal investigations but also for public trust and opinion. If every video we see might have been manipulated, the general public won’t know where to look for the truth.
Yet an even more concerning aspect is that deepfakes are becoming more widely available for everyday people to use and abuse. This is a huge issue considering that 96% of deepfake videos are pornographic. A Vox article points out how celebrity women are being inserted into explicit videos without their consent. This practice of non-consensually placing women into these types of videos is horrifying, especially given the lack of regulation surrounding it. It seems that as the world advances, more and more ways emerge to objectify women. The terrifying things about deepfakes are both the difficulty of catching them and their growing accessibility.
This came to a head with a recent incident in which a Pennsylvania mother created deepfake videos of a group of young girls. The girls were cheerleading rivals of her daughter, and the mother created a variety of videos and images in hopes of getting them kicked off the team. She sent manipulated videos and photos of the girls naked, drinking, and smoking to the coaches and to other members of the cheerleading team. Police determined that the deepfakes were created by mapping the girls’ Instagram photos onto other images. While the incident was addressed and the mother is currently awaiting trial, it goes to show just how easily deepfake technology can be obtained and the horrifying ways people can use it.
Advancements in technology bring many benefits. Yet with them emerges a darker side, often rooted in objectifying and over-sexualizing women, and deepfakes are no different. Non-consensually putting women (some of them underage) into pornographic videos not only reveals the disgusting, perverted tendencies so deep-rooted in our society, but also shines a light on the overall lack of regulation around advancing technologies. Should technology companies step up? Should the social media platforms on which the videos spread intervene? Should it be up to the U.S. government? Aspects of technology are advancing so rapidly that regulations and laws can’t keep up, but they need to. The issue of deepfakes isn’t ending anytime soon, and the longer we put off properly addressing it, the more widespread and damaging it will become.