A series of procedural generated faces shown in a grid pattern.
meyer_solutions/Shutterstock.com

Deepfakes make it possible to replicate the voice and appearance of people. A deepfake creator can make that replica say or do almost anything. Even worse, it’s becoming almost impossible to identify a deepfake. How can you deal with this?

Deepfakes in a Nutshell

Deepfakes are named after deep learning technology, a specific type of machine learning method that uses artificial neural networks. Deep learning is an important part of how “machine vision” works. That’s the field of artificial intelligence that allows computer systems to, for example, recognize objects. Machine vision technology makes everything from self-driving cars to Snapchat filters possible.
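To make the "artificial neural network" idea a bit more concrete, here's a toy sketch of a single artificial neuron, the basic building block that deep learning stacks by the millions. The weights and inputs are arbitrary numbers chosen purely for illustration, not from any real model.

```python
import math

# A single artificial neuron: combine weighted inputs, add a bias,
# then squash the result through an activation function. Deep learning
# models for machine vision chain huge layers of units like this one.
def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation, output in (0, 1)

# Made-up inputs and weights, just to show the mechanics:
activation = neuron([0.5, 0.2], [1.0, -1.0], 0.1)
```

Training adjusts those weights automatically over millions of examples, which is how the same simple mechanism ends up recognizing faces or street signs.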

A deepfake uses this technology to swap one person's face for another's in a video. Deepfake technology can now also be applied to voices, so both the face and voice of an actor in a video can be changed to someone else's.

Deepfakes Used to Be Easy to Spot

Wax replicas of Michelle and Barack Obama at a museum.
NikomMaelao Production/Shutterstock.com

In the early days, it was trivial to spot a deepfake. Much like a celebrity wax figure, an early deepfake looked close to real, but anyone viewing one could feel that something was off. As time went by, though, the machine learning algorithms improved little by little.

Today, high-quality deepfakes are good enough that average viewers can't tell they're fake, especially when the videos are somewhat masked by the low-fidelity nature of social media video sharing. Even experts can have a hard time telling the best deepfakes apart from real footage by eye. This means that new tools have to be developed to detect them.

Using AI to Spot Deepfakes

In a real-world example of fighting fire with fire, researchers have come up with AI software of their own that can detect deepfake video, even when humans can't. Smart people over at MIT created the Detect Fakes project to demonstrate how these videos can be detected.
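The details of detectors like Detect Fakes involve trained neural networks, but the general flow is easy to sketch: score individual frames for signs of manipulation, then combine those scores into a verdict for the whole video. This toy example (not MIT's actual method) uses made-up per-frame probabilities and a simple average.

```python
# Toy sketch of deepfake-detection aggregation: a real system would get
# per-frame "fake" probabilities from a trained classifier; here the
# numbers are invented for illustration.
def video_verdict(frame_scores, threshold=0.5):
    """Average per-frame fake probabilities and compare to a threshold."""
    if not frame_scores:
        raise ValueError("no frames scored")
    mean_score = sum(frame_scores) / len(frame_scores)
    return {"score": mean_score, "likely_fake": mean_score >= threshold}

# Hypothetical output from some frame-level classifier:
scores = [0.62, 0.71, 0.58, 0.66]
result = video_verdict(scores)
```

Real detectors are more sophisticated about how they weigh frames and handle compression artifacts, but the idea of turning many weak per-frame signals into one confident video-level answer is the same.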


So although you may not be able to catch these deepfake videos by eye anymore, we can take some comfort in the fact that there are software tools that can do the job. There are already apps available for download that claim to detect deepfakes. Deepware is one example, and as the need for deepfake detection increases, we're sure there will be many more.

So, problem solved? Not quite! The technology to create deepfakes is now in an arms race with the technology to detect it. There may come a point at which deepfakes become so good that even the best AI detection algorithm won't be able to say confidently whether a video is fake or not. We aren't there yet, but deepfakes don't need to reach that point to be a problem for the average person simply browsing the web.

How to Handle a World of Deepfakes

So, if you cannot reliably tell whether a video you see of, for example, a country's president is real, how can you make sure you don't get fooled?

The fact of the matter is that it's never been a good idea to rely on a single source of information. If it's about something important, you should check that multiple independent sources report the same information, and that they aren't all drawing on the potentially fake material itself.

Even without the existence of deepfakes, it's already crucial that internet users stop and validate important information related to topics such as government policy, health, or world events. It's obviously impossible to corroborate everything, but when it comes to the important stuff, it's worth putting in the effort.

It's especially important not to pass on a video unless you're almost 100% sure that it's real. Deepfakes are only a problem because they are shared uncritically. You can be the one who breaks that chain of virality. After all, it takes less effort to hold off on a potentially fake video than to share it.


Additionally, you don’t need the power of a deepfake detecting AI to be suspicious of a video. The more outrageous a video is, the more likely it’s a fake. If you see a video of a NASA scientist saying the moon landing was faked or that his boss is a lizard, it should immediately raise a red flag.

Trust No One?

Young man sitting at a desk with a laptop and a tin foil hat.
Patrick Daxenbichler/Shutterstock.com

Being completely paranoid that everything you see or hear in a video is possibly fake and meant to fool or manipulate you somehow is a scary thought. It’s also probably not a healthy way to live! We’re not suggesting that you need to put yourself in such a state of mind, but rather that you should rethink how credible video or audio evidence is.

Deepfake technology does mean that we need new ways to verify media. There are, for example, people working on new ways to watermark videos so that any alteration can't be hidden. When it comes to you as a regular user of the internet, however, the best thing you can do is err on the side of skepticism. Assume that a video could have been completely altered until it's corroborated by a primary source, such as a reporter who has interviewed the subject directly.
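Real video watermarking schemes are far more elaborate, but the underlying tamper-evidence idea can be sketched with a keyed hash: a publisher computes a cryptographic tag over a file's bytes with a secret key, and any alteration to the bytes changes the tag, so tampering can't go unnoticed by anyone who verifies it. This is a simplified analogy, not an actual watermarking standard; the key and "video bytes" below are placeholders.

```python
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical signing key

def sign(data: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the file's raw bytes."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(data), tag)

original = b"...video bytes..."
tag = sign(original)
# Any change to the bytes, however small, makes verification fail.
```

Unlike this sketch, a proper video watermark has to survive re-encoding and cropping while still exposing malicious edits, which is why it remains an active area of research.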

Perhaps the most important thing is that you should simply be aware of how good deepfake technology is today or will be in the near future. Which, as you’ve now reached the end of this article, you definitely are.

Sydney Butler
Sydney Butler has over 20 years of experience as a freelance PC technician and system builder. He's worked for more than a decade in user education and spends his time explaining technology to professional, educational, and mainstream audiences. His interests include VR, PC, Mac, gaming, 3D printing, consumer electronics, the web, and privacy. He holds a Master of Arts degree in Research Psychology with a focus on Cyberpsychology in particular.