Abstract artists' rendering of a computer-generated face mask.
shuttersv/Shutterstock.com

The first film, Roundhay Garden Scene, was shot just over 130 years ago. In all the time since, the humans we've seen in live-action films have been real people, but since the advent of computer graphics, that's started to change.

We’re Leaving the Uncanny Valley

Human faces have been the hardest thing to replicate convincingly using computer graphics (CG) or even practical special effects. Human brains are supremely adapted to recognize faces, so much so that we see faces that aren't even there. Fooling audiences into believing that a CG face is real almost never works, and even the attempts that come close often fall into the so-called "uncanny valley."

The uncanny valley is a point on the continuum of facial realism where we start to feel creeped out by artificial human faces. Some films, such as The Polar Express, are notorious for it. Advances in processing power and rendering methods, as well as machine learning solutions like facial deepfakes, are changing this situation. Even real-time graphics on modern gaming consoles can get very close to photorealistic faces.

The Matrix Awakens Unreal Engine 5 demo shows this off in stunning fashion. It runs on a humble home console, yet in many scenes both the reproductions of real actors and the original CG characters look real. Another fantastic example is the Netflix anthology series Love, Death + Robots, some episodes of which feature CG faces that are almost impossible to distinguish from real ones.

The Golden Age of Performance Capture

A person in a motion capture outfit being filmed in a movie studio.
Gorodenkoff/Shutterstock.com

Leaving the uncanny valley is about more than just making a photorealistic face; you also have to get the movement of the character's face and body right. Filmmakers and video game developers now have the tools to precisely capture the movements and facial expressions of actors like Andy Serkis, who specializes in digitally captured performances.


Most of the pioneering work was evident in James Cameron’s Avatar, which still holds up today. But there’s no shortage of great modern motion capture work to admire. Thanos, from the Marvel cinematic universe, is another notable and more recent example.


The strength of performance capture is that the performer doesn’t have to look at all like the CG character. They don’t even have to be the same species! It also means that a single actor or a small group of actors can play an entire cast of characters. You can also have actors and stunt people providing motion data for the same character.

Deepfaked Voices Are Being Perfected

Creating a believable synthetic actor is about more than a photorealistic visual production. Actors use their voices to complete their performances and it’s perhaps just as important as everything else put together. When you want to give your CG actor a voice, you have a few options.

You can use a voice actor, which is fine if it's an original character. If you're recreating a character whose original actor is still alive, you can simply dub in their vocal performance. You can also use an impersonator when you're bringing back someone who has passed away or has a serious scheduling conflict. In Rogue One, when we met a resurrected likeness of Peter Cushing as Grand Moff Tarkin, Guy Henry provided both the voice and the performance capture.

The most interesting example comes from Luke Skywalker's appearance as his young self in The Mandalorian (and later in The Book of Boba Fett). Rather than have Mark Hamill record lines for the scene, which used a stand-in actor with a CG face, AI software was employed.


That software is called Respeecher. By feeding it recordings of Mark Hamill from that era of his career, the production was able to create a synthetic replica of his voice. Audio deepfakes have clearly come a long way in a short time: most viewers didn't even realize they weren't actually listening to Hamill.

Real-Time Deepfakes Are Emerging

Deepfakes are becoming real competition for traditional CG faces. Luke Skywalker's CG face in The Mandalorian didn't look great, but YouTuber Shamook used deepfake technology to spruce it up, dragging it to the more palatable side of the uncanny valley. The VFX YouTubers at Corridor Crew went a step further and reshot the entire scene with their own actor, using deepfake technology instead of a CG face.

The results are amazing, but even high-end computers take a long time to create a deepfake video of this quality. It's nothing like the render-farm requirements of modern CG, but it also isn't trivial. However, computing power marches ever on, and it's now possible to do a certain level of deepfakery in real time. As specialized machine learning chips get better, it may eventually be much faster, cheaper, and more realistic to have deepfake technology handle the faces of synthetic actors.

AI Can Create Original Faces

This Person Does Not Exist

Everyone can recognize Brad Pitt or Patrick Stewart thanks to seeing their faces in films and on TV hundreds of times. These actors use their real faces to portray characters and so we associate their faces with those characters. Things are different in the world of 2D and 3D animation or comic books. Artists create characters that don’t look like any real person. Well, at least not on purpose!

With AI software, it’s now possible to do something similar with photorealistic human faces. You can head over to This Person Does Not Exist and generate a realistic face of someone who isn’t real in seconds. Face Generator takes it further and lets you tweak your imaginary person’s parameters. If those faces still look a little fake to you, check out NVIDIA’s AI face generation software StyleGAN, which is available to everyone as open-source software.


Generated faces like these can be combined with synthesized voices and performance capture to give us a character who doesn't look like any actor who actually exists. Eventually, we may not need a human to provide the performance at all, promising a future where entire stories can be told by a puppeteer in control of a synthetic cast.

The Truth Is Stranger Than Fiction

In the 2002 film S1m0ne, Al Pacino plays a director who discovers experimental software that lets him create a synthetic CG actress out of whole cloth. He puppeteers her to stardom, but eventually the mask slips and the world discovers it's been fawning over someone who never existed.

The trajectory of real-world technology is making an actress like Simone a realistic possibility, no longer science fiction. The only difference is that Simone had to be kept secret or the public would revolt. In the real world, we'll all know our actors are synthetic; we just won't care.

RELATED: How to Deal With Undetectable Deepfakes

Sydney Butler
Sydney Butler has over 20 years of experience as a freelance PC technician and system builder. He's worked for more than a decade in user education and spends his time explaining technology to professional, educational, and mainstream audiences. His interests include VR, PC, Mac, gaming, 3D printing, consumer electronics, the web, and privacy. He holds a Master of Arts degree in Research Psychology with a focus on Cyberpsychology in particular.