A woman's face outlined with a grid. This grid is used to identify her face.
Stanislaw Mikulski/Shutterstock

Most people are familiar with facial recognition from its use in Instagram filters and Face ID. But this relatively new technology can feel a little creepy. Your face is like a fingerprint, and the technology behind facial recognition is complex.

As with any new technology, there are downsides to facial recognition. These downsides are becoming more apparent as the military, the police, advertisers, and deepfake creators find devious new ways to take advantage of facial recognition software.

Now, more than ever, it’s essential for people to understand how facial recognition works. It’s also important to know the limitations of facial recognition and how it will develop in the future.

Facial Recognition Is Surprisingly Simple

Before getting into the many different methods of facial recognition, it’s important to understand how the process of facial recognition works. Here are three applications for facial recognition software, and a simple explanation of how each recognizes or identifies faces:

  • Basic Facial Recognition: For Animoji and Instagram filters, your phone camera “looks” for the defining features of a face, specifically a pair of eyes, a nose, and a mouth. Then, it uses algorithms to lock onto a face and determine which direction it’s looking, if its mouth is open, etc. It’s worth mentioning that this isn’t facial identification; it’s just software looking for faces.
  • Face ID and Similar Programs: Upon setting up Face ID (or similar programs) on your phone, it takes a photo of your face and measures the distance between your facial features. Then, every time you go to unlock your phone, it “looks” through the camera, measures your face again, and confirms your identity.
  • Identifying a Stranger: When an organization wants to identify a face for security, advertising, or policing purposes, it uses algorithms to compare that face to an extensive database of faces. This process is nearly identical to Apple’s Face ID but on a larger scale. Theoretically, any database could be used (ID cards, Facebook profiles), but a database of clear, pre-identified photos is ideal.
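
The matching step described above can be boiled down to "measure the face, then find the closest match in a database." Here's a minimal sketch of that idea; the names, feature vectors, and threshold are all made up for illustration, and real systems use far richer measurements:

```python
import math

# Hypothetical feature vectors: distances between facial landmarks
# (eye spacing, nose-to-mouth distance, etc.), normalized to [0, 1].
DATABASE = {
    "alice": [0.42, 0.31, 0.58, 0.27],
    "bob":   [0.39, 0.45, 0.51, 0.33],
}

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(face, threshold=0.1):
    """Return the closest database entry, or None if nothing is close enough."""
    name, dist = min(
        ((n, euclidean(face, v)) for n, v in DATABASE.items()),
        key=lambda pair: pair[1],
    )
    return name if dist <= threshold else None

print(identify([0.41, 0.30, 0.57, 0.28]))  # close to alice's measurements -> "alice"
print(identify([0.90, 0.90, 0.90, 0.90]))  # matches nobody -> None
```

The threshold is the important knob: too loose and strangers get matched to the wrong person, too strict and the real owner gets rejected.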

Alright, let’s get into the nitty-gritty. Because the “basic facial recognition” used for Instagram filters is such a simple and harmless process, we’re going to focus entirely on facial identification, and the many different technologies that can be used to identify a face.

Most Facial Recognition Relies on 2D Images

As you’d expect, most facial recognition software relies entirely on 2D images. But this isn’t because 2D facial imaging is super accurate; it’s for the sake of convenience. The overwhelming majority of cameras take photos without any depth, and public photos that can be used for facial recognition databases (Facebook profile pictures, for example) are all in 2D.

A man using facial recognition tech to identify a subject from a database.
Zapp2Photo/Shutterstock

Why isn’t 2D facial imaging super accurate? Well, because a flat image of your face lacks identifying features, like depth. With a flat image, a computer can measure your pupillary distance, and width of your mouth, among other variables. But it can’t tell the length of your nose or the prominence of your forehead.
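
In practice, those 2D measurements come from the (x, y) pixel coordinates of detected landmarks. Here's a toy sketch (the landmark positions are invented); note that raw pixel distances depend on how far the camera was from the face, which is one more reason flat images are tricky:

```python
import math

# Made-up (x, y) pixel coordinates for a few detected landmarks.
landmarks = {
    "left_pupil":  (120, 140),
    "right_pupil": (184, 142),
    "mouth_left":  (130, 220),
    "mouth_right": (176, 222),
}

def distance(a, b):
    """Pixel distance between two named landmarks."""
    return math.dist(landmarks[a], landmarks[b])

pupillary_distance = distance("left_pupil", "right_pupil")
mouth_width = distance("mouth_left", "mouth_right")

# Pixel distances change with camera distance, so ratios between
# measurements are more useful identifiers than the raw values.
ratio = mouth_width / pupillary_distance
print(round(ratio, 3))
```
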

Additionally, 2D facial imaging relies on the visible light spectrum. This means that 2D facial imaging doesn’t work in the dark, and it can be unreliable in funky or shadowy lighting conditions.

Clearly, the way around some of these shortcomings is to use 3D facial imaging. But how is that possible? Do you need special equipment to see a face in 3D?

IR Cameras Add Depth to Your Identity

While some facial recognition applications rely solely on 2D images, it isn’t uncommon for facial recognition to rely on 3D imaging as well. In fact, your experience with facial recognition probably involves a pinch of 3D.

This is achieved through a technique called lidar, which is similar to sonar. Essentially, face scanning devices, like your iPhone, blast a harmless IR matrix at your face. This matrix (a wall of lasers) then reflects off your face and gets picked up by an IR camera (or ToF camera) on your phone.

A woman using Face ID, or a similar IR-based facial recognition technology.
Prostock-Studio/Shutterstock

Where does the 3D magic happen? Your phone’s IR camera measures how long it takes for each bit of IR light to bounce off of your face and return to the phone. Naturally, the light that reflects off of your nose will have a shorter journey than the light that reflects off your ears, and the IR camera uses this information to create a unique depth map of your face. When used alongside basic 2D imaging, 3D imaging can significantly increase the accuracy of facial recognition software.
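
The arithmetic behind that depth map is just "distance equals speed times time, halved for the round trip." The sketch below uses made-up round-trip times; real ToF sensors typically measure phase shifts rather than timing individual pulses, but the principle is the same:

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def round_trip_to_depth(seconds):
    """The light travels to the face and back, so halve the round trip."""
    return SPEED_OF_LIGHT * seconds / 2

# Illustrative (made-up) round-trip times for two points on a face.
nose_time = 2.00e-9   # 2.00 nanoseconds
ear_time  = 2.27e-9   # 2.27 nanoseconds -> the ear is slightly farther away

nose_depth = round_trip_to_depth(nose_time)  # ~0.30 m from the camera
ear_depth  = round_trip_to_depth(ear_time)   # ~0.34 m from the camera
print(round((ear_depth - nose_depth) * 100, 1), "cm")  # prints "4.0 cm"
```

A few centimeters of difference between your nose and ears is exactly the kind of detail a flat 2D photo can't capture, which is why the depth map adds so much accuracy.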

Lidar imaging is a weird concept that can be difficult to wrap your head around. If it helps, try to imagine that the IR mesh from your phone (or any facial recognition device) is a pin-board toy. Like a pin-board toy, your face leaves an indentation in the IR mesh, where your nose is noticeably deeper than, say, your eyes.
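
The pin-board analogy maps directly onto how a depth map is stored: a grid of numbers, one per point of the IR matrix. A tiny invented example:

```python
# A tiny depth map: each cell is a distance from the camera in centimeters.
# Smaller numbers are closer -- the nose "pokes out" toward the camera,
# just like the deepest pin on a pin-board toy.
depth_map = [
    [34, 33, 32, 33, 34],
    [33, 31, 30, 31, 33],
    [33, 30, 28, 30, 33],   # the 28 is the nose tip
    [33, 31, 30, 31, 33],
]

closest = min(min(row) for row in depth_map)
print(closest)  # prints 28 -- the nose
```
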

Thermal Imaging Lets Facial Recognition Work at Night

One of the shortcomings of 2D facial recognition is that it relies on the visible spectrum of light. In layman’s terms, basic facial recognition doesn’t work in the dark. But this can be worked around by using a thermal imaging camera (yeah, like in Tom Clancy).

“Wait a minute,” you might say, “doesn’t thermal imaging rely on IR light?” Yes, it does. But thermal imaging cameras don’t send out blasts of IR light; they simply detect the IR light that objects emit. Warm objects emit a ton of IR light, while cold objects emit a negligible amount. Expensive thermal imaging cameras can even detect subtle temperature differences across a surface, so the technology is ideal for facial recognition.

Three photos. The first is from the visible light spectrum, the second is a still thermal image, and the third is a composite thermal image.
A visible light spectrum image, a thermal image, and a composite thermal image. Polaris Sensor Technologies Inc

There are a handful of different ways to identify a face with thermal imaging. All of these techniques are incredibly complicated, but they share some fundamental similarities, so we’re going to try to keep things simple with a list:

  • Multiple Photos Are Needed: A thermal imaging camera takes multiple pictures of a subject’s face. Each photo focuses on a different spectrum of IR light (long, short, and medium waves). Typically, the long wave spectrum provides the most facial detail.
  • Blood Vessel Maps Are Useful: These IR images can also be used to extract the formation of blood vessels in a person’s face. It’s creepy, but blood vessel maps can be used like unique facial fingerprints. They can also be used to find the distance between facial features (if typical thermal imaging yields shoddy pictures) or to identify bruises and scars.
  • The Subject Can Be Identified: A composite image (or dataset) is created using multiple IR images. This composite image can then be compared to a facial database to identify the subject.
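
The compositing step above amounts to combining the per-band frames pixel by pixel. Here's a heavily simplified sketch using made-up intensity values; a real system would align the frames and weight the bands rather than taking a plain average:

```python
# Three made-up thermal "frames" of the same face, one per IR band.
# Values are pixel intensities from the sensor.
frames = [
    [[10, 12], [14, 16]],   # long-wave (usually the most facial detail)
    [[ 8, 11], [13, 15]],   # medium-wave
    [[ 9, 10], [12, 14]],   # short-wave
]

# Build a composite by averaging the bands pixel by pixel.
rows, cols = len(frames[0]), len(frames[0][0])
composite = [
    [sum(f[r][c] for f in frames) / len(frames) for c in range(cols)]
    for r in range(rows)
]
print(composite)  # prints [[9.0, 11.0], [13.0, 15.0]]
```

The resulting composite is what gets compared against the facial database, just like a 2D photo would be.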

Of course, thermal facial recognition is mostly used by the military; it isn’t something you’ll find at Kohl’s, and it isn’t something that’ll come with your next cellphone. Plus, thermal imaging doesn’t work well in the daytime (or in generally well-lit environments), so it doesn’t have many potential applications outside of the military.

Limitations of Facial Recognition

We’ve spent a lot of time talking about the shortcomings of facial recognition. As we’ve seen from IR and thermal imaging, it’s possible to overcome some of these limitations. But there are still a few problems that haven’t been figured out just yet:

  • Obstruction: As you’d expect, sunglasses and other accessories can trip up facial recognition software.
  • Poses: Facial recognition works best with a neutral, frontward-facing image. A tilt or turn of the head can make facial recognition difficult, even for IR-based recognition software. Additionally, a smile, puffed cheeks, or any other pose can change how a computer measures your face.
  • Light: All forms of facial recognition rely on light, whether it’s visible spectrum or IR light. As a result, weird lighting conditions can decrease the accuracy of facial identification. This may change, as scientists are currently developing sonar-based facial recognition technology.
  • The Database: Without a good database, facial recognition can’t work. Along these same lines, it’s impossible to identify a face that hasn’t been identified correctly in the past.
  • Data Processing: Depending on the size and format of a database, it can take a while for computers to identify faces correctly. In some situations, like policing, limitations in data processing restrict the use of facial identification for everyday applications (which is probably a good thing).

As of right now, the best way around these limitations is to use other forms of identification in conjunction with facial recognition. Your phone will ask for a password or a fingerprint if it fails to identify your face, and the Chinese government uses ID cards and tracking technology to close the margin of error that exists in its facial recognition network.

In the future, scientists will surely find a way to get around these issues. They may use sonar technology alongside lidar to create 3D face maps in any environment, and they may find ways to process face data (and identify strangers) in an incredibly short amount of time. Either way, this technology has a lot of potential for abuse, so it’s worth keeping up with.

Sources: The University of Rijeka, The Electronic Frontier Foundation

Andrew Heinzman
Andrew Heinzman writes for How-To Geek and Review Geek. Like a jack-of-all-trades, he handles the writing and image editing for a mess of tech news articles, daily deals, product reviews, and complicated explainers.