Security researchers continually explore methods of fooling biometric-based security features. The latest is an attack that can defeat face authentication systems using ordinary photos of the target user taken from social media.

Researchers from the University of North Carolina at Chapel Hill were able to create realistic, textured 3D facial models that undermine the security of widely used face authentication solutions.

The researchers leveraged virtual reality (VR) systems and gave the facial model the ability to perform human-like animations – such as smiling or raising an eyebrow – with the aim of tricking liveness detectors into believing the 3D model is a real human face.
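To illustrate the animation idea, an expression can be layered onto a static 3D model as a per-vertex offset from the neutral mesh and blended in over time. The toy sketch below uses this standard blendshape technique; all names and data are illustrative, not the researchers' code.

```python
# Toy blendshape animation: an expression is stored as per-vertex offsets
# from the neutral mesh and blended in with a time-varying weight.
# All names and data here are illustrative, not from the UNC system.
import numpy as np

def animate_smile(neutral: np.ndarray, smile_delta: np.ndarray, t: float) -> np.ndarray:
    """Return mesh vertices with the smile blended in at strength t in [0, 1]."""
    return neutral + t * smile_delta

# Ramping t smoothly from 0 to 1 over about a second yields facial motion
# that a liveness detector may accept as a voluntary smile.
```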

During the attack, the synthetic face of the user is displayed on the screen of the VR device, and as the device rotates and translates in the real world, the 3D face moves accordingly.

The depth and motion cues of the display match what would be expected of a real human face, convincing the face authentication system that the 3D model is indeed a human face.
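One way to see why this works: motion-based liveness checks commonly test whether the face's apparent rotation tracks the device's own motion as reported by its sensors. A simplified, hypothetical version of such a test is sketched below; the function names and threshold are illustrative, not taken from any specific product or from the paper.

```python
# Simplified sketch of a motion-consistency liveness check, assuming the
# authenticator asks the user to move the device while filming their face.
# Names and the threshold are illustrative, not from the UNC paper.
import numpy as np

def motion_consistency_score(face_yaw: np.ndarray, device_yaw: np.ndarray) -> float:
    """Correlate apparent face rotation (from vision) with device rotation (from IMU).

    face_yaw   -- per-frame yaw of the face estimated from the camera feed
    device_yaw -- per-frame yaw of the device reported by its gyroscope
    A real, static face viewed by a moving camera produces apparent rotation
    that tracks the device's own motion; a flat photo does not.
    """
    f = face_yaw - face_yaw.mean()
    d = device_yaw - device_yaw.mean()
    denom = np.linalg.norm(f) * np.linalg.norm(d)
    return float(np.dot(f, d) / denom) if denom > 0 else 0.0

def is_live(face_yaw, device_yaw, threshold=0.9):
    return motion_consistency_score(np.asarray(face_yaw), np.asarray(device_yaw)) >= threshold

# The VR attack defeats exactly this kind of check: the rendered 3D face is
# re-posed in lockstep with the attacker's device motion, so the correlation
# between the two signals stays high.
```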

The researchers argue that such VR-based spoofing attacks constitute what they called a “fundamentally new class of attacks” that points to a serious flaw in camera-based authentication systems.

Unless other sources of verifiable data are incorporated, the researchers argue, systems relying on color image data and camera motion are open to attacks that exploit virtual realism.

This is worrying because face authentication systems have become increasingly popular as an enhanced security feature in both mobile devices and desktop computers.

While the market is segmented based on the type of technology offered (e.g., 2D facial recognition, 3D recognition, and facial analytics/face biometric authentication), Gartner research estimates that the overall market will grow to over $6.5 billion in 2018 (compared to roughly $2 billion today).

The three prominent classes of spoofing attacks are:

  • Still-image-based attacks, which are the easiest to perform. They can be easily countered by detecting the 3D structure of the face (a minimal sketch of one such check follows this list).
  • Video-based spoofing is more difficult to accomplish because facial videos of the target user may be harder to come by.
  • 3D-mask-based approaches, on the other hand, are harder to counter. That said, building a 3D mask is arguably more time-consuming and also requires specialized equipment.
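To illustrate how a still-image spoof can be countered by testing for 3D structure, the sketch below checks whether tracked facial landmarks move like a flat plane between two frames. This is a generic computer-vision planarity test, not the specific countermeasure from the paper, and it assumes landmark tracking is handled by an external detector.

```python
# Illustrative planarity test against printed-photo spoofs. Landmarks are
# assumed to be tracked by an external detector; this is not the paper's code.
import numpy as np
import cv2

def looks_flat(pts_a: np.ndarray, pts_b: np.ndarray, tol_px: float = 2.0) -> bool:
    """True if landmark motion between two frames fits a single homography.

    Points on a planar surface (a printed photo) seen from two camera poses
    are related by one homography; a genuinely 3D face leaves a larger
    reprojection residual, exposing the spoof.
    """
    a = pts_a.astype(np.float32)
    b = pts_b.astype(np.float32)
    H, _ = cv2.findHomography(a, b, cv2.RANSAC)
    if H is None:
        return False
    proj = cv2.perspectiveTransform(a.reshape(-1, 1, 2), H).reshape(-1, 2)
    return float(np.linalg.norm(proj - b, axis=1).mean()) < tol_px
```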

The researchers demonstrated that an accurate facial model can be built using only a handful of publicly accessible photos — collected, for example, from social network websites — of the victim. They identified two main challenges:

  • The number of photos of the target may be limited.
  • For each available photo, the illumination setting is unknown, and the user’s pose and expression are not controlled.

To overcome these challenges, the researchers leveraged robust, publicly available 3D face reconstruction methods from the field of computer vision, and adapted these techniques to fit their needs. Once a credible synthetic model of a user was obtained, they employed entry-level virtual reality displays to defeat the state of the art in liveness detection.
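The core of such reconstruction pipelines is fitting one shared 3D shape to 2D landmarks detected in several photos, each with its own unknown pose. The conceptual sketch below shows that structure using a weak-perspective camera and placeholder model arrays; real systems fit a learned morphable model (such as the Basel Face Model), and nothing here is the researchers' actual implementation.

```python
# Conceptual multi-photo face fitting: one set of shape coefficients shared
# across all photos, one unknown pose per photo, solved by minimizing 2D
# landmark reprojection error. Model arrays below are random placeholders.
import numpy as np
from scipy.optimize import least_squares

N_LM, N_COEFF = 68, 20                       # landmark count, shape-basis size
mean_shape = np.random.randn(N_LM, 3)        # placeholder for real model data
shape_basis = np.random.randn(N_COEFF, N_LM, 3)

def project(shape3d, pose):
    """Weak-perspective projection: scale, in-plane rotation, 2D translation."""
    s, theta, tx, ty = pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return s * shape3d[:, :2] @ R.T + np.array([tx, ty])

def residuals(x, landmarks_per_photo):
    """Stack 2D landmark reprojection errors over all photos."""
    coeffs, poses = x[:N_COEFF], x[N_COEFF:].reshape(-1, 4)
    shape3d = mean_shape + np.tensordot(coeffs, shape_basis, axes=1)
    errs = [project(shape3d, poses[i]) - lm
            for i, lm in enumerate(landmarks_per_photo)]
    return np.concatenate(errs).ravel()

def fit(landmarks_per_photo):
    """landmarks_per_photo: list of (N_LM, 2) arrays, one per photo."""
    x0 = np.concatenate([np.zeros(N_COEFF),
                         np.tile([1.0, 0.0, 0.0, 0.0], len(landmarks_per_photo))])
    return least_squares(residuals, x0, args=(landmarks_per_photo,))
```

The key design point is that the shape coefficients are shared across photos while poses are not, which is what lets a handful of casual social-media shots jointly constrain a single credible 3D model.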

The researchers conclude:
The ability of an adversary to recover an individual’s facial characteristics through online photos is an immediate and very serious threat, albeit one that clearly cannot be completely neutralized in the age of social media. Therefore, it is prudent that face recognition tools become robust against such threats in order to remain a viable security option in the future.

At a minimum, it is imperative that face authentication systems be able to reject synthetic faces with low resolution textures, as we show in our evaluations. Of more concern, however, is the increasing threat of virtual reality, as well as computer vision, as an adversarial tool. It appears to us that the designers of face authentication systems have assumed a rather weak adversarial model wherein attackers may have limited technical skills and be limited to inexpensive materials.

This practice is risky, at best. Unfortunately, VR itself is quickly becoming commonplace, cheap, and easy-to-use. Moreover, VR visualizations are increasingly convincing, making it easier and easier to create realistic 3D environments that can be used to fool visual security systems. As such, it is our belief that authentication mechanisms of the future must aggressively anticipate and adapt to the rapid developments in the virtual and online realms.