Enhanced Deep Learning Architectures for Face Liveness Detection for Static and Video Sequences

Abstract

The major contribution of this research is the development of deep architectures for face liveness detection on static images as well as video sequences that use a combination of texture analysis and deep Convolutional Neural Networks (CNNs) to classify the captured image or video as real or fake. Face recognition is a popular and efficient form of biometric authentication used in many software applications. One drawback of this technique is that it is prone to face spoofing attacks, where an impostor can gain access to the system by presenting a photograph or recorded video of a valid user to the sensor. Thus, face liveness detection is a critical preprocessing step in face recognition authentication systems. The first part of our research addressed face liveness detection on static images, where we applied nonlinear diffusion, based on an additive operator splitting scheme and a tridiagonal matrix block-solver algorithm, to the image, which enhances the edges and surface texture in the real image. The diffused image was then fed to a deep CNN to identify the complex and deep features for classification. We obtained high accuracy on the NUAA Photograph Impostor dataset using one of our enhanced architectures. In the second part of our research, we developed an end-to-end real-time solution for face liveness detection on static images, in which, instead of a separate preprocessing step for diffusing the images, the diffusion process and the CNN were combined into a single architecture. This integrated approach gave promising results with two different architectures on the Replay-Attack and Replay-Mobile datasets. We also developed a novel deep architecture for face liveness detection on video frames that uses image diffusion followed by a deep CNN and a Long Short-Term Memory (LSTM) network to classify the video sequence as real or fake. Performance evaluation of our architecture on the Replay-Attack and Replay-Mobile datasets gave very competitive results. We also performed liveness detection on video sequences using diffusion and the Two-Stream Inflated 3D ConvNet (I3D) architecture, and our experiments on the Replay-Attack and Replay-Mobile datasets gave very good results.
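The preprocessing stage described above applies nonlinear diffusion solved with an additive operator splitting (AOS) scheme, in which each semi-implicit diffusion step reduces to tridiagonal linear systems along the image rows and columns. The Python/NumPy sketch below illustrates one common way such a step can be implemented; the Perona-Malik-style diffusivity, the contrast parameter `lam`, the step size `tau`, and the iteration count are illustrative assumptions and not necessarily the exact settings used in this work.

```python
import numpy as np
from scipy.linalg import solve_banded


def _diffuse_1d(u, g, tau):
    """Semi-implicit 1-D diffusion step along the last axis.

    Solves (I - 2*tau*A(g)) v = u row by row, where A(g) is the
    tridiagonal 1-D diffusion matrix built from the diffusivity g.
    """
    n = u.shape[-1]
    v = np.empty_like(u)
    for r in range(u.shape[0]):
        # Diffusivities on the half-grid (between neighbouring pixels).
        gh = 0.5 * (g[r, :-1] + g[r, 1:])
        upper = np.zeros(n)
        lower = np.zeros(n)
        diag = np.ones(n)
        upper[1:] = -2.0 * tau * gh      # super-diagonal of (I - 2*tau*A)
        lower[:-1] = -2.0 * tau * gh     # sub-diagonal of (I - 2*tau*A)
        diag[:-1] += 2.0 * tau * gh      # coupling to the right neighbour
        diag[1:] += 2.0 * tau * gh       # coupling to the left neighbour
        ab = np.vstack([upper, diag, lower])   # banded storage for solve_banded
        v[r] = solve_banded((1, 1), ab, u[r])  # tridiagonal solve for this row
    return v


def aos_diffusion(img, n_steps=5, tau=5.0, lam=0.1):
    """Nonlinear diffusion of a grayscale image with the AOS scheme.

    Illustrative sketch only: lam, tau and n_steps are assumed values.
    Each iteration computes u_{k+1} = 0.5 * [(I - 2*tau*A_x)^-1 +
    (I - 2*tau*A_y)^-1] u_k, i.e. the average of two 1-D implicit steps.
    """
    u = img.astype(np.float64)
    for _ in range(n_steps):
        gy, gx = np.gradient(u)
        # Perona-Malik-type edge-stopping diffusivity (assumed choice).
        g = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / lam ** 2)
        v_rows = _diffuse_1d(u, g, tau)          # implicit step along rows
        v_cols = _diffuse_1d(u.T, g.T, tau).T    # implicit step along columns
        u = 0.5 * (v_rows + v_cols)              # AOS average of the 1-D solutions
    return u
```

Because each 1-D system is tridiagonal, the per-step cost stays linear in the number of pixels, which is what makes the AOS formulation attractive as a fast preprocessing stage before the CNN.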
