37 research outputs found
Face Spoof Detection from Single Image Using Various Parameters
Automatic face recognition is now widely used to verify identity during authentication of online payments on mobile phones and personal computers. Biometric presentation attacks can be performed to gain access to these systems by presenting a photo or video of an authorized person, so it is important to study the various face spoof attacks. Currently proposed face spoof detection techniques have limited generalization ability, because they do not consider all relevant factors and do not detect the spoofing medium. Four features, namely specular reflection, blurriness, chromatic moment and color diversity, are used to analyze image distortion. Separate classifiers are trained for the printed-photo attack and the video-replay attack to distinguish genuine faces from spoofed ones. We also propose an approach to detect the spoofing medium by checking the boundary of the captured image during photo and video attacks, and an approach to detect eye blinking for liveness detection. The proposed method achieves higher efficiency than existing methods.
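The abstract names four image-distortion features but does not give formulas. As a rough, non-authoritative illustration of how such features might be computed, the sketch below uses common proxies: variance of the Laplacian for blurriness, the fraction of near-saturated pixels for specular reflection, per-channel mean and standard deviation for chromatic moments, and a count of coarsely quantized colors for color diversity. All function names and thresholds are assumptions, not the paper's actual definitions.

```python
import numpy as np

def distortion_features(img):
    """Sketch of four image-distortion features for spoof detection.

    img: H x W x 3 float array with values in [0, 1].
    Returns (blurriness, specular_fraction, chromatic_moments, color_diversity).
    """
    gray = img.mean(axis=2)

    # Blurriness proxy: variance of a 4-neighbour Laplacian response
    # (spoofed recaptures tend to be blurrier, so this variance is lower).
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    blurriness = float(lap.var())

    # Specular-reflection proxy: fraction of near-saturated bright pixels
    # (screens and glossy prints often add bright highlights).
    specular_fraction = float((gray > 0.95).mean())

    # Chromatic moments: mean and std of each colour channel (6 values).
    chromatic_moments = np.concatenate([img.mean(axis=(0, 1)),
                                        img.std(axis=(0, 1))])

    # Colour diversity: number of distinct colours after coarse
    # 8-level-per-channel quantisation (recaptured images lose colours).
    quantized = (img * 7).astype(int).reshape(-1, 3)
    color_diversity = int(len(np.unique(quantized, axis=0)))

    return blurriness, specular_fraction, chromatic_moments, color_diversity
```

In a full system, a feature vector like this would be fed to the per-attack classifiers the abstract mentions (e.g. an SVM trained separately for printed-photo and video-replay attacks).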
PipeNet: Selective Modal Pipeline of Fusion Network for Multi-Modal Face Anti-Spoofing
Face anti-spoofing has become an increasingly important and critical security
feature for authentication systems, due to rampant and easily launchable
presentation attacks. Addressing the shortage of multi-modal face datasets,
CASIA recently released the largest up-to-date CASIA-SURF Cross-ethnicity Face
Anti-spoofing (CeFA) dataset, covering 3 ethnicities, 3 modalities, 1607
subjects, and 2D plus 3D attack types in four protocols, and focusing on the
challenge of improving the generalization capability of face anti-spoofing in
cross-ethnicity and multi-modal continuous data. In this paper, we propose a
novel pipeline-based multi-stream CNN architecture called PipeNet for
multi-modal face anti-spoofing. Unlike previous works, a Selective Modal Pipeline
(SMP) is designed to enable a customized pipeline for each data modality and
take full advantage of multi-modal data. A Limited Frame Vote (LFV) is designed
to ensure stable and accurate prediction for video classification. The proposed
method won third place in the final ranking of the Chalearn Multi-modal
Cross-ethnicity Face Anti-spoofing Recognition Challenge@CVPR2020. Our final
submission achieves an Average Classification Error Rate (ACER) of 2.21 with
a standard deviation of 1.26 on the test set.
Comment: Accepted to appear in CVPR2020 WM
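The abstract does not detail how Limited Frame Vote aggregates per-frame predictions into a video-level decision. As a minimal sketch of the general idea, the function below keeps a limited number of the most confident frames and takes a majority vote over them; the selection rule, the frame budget `k`, and the function name are all assumptions for illustration, not PipeNet's actual mechanism.

```python
def limited_frame_vote(frame_probs, k=9):
    """Sketch of vote-based aggregation over a limited set of frames.

    frame_probs: per-frame spoof probabilities in [0, 1].
    k: maximum number of frames allowed to vote.
    Returns True if the video is classified as a spoof.
    """
    # Keep at most k frames, preferring the most confident ones
    # (those whose probability is farthest from the 0.5 decision boundary).
    voters = sorted(frame_probs, key=lambda p: abs(p - 0.5), reverse=True)[:k]
    # Majority vote among the selected frames.
    spoof_votes = sum(p > 0.5 for p in voters)
    return spoof_votes > len(voters) / 2
```

Restricting the vote to a fixed frame budget is one plausible way to keep video-level predictions stable when clip lengths vary, which matches the stated goal of LFV.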