5 research outputs found

    Restrictive Voting Technique for Faces Spoofing Attack

    Face anti-spoofing has become increasingly important with the growing use of biometric authentication systems that rely on facial recognition, and it is critical for preventing unauthorized access. In this paper, we propose a modified majority-voting scheme that ensembles the votes of six classifiers over multiple video chunks to improve the accuracy of face anti-spoofing. Our approach samples sub-videos of 2 seconds each with a one-second overlap and classifies each sub-video with every classifier. We then combine the per-chunk classifications across all classifiers to decide the classification of the complete video. We focus on the False Acceptance Rate (FAR) metric to emphasize the importance of preventing unauthorized access. We evaluated our method on the Replay-Attack dataset and achieved a zero FAR. We also report the Half Total Error Rate (HTER) and Equal Error Rate (EER) and obtain better results than most state-of-the-art methods. Our experimental results show that the proposed method significantly reduces the FAR, which is crucial for real-world face anti-spoofing applications.
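
    The abstract does not spell out the exact voting rule, but the chunking and ensembling pipeline it describes can be sketched roughly as follows. This is an illustrative Python sketch, not the authors' code: it assumes one plausible "restrictive" rule in which a single spoof vote from any classifier on any 2-second chunk rejects the whole video, and the names split_into_chunks and restrictive_vote, as well as the classifier interface, are hypothetical.

```python
# Illustrative sketch of restrictive voting over overlapping video chunks.
# Not the authors' implementation; the decision rule and names are assumed.

def split_into_chunks(frames, fps, chunk_sec=2, overlap_sec=1):
    """Yield overlapping sub-videos of `chunk_sec` seconds with `overlap_sec` overlap."""
    chunk_len = int(chunk_sec * fps)
    step = int((chunk_sec - overlap_sec) * fps)
    for start in range(0, max(len(frames) - chunk_len + 1, 1), step):
        yield frames[start:start + chunk_len]

def restrictive_vote(frames, fps, classifiers):
    """Accept the video as genuine only if every classifier on every chunk votes genuine.

    `classifiers` is a list of callables mapping a chunk to 1 (real) or 0 (attack).
    A single 'attack' vote rejects the video, which pushes the FAR toward zero
    at the cost of a higher rejection rate for genuine users.
    """
    for chunk in split_into_chunks(frames, fps):
        votes = [clf(chunk) for clf in classifiers]
        if any(v == 0 for v in votes):   # any spoof vote -> reject the whole video
            return 0                     # classified as attack
    return 1                             # classified as real
```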

    Replayed video attack detection based on motion blur analysis

    Face presentation attacks are the main threat to face recognition systems, and many presentation attack detection (PAD) methods have been proposed in recent years. Although these methods achieve significant performance against some specific intrusion modes, difficulties remain in addressing replayed video attacks, because replayed fake faces contain a variety of liveness signals, such as eye blinking and facial expression changes. Replayed video attacks occur when attackers try to invade a biometric system by presenting face videos in front of its camera, and these videos are usually displayed on a liquid-crystal display (LCD) screen. Due to the smearing effects and movements of the LCD, videos captured from real and replayed fake faces exhibit different motion blurs, reflected mainly in blur intensity variation and blur width. Based on these observations, a motion blur analysis-based method is proposed to deal with the replayed video attack problem. We first present a 1D convolutional neural network (CNN) for describing motion blur intensity variation in the time domain, which consists of a series of 1D convolutional and pooling filters. Then, a local similar pattern (LSP) feature is introduced to extract blur width. Finally, the features extracted from the 1D CNN and LSP are fused to detect replayed video attacks. Extensive experiments on two standard face PAD databases, i.e., Replay-Attack and OULU-NPU, indicate that the proposed method based on motion blur analysis significantly outperforms state-of-the-art methods and shows excellent generalization capability.
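
    As one way to make the first stage concrete, the following is a minimal PyTorch sketch of a small 1D CNN that maps a per-frame blur-intensity sequence to a real-vs-replay score. The abstract does not give the actual architecture, so the layer sizes, the sequence length, and the class name BlurIntensity1DCNN are assumptions for illustration only; the LSP feature and the fusion step are not shown.

```python
# Minimal sketch (assumed architecture, not the authors' exact network):
# a small 1D CNN over a per-frame blur-intensity sequence.
import torch
import torch.nn as nn

class BlurIntensity1DCNN(nn.Module):
    def __init__(self, seq_len=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),  # temporal convolution
            nn.ReLU(),
            nn.MaxPool1d(2),                              # halve the sequence length
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (seq_len // 4), 2)  # real vs. replay logits

    def forward(self, x):
        # x: (batch, 1, seq_len) blur-intensity values over time
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Example: score a batch of 8 blur-intensity sequences of length 64.
model = BlurIntensity1DCNN(seq_len=64)
logits = model(torch.randn(8, 1, 64))
```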
