
    On Generative Adversarial Network Based Synthetic Iris Presentation Attack And Its Detection

    The human iris is considered a reliable and accurate modality for biometric recognition due to its unique texture information. This reliability and accuracy have prompted large-scale deployment of iris recognition for critical applications such as border control and national identification projects. The extensive growth of iris recognition systems has raised concerns about their susceptibility to various presentation attacks. In this thesis, a novel iris presentation attack using deep-learning-based synthetically generated iris images is presented. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, a new framework, named iDCGAN, is proposed for creating realistic-appearing synthetic iris images. An in-depth analysis of the quality-score distributions of real and synthetically generated iris images is performed to understand the effectiveness of the proposed approach. We also demonstrate that synthetically generated iris images can be used to attack existing iris recognition systems. Since synthetic iris images can be effectively deployed in presentation attacks, it is important to develop accurate iris presentation attack detection algorithms that can distinguish such synthetic iris images from real ones. For this purpose, a novel structural and textural feature-based iris presentation attack detection framework (DESIST) is proposed. The key emphasis of DESIST is on a unified framework for detecting a medley of iris presentation attacks, including synthetic iris. Experimental evaluations showcase the efficacy of the proposed DESIST framework in detecting synthetic iris presentation attacks.
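    Below is a minimal, illustrative sketch of the kind of DCGAN-style generator such a framework builds on, written in PyTorch. The layer sizes, the 64x64 single-channel output, and the `Generator` class name are assumptions made for illustration; the actual iDCGAN architecture and its iris-quality-metric filtering are defined in the thesis.

```python
# Minimal DCGAN-style generator sketch (PyTorch). Layer widths, image size,
# and names are illustrative assumptions, not the thesis's exact iDCGAN.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector z to a 64x64 single-channel iris-like image."""
    def __init__(self, latent_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized training images
        )

    def forward(self, z):
        # z: (batch, latent_dim) -> reshape to (batch, latent_dim, 1, 1)
        return self.net(z.view(z.size(0), -1, 1, 1))

# Sample a batch of synthetic iris candidates; a quality-score filter
# (as iDCGAN applies via iris quality metrics) would then discard poor samples.
g = Generator()
fake = g(torch.randn(8, 100))  # (8, 1, 64, 64)
```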

    Taming Self-Supervised Learning for Presentation Attack Detection: De-Folding and De-Mixing

    Biometric systems are vulnerable to Presentation Attacks (PA) performed using various Presentation Attack Instruments (PAIs). Even though there are numerous Presentation Attack Detection (PAD) techniques based on both deep learning and hand-crafted features, generalizing PAD to unknown PAIs remains a challenging problem. In this work, we empirically show that the initialization of the PAD model is a crucial factor for generalization, which is rarely discussed in the community. Based on this observation, we propose a self-supervised learning-based method, denoted DF-DM. Specifically, DF-DM is based on a global-local view coupled with De-Folding and De-Mixing to derive a task-specific representation for PAD. During De-Folding, the proposed technique learns region-specific features that represent samples in a local pattern by explicitly minimizing a generative loss. De-Mixing, in turn, drives the detector to obtain instance-specific features with global information for a more comprehensive representation by minimizing an interpolation-based consistency loss. Extensive experimental results show that the proposed method achieves significant improvements for both face and fingerprint PAD on more complicated and hybrid datasets compared with state-of-the-art methods. When trained on CASIA-FASD and Idiap Replay-Attack, the proposed method achieves an 18.60% Equal Error Rate (EER) on OULU-NPU and MSU-MFSD, exceeding the baseline performance by 9.54%. The source code of the proposed technique is available at https://github.com/kongzhecn/dfdm.
    Comment: Accepted by IEEE Transactions on Neural Networks and Learning Systems (TNNLS).
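    As a rough illustration of the interpolation-based consistency idea behind De-Mixing, the PyTorch sketch below penalizes the gap between the features of a mixed input and the same mixture of per-input features. The `encoder` argument, the Beta-distributed mixing coefficient, and the MSE form of the loss are assumptions; the exact DF-DM objective is given in the paper.

```python
# Hedged sketch of an interpolation-based consistency term in the spirit of
# De-Mixing: features of a mixed input should match the same mixture of the
# individual features. Not the paper's exact formulation.
import torch
import torch.nn.functional as F

def demixing_consistency_loss(encoder, x1, x2, alpha=0.4):
    """Penalize deviation between f(mix(x1, x2)) and mix(f(x1), f(x2))."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x1 + (1.0 - lam) * x2       # pixel-space interpolation
    f_mix = encoder(x_mix)                    # features of the mixed input
    with torch.no_grad():                     # treat per-input features as targets
        f_target = lam * encoder(x1) + (1.0 - lam) * encoder(x2)
    return F.mse_loss(f_mix, f_target)
```

    Added to the main PAD training loss, a term of this shape encourages the detector's global, instance-specific features to behave linearly under input interpolation.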

    Preliminary Forensics Analysis of DeepFake Images

    One of the most terrifying phenomena nowadays is the DeepFake: the possibility of automatically replacing a person's face in images and videos by exploiting deep-learning-based algorithms. This paper presents a brief overview of technologies able to produce DeepFake images of faces. A forensics analysis of those images with standard methods is presented: not surprisingly, state-of-the-art techniques are not fully able to detect the fakeness. To address this, a preliminary idea on how to fight DeepFake images of faces is presented, based on analysing anomalies in the frequency domain.
    Comment: Accepted at IEEE AEIT International Annual Conference 202
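    A common way to analyse such frequency-domain anomalies is the azimuthally averaged power spectrum of an image, where GAN upsampling tends to leave high-frequency artifacts. The NumPy sketch below computes that 1-D profile; it is an illustrative sketch, not the paper's exact pipeline, and any thresholding or classifier on top of it is assumed.

```python
# Hedged sketch: 2-D FFT of a grayscale image, reduced to an azimuthally
# averaged power spectrum. GAN-generated faces often show bumps at high
# frequencies that a simple classifier on this 1-D profile can pick up.
import numpy as np

def radial_power_spectrum(img):
    """Return the 1-D azimuthally averaged power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx).astype(int)
    # Average the power over rings of equal integer radius.
    radial_sum = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return radial_sum / np.maximum(counts, 1)

# Real photographs typically show a smoothly decaying profile; deviations at
# high radii are the kind of anomaly a frequency-domain detector looks for.
spectrum = radial_power_spectrum(np.random.rand(256, 256))
```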