8 research outputs found

    Statistical meta-analysis of presentation attacks for secure multibiometric systems

    Prior work has shown that multibiometric systems are vulnerable to presentation attacks under the assumption that the attack's matching score distribution is identical to that of genuine users, i.e., without fabricating any fake trait. We have recently shown that this assumption is not representative of current fingerprint and face presentation attacks, leading one to overestimate the vulnerability of multibiometric systems and to design less effective fusion rules. In this paper, we overcome these limitations by proposing a statistical meta-model of face and fingerprint presentation attacks that characterizes a wider family of fake score distributions, including distributions of known and, potentially, unknown attacks. This allows us to perform a thorough security evaluation of multibiometric systems against presentation attacks, quantifying through an uncertainty analysis how their vulnerability may vary even under attacks that differ from those considered during design. We empirically show that our approach can reliably predict the performance of multibiometric systems even under never-before-seen face and fingerprint presentation attacks, and that the secure fusion rules designed using our approach exhibit an improved trade-off between performance in the absence and in the presence of attacks. We finally argue that our method can be extended to other biometrics besides faces and fingerprints.
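
    The meta-model itself is not reproduced in the abstract, but the kind of uncertainty analysis it describes can be illustrated with a small Monte Carlo sketch. The snippet below assumes hypothetical Gaussian genuine and impostor score distributions and a made-up fake-score family parameterized by an attack-strength value alpha; it is not the paper's meta-model, only an illustration of how the fraction of accepted attacks changes as the fake distribution moves from impostor-like (alpha = 0) to the worst-case, genuine-like assumption (alpha = 1) criticized above.

```python
# Minimal sketch (not the paper's actual meta-model): fake match scores are drawn
# from a hypothetical parametric family that interpolates between the impostor and
# genuine score distributions via an "attack strength" parameter alpha in [0, 1].
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unimodal score model: impostor and genuine scores are Gaussian.
genuine = rng.normal(loc=0.8, scale=0.1, size=100_000)
impostor = rng.normal(loc=0.2, scale=0.1, size=100_000)

# Decision threshold set for FAR ~= 0.1% against zero-effort impostors.
threshold = np.quantile(impostor, 0.999)

def fake_scores(alpha, n=100_000):
    """Fake-score family: alpha=0 behaves like impostors, alpha=1 like genuine users."""
    loc = (1 - alpha) * 0.2 + alpha * 0.8
    return rng.normal(loc=loc, scale=0.1, size=n)

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    iapmr = np.mean(fake_scores(alpha) >= threshold)   # attacks accepted
    gar = np.mean(genuine >= threshold)                 # genuine users accepted
    print(f"alpha={alpha:.2f}  GAR={gar:.3f}  IAPMR={iapmr:.3f}")
```

    Sweeping alpha plays the role of an uncertainty analysis over unknown attacks: it shows how the predicted vulnerability changes when the actual attack differs from the one assumed at design time.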

    Balancing Accuracy and Error Rates in Fingerprint Verification Systems Under Presentation Attacks With Sequential Fusion

    The assessment of fingerprint Presentation Attack Detectors (PADs) embedded into a comparison system is an emerging topic in biometric recognition. Providing models and methods for this purpose helps scientists, technologists, and companies simulate multiple scenarios and gain a realistic view of the consequences for the recognition system. The most recent models aimed at deriving the overall system performance, especially in the sequential assessment of fingerprint liveness and comparison, point to a significant decrease in the Genuine Acceptance Rate (GAR). In particular, our previous studies showed that the PAD contributes predominantly to this drop, regardless of the comparison system used. This paper's goal is to establish a systematic approach to computing the trade-off between the gain in Impostor Attack Presentation Accept Rate (IAPAR) and the loss in GAR mentioned above. We propose a formal trade-off definition to measure the balance between tackling presentation attacks and the performance drop on genuine users. Experimental simulations and theoretical expectations confirm that an appropriate trade-off definition allows a complete view of the potential of sequential embedding.
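
    As a rough illustration of the quantities involved, the sketch below computes a sequential pipeline's GAR and IAPAR from hypothetical PAD and matcher operating points, assuming the two stages err independently. The paper's formal trade-off definition is not reproduced here; this only shows the gain/loss bookkeeping it balances.

```python
# Hedged sketch of sequential PAD + matcher fusion with hypothetical operating
# points and an independence assumption between the two stages.

def sequential_rates(apcer, bpcer, gar, iapmr):
    """Rates of a serial pipeline: PAD first, matcher second.
    apcer: attack presentations wrongly accepted by the PAD
    bpcer: bona fide presentations wrongly rejected by the PAD
    gar:   genuine acceptance rate of the matcher alone
    iapmr: impostor attack presentation match rate of the matcher alone
    """
    gar_seq = (1 - bpcer) * gar      # genuine users must pass both stages
    iapar_seq = apcer * iapmr        # attacks must fool both stages
    return gar_seq, iapar_seq

# Example: a matcher alone vs. the same matcher preceded by a PAD.
gar, iapmr = 0.98, 0.70              # hypothetical matcher-only figures
for apcer, bpcer in [(0.05, 0.01), (0.02, 0.05), (0.10, 0.002)]:
    gar_seq, iapar_seq = sequential_rates(apcer, bpcer, gar, iapmr)
    gain = iapmr - iapar_seq          # reduction in accepted attacks
    loss = gar - gar_seq              # drop in genuine acceptance
    print(f"APCER={apcer:.3f} BPCER={bpcer:.3f} -> "
          f"GAR={gar_seq:.3f} IAPAR={iapar_seq:.3f} gain={gain:.3f} loss={loss:.3f}")
```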

    Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

    Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
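
    A minimal, self-contained example of the test-time evasion attacks surveyed here is sketched below: an FGSM-style perturbation crafted against a toy logistic-regression classifier on synthetic two-dimensional data. The data, model, and perturbation budget are all made up for illustration and have no relation to the paper's experiments.

```python
# Minimal illustration of a test-time evasion (adversarial example) attack against
# a linear classifier, in the spirit of the gradient-based attacks the survey covers.
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-class data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.5, size=(200, 2)), rng.normal(1, 0.5, size=(200, 2))])
y = np.hstack([np.zeros(200), np.ones(200)])

# Train a logistic-regression classifier by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

# Take a class-0 point and craft an FGSM-style perturbation: move it along the
# sign of the loss gradient, which for this sample is sign(w). This pushes its
# score towards class 1 within an L-infinity budget eps.
x = X[0]                               # true label 0
eps = 0.6                              # perturbation budget
x_adv = x + eps * np.sign(w)

score = lambda z: 1 / (1 + np.exp(-(z @ w + b)))
print(f"clean score={score(x):.3f}  adversarial score={score(x_adv):.3f}")
```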

    Cognitive Identity Management: Risks, Trust and Decisions using Heterogeneous Sources

    This work advocates for cognitive biometric-enabled systems that integrate identity management, risk assessment, and trust assessment. The cognitive identity management process is viewed as a multi-state dynamical system, and probabilistic reasoning is used to model this process. This paper describes an approach to designing a platform for risk and trust modeling and evaluation in cognitive identity management, built upon the processing of heterogeneous data including biometrics, other sensory data, and digital IDs. The core of the approach is the perception-action cycle of each system state. The inference engine is a causal network that uses various uncertainty metrics and reasoning mechanisms, including Dempster-Shafer and Dezert-Smarandache beliefs.
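
    As a small, hypothetical illustration of one of the reasoning mechanisms mentioned above, the sketch below applies Dempster's rule of combination to fuse two belief (mass) assignments over a two-hypothesis frame {genuine, impostor}. The paper's actual inference engine is a causal network; the sources and mass values here are invented.

```python
# Minimal sketch of Dempster's rule of combination over a hypothetical frame of
# discernment; not the paper's inference engine.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} over the same frame."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                      # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame of discernment for an identity decision: {genuine, impostor}.
G, I = frozenset({"genuine"}), frozenset({"impostor"})
GI = G | I                                            # ignorance (either hypothesis)

biometric_source = {G: 0.6, I: 0.1, GI: 0.3}          # e.g., a face matcher
digital_id_source = {G: 0.5, I: 0.2, GI: 0.3}         # e.g., a digital-ID check

fused = combine(biometric_source, digital_id_source)
for hypothesis, mass in fused.items():
    print(set(hypothesis), round(mass, 3))
```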

    Fusion of fingerprint presentation attacks detection and matching: a real approach from the LivDet perspective

    Liveness detection is explicitly required of current personal verification systems in many security applications. Indeed, the design of any biometric verification system cannot ignore its vulnerability to spoofing, or presentation attacks (PAs), which must be addressed by effective countermeasures from the beginning of the design process. However, despite significant improvements, especially through the adoption of deep learning approaches to fingerprint Presentation Attack Detectors (PADs), current research has said little about their effectiveness when embedded in fingerprint verification systems. We believe this gap is explained by the lack of instruments for investigating the problem, that is, for modelling the cause-effect relationships that arise when two systems (spoof detection and matching) with non-zero error rates are integrated. To fill this gap, this PhD thesis presents a novel performance simulation model based on the probabilistic relationships between the Receiver Operating Characteristics (ROCs) of the two systems when implemented sequentially, which is the most straightforward, flexible, and widespread integration approach. We carry out simulations on the ROCs of the PAD algorithms submitted to the LivDet 2017-2019 editions and on the NIST Bozorth3 and top-level VeriFinger 12.0 matchers. With the help of this simulator, the overall system performance can be predicted before actual implementation, thus simplifying the process of setting the best trade-off among error rates. In the second part of the thesis, we exploit this model to define a practical evaluation criterion for assessing whether operating points of the PAD exist that do not alter the expected performance of the verification system alone. Experimental simulations, coupled with the theoretical expectations, confirm that this trade-off allows a complete view of the potential of sequential embedding, which is worth extending to other integration approaches.
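
    The thesis's simulator is not reproduced here, but the flavour of a sequential PAD-then-matcher simulation, and of the operating-point criterion described in the second part, can be sketched with synthetic score distributions. All distributions, thresholds, and the tolerance below are hypothetical and unrelated to the LivDet submissions or the Bozorth3/VeriFinger matchers.

```python
# Hedged sketch of a sequential (PAD -> matcher) performance simulation on
# synthetic scores: sweep PAD operating points and keep those whose induced GAR
# loss stays within a hypothetical tolerance of the matcher alone.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical PAD scores (higher = more likely live) and matcher scores.
pad_live = rng.normal(0.75, 0.12, 50_000)      # bona fide presentations
pad_spoof = rng.normal(0.35, 0.12, 50_000)     # presentation attacks
match_genuine = rng.normal(0.80, 0.10, 50_000)
match_impostor = rng.normal(0.25, 0.10, 50_000)
match_attack = rng.normal(0.65, 0.15, 50_000)  # spoofs that reach the matcher

match_thr = np.quantile(match_impostor, 0.999)  # zero-effort FMR ~ 0.1%
gar_alone = np.mean(match_genuine >= match_thr)
tolerance = 0.01                                 # hypothetical acceptable GAR loss

for pad_thr in np.linspace(0.3, 0.7, 9):
    bpcer = np.mean(pad_live < pad_thr)          # bona fide rejected by the PAD
    apcer = np.mean(pad_spoof >= pad_thr)        # attacks accepted by the PAD
    gar_seq = (1 - bpcer) * gar_alone
    iapar_seq = apcer * np.mean(match_attack >= match_thr)
    ok = "keep" if gar_alone - gar_seq <= tolerance else "drop"
    print(f"PAD thr {pad_thr:.2f}: GAR {gar_seq:.4f}  IAPAR {iapar_seq:.4f}  ({ok})")
```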
