
    Dynamic fingerprint statistics: Application in presentation attack detection

    Fingerprint recognition systems have demonstrated strong performance in many services such as forensics, border control, and mobile applications. Even though fingerprint systems have shown high accuracy and user acceptance, concerns remain about the possibility of a fingerprint pattern being stolen and presented to the system by an impostor. In this paper, we propose a dynamic presentation attack detection mechanism that seeks to mitigate presentation attacks. The adopted mechanism extracts the variation of global fingerprint features in a video acquisition scenario and uses it to distinguish bona fide from attack presentations. For that purpose, a dynamic dataset was collected from 11 independent subjects, 6 fingerprints per user, using thermal and optical sensors, for a total of 792 bona fide presentations and 2772 attack presentations. The final PAD subsystem is evaluated following the relevant ISO/IEC standard metrics (APCER and BPCER). With SVM classification and 3-fold cross-validation, the obtained error rates at 5% APCER are 18.1% BPCER for the thermal subset and 19.5% BPCER for the optical subset. This work was supported by the European Union's Horizon 2020 Research and Innovation Programme under Grant 675087 (AMBER).
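
    As a back-of-the-envelope illustration of the reported operating point, the Python sketch below shows how a BPCER value can be read off at a fixed 5% APCER once per-presentation detection scores are available. The score convention (higher means more bona fide-like) and the synthetic score distributions are assumptions made for the example only; they are not taken from the paper or its dataset.

    import numpy as np

    def bpcer_at_apcer(bonafide_scores, attack_scores, apcer_target=0.05):
        # Pick the decision threshold so that at most `apcer_target` of the
        # attack presentations are (wrongly) accepted, then report the
        # fraction of bona fide presentations rejected at that threshold.
        attack_scores = np.sort(attack_scores)
        idx = int(np.ceil((1.0 - apcer_target) * len(attack_scores))) - 1
        threshold = attack_scores[idx]
        bpcer = float(np.mean(bonafide_scores <= threshold))
        return threshold, bpcer

    # Toy usage with synthetic scores mirroring the dataset sizes (792 / 2772).
    rng = np.random.default_rng(0)
    bona = rng.normal(1.0, 0.5, 792)
    attack = rng.normal(-1.0, 0.5, 2772)
    thr, bpcer = bpcer_at_apcer(bona, attack)
    print(f"threshold={thr:.3f}  BPCER at 5% APCER: {bpcer:.2%}")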

    Efficient software attack to multimodal biometric systems and its application to face and iris fusion

    This is the author's version of a work that was accepted for publication in Pattern Recognition Letters. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Pattern Recognition Letters 36 (2014), DOI: 10.1016/j.patrec.2013.04.029. In certain applications based on multimodal interaction it may be crucial to determine not only what the user is doing (commands), but who is doing it, in order to prevent fraudulent use of the system. Biometric technology, and in particular multimodal biometric systems, represents a highly efficient automatic recognition solution for this type of application. Although multimodal biometric systems have traditionally been regarded as more secure than unimodal systems, their vulnerability to spoofing attacks has recently been shown. New fusion techniques have been proposed and their performance thoroughly analysed in an attempt to increase the robustness of multimodal systems to these spoofing attacks. However, the vulnerabilities of multimodal approaches to software-based attacks remain unexplored. In this work we present the first software attack against multimodal biometric systems. Its performance is tested against a multimodal system based on face and iris, showing the vulnerability of the system to this new type of threat. Score quantization is then studied as a possible countermeasure, managing to cancel the effects of the proposed attack methodology under certain scenarios. This work has been partially supported by projects Contexts (S2009/TIC-1485) from CAM, Bio-Challenge (TEC2009-11186) and Bio-Shield (TEC2012-34881) from Spanish MINECO, TABULA RASA (FP7-ICT-257289) and BEAT (FP7-SEC-284989) from EU, and Cátedra UAM-Telefónica.
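
    The score-quantization countermeasure mentioned at the end of the abstract can be sketched in a few lines: a hill-climbing style software attack needs to observe small score increases after each tweak of the injected template, and rounding the released score to a coarse grid hides those variations. The step size and the [0, 1] score range below are illustrative assumptions, not values from the paper.

    import numpy as np

    def quantize_score(score, step=0.05):
        # Release only a coarsely quantized matching score so that small
        # input perturbations produce no observable change in the output.
        return float(np.clip(np.round(score / step) * step, 0.0, 1.0))

    # Two nearby raw scores become indistinguishable to the attacker.
    print(quantize_score(0.612), quantize_score(0.618))  # both -> 0.6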

    Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

    Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms, up to more recent work aimed at understanding the security properties of deep learning algorithms in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms. Comment: Accepted for publication in Pattern Recognition, 201
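
    To make the notion of a test-time "wild pattern" concrete, here is a minimal gradient-sign style evasion example against a linear classifier. The linear model, hinge-style gradient, and epsilon value are assumptions chosen for brevity; the survey itself covers far more general threat models and attack algorithms.

    import numpy as np

    def evade_linear(x, y, w, eps=0.4):
        # Gradient-sign evasion for a linear score f(x) = w.x with label
        # y in {-1, +1}: move x by eps (L-infinity bound) in the direction
        # that decreases y * f(x), i.e. increases the classifier's loss.
        grad = -y * w
        return x + eps * np.sign(grad)

    # Toy example: a correctly classified point is pushed across the boundary.
    w = np.array([1.0, -1.0])
    x, y = np.array([0.2, -0.1]), 1
    x_adv = evade_linear(x, y, w)
    print(np.dot(w, x), np.dot(w, x_adv))  # score flips from +0.3 to -0.5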

    Biometric presentation attack detection: beyond the visible spectrum

    The increased need for unattended authentication in multiple scenarios has motivated a wide deployment of biometric systems in the last few years. This has in turn led to the disclosure of security concerns specifically related to biometric systems. Among them, presentation attacks (PAs, i.e., attempts to log into the system with a fake biometric characteristic or presentation attack instrument) pose a severe threat to the security of the system: any person could eventually fabricate or order a gummy finger or face mask to impersonate someone else. In this context, we present a novel fingerprint presentation attack detection (PAD) scheme based on (i) a new capture device able to acquire images within the short wave infrared (SWIR) spectrum, and (ii) an in-depth analysis of several state-of-the-art techniques based on both handcrafted and deep learning features. The approach is evaluated on a database comprising over 4700 samples, stemming from 562 different subjects and 35 different presentation attack instrument (PAI) species. The results show the soundness of the proposed approach with a detection equal error rate (D-EER) as low as 1.35%, even in a realistic scenario where five different PAI species are considered only for testing purposes (i.e., unknown attacks).
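
    The detection equal error rate (D-EER) used to summarize the results can be estimated from per-sample PAD scores as sketched below. The convention that higher scores indicate bona fide presentations, and the synthetic scores in the usage lines, are assumptions for illustration; they are not drawn from the paper's database.

    import numpy as np

    def detection_eer(bonafide_scores, attack_scores):
        # Sweep candidate thresholds and return the error rate at the point
        # where APCER (attacks accepted) and BPCER (bona fide rejected) are
        # closest to equal.
        thresholds = np.unique(np.concatenate([bonafide_scores, attack_scores]))
        best_gap, eer = np.inf, None
        for t in thresholds:
            apcer = np.mean(attack_scores > t)
            bpcer = np.mean(bonafide_scores <= t)
            if abs(apcer - bpcer) < best_gap:
                best_gap, eer = abs(apcer - bpcer), (apcer + bpcer) / 2.0
        return float(eer)

    # Toy usage with well-separated synthetic scores (illustration only).
    rng = np.random.default_rng(1)
    print(detection_eer(rng.normal(2.0, 1.0, 500), rng.normal(-2.0, 1.0, 500)))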