
    Fingerprint Adversarial Presentation Attack in the Physical Domain

    With the advent of the deep learning era, Fingerprint-based Authentication Systems (FAS) equipped with Fingerprint Presentation Attack Detection (FPAD) modules managed to repel attacks carried out on the sensor through artificial replicas of fingerprints. Previous works highlighted the vulnerability of FPADs to digital adversarial attacks. However, in a realistic scenario, the attackers may not have the possibility to directly feed a digitally perturbed image to the deep learning based FPAD, since the channel between the sensor and the FPAD is usually protected. In this paper, we thus investigate the threat level associated with adversarial attacks against FPADs in the physical domain. By materially realising fakes from the adversarial images, we were able to insert them into the system directly from the “exposed” part, the sensor. To the best of our knowledge, this represents the first proof-of-concept of a fingerprint adversarial presentation attack. We evaluated how much the liveness score changed by feeding the system with attacks based on digital and printed adversarial images. To measure what portion of this increase is due to the printing itself, we also re-printed the original spoof images without injecting any perturbation. Experiments conducted on the LivDet 2015 dataset demonstrate that the printed adversarial images achieve an attack success rate of ∼100% against an FPAD if the attacker is able to make multiple attempts on the sensor (ten) and a fairly good success rate (∼28%) in a one-shot scenario. Although this work must be considered a proof-of-concept, it constitutes a promising pioneering attempt confirming that an adversarial presentation attack is feasible and dangerous.
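
    The digital step of such an attack can be pictured with a short gradient-based sketch: a spoof fingerprint image is iteratively perturbed so that a CNN-based FPAD assigns it a higher liveness score, before being printed and presented to the sensor. The model handle `fpad_model`, the perturbation budget, and the step sizes below are illustrative assumptions, not the paper's exact setup, and the printing step is not modelled.

```python
# Hedged sketch: raise the liveness score a CNN-based FPAD assigns to a spoof
# image via iterative gradient ascent, keeping the perturbation small so the
# image remains printable. `fpad_model` is a placeholder assumption.
import torch

def perturb_spoof(fpad_model, spoof_img, eps=8 / 255, alpha=1 / 255, steps=50):
    """Nudge a spoof image toward a higher liveness score, within an L-inf ball."""
    adv = spoof_img.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        liveness = fpad_model(adv)  # assumed to return P(live) for the image
        grad = torch.autograd.grad(liveness.sum(), adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                       # ascend the liveness score
            adv = spoof_img + (adv - spoof_img).clamp(-eps, eps)  # bounded perturbation
            adv = adv.clamp(0.0, 1.0)                             # stay a valid image
    return adv.detach()
```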

    White-Box Adversarial Attacks on Deep Learning-Based Radio Frequency Fingerprint Identification

    Radio frequency fingerprint identification (RFFI) is an emerging technique for the lightweight authentication of wireless Internet of Things (IoT) devices. RFFI exploits unique hardware impairments as device identifiers, and deep learning is widely deployed as the feature extractor and classifier for RFFI. However, deep learning is vulnerable to adversarial attacks, where adversarial examples are generated by adding perturbations to clean data to cause the classifier to make wrong predictions. Deep learning-based RFFI has been shown to be vulnerable to such attacks; however, there has so far been no exploration of effective adversarial attacks against a diversity of RFFI classifiers. In this paper, we report on investigations into white-box attacks (non-targeted and targeted) using two approaches, namely the fast gradient sign method (FGSM) and projected gradient descent (PGD). A LoRa testbed was built and real datasets were collected. These adversarial examples have been experimentally demonstrated to be effective against convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and gated recurrent units (GRUs). Comment: 6 pages, 9 figures, accepted by the International Conference on Communications 202
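
    The two white-box attacks named above are standard and can be sketched compactly. The classifier, tensor shapes, and epsilon values below are placeholders rather than the paper's LoRa testbed configuration.

```python
# Non-targeted FGSM and PGD against a generic RFFI classifier on I/Q samples.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.05):
    """Single-step FGSM: one signed-gradient step that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def pgd(model, x, y, eps=0.05, alpha=0.01, steps=20):
    """PGD: repeated FGSM-style steps, projected back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # L-inf projection
    return x_adv.detach()
```

    For the targeted variant, y is replaced by the chosen target class and the gradient step is taken in the descending direction, pushing the classifier toward that specific label.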

    Adversarial Learning of Mappings Onto Regularized Spaces for Biometric Authentication

    We present AuthNet: a novel framework for generic biometric authentication which, by learning a regularized mapping instead of a classification boundary, leads to higher performance and improved robustness. The biometric traits are mapped onto a latent space in which authorized and unauthorized users follow simple and well-behaved distributions. In turn, this enables simple and tunable decision boundaries to be employed in order to make a decision. We show that, in contrast to deep learning and traditional template-based authentication systems, regularizing the latent space toward simple target distributions leads to improved performance as measured in terms of Equal Error Rate (EER), accuracy, False Acceptance Rate (FAR), and Genuine Acceptance Rate (GAR). Extensive experiments on publicly available datasets of faces and fingerprints confirm the superiority of AuthNet over existing methods.
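
    One way to picture the idea is a small encoder trained so that authorized and unauthorized samples concentrate around two simple target distributions, after which authentication reduces to a distance threshold. The two-centre target, network sizes, and acceptance radius below are illustrative assumptions, not AuthNet's actual architecture or loss.

```python
# Toy sketch of a regularized latent mapping with a tunable decision boundary.
import torch
import torch.nn as nn

class LatentMapper(nn.Module):
    def __init__(self, in_dim, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

def regularization_loss(z, is_authorized, margin=4.0):
    """Pull authorized embeddings toward +mu and unauthorized ones toward -mu."""
    mu = torch.full_like(z, margin / 2)
    target = torch.where(is_authorized.unsqueeze(1), mu, -mu)
    return ((z - target) ** 2).mean()

def authenticate(mapper, x, margin=4.0, radius=2.0):
    """Accept if the embedding falls inside a ball around the authorized centre."""
    z = mapper(x)
    mu = torch.full_like(z, margin / 2)
    return (z - mu).norm(dim=1) < radius
```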

    Fingerprint Presentation Attacks: Tackling the Ongoing Arms Race in Biometric Authentication

    The widespread use of Automated Fingerprint Identification Systems (AFIS) in consumer electronics opens the door to the development of advanced presentation attacks, i.e. procedures designed to bypass an AFIS using a forged fingerprint. As a consequence, AFIS are often equipped with a fingerprint presentation attack detection (FPAD) module to distinguish live fingerprints from fake replicas, in order to both minimize the risk of unauthorized access and avoid pointless computations. The ongoing arms race between attackers and detector designers demands a comprehensive understanding of both the defender’s and the attacker’s perspectives to develop robust and efficient FPAD systems. This paper proposes a dual-perspective approach to FPAD, which encompasses both a new technique for carrying out presentation attacks, starting from samples perturbed with adversarial techniques, and a new detection technique based on an adversarial data augmentation strategy. In this case, attack and defence are based on the same assumptions, demonstrating that this dual research approach can be exploited to enhance the overall security of fingerprint recognition systems against spoofing attacks.
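
    The defensive half of the approach, adversarial data augmentation, can be sketched as follows: during FPAD training, spoof samples are additionally perturbed with a gradient-sign step that makes them look more "live" and are fed back with the fake label. The detector interface, label encoding, and epsilon are assumptions; the paper's exact augmentation strategy may differ.

```python
# Sketch of adversarial data augmentation for an FPAD detector (0 = fake, 1 = live).
import torch
import torch.nn.functional as F

def adversarial_augment(detector, x_fake, eps=4 / 255):
    """One gradient-sign step that pushes fake samples toward the live class."""
    x = x_fake.clone().detach().requires_grad_(True)
    live_logit = detector(x)[:, 1]  # assumes column 1 is the live logit
    live_logit.sum().backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def train_step(detector, optimizer, x_live, x_fake):
    """Train on live, fake, and adversarially perturbed fake samples together."""
    x_fake_adv = adversarial_augment(detector, x_fake)
    x = torch.cat([x_live, x_fake, x_fake_adv])
    y = torch.cat([
        torch.ones(len(x_live), dtype=torch.long),
        torch.zeros(len(x_fake) + len(x_fake_adv), dtype=torch.long),
    ])
    optimizer.zero_grad()
    loss = F.cross_entropy(detector(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```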

    Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating

    In this work, we investigate the concept of biometric backdoors: a template poisoning attack on biometric systems that allows adversaries to stealthily and effortlessly impersonate users in the long term by exploiting the template update procedure. We show that such attacks can be carried out even by attackers with physical limitations (no digital access to the sensor) and zero knowledge of training data (they know neither the decision boundaries nor the user's template). Based on the adversaries' own templates, they craft several intermediate samples that incrementally bridge the distance between their own template and the legitimate user's. As these adversarial samples are added to the template, the attacker is eventually accepted alongside the legitimate user. To avoid detection, we design the attack to minimize the number of rejected samples. We design our method to cope with weak assumptions about the attacker, and we evaluate the effectiveness of this approach on state-of-the-art face recognition pipelines based on deep neural networks. We find that in scenarios where the deep network is known, adversaries can successfully carry out the attack in over 70% of cases with fewer than ten injection attempts. Even in black-box scenarios, we find that exploiting the transferability of adversarial samples from surrogate models can lead to successful attacks in around 15% of cases. Finally, we design a poisoning detection technique that leverages the consistent directionality of template updates in feature space to discriminate between legitimate and malicious updates. We evaluate this countermeasure against a set of intra-user variability factors which may present the same directionality characteristics, obtaining detection equal error rates between 7% and 14% and detecting over 99% of attacks after only two sample injections. Comment: 12 pages
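
    The detection countermeasure lends itself to a compact sketch: poisoning drags the template consistently in one direction of feature space, whereas legitimate intra-user variation does not, so an update history whose successive update vectors are too well aligned gets flagged. The alignment threshold below is an illustrative assumption, not the paper's calibrated operating point.

```python
# Flag template-update histories whose successive updates point the same way.
import numpy as np

def update_directions(template_history):
    """Unit vectors of successive template updates (sequence of embeddings)."""
    t = np.asarray(template_history, dtype=float)
    deltas = np.diff(t, axis=0)
    norms = np.linalg.norm(deltas, axis=1, keepdims=True)
    return deltas / np.clip(norms, 1e-12, None)

def looks_poisoned(template_history, cos_threshold=0.8):
    """Return True if consecutive update directions are suspiciously aligned."""
    d = update_directions(template_history)
    if len(d) < 2:
        return False
    cos = np.sum(d[1:] * d[:-1], axis=1)  # cosine between consecutive unit updates
    return bool(np.mean(cos) > cos_threshold)
```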

    Defending against attacks on biometrics-based authentication

    Many devices include biometrics-based user authentication in addition to secret-based authentication. While secret-based authentication involves a precise match with the known secret, biometrics-based authentication involves fuzzy matching that verifies that the input is similar to the known biometrics within an acceptable threshold level of difference. As a result, biometrics-based authentication techniques are susceptible to attacks in which a malicious actor attempts to authenticate as the user via biometric data that is carefully crafted to be similar to the stored biometrics within the threshold. The techniques of this disclosure guard against such attacks by using a generative adversarial network (GAN): random perturbation is added to the received biometric input for a dynamically determined number of test iterations. The matching threshold value and the number of test iterations can be determined dynamically. If the perturbed biometric input passes the authentication test in each of the iterations, the user providing the input is authenticated. Otherwise, the device falls back to secret-based authentication.
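
    The decision procedure reduces to a small loop: perturb the received input several times, re-run the match each time, and accept only if every perturbed variant still matches; otherwise fall back to the secret. In the sketch below, Gaussian noise stands in for the disclosure's GAN-generated perturbations, and the matcher, threshold, and iteration count are illustrative assumptions.

```python
# Simplified perturb-and-rematch authentication test.
import numpy as np

def robust_authenticate(matcher, biometric_input, stored_template,
                        threshold=0.85, iterations=5, noise_scale=0.02, rng=None):
    """Accept only if the input keeps matching under repeated random perturbation."""
    rng = rng or np.random.default_rng()
    for _ in range(iterations):
        perturbed = biometric_input + rng.normal(0.0, noise_scale, biometric_input.shape)
        if matcher(perturbed, stored_template) < threshold:
            return False  # failed a perturbed test: fall back to secret-based auth
    return True
```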

    Securing CNN Model and Biometric Template using Blockchain

    Blockchain has emerged as a leading technology that ensures security in a distributed framework. Recently, it has been shown that blockchain can be used to convert the traditional blocks of any deep learning model into secure systems. In this research, we model a trained biometric recognition system in an architecture which leverages blockchain technology to provide fault-tolerant access in a distributed environment. The advantage of the proposed approach is that tampering in one particular component alerts the whole system and helps in the easy identification of 'any' possible alteration. Experimentally, with different biometric modalities, we have shown that the proposed approach provides security to both the deep learning model and the biometric template. Comment: Published in IEEE BTAS 201
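
    The tamper-evidence principle can be illustrated with a toy hash chain over the model's components and the template: altering any one block changes every subsequent hash, so the modified component is easy to localize. This is only an illustration of the idea, not the paper's distributed blockchain architecture.

```python
# Toy hash chain over serialized model blocks and a biometric template.
import hashlib

def block_hash(prev_hash: str, payload: bytes) -> str:
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def build_chain(components):
    """components: list of (name, payload_bytes), e.g. layers plus the template."""
    chain, prev = [], "genesis"
    for name, payload in components:
        prev = block_hash(prev, payload)
        chain.append((name, prev))
    return chain

def verify_chain(components, chain):
    """Return the name of the first tampered component, or None if intact."""
    prev = "genesis"
    for (name, payload), (_, stored) in zip(components, chain):
        prev = block_hash(prev, payload)
        if prev != stored:
            return name
    return None
```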