
    Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating

    In this work, we investigate the concept of biometric backdoors: a template poisoning attack on biometric systems that allows adversaries to stealthily and effortlessly impersonate users in the long term by exploiting the template update procedure. We show that such attacks can be carried out even by attackers with physical limitations (no digital access to the sensor) and zero knowledge of training data (they know neither decision boundaries nor the user template). Based on their own templates, adversaries craft several intermediate samples that incrementally bridge the distance between their own template and the legitimate user's. As these adversarial samples are added to the template, the attacker is eventually accepted alongside the legitimate user. To avoid detection, we design the attack to minimize the number of rejected samples. Our method copes with these weak attacker assumptions, and we evaluate its effectiveness on state-of-the-art face recognition pipelines based on deep neural networks. We find that in scenarios where the deep network is known, adversaries can successfully carry out the attack in over 70% of cases with fewer than ten injection attempts. Even in black-box scenarios, exploiting the transferability of adversarial samples from surrogate models can lead to successful attacks in around 15% of cases. Finally, we design a poisoning detection technique that leverages the consistent directionality of template updates in feature space to discriminate between legitimate and malicious updates. We evaluate this countermeasure against a set of intra-user variability factors that may present the same directionality characteristics, obtaining equal error rates for detection between 7% and 14% and detecting over 99% of attacks after only two sample injections. Comment: 12 pages
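
    Both the attack and the countermeasure operate in the matcher's feature space, which lends itself to a short illustration. The sketch below is a minimal toy model, not the paper's implementation: it assumes L2-normalized embedding vectors, a cosine-similarity matcher, a mean-based update rule, and a white-box view in which the stored template is known; `craft_poison_sequence` and `update_directionality` are hypothetical names.

    ```python
    import numpy as np

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def craft_poison_sequence(attacker, template, threshold=0.7, step=0.2):
        """Drift the stored template toward the attacker's embedding via
        intermediate probes, each staying inside the acceptance region."""
        samples = []
        while cos(attacker, template) < threshold:
            probe = template + step * (attacker - template)
            probe /= np.linalg.norm(probe)
            if cos(probe, template) < threshold:
                break                          # probe would be rejected; stop
            samples.append(probe)
            template = (template + probe) / 2  # toy mean-based update rule
            template /= np.linalg.norm(template)
        return samples, template

    def update_directionality(snapshots):
        """Detection idea from the abstract: poisoning drags the template in
        a consistent direction, so successive update vectors have unusually
        high cosine similarity (expects three or more template snapshots)."""
        deltas = np.diff(np.asarray(snapshots), axis=0)
        dirs = deltas / np.linalg.norm(deltas, axis=1, keepdims=True)
        return float(np.mean(np.sum(dirs[:-1] * dirs[1:], axis=1)))
    ```

    Under this toy update rule the attacker is accepted after a handful of injections, while a detector thresholding `update_directionality` flags the run because every update vector points the same way.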

    Frictionless Authentication Systems: Emerging Trends, Research Challenges and Opportunities

    Authentication and authorization are critical security layers that protect a wide range of online systems, services, and content. However, the increased prevalence of wearable and mobile devices, the expectation of a frictionless experience, and the diversity of user environments will challenge the way users are authenticated. Consumers demand secure and privacy-aware access from any device, whenever and wherever they are, without any obstacles. This paper reviews emerging trends and challenges in frictionless authentication systems and identifies opportunities for further research related to the enrollment of users, the usability of authentication schemes, and the security and privacy trade-offs of mobile and wearable continuous authentication systems. Comment: Published at the 11th International Conference on Emerging Security Information, Systems and Technologies (SECURWARE 2017)

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal, general framework for the empirical evaluation of the security of machine-learning systems. Second, within this framework, we demonstrate the feasibility of evasion, poisoning, and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, in a public repository. Comment: 47 pages, 9 figures; chapter accepted into the book 'Support Vector Machine Applications'
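
    As a concrete illustration of the evasion case, the sketch below trains an RBF-kernel SVM on synthetic two-dimensional data and moves a detected sample across the decision boundary by descending the gradient of the decision function. This is only a toy rendering of gradient-based evasion; the data, gamma, step size, and iteration budget are our assumptions, not the chapter's experimental setup.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)),   # "benign" cluster
                   rng.normal(+1.0, 0.5, (100, 2))])  # "malicious" cluster
    y = np.array([0] * 100 + [1] * 100)

    GAMMA = 0.5
    svc = SVC(kernel="rbf", gamma=GAMMA).fit(X, y)

    def decision_gradient(x, svc, gamma=GAMMA):
        """Gradient of f(x) = sum_i dual_i * exp(-gamma * ||x - sv_i||^2) + b."""
        diff = x - svc.support_vectors_                  # shape (n_sv, d)
        k = np.exp(-gamma * np.sum(diff ** 2, axis=1))   # RBF kernel values
        return (svc.dual_coef_[0] * k) @ (-2.0 * gamma * diff)

    x = X[150].copy()                        # start from a malicious point
    for _ in range(200):                     # descend f until it looks benign
        x -= 0.05 * decision_gradient(x, svc)
        if svc.decision_function([x])[0] < 0:
            break
    print("evaded:", svc.decision_function([x])[0] < 0)
    ```

    In practice, evasion attacks of this kind also constrain the size of the perturbation so that the evading sample remains a feasible input rather than an arbitrary point in feature space.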

    Poisoning Attacks on Learning-Based Keystroke Authentication and a Residue Feature Based Defense

    Behavioral biometrics, such as keystroke dynamics, are characterized by relatively large variation in the input samples compared to physiological biometrics such as fingerprints and iris patterns. Recent advances in machine learning have produced behavior-based pattern learning methods that absorb the effects of this variation by mapping variable behavior patterns to a unique identity with high accuracy. However, this has also exposed the learning systems to attacks that abuse the update mechanism: by injecting impostor samples, an adversary can deliberately drift the training data toward the impostor's patterns. Using the principles of adversarial drift, we develop a class of poisoning attacks, named frog-boiling attacks, in which the update samples are crafted with slow changes and random perturbations so that they bypass the classifier's detection. Taking the case of keystroke dynamics, which involves motoric and neurological learning, we demonstrate the success of our attack mechanism. We also present a detection mechanism for the frog-boiling attack that uses the correlation between successive training samples to detect spurious input patterns. To measure the effect of adversarial drift in the frog-boiling attack and the effectiveness of the proposed defense, we use traditional error rates such as FAR, FRR, and EER, as well as a metric based on shifts in the biometric menagerie.
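
    A minimal sketch of both sides of this idea follows, under simplifying assumptions that are ours rather than the paper's: keystroke timing features are fixed-length vectors, the update pool accepts any sample the matcher admits, and `frog_boiling_samples` and `residue_correlation` are hypothetical names. The correlation test is only one plausible reading of the residue-feature defense.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def frog_boiling_samples(victim_mean, impostor_mean, n_steps=50, noise=0.01):
        """Craft update samples that drift slowly from the victim's typing
        pattern toward the impostor's, masked with small random perturbations."""
        samples = []
        for t in range(1, n_steps + 1):
            alpha = t / n_steps                      # slow, monotone drift
            s = (1 - alpha) * victim_mean + alpha * impostor_mean
            samples.append(s + rng.normal(0.0, noise, s.shape))
        return np.array(samples)

    def residue_correlation(samples):
        """Defense idea from the abstract: under a frog-boiling attack,
        successive update samples are unusually correlated, whereas benign
        keystroke samples vary more independently."""
        corrs = [np.corrcoef(samples[i], samples[i + 1])[0, 1]
                 for i in range(len(samples) - 1)]
        return float(np.mean(corrs))
    ```

    Thresholding the mean successive-sample correlation would then flag the slowly drifting injections before they pull the template onto the impostor's pattern.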