
    Homomorphic Encryption for Speaker Recognition: Protection of Biometric Templates and Vendor Model Parameters

    Data privacy is crucial when dealing with biometric data. In light of the latest European data privacy regulation and payment service directive, biometric template protection is essential for any commercial application. By ensuring unlinkability across biometric service operators, irreversibility of leaked encrypted templates, and renewability of, e.g., voice models following the i-vector paradigm, voice-based biometric systems can be prepared for the latest EU data privacy legislation. Employing Paillier cryptosystems, Euclidean and cosine comparators are known to meet these data privacy demands without loss of discrimination or calibration performance. Bridging the gap from template protection to speaker recognition, two architectures are proposed for the two-covariance comparator, which serves as a generative model in this study. The first architecture preserves the privacy of biometric data capture subjects. In the second architecture, the model parameters of the comparator are encrypted as well, so that biometric service providers can supply the same comparison modules, employing different key pairs, to multiple biometric service operators. An experimental proof-of-concept and complexity analysis is carried out on data from the 2013-2014 NIST i-vector machine learning challenge.
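    A minimal sketch of the general idea of Paillier-based privacy-preserving cosine scoring, assuming the python-paillier ("phe") library; the vector dimensions, function names, and workflow split below are illustrative assumptions, not the paper's actual architectures.

```python
# Illustrative sketch: Paillier-encrypted cosine comparison of i-vectors.
import numpy as np
from phe import paillier

# Key pair held by the biometric data capture subject (or a trusted party).
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def enroll(ivector):
    """Client side: length-normalise and encrypt the reference i-vector."""
    v = ivector / np.linalg.norm(ivector)
    return [public_key.encrypt(float(x)) for x in v]

def encrypted_cosine(enc_reference, probe_ivector):
    """Server side: homomorphic inner product between the encrypted reference
    and a plaintext, length-normalised probe. Paillier supports addition of
    ciphertexts and multiplication by plaintext scalars, which is all a dot
    product needs; the server never sees the reference in the clear."""
    p = probe_ivector / np.linalg.norm(probe_ivector)
    score = enc_reference[0] * float(p[0])
    for c, x in zip(enc_reference[1:], p[1:]):
        score += c * float(x)
    return score  # still encrypted

# Toy 8-dimensional i-vectors (real systems use a few hundred dimensions).
ref, probe = np.random.randn(8), np.random.randn(8)
enc_ref = enroll(ref)
enc_score = encrypted_cosine(enc_ref, probe)
print("decrypted cosine score:", private_key.decrypt(enc_score))
```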

    Secure Speech Biometric Templates


    Authentication under Constraints

    Authentication has become a critical step in gaining access to services such as on-line banking, e-commerce, transport systems, and cars (contact-less keys). In several cases, however, the authentication process has to be performed under challenging conditions. This thesis is essentially a compendium of five papers resulting from a two-year study of authentication in constrained settings. The two major constraints considered in this work are (1) noise and (2) computational power. Concerning authentication under noisy conditions, Paper A and Paper B address the case in which the noise is in the authentication credentials. More precisely, these papers present attacks against biometric authentication systems that exploit the inherently variable nature of biometric traits to gain information that should not be leaked by the system. Paper C and Paper D study proximity-based authentication, i.e., distance-bounding protocols. In this case, both constraints are present: the possible presence of noise in the channel (which affects communication and thus the authentication process), as well as resource constraints on the computational power and storage space of the authenticating party (called the prover, e.g., an RFID tag). Finally, Paper E investigates how to achieve reliable verification of the authenticity of a digital signature when the verifying party has limited computational power and thus offloads part of the computation to an untrusted server. Throughout the presented research, special emphasis is given to privacy concerns raised by the constrained conditions.
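    A minimal, illustrative sketch of the core idea behind distance-bounding protocols (a rapid bit exchange with a round-trip-time check); the shared-key derivation, timing model, and thresholds below are assumptions and are not drawn from Papers C or D.

```python
# Illustrative sketch: one rapid-bit-exchange phase of a distance-bounding check.
import hmac, hashlib, os, secrets

SPEED_OF_LIGHT = 3e8          # metres per second
MAX_DISTANCE   = 10.0         # accept provers within 10 m
TIME_BOUND     = 2 * MAX_DISTANCE / SPEED_OF_LIGHT   # max allowed round trip

shared_key = os.urandom(16)
nonce = os.urandom(8)

# Both parties derive two response registers from the shared key and nonce.
digest = hmac.new(shared_key, nonce, hashlib.sha256).digest()
r0 = [b & 1 for b in digest[:16]]          # responses if challenge bit is 0
r1 = [(b >> 1) & 1 for b in digest[:16]]   # responses if challenge bit is 1

def prover_respond(i, challenge_bit):
    # A resource-constrained prover (e.g. an RFID tag) only does a table lookup.
    return r1[i] if challenge_bit else r0[i]

def verifier_round(i, measured_rtt):
    c = secrets.randbits(1)
    resp = prover_respond(i, c)
    expected = r1[i] if c else r0[i]
    # Accept the round only if the reply is correct AND arrived fast enough
    # for the prover to be physically within the distance bound.
    return resp == expected and measured_rtt <= TIME_BOUND

rounds_ok = all(verifier_round(i, measured_rtt=5e-8) for i in range(16))
print("prover accepted:", rounds_ok)
```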

    Dictionary Attacks on Speaker Verification

    In this paper, we propose dictionary attacks against speaker verification - a novel attack vector that aims to match a large fraction of the speaker population by chance. We introduce a generic formulation of the attack that can be used with various speech representations and threat models. The attacker uses adversarial optimization to maximize the raw similarity of speaker embeddings between a seed speech sample and a proxy population. The resulting master voice successfully matches a non-trivial fraction of people in an unknown population. Adversarial waveforms obtained with our approach can match on average 69% of females and 38% of males enrolled in the target system at a strict decision threshold calibrated to yield a false alarm rate of 1%. By using the attack with a black-box voice cloning system, we obtain master voices that are effective in the most challenging conditions and transferable between speaker encoders. We also show that, combined with multiple attempts, this attack raises even more serious concerns about the security of these systems.
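    A minimal sketch of the adversarial-optimization step described above: gradient ascent on a waveform perturbation that maximizes the average embedding similarity to a proxy population. The encoder is a stand-in for any PyTorch speaker encoder; the optimizer, step counts, and perturbation bound are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch: crafting a "master voice" by adversarial optimization.
import torch
import torch.nn.functional as F

def craft_master_voice(encoder, seed_wave, proxy_embeddings,
                       steps=500, lr=1e-3, eps=0.01):
    """Gradient ascent on an additive perturbation of the seed waveform,
    maximizing mean cosine similarity to the proxy population embeddings."""
    delta = torch.zeros_like(seed_wave, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder((seed_wave + delta).unsqueeze(0))            # (1, D)
        sims = F.cosine_similarity(emb, proxy_embeddings, dim=1)   # (N,)
        loss = -sims.mean()                 # maximize mean similarity
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():               # keep the perturbation small
            delta.clamp_(-eps, eps)
    return (seed_wave + delta).detach()
```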

    Poisoning Attacks on Learning-Based Keystroke Authentication and a Residue Feature Based Defense

    Behavioral biometrics, such as keystroke dynamics, are characterized by relatively large variation in the input samples compared to physiological biometrics such as fingerprints and iris. Recent advances in machine learning have resulted in behavior-based pattern learning methods that mitigate the effects of this variation by mapping the variable behavior patterns to a unique identity with high accuracy. However, this has also exposed the learning systems to attacks that abuse the updating mechanisms in learning by injecting impostor samples to deliberately drift the data toward the impostor's patterns. Using the principles of adversarial drift, we develop a class of poisoning attacks, named Frog-Boiling attacks. The update samples are crafted with slow changes and random perturbations so that they can bypass the classifier's detection. Taking the case of keystroke dynamics, which involves motoric and neurological learning, we demonstrate the success of our attack mechanism. We also present a detection mechanism for the frog-boiling attack that uses the correlation between successive training samples to detect spurious input patterns. To measure the effect of adversarial drift in the frog-boiling attack and the effectiveness of the proposed defense mechanism, we use traditional error rates such as FAR, FRR, and EER, as well as shifts in the biometric menagerie.
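    A minimal numpy sketch of the two ideas above: poisoning samples that drift slowly from the victim's keystroke template toward the impostor's with small random perturbations, and a defense that checks the correlation between successive training samples. Drift rates, noise levels, and the decision threshold are illustrative assumptions, not the paper's parameters.

```python
# Illustrative sketch: frog-boiling poisoning stream and a correlation check.
import numpy as np

def frog_boiling_samples(victim_template, impostor_template,
                         n_updates=50, noise_std=0.02, seed=0):
    """Craft update samples that drift slowly toward the impostor's pattern,
    each slightly perturbed so individual updates look like normal variation."""
    rng = np.random.default_rng(seed)
    samples = []
    for k in range(1, n_updates + 1):
        alpha = k / n_updates                     # slow drift from 0 to 1
        drift = (1 - alpha) * victim_template + alpha * impostor_template
        samples.append(drift + rng.normal(0, noise_std, drift.shape))
    return np.array(samples)

def correlation_defense(samples, threshold=0.98):
    """Flag an update stream whose successive samples are suspiciously highly
    correlated, a signature of crafted, slowly drifting input patterns."""
    corrs = [np.corrcoef(samples[i], samples[i + 1])[0, 1]
             for i in range(len(samples) - 1)]
    mean_corr = float(np.mean(corrs))
    return mean_corr > threshold, mean_corr
```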

    A hybrid biometric template protection algorithm in fingerprint biometric system

    Biometric recognition has achieved considerable popularity in recent years due to its various properties and its widespread application across sectors, including high-priority sectors such as border security, the military, space missions, and banking. For these reasons, the theft of biometric information is a critical issue. Protecting the user's biometric template therefore requires an efficient template transformation technique that preserves the user's privacy. A non-invertible transformation can keep the transformed template information secure against regeneration of the original, but the recognition performance of a non-invertible template protection mechanism decreases as its security increases. This limitation of non-invertible biometric transformation needs to be addressed. This research aims to develop a hybrid biometric template protection algorithm that maintains a balance between security and performance in a fingerprint biometric system. The hybrid algorithm combines non-invertible biometric transformation with biometric key generation techniques. To meet the research objective, the proposed framework is composed of three phases: the first phase focuses on the extraction of fingerprint minutiae and the formation of a vector table, the second phase focuses on developing the hybrid biometric template protection algorithm, and the third phase focuses on evaluating the performance of the proposed algorithm.
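    A minimal, generic sketch of combining a non-invertible (cancelable) transform with biometric key generation, in the spirit of the hybrid scheme described above. The random-projection transform, sign quantisation, and hash-based key derivation below are illustrative assumptions, not the thesis's actual algorithm.

```python
# Illustrative sketch: cancelable transform plus key generation from a template.
import hashlib
import numpy as np

def cancelable_transform(minutiae_vector, user_seed, out_dim=128):
    """Many-to-one random projection keyed by a user/application seed: the seed
    can be changed to renew the template (renewability), and the dimensionality
    reduction makes inverting the transform ill-posed (non-invertibility)."""
    rng = np.random.default_rng(user_seed)
    projection = rng.standard_normal((out_dim, minutiae_vector.size))
    return projection @ minutiae_vector

def derive_biometric_key(transformed, n_bits=128):
    """Quantise the transformed template to a stable bit string and hash it,
    so only the derived key, not the raw template, needs to be stored."""
    bits = (transformed[:n_bits] > 0).astype(np.uint8)
    return hashlib.sha256(bits.tobytes()).hexdigest()

minutiae = np.random.randn(256)        # stand-in for the minutiae vector table
protected = cancelable_transform(minutiae, user_seed=42)
print("derived key:", derive_biometric_key(protected))
```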

    Vulnerabilities and attack protection in security systems based on biometric recognition

    Unpublished doctoral thesis. Universidad Autónoma de Madrid, Escuela Politécnica Superior, November 200

    Face Liveness Detection under Processed Image Attacks

    Face recognition is a mature and reliable technology for identifying people. Due to high-definition cameras and supporting devices, it is considered the fastest and least intrusive biometric recognition modality. Nevertheless, effective spoofing attempts on face recognition systems have been shown to be possible. As a result, various anti-spoofing algorithms have been developed to counteract these attacks; they are commonly referred to in the literature as liveness detection tests. In this research we highlight the effectiveness of some simple, direct spoofing attacks, and test one of the currently robust liveness detection algorithms, namely the logistic-regression-based face liveness detection from a single image proposed by Tan et al. in 2010, against malicious attacks using processed imposter images. In particular, we study experimentally the effect of common image processing operations, such as sharpening and smoothing, as well as corruption with salt-and-pepper noise, on the face liveness detection algorithm, and we find that it is especially vulnerable to spoofing attempts using processed imposter images. We design and present a new facial database, the Durham Face Database, which is the first, to the best of our knowledge, to contain client, imposter, and processed imposter images. Finally, we evaluate our claim about the effectiveness of the proposed imposter image attacks using transfer learning on Convolutional Neural Networks. We verify that such attacks are more difficult to detect even when using high-end, expensive machine learning techniques.
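    A minimal sketch of the image processing operations studied above, applied to an imposter photograph to produce processed imposter images. OpenCV is used here as an assumption; the input file path, kernel sizes, and noise level are illustrative.

```python
# Illustrative sketch: generating processed imposter images (sharpen, smooth,
# salt-and-pepper noise) for probing a face liveness detector.
import numpy as np
import cv2

def sharpen(img):
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(img, -1, kernel)

def smooth(img, ksize=5):
    return cv2.GaussianBlur(img, (ksize, ksize), 0)

def salt_and_pepper(img, amount=0.02, seed=0):
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape[:2])
    noisy[mask < amount / 2] = 0           # pepper pixels
    noisy[mask > 1 - amount / 2] = 255     # salt pixels
    return noisy

imposter = cv2.imread("imposter_face.png")   # hypothetical input image
attacks = {"sharpened": sharpen(imposter),
           "smoothed": smooth(imposter),
           "salt_pepper": salt_and_pepper(imposter)}
for name, img in attacks.items():
    cv2.imwrite(f"imposter_{name}.png", img)
```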