145 research outputs found

    Security of multimodal biometric systems against spoof attacks

    Get PDF
    A biometric system is essentially a pattern recognition system used in an adversarial environment. Like any conventional security system, a biometric system is exposed to malicious adversaries who can manipulate data to make the system ineffective by compromising its integrity. Current theory and design methods of biometric systems do not take into account the vulnerability to such adversary attacks. Evaluating whether classical design methods lead to secure systems is therefore an open problem. To make biometric systems secure it is necessary to understand and evaluate the threats, and thus to develop effective countermeasures and robust system designs, both technical and procedural, where necessary. Accordingly, the theory and design methods of biometric systems must be extended to safeguard their security and reliability in adversarial environments. In this thesis, we provide some contributions in this direction. Among all the potential attacks discussed in the literature, spoof attacks are one of the main threats against the security of biometric systems for identity recognition. Multimodal biometric systems are commonly believed to be intrinsically more robust to spoof attacks than systems based on a single biometric trait, as they combine information coming from different biometric traits. However, recent works have questioned this belief and shown that multimodal systems can be misled by an attacker (impostor) even by spoofing only one of the biometric traits. We therefore first provide a detailed review of state-of-the-art work on multimodal biometric systems under spoof attacks. The scope of state-of-the-art results is very limited, since they were obtained under a very restrictive “worst-case” hypothesis, where the attacker is assumed to be able to fabricate a perfect replica of a biometric trait whose matching score distribution is identical to that of genuine traits. We therefore investigate the validity of the “worst-case” hypothesis using a large set of real spoof attacks and provide empirical evidence that the “worst-case” scenario cannot be representative of real spoof attacks: its suitability may depend on the specific biometric trait, the matching algorithm, and the techniques used to counterfeit the spoofed traits. We then propose a security evaluation methodology for biometric systems against spoof attacks that can be used in real applications, as it does not require fabricating fake biometric traits, it allows the designer to take into account the different possible qualities of fake traits used by different attackers, and it exploits only information on genuine and impostor samples, which is already collected for the training of a biometric system. Our methodology evaluates performance under a simulated spoof attack using a model of the fake score distribution that explicitly takes into account different degrees of quality of the fake biometric traits. In particular, we propose two models of the match score distribution of fake traits that account for all the factors which can affect it: the particular spoofed biometric, the sensor, the algorithm for matching score computation, the technique used to construct the fake biometrics, and the skills of the attacker. All these factors are summarized in a single parameter that we call “attack strength”.
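    To make the simulated-attack idea concrete, here is a minimal sketch of one plausible reading of such a model: fake match scores are drawn as a mixture of the impostor and genuine score samples, with the mixing weight standing in for the attack strength. The Gaussian score samples, the operating threshold, and the `simulate_fake_scores` helper are illustrative assumptions, not the thesis's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fake_scores(genuine, impostor, attack_strength, n=10_000):
    """Draw simulated fake match scores.

    With probability `attack_strength` a fake score is drawn from the
    genuine score sample, otherwise from the impostor sample, so that
    attack_strength = 0 reproduces a zero-effort impostor and
    attack_strength = 1 reproduces the "worst-case" perfect replica.
    """
    from_genuine = rng.random(n) < attack_strength
    return np.where(from_genuine,
                    rng.choice(genuine, size=n),
                    rng.choice(impostor, size=n))

# Hypothetical unimodal score samples (Gaussians chosen for illustration).
genuine = rng.normal(loc=2.0, scale=0.5, size=5_000)
impostor = rng.normal(loc=0.0, scale=0.5, size=5_000)

threshold = 1.0  # operating point set on genuine/impostor data
for alpha in (0.0, 0.3, 0.7, 1.0):
    fake = simulate_fake_scores(genuine, impostor, alpha)
    print(f"attack strength {alpha:.1f}: "
          f"FAR under attack = {np.mean(fake >= threshold):.3f}")
```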
Further, we propose an extension of our security evaluation method that ranks several biometric score fusion rules according to their relative robustness against spoof attacks, allowing the designer to choose the most robust rule according to the method's prediction. We then present an empirical analysis, using data sets of faces and fingerprints that include real spoofed traits, to show that our proposed models provide a good approximation of the fake traits' score distribution, and that our method thus provides an adequate estimate of the security of biometric systems against spoof attacks. We also use our method to show how to evaluate the security of different multimodal systems on publicly available benchmark data sets without spoof attacks. Our experimental results show that the robustness of multimodal biometric systems to spoof attacks strongly depends on the particular matching algorithm, the score fusion rule, and the attack strength of the fake traits. Finally, considering a multimodal system based on face and fingerprint biometrics, we present evidence that the proposed methodology is capable of providing a correct ranking of score fusion rules under spoof attacks.
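In the same hedged spirit, ranking fusion rules could look like the sketch below: fix an operating threshold on zero-effort impostors, simulate a spoof of one matcher at a given attack strength, and order the rules by the resulting false accept rate. The bimodal Gaussian scores and the sum/max rule pair are assumptions chosen for illustration, not the rules evaluated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bimodal score samples: columns are (face, fingerprint).
genuine = rng.normal([2.0, 1.8], 0.5, size=(5_000, 2))
impostor = rng.normal([0.0, 0.1], 0.5, size=(5_000, 2))

# Simulated spoof of the fingerprint matcher only: with probability
# `alpha` (the attack strength) its score is replaced by a genuine one.
alpha = 0.7
attack = impostor.copy()
swap = rng.random(len(attack)) < alpha
attack[swap, 1] = rng.choice(genuine[:, 1], size=swap.sum())

# Two illustrative fusion rules to rank (not the thesis's full set).
rules = {"sum": lambda s: s.sum(axis=1),
         "max": lambda s: s.max(axis=1)}

for name, rule in rules.items():
    # Threshold at FAR = 1% on zero-effort impostors...
    thr = np.quantile(rule(impostor), 0.99)
    # ...then measure how often a spoof attack is falsely accepted.
    print(f"{name:>3} rule: FAR under spoof = {np.mean(rule(attack) >= thr):.3f}")
```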

    Efficient software attack to multimodal biometric systems and its application to face and iris fusion

    Full text link
    This is the author’s version of a work that was accepted for publication in Pattern Recognition Letters. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Pattern Recognition Letters 36 (2014), DOI: 10.1016/j.patrec.2013.04.029.
    In certain applications based on multimodal interaction it may be crucial to determine not only what the user is doing (commands), but who is doing it, in order to prevent fraudulent use of the system. Biometric technology, and particularly multimodal biometric systems, represents a highly efficient automatic recognition solution for this type of application. Although multimodal biometric systems have traditionally been regarded as more secure than unimodal systems, their vulnerability to spoofing attacks has recently been shown. New fusion techniques have been proposed and their performance thoroughly analysed in an attempt to increase the robustness of multimodal systems to these spoofing attacks. However, the vulnerability of multimodal approaches to software-based attacks remains unexplored. In this work we present the first software attack against multimodal biometric systems. Its performance is tested against a multimodal system based on face and iris, showing the vulnerability of the system to this new type of threat. Score quantization is then studied as a possible countermeasure, managing to cancel the effects of the proposed attack methodology under certain scenarios.
    This work has been partially supported by projects Contexts (S2009/TIC-1485) from CAM, Bio-Challenge (TEC2009-11186) and Bio-Shield (TEC2012-34881) from Spanish MINECO, TABULA RASA (FP7-ICT-257289) and BEAT (FP7-SEC-284989) from EU, and Cátedra UAM-Telefónica.
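    The paper's specific attack algorithm is not reproduced here; as a hedged illustration of the general idea, the sketch below shows a generic hill-climbing attack that perturbs a synthetic template and keeps a change only when the matcher's score feedback improves, together with score quantization as the countermeasure the abstract mentions. The `match_score` oracle, template size, and step parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def hill_climb(match_score, template_len, steps=5_000, sigma=0.05,
               quantization=None):
    """Generic hill-climbing attack against a matcher's score feedback.

    A synthetic template is perturbed at random and a perturbation is
    kept only when the returned score improves. If the system quantizes
    its scores, small improvements become invisible and the attack stalls.
    """
    x = rng.normal(size=template_len)            # random starting template
    best = match_score(x)
    if quantization:
        best = np.round(best / quantization) * quantization
    for _ in range(steps):
        cand = x + rng.normal(scale=sigma, size=template_len)
        s = match_score(cand)
        if quantization:
            s = np.round(s / quantization) * quantization
        if s > best:                             # keep only improvements
            x, best = cand, s
    return x, best

# Hypothetical matcher: similarity to a secret enrolled template.
target = rng.normal(size=32)
score = lambda x: float(-np.linalg.norm(x - target))

_, s_raw = hill_climb(score, 32)
_, s_quant = hill_climb(score, 32, quantization=0.5)
print(f"final score without quantization: {s_raw:.3f}")
print(f"final score with quantized feedback: {s_quant:.3f}")
```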

    Multi-modal association learning using spike-timing dependent plasticity (STDP)

    Get PDF
    We propose an associative learning model that can integrate facial images with speech signals to target a subject in a reinforcement learning (RL) paradigm. In this approach, the rules of learning involve associating paired stimuli (stimulus–stimulus, i.e., face–speech), also known as predictor–choice pairs. Prior to a learning simulation, we extract the features of the biometrics used in the study. For facial features, we experiment with two approaches: principal component analysis (PCA)-based Eigenfaces and singular value decomposition (SVD). For speech features, we use wavelet packet decomposition (WPD). The experiments show that the PCA-based Eigenfaces feature extraction approach produces better results than SVD. We implement the proposed learning model using the Spike-Timing-Dependent Plasticity (STDP) algorithm, which depends on the timing and rate of pre- and post-synaptic spikes. The key contribution of our study is the implementation of learning rules via STDP and firing rate in spatiotemporal neural networks based on the Izhikevich spiking model. We implement learning for response-group association following reward-modulated STDP in the RL setting, wherein the firing rate of the response groups determines the reward that is given. We perform a number of experiments that use existing face samples from the Olivetti Research Laboratory (ORL) dataset and speech samples from TIDigits. After several experiments and simulations to recognize a subject, the results show that the proposed learning model can associate the predictor (face) with the choice (speech) at optimum performance rates of 77.26% and 82.66% for training and testing, respectively. We also perform learning using real data, that is, an experiment conducted on a sample of face–speech data collected in a manner similar to that of the initial data. The performance results are 79.11% and 77.33% for training and testing, respectively. Based on these results, the proposed learning model can produce high learning performance when combining heterogeneous data (face–speech). This finding opens possibilities for expanding RL in the field of biometric authentication.
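    For reference, the pair-based STDP window underlying such learning rules can be sketched as follows; the amplitudes and time constants are textbook-style placeholder values, not the parameters used in the study.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a pre/post spike-time difference.

    dt = t_post - t_pre (ms). Pre-before-post (dt > 0) potentiates the
    synapse; post-before-pre (dt < 0) depresses it, both with an
    exponentially decaying dependence on |dt|.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)    # potentiation
    else:
        return -a_minus * np.exp(dt / tau_minus)  # depression

for dt in (-40.0, -10.0, 10.0, 40.0):
    print(f"dt = {dt:+.0f} ms -> dw = {stdp_dw(dt):+.5f}")
```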

    Robustness analysis of Likelihood Ratio score fusion rule for multimodal biometric systems under spoofing attacks

    Get PDF
    Recent works have shown that, contrary to a common belief, multimodal biometric systems may be "forced" by an impostor who submits a spoofed biometric replica of a genuine user to only one of the matchers. Although those results were obtained under a worst-case scenario in which the attacker is able to replicate the exact appearance of the true biometric, they raise the issue of investigating more thoroughly the robustness of multimodal systems against spoof attacks and of devising new methods to design systems that are robust against them. To this aim, in this paper we propose a robustness evaluation method that also takes into account scenarios more realistic than the worst-case one. Our method is based on an analytical model of the score distribution of fake traits, which is assumed to lie between the genuine and impostor score distributions and is parametrised by a measure of its relative distance to the distribution of impostor scores, which we name "fake strength". Varying the value of this parameter allows one to simulate the different factors that can affect the distribution of fake scores, such as the ability of the attacker to replicate a certain biometric. Preliminary experimental results on real bimodal biometric data sets made up of faces and fingerprints show that the widely used LLR rule can be highly vulnerable to spoof attacks against only one matcher, even when the attack has a low fake strength.
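    As a minimal sketch of the LLR fusion rule under study, the snippet below fuses two matcher scores via the sum of per-matcher log-likelihood ratios; the Gaussian density estimates and the independence assumption are illustrative stand-ins for whatever densities a real system would fit.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical bimodal training scores: columns are (face, fingerprint).
gen = rng.normal([2.0, 1.8], 0.5, size=(5_000, 2))
imp = rng.normal([0.0, 0.1], 0.5, size=(5_000, 2))

# Fit per-matcher Gaussian score densities (an illustrative choice).
g_mu, g_sd = gen.mean(axis=0), gen.std(axis=0)
i_mu, i_sd = imp.mean(axis=0), imp.std(axis=0)

def llr(scores):
    """Fused score: log p(s|genuine) - log p(s|impostor), summed over
    matchers under an independence assumption."""
    return (norm.logpdf(scores, g_mu, g_sd)
            - norm.logpdf(scores, i_mu, i_sd)).sum(axis=-1)

# Accept when the fused LLR exceeds a threshold set on training data.
thr = np.quantile(llr(imp), 0.99)   # ~1% false accept rate
print(f"genuine accept rate: {np.mean(llr(gen) >= thr):.3f}")
```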

    Audio-Video Person Authentication Based on 3D Facial Feature Warping

    Get PDF