
    Robustness analysis of Likelihood Ratio score fusion rule for multimodal biometric systems under spoofing attacks

    Abstract: Recent works have shown that, contrary to a common belief, multimodal biometric systems may be "forced" by an impostor who submits a spoofed biometric replica of a genuine user to only one of the matchers. Although those results were obtained under a worst-case scenario in which the attacker is able to replicate the exact appearance of the true biometric, they raise the issue of investigating more thoroughly the robustness of multimodal systems against spoof attacks, and of devising new methods for designing systems robust against them. To this aim, in this paper we propose a robustness evaluation method which also takes into account scenarios more realistic than the worst-case one. Our method is based on an analytical model of the score distribution of fake traits, which is assumed to lie between the genuine and impostor score distributions, and is parametrised by a measure of its relative distance to the impostor score distribution, which we name "fake strength". Varying the value of this parameter allows one to simulate the different factors which can affect the distribution of fake scores, such as the ability of the attacker to replicate a certain biometric. Preliminary experimental results on real bimodal biometric data sets made up of faces and fingerprints show that the widely used LLR rule can be highly vulnerable to spoof attacks against only one matcher, even when the attack has a low fake strength.
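The fake-strength idea described above can be illustrated with a minimal sketch: model fake scores as a distribution whose location slides between the impostor and genuine ones as the strength parameter grows. All distribution parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian genuine and impostor match-score distributions
# (means and standard deviations are illustrative only).
genuine = rng.normal(loc=0.8, scale=0.1, size=10_000)
impostor = rng.normal(loc=0.2, scale=0.1, size=10_000)

def fake_scores(alpha, n=10_000):
    """Simulate fake-trait scores for a fake strength alpha in [0, 1].

    alpha = 0 reproduces the impostor distribution, alpha = 1 the genuine
    one (the worst-case scenario); intermediate values shift the mean
    linearly between the two. This is one simple realisation of the
    paper's idea, not the authors' exact model.
    """
    mu = (1 - alpha) * impostor.mean() + alpha * genuine.mean()
    return rng.normal(loc=mu, scale=0.1, size=n)

weak = fake_scores(0.3)    # low fake strength: scores close to impostors'
strong = fake_scores(0.9)  # high fake strength: scores close to genuines'
```

Sweeping `alpha` and re-evaluating the fusion rule on the simulated scores is then enough to trace how acceptance rates degrade with attacker skill.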

    Security of multimodal biometric systems against spoof attacks

    A biometric system is essentially a pattern recognition system used in an adversarial environment: like any conventional security system, it is exposed to malicious adversaries who can manipulate data to make the system ineffective by compromising its integrity. Current theory and design methods of biometric systems do not take this vulnerability into account. Therefore, evaluating whether classical design methods lead to secure systems is an open problem. In order to make biometric systems secure, it is necessary to understand and evaluate the threats, and thus to develop effective countermeasures and robust system designs, both technical and procedural, where necessary. Accordingly, extending the theory and design methods of biometric systems is mandatory to safeguard their security and reliability in adversarial environments. In this thesis, we provide some contributions in this direction. Among all the potential attacks discussed in the literature, spoof attacks are one of the main threats against the security of biometric systems for identity recognition. Multimodal biometric systems are commonly believed to be intrinsically more robust to spoof attacks than systems based on a single biometric trait, as they combine information coming from different biometric traits. However, recent works have questioned this belief and shown that multimodal systems can be misled by an attacker (impostor) even by spoofing only one of the biometric traits. Therefore, we first provide a detailed review of state-of-the-art work on multimodal biometric systems under spoof attacks.
The scope of state-of-the-art results is very limited, since they were obtained under a very restrictive “worst-case” hypothesis, where the attacker is assumed to be able to fabricate a perfect replica of a biometric trait whose matching score distribution is identical to that of genuine traits. Thus, we investigate the validity of the “worst-case” hypothesis using a large set of real spoof attacks, and provide empirical evidence that the “worst-case” scenario cannot be representative of real spoof attacks: its suitability may depend on the specific biometric trait, the matching algorithm, and the techniques used to counterfeit the spoofed traits. Then, we propose a security evaluation methodology for biometric systems against spoof attacks that can be used in real applications, as it does not require fabricating fake biometric traits, it allows the designer to take into account the different possible qualities of fake traits used by different attackers, and it exploits only the information on genuine and impostor samples which is collected for the training of a biometric system. Our methodology evaluates the performance under a simulated spoof attack using a model of the fake score distribution that explicitly takes into account different degrees of quality of the fake biometric traits. In particular, we propose two models of the match score distribution of fake traits that take into account all the different factors which can affect it, such as the particular spoofed biometric, the sensor, the matching algorithm, the technique used to construct the fake biometrics, and the skills of the attacker. All these factors are summarized in a single parameter that we call “attack strength”. Further, we propose an extension of our security evaluation method to rank several biometric score fusion rules according to their relative robustness against spoof attacks.
This method allows the designer to choose the most robust rule according to the method's prediction. We then present an empirical analysis, using data sets of faces and fingerprints including real spoofed traits, to show that our proposed models provide a good approximation of the fake traits' score distribution, and that our method thus provides an adequate estimation of the security of biometric systems against spoof attacks. We also use our method to show how to evaluate the security of different multimodal systems on publicly available benchmark data sets without spoof attacks. Our experimental results show that the robustness of multimodal biometric systems to spoof attacks strongly depends on the particular matching algorithm, the score fusion rule, and the attack strength of the fake traits. We eventually present evidence, considering a multimodal system based on face and fingerprint biometrics, that the proposed methodology is capable of providing a correct ranking of score fusion rules under spoof attacks.
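The likelihood ratio (LLR) fusion rule that both works above evaluate can be sketched in a few lines: each matcher's score is converted to a log-likelihood ratio between genuine and impostor score models, the ratios are summed, and the claim is accepted if the sum exceeds a threshold. The Gaussian score models and all parameters below are illustrative assumptions; they are only meant to show why spoofing a single matcher can flip the fused decision.

```python
import math

# Hypothetical per-matcher Gaussian score models: (mean, std) pairs.
GEN = {"face": (0.75, 0.10), "finger": (0.80, 0.08)}  # genuine users
IMP = {"face": (0.25, 0.10), "finger": (0.20, 0.08)}  # impostors

def gauss_pdf(x, mu, sigma):
    """Density of a normal distribution N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def llr_fusion(scores, threshold=0.0):
    """Accept iff the summed log-likelihood ratio over all matchers
    exceeds the threshold (the LLR score fusion rule)."""
    llr = 0.0
    for matcher, s in scores.items():
        llr += math.log(gauss_pdf(s, *GEN[matcher]))
        llr -= math.log(gauss_pdf(s, *IMP[matcher]))
    return llr > threshold

# Spoofing only the fingerprint matcher: its fake score looks genuine
# while the face score stays impostor-like, yet fusion still accepts.
print(llr_fusion({"face": 0.25, "finger": 0.80}))  # → True
```

Under these toy models, the genuine-looking fingerprint score contributes a large positive log-ratio that outweighs the negative face contribution, which is exactly the single-matcher vulnerability the abstracts describe.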

    Multimodal biometric authentication based on voice, fingerprint and face recognition

    A new decision module to combine the scores of voice, fingerprint and face recognition in a multimodal biometric system.

    Biometric Spoofing: A JRC Case Study in 3D Face Recognition

    Based on newly available and affordable off-the-shelf 3D sensing, processing and printing technologies, the JRC has conducted a comprehensive study on the feasibility of spoofing 3D and 2.5D face recognition systems with low-cost, self-manufactured models, and presents in this report a systematic and rigorous evaluation of the real risk posed by such an attack approach, complemented by a test campaign. The work accomplished and presented in this report covers theories, methodologies, state-of-the-art techniques and evaluation databases, and also aims at providing an outlook into the future of this extremely active field of research. JRC.G.6 - Digital Citizen Security.

    Anti-spoofing in action: joint operation with a verification system

    Besides the recognition task, today's biometric systems need to cope with an additional problem: spoofing attacks. To date, academic research has considered spoofing as a binary classification problem: systems are trained to discriminate between real accesses and attacks. However, spoofing countermeasures are not designed to operate stand-alone, but as a part of the recognition system they protect. In this paper, we study techniques for decision-level and score-level fusion to integrate recognition and anti-spoofing systems, using an open-source framework that handles the ternary classification problem (clients, impostors and attacks) transparently. By doing so, we are able to report the impact of different spoofing countermeasures, fusion techniques and thresholding choices on the overall performance of the final recognition system. For a specific use case covering face verification, experiments show to what extent simple fusion improves the trustworthiness of the system when exposed to spoofing attacks.
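The two integration strategies mentioned above can be sketched minimally: a decision-level AND rule that requires both the verifier and the liveness detector to accept independently, and a score-level weighted sum compared against a single threshold. The function names, weights and thresholds are illustrative assumptions, not the paper's actual framework.

```python
def and_fusion(verif_score, liveness_score, t_verif=0.5, t_live=0.5):
    """Decision-level AND fusion: accept only if the verification score
    and the liveness (anti-spoofing) score each pass their own threshold."""
    return verif_score >= t_verif and liveness_score >= t_live

def sum_fusion(verif_score, liveness_score, w=0.5, t=0.5):
    """Score-level fusion: a weighted sum of the two scores is compared
    against one global threshold, trading the two errors off jointly."""
    return w * verif_score + (1 - w) * liveness_score >= t
```

The AND rule rejects any sample the liveness detector dislikes, even a strongly matching one, whereas the weighted sum lets a very high verification score partially compensate for a mediocre liveness score; choosing between them is exactly the thresholding trade-off the paper studies.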

    Face Liveness Detection under Processed Image Attacks

    Face recognition is a mature and reliable technology for identifying people. Thanks to high-definition cameras and supporting devices, it is considered the fastest and least intrusive biometric recognition modality. Nevertheless, effective spoofing attempts on face recognition systems have been shown to be possible. As a result, various anti-spoofing algorithms have been developed to counteract these attacks; they are commonly referred to in the literature as liveness detection tests. In this research we highlight the effectiveness of some simple, direct spoofing attacks, and test one of the current robust liveness detection algorithms, namely the logistic-regression-based face liveness detection from a single image proposed by Tan et al. in 2010, against malicious attacks using processed imposter images. In particular, we study experimentally the effect of common image processing operations, such as sharpening and smoothing, as well as corruption with salt-and-pepper noise, on the face liveness detection algorithm, and we find that it is especially vulnerable to spoofing attempts using processed imposter images. We design and present a new facial database, the Durham Face Database, which is, to the best of our knowledge, the first to include client, imposter and processed imposter images. Finally, we evaluate our claim on the effectiveness of the proposed imposter image attacks using transfer learning on convolutional neural networks. We verify that such attacks are more difficult to detect even when using high-end, expensive machine learning techniques.
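The three image-processing operations the abstract names (smoothing, sharpening, salt-and-pepper corruption) are standard and can be sketched on a plain array, without claiming these are the paper's exact parameters; the kernel sizes, sharpening gain and noise fraction below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a greyscale imposter face image, values in [0, 1).
img = rng.random((64, 64))

def smooth(x):
    """3x3 box blur: suppresses the high-frequency texture cues that
    single-image liveness tests often rely on."""
    p = np.pad(x, 1, mode="edge")
    return sum(p[i:i + 64, j:j + 64] for i in range(3) for j in range(3)) / 9.0

def sharpen(x, gain=1.0):
    """Unsharp masking: add back amplified high frequencies so a flat
    (printed or replayed) image mimics a crisper live capture."""
    return np.clip(x + gain * (x - smooth(x)), 0.0, 1.0)

def salt_pepper(x, p=0.05):
    """Corrupt a fraction p of pixels with extreme black/white values."""
    y = x.copy()
    mask = rng.random(x.shape)
    y[mask < p / 2] = 0.0
    y[mask > 1 - p / 2] = 1.0
    return y
```

Feeding `smooth(img)`, `sharpen(img)` and `salt_pepper(img)` to a liveness detector alongside the unprocessed image is the kind of attack-side pipeline the study evaluates.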