
    On the performance of helper data template protection schemes

    The use of biometrics looks promising, as it is already being applied in electronic passports (ePassports) on a global scale. Because the biometric data has to be stored as a reference template on either a central or personal storage device, its widespread use introduces new security and privacy risks such as (i) identity fraud, (ii) cross-matching, (iii) irrevocability and (iv) leaking sensitive medical information. Mitigating these risks is essential to obtain acceptance from the subjects of the biometric systems and therefore to facilitate successful implementation on a large scale. A solution to mitigate these risks is to use template protection techniques. The required protection properties of the stored reference template according to ISO guidelines are (i) irreversibility, (ii) renewability and (iii) unlinkability. A known template protection scheme is the helper data system (HDS). The fundamental principle of the HDS is to bind a key to the biometric sample with use of helper data and cryptography, such that the key can be reproduced or released given another biometric sample of the same subject. The identity check is then performed in a secure way by comparing the hash of the key. Hence, the size of the key determines the amount of protection. This thesis extensively investigates the HDS, namely (i) the theoretical classification performance, (ii) the maximum key size, (iii) the irreversibility and unlinkability properties, and (iv) the optimal multi-sample and multi-algorithm fusion method. The theoretical classification performance of the biometric system is determined by assuming that the features extracted from the biometric sample are Gaussian distributed. With this assumption we investigate the influence of the bit extraction scheme on the classification performance. Using the theoretical framework, the maximum size of the key is determined by assuming the error-correcting code to operate on Shannon's bound. We also show three vulnerabilities of the HDS that affect the irreversibility and unlinkability properties and propose solutions. Finally, we study the optimal level of applying multi-sample and multi-algorithm fusion with the HDS at either feature-, score-, or decision-level.
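    A minimal sketch of the key-binding principle described above, assuming a toy [3,1] repetition code in place of a real error-correcting code and SHA-256 as the hash; the parameters and function names are illustrative, not the thesis implementation.

```python
# Minimal helper-data-system (fuzzy commitment) sketch.
# Assumptions: a [3,1] repetition code stands in for the real ECC and the
# binary feature vector is already given; not the thesis implementation.
import hashlib
import secrets

import numpy as np

REP = 3  # repetition factor of the toy error-correcting code


def encode(key_bits: np.ndarray) -> np.ndarray:
    """Encode each key bit by repeating it REP times."""
    return np.repeat(key_bits, REP)


def decode(codeword: np.ndarray) -> np.ndarray:
    """Majority-vote decoding of the repetition code."""
    return (codeword.reshape(-1, REP).sum(axis=1) > REP // 2).astype(np.uint8)


def enroll(bio_bits: np.ndarray, key_len: int):
    """Bind a random key to the enrolment sample; store only (helper, hash)."""
    key = np.frombuffer(secrets.token_bytes(key_len), dtype=np.uint8) & 1
    helper = encode(key) ^ bio_bits                # code-offset helper data
    return helper, hashlib.sha256(key.tobytes()).hexdigest()


def verify(bio_bits: np.ndarray, helper: np.ndarray, key_hash: str) -> bool:
    """Release the key from a fresh sample and compare hashes."""
    key_candidate = decode(helper ^ bio_bits)
    return hashlib.sha256(key_candidate.tobytes()).hexdigest() == key_hash


# Toy usage: 8-bit key, 24-bit binary biometric, two flipped bits at verification.
rng = np.random.default_rng(0)
enrol_bits = rng.integers(0, 2, size=8 * REP, dtype=np.uint8)
verif_bits = enrol_bits.copy()
verif_bits[[3, 17]] ^= 1                           # within-class bit errors
helper, key_hash = enroll(enrol_bits, key_len=8)
print(verify(verif_bits, helper, key_hash))        # True: errors are corrected
```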

    Binary Biometrics: An Analytic Framework to Estimate the Bit Error Probability under Gaussian Assumption

    In recent years the protection of biometric data has gained increased interest from the scientific community. Methods such as the helper data system, fuzzy extractors, the fuzzy vault and cancellable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic primitives and require a binary representation of the real-valued biometric data. Hence, the similarity of biometric samples is measured in terms of the Hamming distance between the binary vectors obtained at the enrolment and verification phases. The number of errors depends on the expected error probability Pe of each bit between two biometric samples of the same subject. In this paper we introduce a framework for analytically estimating Pe under the assumption that the within- and between-class distributions can be modeled by Gaussian distributions. We present the analytic expression of Pe as a function of the number of samples used at the enrolment (Ne) and verification (Nv) phases. The analytic expressions are validated using the FRGC v2 and FVC2000 biometric databases.
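    The quantity Pe can also be approximated numerically: under the Gaussian within-/between-class model, a bit is obtained by thresholding the mean of the Ne (or Nv) samples against the population mean, and Pe is the probability that the enrolment and verification bits disagree. The sketch below estimates Pe by Monte Carlo simulation rather than with the paper's closed-form expression; the variances and sample counts are illustrative assumptions.

```python
# Monte Carlo estimate of the bit error probability Pe for one Gaussian
# feature component; a numerical stand-in for the analytic expression,
# with illustrative variances, not the paper's parameter values.
import numpy as np

rng = np.random.default_rng(1)


def estimate_pe(sigma_b=1.0, sigma_w=0.5, ne=3, nv=1, trials=200_000):
    """P(enrolment bit != verification bit) for one feature component.

    sigma_b: between-class (subject mean) standard deviation
    sigma_w: within-class (sample noise) standard deviation
    ne, nv:  number of samples averaged at enrolment / verification
    """
    mu = rng.normal(0.0, sigma_b, size=trials)                    # subject means
    enrol = mu + rng.normal(0.0, sigma_w / np.sqrt(ne), size=trials)
    verif = mu + rng.normal(0.0, sigma_w / np.sqrt(nv), size=trials)
    # Bits are obtained by thresholding against the population mean (0).
    return np.mean((enrol > 0) != (verif > 0))


for ne, nv in [(1, 1), (3, 1), (6, 1), (6, 6)]:
    print(f"Ne={ne}, Nv={nv}: Pe ~ {estimate_pe(ne=ne, nv=nv):.4f}")
```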

    Pitfall of the Detection Rate Optimized Bit Allocation within template protection and a remedy

    One of the requirements of a biometric template protection system is that the protected template ideally should not leak any information about the biometric sample or its derivatives. In the literature, several proposed template protection techniques are based on binary vectors, and hence require the extraction of a binary representation from the real-valued biometric sample. In this work we focus on the Detection Rate Optimized Bit Allocation (DROBA) quantization scheme, which extracts multiple bits per feature component while maximizing the overall detection rate. The allocation strategy has to be stored as auxiliary data for reuse in the verification phase and is considered public. This implies that the auxiliary data should not leak any information about the extracted binary representation. Experiments in our work show that the original DROBA algorithm, as known in the literature, creates auxiliary data that leaks a significant amount of information. We show how an adversary is able to exploit this information and significantly increase its success rate of obtaining a false accept. Fortunately, the information leakage can be mitigated by restricting the allocation freedom of the DROBA algorithm. We propose a method based on population statistics and empirically illustrate its effectiveness. All experiments are based on the MCYT fingerprint database using two different texture-based feature extraction algorithms.
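    DROBA builds on a quantizer that extracts multiple bits per feature component; a common construction, sketched below under the assumption of a Gaussian background density, splits the feature axis into 2^b equally probable intervals and Gray-codes the interval index. The number of bits b allocated to each component is what DROBA optimizes and what ends up in the public auxiliary data.

```python
# Multi-bit quantisation of one Gaussian-distributed feature component:
# the feature axis is split into 2**b equally probable intervals of the
# background distribution and the interval index is Gray-coded.
# A sketch of the kind of quantiser DROBA allocates bits over; illustrative only.
import bisect
from statistics import NormalDist


def gray_code(index: int, bits: int) -> str:
    """Binary-reflected Gray code of the interval index."""
    return format(index ^ (index >> 1), f"0{bits}b")


def quantise(x: float, bits: int, mean: float = 0.0, std: float = 1.0) -> str:
    """Map a feature value to its Gray-coded interval label."""
    n_intervals = 2 ** bits
    # Interval boundaries at the i/2**bits quantiles of the background density.
    boundaries = [NormalDist(mean, std).inv_cdf(i / n_intervals)
                  for i in range(1, n_intervals)]
    index = bisect.bisect_right(boundaries, x)
    return gray_code(index, bits)


# Example: the same feature value encoded with 1, 2 and 3 allocated bits.
for b in (1, 2, 3):
    print(b, quantise(0.7, b))
```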

    Multi-sample fusion with template protection

    The widespread use of biometrics and its increased popularity introduces privacy risks. In order to mitigate these risks, solutions such as the helper-data system, fuzzy vault, fuzzy extractors, and cancelable biometrics were introduced, also known as the field of template protection. Besides these developments, fusion of multiple sources of biometric information has been shown to improve the verification performance of the biometric system. Our work consists of analyzing feature-level fusion in the context of the template protection framework using the helper-data system. We verify the results using the FRGC v2 database and two feature extraction algorithms.
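    A minimal sketch of feature-level multi-sample fusion prior to binarisation, assuming that the real-valued feature vectors of the Ne enrolment samples are averaged and then thresholded against the population mean; the fusion strategies actually compared in the paper may differ.

```python
# Feature-level multi-sample fusion sketch: average the real-valued feature
# vectors of the Ne enrolment samples before binarisation. Averaging and
# mean-thresholding are illustrative assumptions, not the paper's exact setup.
import numpy as np


def fuse_and_binarise(samples: np.ndarray, population_mean: np.ndarray) -> np.ndarray:
    """samples: (Ne, D) feature vectors of one subject -> fused D-bit vector."""
    fused = samples.mean(axis=0)                   # feature-level fusion
    return (fused > population_mean).astype(np.uint8)


rng = np.random.default_rng(2)
population_mean = np.zeros(6)
subject_mean = rng.normal(0.0, 1.0, size=6)
samples = subject_mean + rng.normal(0.0, 0.5, size=(4, 6))   # Ne = 4 samples
print(fuse_and_binarise(samples, population_mean))
```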

    Maximum Key Size and Classification Performance of Fuzzy Commitment for Gaussian Modeled Biometric Sources

    Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A large portion of template protection techniques is based on extracting a key from, or binding a key to, the binary vector derived from the biometric sample. The size of the key plays an important role, as the achieved privacy and security mainly depend on the entropy of the key. In the literature, it can be observed that there is a large variation in the reported key lengths at similar classification performance of the same template protection system, even when based on the same biometric modality and database. In this work, we determine the analytical relationship between the classification performance of the fuzzy commitment scheme and the theoretical maximum key size, given a Gaussian biometric source as input. We show the effect of system parameters such as the biometric source capacity, the number of feature components, the number of enrolment and verification samples, and the target performance on the maximum key size. Furthermore, we provide an analysis of the effect of feature interdependencies on the estimated maximum key size and classification performance. Both the theoretical analysis and an experimental evaluation using the MCYT fingerprint database show that feature interdependencies have a large impact on the performance and key size estimates. This property can explain the large deviation in key sizes reported in the literature.
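    One way to read the "maximum key size" is through the Shannon bound of the binary symmetric channel formed by the within-class bit errors: with n binary features and bit error probability Pe, an ideal error-correcting code cannot bind more than roughly n(1 - h(Pe)) key bits, where h is the binary entropy function. The sketch below evaluates only this back-of-the-envelope bound; the paper's analysis additionally accounts for the source capacity, the numbers of enrolment and verification samples and the target performance.

```python
# Rough upper bound on the key size of a fuzzy commitment scheme:
# an ideal code over a binary symmetric channel with crossover Pe carries
# at most n * (1 - h(Pe)) key bits. The paper derives a more refined
# expression; this is only the Shannon-bound back-of-the-envelope figure.
import math


def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)


def max_key_bits(n_features: int, pe: float) -> float:
    return n_features * (1.0 - binary_entropy(pe))


for pe in (0.05, 0.10, 0.20):
    print(f"n=512, Pe={pe:.2f}: at most {max_key_bits(512, pe):.0f} key bits")
```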

    Multi-algorithm fusion with template protection

    The popularity of biometrics and its widespread use introduces privacy risks. To mitigate these risks, solutions such as the helper-data system, fuzzy vault, fuzzy extractors, and cancelable biometrics were introduced, also known as the field of template protection. In parallel to these developments, fusion of multiple sources of biometric information has been shown to improve the verification performance of the biometric system. In this work we analyze fusion of the protected templates from two 3D recognition algorithms (multi-algorithm fusion) at feature-, score-, and decision-level. We show that fusion can be applied at the known fusion levels with the template protection technique known as the helper-data system. We also illustrate the required changes to the helper-data system and its corresponding limitations. Furthermore, our experimental results, based on 3D face range images of the FRGC v2 dataset, show that fusion indeed improves the verification performance.
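    The three fusion levels can be illustrated on two binary feature vectors, one per recognition algorithm. The Hamming-distance score and the thresholds in the sketch below are illustrative assumptions rather than the paper's settings, and in a fully protected system the score is not directly available from a hash comparison without the changes to the helper-data system discussed in the paper.

```python
# Sketch of the three fusion levels for two recognition algorithms producing
# binary feature vectors. The thresholds and the score definition (Hamming
# distance) are illustrative assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(3)
enrol_1, enrol_2 = rng.integers(0, 2, (2, 64), dtype=np.uint8)    # algorithms 1 and 2
verif_1 = enrol_1 ^ (rng.random(64) < 0.05).astype(np.uint8)      # same subject,
verif_2 = enrol_2 ^ (rng.random(64) < 0.05).astype(np.uint8)      # some bit errors

# Feature level: concatenate the binary vectors before key binding.
fused_enrol = np.concatenate([enrol_1, enrol_2])
fused_verif = np.concatenate([verif_1, verif_2])
print("feature-level Hamming distance:", int((fused_enrol ^ fused_verif).sum()))

# Score level: combine the per-algorithm Hamming distances (e.g. by summing).
s1 = int((enrol_1 ^ verif_1).sum())
s2 = int((enrol_2 ^ verif_2).sum())
print("score-level accept:", s1 + s2 <= 12)

# Decision level: AND the per-algorithm accept decisions.
print("decision-level accept:", (s1 <= 6) and (s2 <= 6))
```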

    Preventing the Decodability Attack Based Cross-Matching in a Fuzzy Commitment Scheme

    Template protection techniques are used within biometric systems in order to safeguard the privacy of the system's subjects. This protection also includes unlinkability, i.e., preventing cross-matching between two or more reference templates of the same subject across different applications. In the literature, template protection techniques based on fuzzy commitment, also known as the code-offset construction, have recently been investigated. Recent work presented the decodability attack vulnerability, which facilitates cross-matching based on the protected templates, together with its theoretical analysis. First, we extend the theoretical analysis and include the comparison between the system performance and the cross-matching performance. We validate the presented analysis using real biometric data from the MCYT fingerprint database. Second, we show that applying a random bit-permutation process secures the fuzzy commitment scheme against cross-matching based on the decodability attack.
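    The attack and the remedy can be demonstrated on a toy code-offset construction: for a linear code, the XOR of two helper-data strings of the same subject is close to a codeword and therefore decodable, whereas a record-specific random bit permutation destroys this alignment. The random linear code, its parameters and the decision threshold below are illustrative assumptions, not the codes used in the paper.

```python
# Decodability-attack sketch against the code-offset (fuzzy commitment)
# construction, and the bit-permutation remedy. The toy random linear code,
# its parameters and the correction radius are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
N, K, T = 32, 8, 4                       # code length, key bits, correction radius

# Toy linear code: random generator matrix, brute-force bounded-distance check.
G = rng.integers(0, 2, (K, N), dtype=np.uint8)
messages = ((np.arange(2 ** K)[:, None] >> np.arange(K)) & 1).astype(np.uint8)
codebook = (messages @ G) % 2            # all 2**K codewords


def decodable(word):
    """True if `word` lies within Hamming distance T of some codeword."""
    return bool(((codebook ^ word).sum(axis=1) <= T).any())


def commit(bio_bits, perm=None):
    """Code-offset helper data, optionally after a record-specific permutation."""
    if perm is not None:
        bio_bits = bio_bits[perm]
    codeword = codebook[rng.integers(2 ** K)]
    return codeword ^ bio_bits


# Two enrolments of the same subject in two applications (few bit errors apart).
bio_1 = rng.integers(0, 2, N, dtype=np.uint8)
bio_2 = bio_1 ^ (rng.random(N) < 0.05).astype(np.uint8)

w1, w2 = commit(bio_1), commit(bio_2)
# Without the remedy, w1 ^ w2 = (c1 ^ c2) ^ (bio_1 ^ bio_2) lies close to the
# codeword c1 ^ c2, so it decodes and the two records can be cross-matched.
print("cross-matchable without permutation:", decodable(w1 ^ w2))

# With a different random permutation per record the offsets no longer align.
p1, p2 = rng.permutation(N), rng.permutation(N)
w1p, w2p = commit(bio_1, p1), commit(bio_2, p2)
print("cross-matchable with permutation:  ", decodable(w1p ^ w2p))
```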

    Advanced Comparison Techniques for Challenging Iris Images
