724 research outputs found

    Fast and Accurate 3D Face Recognition Using Registration to an Intrinsic Coordinate System and Fusion of Multiple Region Classifiers

    In this paper we present a new robust approach for 3D face registration to an intrinsic coordinate system of the face. The intrinsic coordinate system is defined by the vertical symmetry plane through the nose, the tip of the nose and the slope of the bridge of the nose. In addition, we propose a 3D face classifier based on the fusion of many dependent region classifiers for overlapping face regions. The region classifiers use PCA-LDA for feature extraction and the likelihood ratio as a matching score. Fusion is realised using straightforward majority voting for the identification scenario. For verification, a voting approach is used as well and the decision is made by comparing the number of votes to a threshold. Using the proposed registration method combined with a classifier consisting of 60 fused region classifiers, we obtain a 99.0% identification rate on the all vs first identification test of the FRGC v2 data. A verification rate of 94.6% at FAR=0.1% was obtained for the all vs all verification test on the FRGC v2 data using fusion of 120 region classifiers. The first is the highest reported performance and the second is in the top-5 of best performing systems on these tests. In addition, our approach is much faster than other methods, taking only 2.5 seconds per image for registration and less than 0.1 ms per comparison. Because we apply feature extraction using PCA and LDA, the resulting template size is also very small: 6 kB for 60 region classifiers.
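The fusion step described above can be sketched as plain majority voting over per-region scores. This is a minimal illustration, not the authors' implementation; the `region_scores` structure and the function names are assumptions made for this sketch:

```python
from collections import Counter

def fuse_by_majority_vote(region_scores):
    """Identification by majority voting over region classifiers.

    region_scores: one dict per region classifier, mapping gallery
    identity -> matching score (higher = more similar, e.g. a
    log-likelihood ratio).  Each region classifier votes for its
    best-matching identity; the identity with the most votes wins.
    """
    votes = Counter(max(scores, key=scores.get) for scores in region_scores)
    identity, n_votes = votes.most_common(1)[0]
    return identity, n_votes

def verify_by_vote_count(region_scores, claimed_id, vote_threshold):
    """Verification: accept the claimed identity when the number of
    region classifiers voting for it reaches the decision threshold."""
    votes = Counter(max(scores, key=scores.get) for scores in region_scores)
    return votes[claimed_id] >= vote_threshold
```

For verification the only change, as in the abstract, is that the vote count for the claimed identity is compared to a threshold instead of to the other identities' counts.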

    Binary Biometrics: An Analytic Framework to Estimate the Bit Error Probability under Gaussian Assumption

    In recent years the protection of biometric data has gained increased interest from the scientific community. Methods such as the helper data system, fuzzy extractors, fuzzy vault and cancellable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic primitives and require a binary representation of the real-valued biometric data. Hence, the similarity of biometric samples is measured in terms of the Hamming distance between the binary vectors obtained at the enrolment and verification phases. The number of errors depends on the expected error probability Pe of each bit between two biometric samples of the same subject. In this paper we introduce a framework for analytically estimating Pe under the assumption that the within- and between-class distributions can be modeled as Gaussian. We present the analytic expression of Pe as a function of the number of samples used at the enrolment (Ne) and verification (Nv) phases. The analytic expressions are validated using the FRGC v2 and FVC2000 biometric databases.
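The quantity Pe under the Gaussian assumption can be illustrated with a small semi-analytic estimate. The sketch below is an assumption-laden toy model (single feature component, one sample per phase, bit = sign of the feature, zero-mean between-class distribution), not the paper's closed-form expression:

```python
import math
import random

def estimate_bit_error_prob(sigma_b=1.0, sigma_w=0.5, n_subjects=20000):
    """Monte Carlo estimate of the bit error probability Pe for one
    feature component under a Gaussian model: the subject's true
    feature mean is drawn from the between-class density N(0, sigma_b^2),
    and each observation adds within-class noise N(0, sigma_w^2).
    A bit is the sign of the observed feature, so two samples of the
    same subject disagree when exactly one of them crosses zero.
    """
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))  # Gaussian tail prob.
    total = 0.0
    for _ in range(n_subjects):
        mu = random.gauss(0.0, sigma_b)          # subject's true feature mean
        p_flip = q(abs(mu) / sigma_w)            # one sample flips its sign
        total += 2.0 * p_flip * (1.0 - p_flip)   # exactly one of the two flips
    return total / n_subjects
```

As expected, Pe grows toward 0.5 as the within-class noise sigma_w grows relative to the between-class spread sigma_b.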

    Binary Biometrics: An Analytic Framework to Estimate the Performance Curves Under Gaussian Assumption

    In recent years, the protection of biometric data has gained increased interest from the scientific community. Methods such as the fuzzy commitment scheme, helper-data system, fuzzy extractors, fuzzy vault, and cancelable biometrics have been proposed for protecting biometric data. Most of these methods use cryptographic primitives or error-correcting codes (ECCs) and use a binary representation of the real-valued biometric data. Hence, the difference between two biometric samples is given by the Hamming distance (HD), or number of bit errors, between the binary vectors obtained from the enrollment and verification phases, respectively. If the HD is smaller (larger) than the decision threshold, then the subject is accepted (rejected) as genuine. Because of the use of ECCs, this decision threshold is limited to the maximum error-correcting capacity of the code, consequently limiting the false rejection rate (FRR) and false acceptance rate tradeoff. A method to improve the FRR consists of using multiple biometric samples in either the enrollment or verification phase. The noise is suppressed, hence reducing the number of bit errors and decreasing the HD. In practice, the number of samples is empirically chosen without fully considering its fundamental impact. In this paper, we present a Gaussian analytical framework for estimating the performance of a binary biometric system given the number of samples being used in the enrollment and the verification phase. The detection error tradeoff curve that combines the false acceptance and false rejection rates is estimated to assess the system performance. The analytic expressions are validated using the Face Recognition Grand Challenge v2 and Fingerprint Verification Competition 2000 biometric databases.
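The link between the HD threshold and the FAR/FRR tradeoff can be made concrete with a standard binomial model, a common simplification when bit errors are assumed independent: genuine comparisons flip each bit with probability Pe, impostor comparisons with probability about 0.5. This sketch is an illustration of that textbook model, not the framework from the paper:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def far_frr_curve(n_bits, pe_genuine, pe_impostor=0.5):
    """Theoretical (threshold, FAR, FRR) triples for every Hamming-
    distance threshold t, accepting a comparison when HD <= t.
    The number of bit errors is modelled as binomial: each of n_bits
    bits flips independently with probability pe_genuine (genuine
    pairs) or pe_impostor (impostor pairs, ~0.5 for random templates).
    """
    curve = []
    for t in range(n_bits + 1):
        far = binom_cdf(t, n_bits, pe_impostor)       # impostor accepted
        frr = 1.0 - binom_cdf(t, n_bits, pe_genuine)  # genuine rejected
        curve.append((t, far, frr))
    return curve
```

Sweeping t trades FRR against FAR, and an ECC with correcting capacity t_max simply truncates this sweep at t = t_max, which is exactly the limitation the abstract describes.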

    Pitfall of the Detection Rate Optimized Bit Allocation within template protection and a remedy

    One of the requirements of a biometric template protection system is that the protected template ideally should not leak any information about the biometric sample or its derivatives. In the literature, several proposed template protection techniques are based on binary vectors. Hence, they require the extraction of a binary representation from the real-valued biometric sample. In this work we focus on the Detection Rate Optimized Bit Allocation (DROBA) quantization scheme that extracts multiple bits per feature component while maximizing the overall detection rate. The allocation strategy has to be stored as auxiliary data for reuse in the verification phase and is considered public. This implies that the auxiliary data should not leak any information about the extracted binary representation. Experiments in our work show that the original DROBA algorithm, as known in the literature, creates auxiliary data that leaks a significant amount of information. We show how an adversary is able to exploit this information and significantly increase its success rate of obtaining a false accept. Fortunately, the information leakage can be mitigated by restricting the allocation freedom of the DROBA algorithm. We propose a method based on population statistics and empirically illustrate its effectiveness. All the experiments are based on the MCYT fingerprint database using two different texture-based feature extraction algorithms.
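The idea of allocating a variable number of bits per feature component to maximize an overall detection rate can be sketched with a greedy allocator. This is a hedged simplification: how the per-component detection rates are estimated is the core of DROBA and is not reproduced here, and the greedy strategy, the `detection_rates` table and the function name are assumptions of this sketch:

```python
import heapq
import math

def greedy_bit_allocation(detection_rates, total_bits):
    """Greedy DROBA-style bit allocation sketch.

    detection_rates[i][b] = estimated probability that component i,
    quantized to b+1 bits, still reproduces the genuine bit string at
    verification.  Rates decrease as more bits are extracted.  Bits
    are handed out one at a time to the component with the best
    marginal change in log detection rate, which greedily maximizes
    the product of the per-component detection rates.
    """
    alloc = [0] * len(detection_rates)
    heap = []  # min-heap on negated marginal gain
    for i, rates in enumerate(detection_rates):
        if rates:  # marginal gain of the first bit: log r[0] - log 1
            heapq.heappush(heap, (-math.log(rates[0]), 0, i))
    for _ in range(total_bits):
        if not heap:
            break
        _, b, i = heapq.heappop(heap)
        alloc[i] = b + 1
        rates = detection_rates[i]
        if b + 1 < len(rates):  # marginal gain of one further bit
            gain = math.log(rates[b + 1]) - math.log(rates[b])
            heapq.heappush(heap, (-gain, b + 1, i))
    return alloc
```

The leakage issue raised in the abstract arises precisely because this allocation is subject-dependent: publishing which components received many bits reveals which of the subject's features are reliable.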

    On the performance of helper data template protection schemes

    The use of biometrics looks promising as it is already being applied in electronic passports, ePassports, on a global scale. Because the biometric data has to be stored as a reference template on either a central or personal storage device, its wide-spread use introduces new security and privacy risks such as (i) identity fraud, (ii) cross-matching, (iii) irrevocability and (iv) leaking sensitive medical information. Mitigating these risks is essential to obtain acceptance from the subjects of the biometric systems and therefore to facilitate successful implementation on a large-scale basis. A solution to mitigate these risks is to use template protection techniques. The required protection properties of the stored reference template according to ISO guidelines are (i) irreversibility, (ii) renewability and (iii) unlinkability. A known template protection scheme is the helper data system (HDS). The fundamental principle of the HDS is to bind a key with the biometric sample with use of helper data and cryptography, in such a way that the key can be reproduced or released given another biometric sample of the same subject. The identity check is then performed in a secure way by comparing the hash of the key. Hence, the size of the key determines the amount of protection. This thesis extensively investigates the HDS system, namely (i) the theoretical classification performance, (ii) the maximum key size, (iii) the irreversibility and unlinkability properties, and (iv) the optimal multi-sample and multi-algorithm fusion method. The theoretical classification performance of the biometric system is determined by assuming that the features extracted from the biometric sample are Gaussian distributed. With this assumption we investigate the influence of the bit extraction scheme on the classification performance. With use of the theoretical framework, the maximum size of the key is determined by assuming the error-correcting code to operate on Shannon's bound. We also show three vulnerabilities of the HDS that affect the irreversibility and unlinkability properties and propose solutions. Finally, we study the optimal level of applying multi-sample and multi-algorithm fusion with the HDS at either feature-, score-, or decision-level.
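The HDS principle of binding a key to a biometric sample and checking only a hash of the key can be illustrated with a fuzzy-commitment-style sketch. To stay self-contained, the sketch uses a toy repetition code in place of a real ECC; the bit lengths and function names are assumptions of this illustration:

```python
import hashlib

def rep_encode(key_bits, r=3):
    """Toy repetition ECC: repeat every key bit r times."""
    return [b for b in key_bits for _ in range(r)]

def rep_decode(code_bits, r=3):
    """Majority-decode each group of r bits."""
    return [int(sum(code_bits[i:i + r]) > r // 2)
            for i in range(0, len(code_bits), r)]

def enroll(bio_bits, key_bits, r=3):
    """Helper data = codeword XOR biometric bits.  Only the helper
    data and a hash of the key are stored; the key itself is not."""
    code = rep_encode(key_bits, r)
    helper = [c ^ b for c, b in zip(code, bio_bits)]
    digest = hashlib.sha256(bytes(key_bits)).hexdigest()
    return helper, digest

def verify(bio_bits, helper, digest, r=3):
    """XOR a fresh sample with the helper data, ECC-decode the result,
    and compare key hashes.  Bit errors up to the error-correcting
    capacity are absorbed by the code, so a genuine subject reproduces
    the key while an impostor (too many bit errors) does not."""
    candidate = [h ^ b for h, b in zip(helper, bio_bits)]
    key = rep_decode(candidate, r)
    return hashlib.sha256(bytes(key)).hexdigest() == digest
```

The key length fixes the protection level, as the abstract notes: a longer key needs a longer codeword, which in turn tolerates proportionally fewer biometric bit errors.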

    Facial Analysis: Looking at Biometric Recognition and Genome-Wide Association


    Assessing the match performance of non-ideal operational facial images using 3D image data.

    Biometric attributes are unique characteristics specific to an individual, which can be used in automated identification schemes. There have been considerable advancements in the field of face recognition recently, but challenges still exist. One of these challenges is pose variation, specifically roll, pitch, and yaw variations away from a frontal image. The goal of this problem report is to assess the improvement in facial recognition performance obtainable with commercial pose-correction software. This was done using pose-corrected images obtained in two ways: 1) non-frontal images generated and corrected using 3D facial scans (pseudo-pose-correction) and 2) the same non-frontal images corrected using FaceVACs DBScan. Two matchers were used to evaluate matching performance, namely Cognitec FaceVACs and MegaMatcher 5.0 SDK. A set of matching experiments was conducted using frontal, non-frontal and pose-corrected images to assess the improvement in matching performance, including: 1) frontal (probe) to frontal (gallery) images, to generate the baseline; 2) non-ideal pose-varying (probe) to frontal (gallery); 3) pseudo-pose-corrected (probe) to frontal (gallery); and 4) auto-pose-corrected (probe) to frontal (gallery). Cumulative match characteristic (CMC) curves are used to evaluate the performance of the match scores generated. These matching results have shown better performance for pseudo-pose-corrected images compared to the non-frontal images, where the rank accuracy is 100% for the angles that were not detected by the matchers in the non-frontal case. Of the two commercial matchers, Cognitec, whose software is optimized for non-frontal models, showed better performance in detecting faces with angular rotations. MegaMatcher, which is not a pose-correction matcher, was unable to detect larger rotation angles: 50° and 60° in pitch, greater than 40° in yaw, and 4 out of 8 coupled pitch/yaw combinations. The requirements of the facial recognition application will influence the decision to implement pose-correction tools.
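Computing a CMC curve from a probe-vs-gallery score matrix is mechanical and can be sketched directly; the data layout below is an assumption for illustration, not tied to either commercial matcher:

```python
def cmc_curve(score_matrix, true_ids, gallery_ids):
    """Cumulative match characteristic from an identification trial.

    score_matrix[p][g]: similarity of probe p against gallery entry g
    (higher = more similar).  For each probe, the gallery is sorted by
    score and the rank of the probe's true identity is recorded;
    cmc[r] is then the fraction of probes whose true match appears
    within the top r+1 ranks.
    """
    n_probes, n_gallery = len(score_matrix), len(gallery_ids)
    hits = [0] * n_gallery
    for p, scores in enumerate(score_matrix):
        order = sorted(range(n_gallery), key=lambda g: -scores[g])
        rank = next(r for r, g in enumerate(order)
                    if gallery_ids[g] == true_ids[p])
        hits[rank] += 1
    cmc, cum = [], 0
    for h in hits:           # accumulate hit counts into a CDF over ranks
        cum += h
        cmc.append(cum / n_probes)
    return cmc
```

A rank-1 accuracy of 100%, as reported for the pseudo-pose-corrected images at some angles, corresponds to cmc[0] == 1.0.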

    Aging effects in automated face recognition

    The main objective of this work was to analyze the effects of aging on the automated face recognition process. A dataset was used to perform experiments and obtain indicators to measure the impact of aging. To compare the effects of aging, the dataset was segmented based on the age difference between the subjects’ face images. Image quality metrics were also part of the analysis performed in this study. The results of the experiments showed that the larger the age gap between the images, the higher the error rates. This was the expected result and is consistent with other experiments performed in the past. The False Rejection Rate (FRR) was measured at 1%, 0.1%, and 0.01% False Acceptance Rate (FAR), showing a similar trend as the gap between the images increased.
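Measuring FRR at a fixed FAR, as in the experiments above, amounts to picking the similarity threshold from the impostor score distribution and then counting rejected genuine scores. A minimal sketch, assuming plain similarity scores (higher = better match) and the function name chosen here for illustration:

```python
def frr_at_far(genuine_scores, impostor_scores, far_target):
    """FRR at a fixed FAR operating point.

    The threshold is set so that at most far_target of the impostor
    scores exceed it (a score is accepted when it is strictly above
    the threshold); the FRR is then the fraction of genuine scores at
    or below that threshold.
    """
    imp = sorted(impostor_scores, reverse=True)
    k = int(far_target * len(imp))        # impostor acceptances allowed
    threshold = imp[k] if k < len(imp) else imp[-1]
    frr = sum(g <= threshold for g in genuine_scores) / len(genuine_scores)
    return threshold, frr
```

Repeating this at FAR = 1%, 0.1% and 0.01% for each age-gap segment reproduces the kind of comparison described in the abstract.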

    Gait Recognition Progress in Recognizing Image Characteristics

    We present a human identification system based on walking characteristics. This problem is known as acoustic gait recognition. The objective of the scheme is to analyze the sounds emitted by walking persons (mainly the footstep sounds) and identify those individuals. A cyclic model topology is employed to represent individual gait cycles. This topology permits modeling and detecting individual steps, leading to very favorable identification rates.
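The cyclic model topology mentioned above can be sketched as a left-to-right transition structure whose last state wraps around to the first, so that one model covers arbitrarily many consecutive gait cycles. The state count and probabilities below are illustrative assumptions, not values from the work:

```python
def cyclic_transition_matrix(n_states, p_stay=0.6):
    """Transition matrix for a cyclic left-to-right topology: from
    each state the model either stays put (p_stay) or advances to the
    next state, and the final state feeds back into the first so that
    repeated gait cycles are modeled by looping through the states."""
    A = [[0.0] * n_states for _ in range(n_states)]
    for s in range(n_states):
        A[s][s] = p_stay
        A[s][(s + 1) % n_states] = 1.0 - p_stay  # wrap last -> first
    return A
```

Detecting individual steps then corresponds to detecting passes through the wrap-around transition.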