
    Predictive models for multibiometric systems

    Recognizing a subject from a set of biometric samples is a fundamental pattern recognition problem. This paper builds novel statistical models for multibiometric systems using geometric and multinomial distributions. These models are generic, as they are based only on the similarity scores produced by a recognition system. They predict bounds on the range of indices within which a test subject is likely to appear in a sorted set of similarity scores. These bounds are then used in the multibiometric recognition system to predict a smaller subset of subjects from the database as probable candidates for a given test subject. Experimental results show that the proposed models enhance the recognition rate beyond that of the underlying matching algorithms for multiple face views, fingerprints, palm prints, irises, and their combinations.
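
    As a rough illustration of the rank-bound idea (a sketch, not the paper's construction: the maximum-likelihood fit, the 0.99 confidence level, and all function names below are assumptions), one can fit a geometric model to the ranks at which true matches appear on a validation set, then shortlist only the top-k gallery candidates at test time:

        import numpy as np

        def fit_geometric(train_ranks):
            # MLE of the geometric parameter p from 1-based ranks of true matches.
            return 1.0 / np.mean(train_ranks)

        def rank_bound(p, confidence=0.99):
            # Smallest k with P(rank <= k) >= confidence, using the geometric
            # CDF on {1, 2, ...}: 1 - (1 - p)**k.
            return int(np.ceil(np.log(1.0 - confidence) / np.log(1.0 - p)))

        def shortlist(scores, k):
            # Indices of the k best-scoring gallery subjects (higher = more similar).
            return np.argsort(scores)[::-1][:k]

        # Usage: fit on validation ranks, then prune the gallery at test time.
        p = fit_geometric(np.array([1, 1, 2, 1, 3, 1, 2]))
        candidates = shortlist(np.random.rand(100), rank_bound(p))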

    Genetic And Evolutionary Biometrics: Multiobjective, Multimodal, Feature Selection/Weighting For Tightly Coupled Periocular And Face Recognition

    The Genetic & Evolutionary Computation (GEC) research community has seen the emergence of a new subarea, referred to as Genetic & Evolutionary Biometrics (GEB), as GEC techniques have been applied to solve a variety of biometric problems. In this dissertation, we present three new GEB techniques for multibiometric recognition: Genetic & Evolutionary Feature Selection (GEFeS), Weighting (GEFeW), and Weighting/Selection (GEFeWS). Instead of selecting the most salient individual features, these techniques evolve subsets of the most salient combinations of features, and/or weight features based on their discriminative ability, in an effort to increase accuracy while decreasing the overall number of features needed for recognition. We also incorporate cross-validation into our best-performing technique in an attempt to evolve feature masks (FMs) that generalize well to unseen subjects, and we search the value preference space to analyze its impact with respect to optimization and generalization. Our results show that by fusing the periocular biometric with the face, we can achieve higher recognition accuracies than by using the two biometric modalities independently. Our results also show that our GEB techniques achieve higher recognition rates than the baseline methods while using significantly fewer features. In addition, by incorporating machine learning, we were able to create FMs that generalize well to unseen subjects and use less than 50% of the extracted features. Finally, by searching the value preference space, we were able to determine which weights were most effective in terms of optimization and generalization.
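
    A toy sketch of the mask-evolution loop in the spirit of GEFeWS (the population size, mutation scheme, accuracy placeholder, and the 0.5 feature-count penalty are illustrative stand-ins, not the dissertation's settings):

        import numpy as np

        rng = np.random.default_rng(0)
        N_FEATURES, POP, GENS = 64, 30, 100
        salience = rng.random(N_FEATURES)  # synthetic stand-in for per-feature usefulness

        def accuracy(mask):
            # Placeholder: in practice, evaluate recognition accuracy with
            # features weighted by `mask`; cosine similarity to a synthetic
            # salience vector keeps this sketch self-contained.
            denom = np.linalg.norm(mask) * np.linalg.norm(salience) + 1e-9
            return float(mask @ salience) / denom

        def fitness(mask):
            # Trade accuracy off against the fraction of features retained.
            return accuracy(mask) - 0.5 * np.count_nonzero(mask) / N_FEATURES

        # Real-valued masks: 0 drops a feature, (0, 1] weights it.
        pop = rng.random((POP, N_FEATURES)) * (rng.random((POP, N_FEATURES)) < 0.5)
        for _ in range(GENS):
            ranked = pop[np.argsort([fitness(m) for m in pop])]
            parents = ranked[POP // 2:]                      # keep the better half
            kids = parents[rng.integers(len(parents), size=POP - len(parents))].copy()
            flip = rng.random(kids.shape) < 0.05             # mutate a few genes
            kids[flip] = rng.random(flip.sum()) * (rng.random(flip.sum()) < 0.7)
            pop = np.vstack([parents, kids])

        best_mask = max(pop, key=fitness)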

    Genetic Programming for Multibiometrics

    Biometric systems suffer from some drawbacks: a biometric system can in general provide good performance, except for some individuals, because its performance depends highly on the quality of the capture. One solution to some of these problems is multibiometrics, where different biometric systems are combined (multiple captures of the same biometric modality, multiple feature extraction algorithms, multiple biometric modalities, and so on). In this paper, we are interested in score-level fusion functions (i.e., we use a multibiometric authentication scheme that accepts or denies a claimant's access to an application). In the state of the art, the weighted sum of scores (a linear classifier) and the SVM (a non-linear classifier), applied to the scores provided by different biometric systems, yield some of the best performances. We present a new method, based on genetic programming, that gives similar or better performance (depending on the complexity of the database). We derive a score fusion function by assembling classical primitive functions (+, *, -, ...). We have validated the proposed method on three significant biometric benchmark datasets from the state of the art.
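
    A bare-bones sketch of such a genetic-programming loop (the primitive set, tree depth, margin-based fitness, and synthetic two-matcher scores are all assumptions; the paper evolves fusion functions against real benchmark score sets):

        import operator, random
        random.seed(0)

        PRIMS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]

        def random_tree(depth=3):
            if depth == 0 or random.random() < 0.3:
                return ('x', random.randrange(2))      # leaf: score from matcher 0 or 1
            return (random.choice(PRIMS), random_tree(depth - 1), random_tree(depth - 1))

        def evaluate(tree, scores):
            if tree[0] == 'x':
                return scores[tree[1]]
            (fn, _), left, right = tree
            return fn(evaluate(left, scores), evaluate(right, scores))

        def mutate(tree, depth=3):
            if random.random() < 0.2:
                return random_tree(depth)              # replace a random subtree
            if tree[0] == 'x':
                return tree
            op, left, right = tree
            return (op, mutate(left, depth - 1), mutate(right, depth - 1))

        def fitness(tree):
            # Margin between worst genuine and best impostor fused score; a
            # real system would instead minimize EER on a validation set.
            g = [evaluate(tree, s) for s in genuine]
            i = [evaluate(tree, s) for s in impostor]
            return min(g) - max(i)

        # Synthetic scores: genuine pairs tend to score higher on both matchers.
        genuine = [(random.uniform(0.6, 1.0), random.uniform(0.5, 1.0)) for _ in range(50)]
        impostor = [(random.uniform(0.0, 0.5), random.uniform(0.0, 0.6)) for _ in range(50)]

        pop = [random_tree() for _ in range(100)]
        for _ in range(30):
            pop.sort(key=fitness, reverse=True)
            pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(50)]
        best = pop[0]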

    Applying the Upper Integral to the Biometric Score Fusion Problem in the Identification Model

    This paper presents a new biometric score fusion approach for an identification system, using the upper integral with respect to Sugeno's fuzzy measure. First, the proposed method treats each individual matcher as a fuzzy set in order to handle uncertainty and imperfection in matching scores. Then, the corresponding fuzzy entropy estimates the reliability of the information provided by each biometric matcher. Next, the fuzzy densities are generated based on rank information and training accuracy. Finally, the results are aggregated using the upper fuzzy integral. Experimental comparisons with other fusion methods demonstrate the good performance of the proposed approach.
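
    The upper integral itself is more involved than what fits here; as a stand-in, the sketch below uses the standard Sugeno fuzzy integral with a lambda-measure, which shares the same machinery of turning fuzzy densities into a measure and aggregating scores against it (all density and score values are illustrative and assumed to lie in (0, 1)):

        import numpy as np

        def sugeno_lambda(g):
            # Nonzero root of prod(1 + lam*g_i) = 1 + lam; lam > 0 when
            # sum(g) < 1, and -1 < lam < 0 when sum(g) > 1.
            if abs(g.sum() - 1.0) < 1e-9:
                return 0.0
            lo, hi = (1e-9, 1e6) if g.sum() < 1 else (-1 + 1e-9, -1e-9)
            f = lambda lam: np.prod(1 + lam * g) - (1 + lam)
            for _ in range(200):                      # plain bisection
                mid = (lo + hi) / 2
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return (lo + hi) / 2

        def sugeno_integral(h, g):
            # Aggregate scores h with the lambda-measure built from densities g.
            lam = sugeno_lambda(g)
            mu, best = 0.0, 0.0
            for i in np.argsort(h)[::-1]:             # matchers by descending score
                mu = mu + g[i] + lam * g[i] * mu      # measure of the growing coalition
                best = max(best, min(h[i], mu))
            return best

        # Example: three matchers; densities from training accuracy, scores
        # for one candidate identity (all values illustrative).
        g = np.array([0.4, 0.3, 0.2])
        h = np.array([0.9, 0.6, 0.8])
        fused = sugeno_integral(h, g)

    By construction the measure of the full matcher set comes out at 1, so the fused value always stays in the score range.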

    A Survey on Ear Biometrics

    Recognizing people by their ears has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This paper provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature, revealing the current state of the art, not only for those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems, as well as ear databases available to researchers.

    Feature Level Fusion of Face and Fingerprint Biometrics

    The aim of this paper is to study fusion at the feature extraction level for face and fingerprint biometrics. The proposed approach is based on fusing the two traits by extracting independent feature pointsets from the two modalities and making the two pointsets compatible for concatenation. Moreover, to handle the curse of dimensionality, the feature pointsets are reduced in dimension. Different feature reduction techniques are implemented, before and after the fusion of the feature pointsets, and the results are duly recorded. The fused feature pointsets for the database and for the query face and fingerprint images are matched using techniques based on either point pattern matching or the Delaunay triangulation. Comparative experiments are conducted on chimeric and real databases to assess the actual advantage of fusion performed at the feature extraction level, in comparison to the matching score level.
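
    A compact sketch of the general recipe of reducing each modality, normalizing, and concatenating (PCA is only one of the reduction techniques the paper compares, the dimensions are arbitrary, and the nearest-neighbour matching at the end stands in for the paper's point pattern and Delaunay-based matchers):

        import numpy as np

        def pca_reduce(X, k):
            # Project rows of X onto the top-k principal components.
            Xc = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            return Xc @ Vt[:k].T

        def zscore(X):
            return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)

        # Toy data: 100 subjects with 128-d face and 64-d fingerprint features.
        face = np.random.rand(100, 128)
        finger = np.random.rand(100, 64)

        # Reduce each modality, normalize, then concatenate into 40-d fused vectors.
        fused = np.hstack([zscore(pca_reduce(face, 20)),
                           zscore(pca_reduce(finger, 20))])

        # Nearest-neighbour matching on fused vectors.
        query, gallery = fused[0], fused[1:]
        best = np.argmin(np.linalg.norm(gallery - query, axis=1))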

    Identifying Humans by the Shape of Their Heartbeats and Materials by Their X-Ray Scattering Profiles

    Security needs at access control points present themselves in the form of human identification and/or material identification. The field of biometrics deals with the problem of identifying individuals based on signals measured from them. One approach to material identification involves matching x-ray scattering profiles against a database of known materials. Classical biometric traits such as fingerprints, facial images, speech, iris and retinal scans are plagued by potential circumvention: they could be copied and later used by an impostor. To address this problem, other bodily traits such as the electrical signals acquired from the brain (electroencephalogram) or the heart (electrocardiogram) and the mechanical signals acquired from the heart (heart sound, laser Doppler vibrometry measures of the carotid pulse) have been investigated. These signals depend on the physiology of the body and require the individual to be alive and present during acquisition, potentially overcoming circumvention. We investigate the use of the electrocardiogram (ECG) and carotid laser Doppler vibrometry (LDV) signal, both individually and in unison, for biometric identity recognition. A parametric modeling approach to system design is employed, where the system parameters are estimated from training data. The estimated model is then validated using testing data. A typical identity recognition system can operate in either the authentication (verification) or identification mode. The performance of the biometric identity recognition systems is evaluated using receiver operating characteristic (ROC) or detection error tradeoff (DET) curves in the authentication mode, and cumulative match characteristic (CMC) curves in the identification mode. The performance of the ECG- and LDV-based identity recognition systems is comparable, but is worse than that of classical biometric systems. Authentication performance below 1% equal error rate (EER) can be attained when the training and testing data are obtained from a single measurement session. When the training and testing data are obtained from different measurement sessions, allowing for a potential short-term or long-term change in the physiology, the authentication EER performance degrades to about 6 to 7%. Leveraging both the electrical (ECG) and mechanical (LDV) aspects of the heart, we obtain a performance gain of over 50%, relative to each individual ECG-based or LDV-based identity recognition system, bringing us closer to the performance of classical biometrics, with the added advantage of anti-circumvention. We consider the problem of designing combined x-ray attenuation and scatter systems and the algorithms to reconstruct images from those systems. As is the case within a computational imaging framework, we tackle the problem by taking a joint system and algorithm design approach. Accurate modeling of the attenuation of incident and scattered photons within a scatter imaging setup will ultimately lead to more accurate estimates of the scatter densities of an illuminated object. Such scattering densities can then be used in material classification. In x-ray scatter imaging, tomographic measurements of the forward scatter distribution are used to infer scatter densities within a volume. A mask placed between the object and the detector array provides information about scatter angles. An efficient computational implementation of the forward and backward model facilitates iterative algorithms based upon a Poisson log-likelihood.
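
    A minimal sketch of the equal error rate computation behind the authentication-mode figures quoted above (the Gaussian score distributions and all names are synthetic illustrations):

        import numpy as np

        def eer(genuine, impostor):
            # Sweep thresholds; EER is where false accept rate meets false reject rate.
            thresholds = np.sort(np.concatenate([genuine, impostor]))
            far = np.array([(impostor >= t).mean() for t in thresholds])
            frr = np.array([(genuine < t).mean() for t in thresholds])
            i = np.argmin(np.abs(far - frr))
            return (far[i] + frr[i]) / 2

        # Synthetic score distributions standing in for real matcher output.
        rng = np.random.default_rng(0)
        genuine = rng.normal(2.0, 1.0, 1000)     # same-subject comparison scores
        impostor = rng.normal(0.0, 1.0, 1000)    # different-subject comparison scores
        print(f"EER ~ {eer(genuine, impostor):.3f}")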
The design of the scatter imaging system influences the algorithmic choices we make. In turn, the need for efficient algorithms guides the system design. We begin by analyzing an x-ray scatter system fitted with a fanbeam source distribution and flat-panel energy-integrating detectors. Efficient algorithms for reconstructing object scatter densities from scatter measurements made on this system are developed. Building on the fanbeam-source, energy-integrating flat-panel detection model, we develop a pencil-beam model and an energy-sensitive detection model. The scatter forward models and reconstruction algorithms are validated on simulated, Monte Carlo, and real data. We describe a prototype x-ray attenuation scanner, co-registered with the scatter system, which was built to provide complementary attenuation information to the scatter reconstruction, and we present results of applying alternating minimization reconstruction algorithms to measurements from the scanner.
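
    As a sketch of one standard iterative scheme for a Poisson log-likelihood with a linear forward model (a multiplicative ML-EM update; the alternating minimization algorithms described above are related but not identical, and the operator and sizes here are stand-ins):

        import numpy as np

        def ml_em(A, y, iters=50):
            # Multiplicative ML-EM updates for y ~ Poisson(A @ x), with x >= 0.
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])          # column sums (sensitivity)
            for _ in range(iters):
                ratio = y / np.maximum(A @ x, 1e-12)  # data / current prediction
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x

        # Toy problem: random nonnegative forward operator and Poisson counts.
        rng = np.random.default_rng(0)
        A = rng.random((200, 50))
        x_true = rng.random(50)
        y = rng.poisson(A @ x_true)
        x_hat = ml_em(A, y)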