
    Optimal decision fusion and its application on 3D face recognition

    Fusion is a popular practice for combining multiple classifiers or multiple modalities in biometrics. In this paper, optimal decision fusion (ODF) by the AND rule and the OR rule is presented. We show that decision fusion can be done in an optimal way such that it always improves on the error rates of the classifiers that are fused. Both the optimal decision fusion theory and experimental results on the FRGC 2D and 3D face data are given. The experiments show that optimal decision fusion effectively combines the 2D texture and 3D shape information and boosts the performance of the system.
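    As an illustrative sketch of the idea (not the paper's exact ODF algorithm), the AND and OR rules combine two classifiers' accept/reject decisions; under an independence assumption the AND-rule error rates factor as below. All names are hypothetical:

```python
# Sketch of decision-level fusion by AND and OR rules (illustrative only).

def and_fusion(accept_a: bool, accept_b: bool) -> bool:
    # AND rule: accept only if both classifiers accept.
    # Tends to lower the false acceptance rate (FAR) at the cost of FRR.
    return accept_a and accept_b

def or_fusion(accept_a: bool, accept_b: bool) -> bool:
    # OR rule: accept if either classifier accepts.
    # Tends to lower the false rejection rate (FRR) at the cost of FAR.
    return accept_a or accept_b

def fused_rates_and(far_a: float, frr_a: float,
                    far_b: float, frr_b: float) -> tuple:
    # Under an independence assumption, AND-rule error rates factor:
    # an impostor must fool both systems; a genuine user must pass both.
    far = far_a * far_b
    frr = frr_a + frr_b - frr_a * frr_b
    return far, frr
```

    Optimal fusion then amounts to choosing the operating points (thresholds) of the two classifiers jointly so that the fused error rates are minimized.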

    Machine Learning for Biometrics

    Biometrics aims at reliable and robust identification of humans from their personal traits, mainly for security and authentication purposes, but also for identifying and tracking the users of smarter applications. Frequently considered modalities are fingerprint, face, iris, palmprint and voice, but there are many other possible biometrics, including gait, ear image, retina, DNA, and even behaviours. This chapter presents a survey of machine learning methods used for biometrics applications, and identifies relevant research issues. We focus on three areas of interest: offline methods for biometric template construction and recognition, information fusion methods for integrating multiple biometrics to obtain robust results, and methods for dealing with temporal information. By introducing exemplary and influential machine learning approaches in the context of specific biometrics applications, we hope to provide the reader with the means to create novel machine learning solutions to challenging biometrics problems.

    Robust multi-modal and multi-unit feature level fusion of face and iris biometrics

    Multi-biometrics has recently emerged as a means of more robust and efficient personal verification and identification. By exploiting information from multiple sources at various levels, i.e., feature, score, rank or decision, the false acceptance and false rejection rates can be considerably reduced. Among these, feature level fusion is a relatively understudied problem. This paper addresses feature level fusion for multi-modal and multi-unit sources of information. For multi-modal fusion the face and iris biometric traits are considered, while multi-unit fusion is applied to merge the data from the left and right iris images. The proposed approach computes SIFT features from both biometric sources, either multi-modal or multi-unit. For each source, the extracted SIFT features are selected via spatial sampling. These selected features are then concatenated into a single feature super-vector using serial fusion, and this concatenated feature vector is used to perform classification. Experimental results on standard face and iris biometric databases are presented. The reported results clearly show the performance improvements in classification obtained by applying feature level fusion for both multi-modal and multi-unit biometrics in comparison to uni-modal classification and score level fusion.
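    A minimal sketch of the serial feature-level fusion described above, with hypothetical dimensions (128-dimensional SIFT descriptors) and a simple stand-in for spatial sampling:

```python
import numpy as np

def spatial_sample(descriptors: np.ndarray, step: int) -> np.ndarray:
    # Stand-in for spatial sampling: keep every `step`-th descriptor.
    return descriptors[::step]

def serial_fusion(desc_a: np.ndarray, desc_b: np.ndarray) -> np.ndarray:
    # Serial fusion: flatten both descriptor sets and concatenate them
    # into a single feature super-vector for classification.
    return np.concatenate([desc_a.ravel(), desc_b.ravel()])

face_desc = np.random.rand(40, 128)  # 40 SIFT descriptors from a face image
iris_desc = np.random.rand(30, 128)  # 30 SIFT descriptors from an iris image
super_vector = serial_fusion(spatial_sample(face_desc, 2),
                             spatial_sample(iris_desc, 2))
```

    The resulting super-vector is what a downstream classifier would consume; in a multi-unit setting the two inputs would instead be the left- and right-iris descriptor sets.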

    Multi-Sample Fusion with Template Protection

    The widespread use of biometrics and its increased popularity introduce privacy risks. To mitigate these risks, solutions such as the helper-data system, fuzzy vault, fuzzy extractors, and cancelable biometrics were introduced, collectively known as the field of template protection. Alongside these developments, fusion of multiple sources of biometric information has been shown to improve the verification performance of a biometric system. Our work consists of analyzing feature-level fusion in the context of the template protection framework using the helper-data system. We verify the results using the FRGC v2 database and two feature extraction algorithms.

    Complementary Feature Level Data Fusion for Biometric Authentication Using Neural Networks

    Data fusion as a formal research area is referred to as multi-sensor data fusion. The premise is that data combined from multiple sources can provide more meaningful, accurate and reliable information than data from a single source. There are many application areas in military and security as well as civilian domains. Multi-sensor data fusion as applied to biometric authentication is termed multi-modal biometrics. Though based on similar premises, and having many similarities to formal data fusion, multi-modal biometrics differs in some respects with regard to data fusion levels. The objective of the current study was to apply feature level fusion of fingerprint features and keystroke dynamics data for authentication purposes, utilizing Artificial Neural Networks (ANNs) as a classifier. Data fusion was performed following the complementary paradigm, which utilizes all processed data from both sources. Experimental results returned a false acceptance rate (FAR) of 0.0 and a worst-case false rejection rate (FRR) of 0.0004. This worst-case performance is at least as good as most other research in the field. The experimental results also demonstrated that data fusion gave a better outcome than either fingerprint or keystroke dynamics alone.
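    The complementary paradigm described above can be sketched as follows; the feature dimensions and the single sigmoid unit standing in for the trained ANN are hypothetical:

```python
import numpy as np

def complementary_fusion(fingerprint_feats: np.ndarray,
                         keystroke_feats: np.ndarray) -> np.ndarray:
    # Complementary paradigm: keep ALL processed data from both sources
    # rather than selecting between them.
    return np.concatenate([fingerprint_feats, keystroke_feats])

def neuron_decide(fused: np.ndarray, weights: np.ndarray,
                  bias: float, threshold: float = 0.5) -> bool:
    # A single sigmoid unit standing in for the trained ANN classifier:
    # accept the claimed identity if the activation clears the threshold.
    z = float(fused @ weights) + bias
    return bool(1.0 / (1.0 + np.exp(-z)) >= threshold)
```

    In the study itself a full multi-layer ANN is trained on the fused vectors; the point of the sketch is only that the classifier sees one combined feature vector, not two separate ones.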

    A Survey on Soft Biometrics for Human Identification

    Due to growing security demands, the focus has shifted to multi-biometrics. Ancillary information extracted from primary biometric traits (face and body), such as facial measurements, gender, skin color, ethnicity, and height, is called soft biometrics. It can be integrated to improve the speed and overall performance of a primary biometric system (e.g., fusing face with facial marks), or to generate a qualitative human semantic description of a person and limit the search over the whole dataset, for example by using gender and ethnicity (e.g., old African male with blue eyes) in a fusion framework. This chapter provides a holistic survey of soft biometrics that covers the major works, focusing on facial soft biometrics, and discusses some of the proposed feature extraction and classification techniques, showing their strengths and limitations.

    A novel approach of gait recognition through fusion with footstep information

    R. Vera-Rodríguez, J. Fiérrez, J. S. D. Mason, J. Ortega-García, "A novel approach of gait recognition through fusion with footstep information," in International Conference on Biometrics (ICB), Madrid (Spain), 2013, pp. 1-6.
    This paper focuses on two biometric modes that are closely linked: gait and footstep biometrics. Footstep recognition is a relatively new biometric based on signals extracted from floor sensors, while gait has been researched more extensively and is based on video sequences of people walking. This paper reports a directly comparative assessment of both biometrics using the same database (SFootBD) and experimental protocols. A fusion of the two modes leads to enhanced gait recognition performance, as the information from the two modes comes from different capturing devices and is not strongly correlated. This fusion could find application in indoor scenarios where a gait recognition system is present, such as security access (e.g., security gates at airports) or smart homes. The gait and footstep systems achieve results of 8.4% and 10.7% EER respectively, which improves significantly to 4.8% EER when they are fused at the score level into a walking biometric. This work has been partially supported by projects Bio-Shield (TEC2012-34881), Contexts (S2009/TIC-1485), TeraSense (CSD2008-00068) and "Cátedra UAM-Telefónica".
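    Score-level fusion of the kind reported above is commonly a weighted sum of normalized matcher scores. A minimal sketch (the min-max normalization and the equal weighting are assumptions, not the paper's exact scheme):

```python
def min_max_norm(scores):
    # Map raw matcher scores to [0, 1] so the two modalities are comparable.
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def score_fusion(gait_score: float, footstep_score: float,
                 w: float = 0.5) -> float:
    # Weighted-sum fusion of two normalized scores; w is an assumed weight
    # that would in practice be tuned on a development set.
    return w * gait_score + (1 - w) * footstep_score
```

    The fused score is then thresholded like any single-matcher score; because the two sensors' errors are weakly correlated, the fused EER can fall below either individual EER, as the 4.8% result illustrates.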

    Polar Fusion Technique Analysis for Evaluating the Performances of Image Fusion of Thermal and Visual Images for Human Face Recognition

    This paper presents a comparative study of two different methods based on fusion and polar transformation of visual and thermal images. The investigation addresses the challenges of face recognition, including pose variations, changes in facial expression, partial occlusions, variations in illumination, rotation through different angles, change in scale, etc. To overcome these obstacles, two different fusion techniques were implemented and thoroughly examined through rigorous experimentation. In the first method, the log-polar transformation is applied to the fused images obtained after fusing the visual and thermal images, whereas in the second method fusion is applied to the log-polar transformed individual visual and thermal images. After this step, Principal Component Analysis (PCA) is applied to reduce the dimension of the fused images. Log-polar transformed images can handle complications introduced by scaling and rotation. The main objective of employing fusion is to produce a fused image that provides more detailed and reliable information, overcoming the drawbacks present in the individual visual and thermal face images. Finally, the reduced fused images are classified using a multilayer perceptron neural network. The experiments were conducted on the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database benchmark of thermal and visual face images. The second method showed better performance, with a correct recognition rate of 95.71% (maximum) and 93.81% on average.
    Comment: Proceedings of IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (IEEE CIBIM 2011), Paris, France, April 11 - 15, 201
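    The log-polar-then-PCA pipeline can be sketched as follows. This is an illustrative implementation under stated assumptions (nearest-neighbour sampling, SVD-based PCA), not the paper's exact code:

```python
import numpy as np

def log_polar(img: np.ndarray, n_r: int = 32, n_theta: int = 32) -> np.ndarray:
    # Resample an image onto a log-polar grid with nearest-neighbour lookup.
    # Logarithmic radial spacing turns scaling into a shift along the r axis;
    # rotation about the centre becomes a shift along the theta axis.
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = np.hypot(cy, cx)
    out = np.zeros((n_r, n_theta))
    for i in range(n_r):
        r = r_max ** ((i + 1) / n_r) - 1
        for j in range(n_theta):
            t = 2 * np.pi * j / n_theta
            y, x = int(cy + r * np.sin(t)), int(cx + r * np.cos(t))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = img[y, x]
    return out

def pca_reduce(X: np.ndarray, k: int) -> np.ndarray:
    # Project zero-mean rows of X onto the top-k principal components.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

    In method one, `log_polar` would be applied to each fused image; in method two, to the visual and thermal images before fusion. Either way, `pca_reduce` then shrinks the flattened images (one per row of `X`) to a low-dimensional representation for the multilayer perceptron.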