Optimal decision fusion and its application on 3D face recognition
Fusion is a popular practice for combining multiple classifiers or multiple modalities in biometrics. In this paper, optimal decision fusion (ODF) by the AND rule and the OR rule is presented. We show that decision fusion can be done in an optimal way such that it always yields an improvement in error rates over the classifiers being fused. Both the optimal decision fusion theory and experimental results on the FRGC 2D and 3D face data are given. Experiments show that optimal decision fusion effectively combines the 2D texture and 3D shape information and boosts the performance of the system.
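As a rough sketch (not the paper's optimized ODF thresholding), the classical error algebra for AND- and OR-rule decision fusion of two verifiers, under an assumption of independent errors and hypothetical error rates, looks like this:

```python
def and_rule(far1, frr1, far2, frr2):
    """AND fusion: accept only if both classifiers accept.
    Assumes independent classifier errors."""
    far = far1 * far2                      # both must wrongly accept
    frr = 1 - (1 - frr1) * (1 - frr2)      # either one wrongly rejecting suffices
    return far, frr

def or_rule(far1, frr1, far2, frr2):
    """OR fusion: accept if either classifier accepts."""
    far = 1 - (1 - far1) * (1 - far2)      # either one wrongly accepting suffices
    frr = frr1 * frr2                      # both must wrongly reject
    return far, frr

# Hypothetical error rates for a 2D texture and a 3D shape matcher
far, frr = and_rule(0.05, 0.02, 0.04, 0.03)   # AND drives FAR down, FRR up
```

The AND rule trades a much lower false acceptance rate for a higher false rejection rate, and the OR rule does the opposite; choosing between them (and the operating thresholds) is where the optimization in the paper comes in.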
Multimodal biometrics score level fusion using non-confidence information
Multimodal biometrics refers to automatic authentication methods that depend on multiple modalities of measurable physical characteristics; it alleviates most of the restrictions of single-modality biometrics. To combine multimodal biometric scores, three categories of fusion approaches are available: rule-based, classification-based, and density-based. When choosing an approach, one has to consider not only the fusion performance but also system requirements and other circumstances. In the context of verification, classification errors arise from samples in the overlapping region (or non-confidence region) between genuine users and impostors. In score space, further separating the samples outside the non-confidence region does not yield further verification improvements. Therefore, information contained in the non-confidence region might be useful for improving the fusion process. To date, no attempts have been reported in the literature to enhance the fusion process using this additional information. In this work, the use of this information is explored in the rule-based and density-based approaches mentioned above.
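A minimal sketch of the non-confidence region idea, using hypothetical verification scores (this is just the overlap interval between the two score populations, not the paper's method):

```python
import numpy as np

# Hypothetical matcher scores (higher = more genuine-like)
genuine  = np.array([0.62, 0.70, 0.75, 0.81, 0.90])
impostor = np.array([0.10, 0.25, 0.40, 0.55, 0.68])

# Non-confidence (overlap) region: scores between the lowest genuine
# score and the highest impostor score. Samples outside it are already
# separable; errors can only come from inside it.
lo, hi = genuine.min(), impostor.max()

def in_overlap(score):
    return lo <= score <= hi
```

Fusion effort can then be concentrated on the samples for which `in_overlap` is true, since further separating the others cannot reduce verification error.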
Machine Learning for Biometrics
Biometrics aims at reliable and robust identification of humans from their personal traits, mainly for security and authentication purposes, but also for identifying and tracking the users of smarter applications. Frequently considered modalities are fingerprint, face, iris, palmprint and voice, but there are many other possible biometrics, including gait, ear image, retina, DNA, and even behaviours. This chapter presents a survey of machine learning methods used for biometrics applications, and identifies relevant research issues. We focus on three areas of interest: offline methods for biometric template construction and recognition, information fusion methods for integrating multiple biometrics to obtain robust results, and methods for dealing with temporal information. By introducing exemplary and influential machine learning approaches in the context of specific biometrics applications, we hope to provide the reader with the means to create novel machine learning solutions to challenging biometrics problems.
Robust multi-modal and multi-unit feature level fusion of face and iris biometrics
Multi-biometrics has recently emerged as a means of more robust and efficient personal verification and identification. By exploiting information from multiple sources at various levels, i.e., feature, score, rank, or decision, the false acceptance and false rejection rates can be considerably reduced. Among these, feature-level fusion is a relatively understudied problem. This paper addresses feature-level fusion for multi-modal and multi-unit sources of information. For multi-modal fusion the face and iris biometric traits are considered, while multi-unit fusion is applied to merge the data from the left and right iris images. The proposed approach computes SIFT features from both biometric sources, whether multi-modal or multi-unit. For each source, the extracted SIFT features are selected via spatial sampling. The selected features are then concatenated into a single feature super-vector using serial fusion, and this concatenated feature vector is used to perform classification.
Experimental results on standard face and iris biometric databases are presented. The reported results clearly show the improvements in classification performance obtained by applying feature-level fusion for both multi-modal and multi-unit biometrics, in comparison to uni-modal classification and score-level fusion.
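Serial (concatenation) fusion of the selected descriptors can be sketched as follows; the keypoint counts and random descriptors are hypothetical stand-ins for the spatially sampled SIFT features:

```python
import numpy as np

def serial_fuse(*feature_sets):
    """Serial fusion: flatten each source's feature matrix and
    concatenate everything into a single super-vector."""
    return np.concatenate([np.asarray(f).ravel() for f in feature_sets])

# Hypothetical spatially sampled SIFT descriptors (128-D each)
face_feats = np.random.rand(10, 128)   # 10 keypoints from the face
iris_feats = np.random.rand(6, 128)    # 6 keypoints from one iris
super_vec = serial_fuse(face_feats, iris_feats)   # 16 * 128 = 2048-D
```

The same helper covers the multi-unit case by passing left- and right-iris feature sets instead of face and iris ones.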
Multi-Sample Fusion with Template Protection
Abstract: The widespread use and increased popularity of biometrics introduces privacy risks. To mitigate these risks, solutions such as the helper-data system, fuzzy vault, fuzzy extractors, and cancelable biometrics were introduced, collectively known as the field of template protection. Alongside these developments, fusion of multiple sources of biometric information has been shown to improve the verification performance of a biometric system. Our work consists of analyzing feature-level fusion in the context of the template protection framework using the helper-data system. We verify the results using the FRGC v2 database and two feature extraction algorithms.
Complementary Feature Level Data Fusion for Biometric Authentication Using Neural Networks
Data fusion as a formal research area is referred to as multi-sensor data fusion. The premise is that combined data from multiple sources can provide more meaningful, accurate, and reliable information than data from a single source. There are many application areas in military and security as well as civilian domains. Multi-sensor data fusion as applied to biometric authentication is termed multi-modal biometrics. Though based on similar premises, and having many similarities to formal data fusion, multi-modal biometrics differs in relation to data fusion levels. The objective of the current study was to apply feature-level fusion of fingerprint features and keystroke dynamics data for authentication purposes, utilizing Artificial Neural Networks (ANNs) as a classifier. Data fusion was performed adopting the complementary paradigm, which utilizes all processed data from both sources. Experimental results returned a false acceptance rate (FAR) of 0.0 and a worst-case false rejection rate (FRR) of 0.0004. This shows a worst-case performance that is at least as good as most other research in the field. The experimental results also demonstrated that data fusion gave a better outcome than either fingerprint or keystroke dynamics alone.
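The complementary paradigm (keep all processed data from both sources) feeding an ANN can be sketched as below; the feature dimensions, the untrained random weights, and the single-hidden-layer network are illustrative assumptions, not the study's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical processed feature vectors from the two sources
fingerprint = rng.random(20)   # e.g. minutiae-derived features
keystrokes  = rng.random(8)    # e.g. key dwell/flight times

# Complementary fusion: retain all data from both sources
fused = np.concatenate([fingerprint, keystrokes])

# Minimal one-hidden-layer ANN forward pass (untrained weights,
# shown only to illustrate the fused vector as classifier input)
W1 = rng.standard_normal((16, fused.size))
W2 = rng.standard_normal(16)
hidden = np.tanh(W1 @ fused)
score = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))   # sigmoid accept-score in (0, 1)
```

In practice the weights would be trained on labeled genuine/impostor samples and the output thresholded to produce the accept/reject decision behind the reported FAR and FRR.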
A Survey on Soft Biometrics for Human Identification
Driven by security demands, the focus has shifted to multi-biometrics. Ancillary information extracted from primary biometric traits (face and body), such as facial measurements, gender, skin color, ethnicity, and height, is called soft biometrics. It can be integrated to improve the speed and overall performance of a primary biometric system (e.g., fusing the face with facial marks), or to generate a qualitative human semantic description of a person and limit the search over the whole dataset, for instance by gender and ethnicity (e.g., "old African male with blue eyes") in a fusion framework. This chapter provides a holistic survey of soft biometrics covering major works, focuses on facial soft biometrics, and discusses some of the feature extraction and classification techniques that have been proposed, showing their strengths and limitations.
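The search-limiting use of soft biometrics amounts to pruning the gallery by semantic attributes before running the (expensive) primary matcher; a minimal sketch with a hypothetical gallery:

```python
# Hypothetical gallery entries with soft-biometric attributes attached
gallery = [
    {"id": 1, "gender": "male",   "ethnicity": "African",  "eyes": "blue"},
    {"id": 2, "gender": "female", "ethnicity": "European", "eyes": "brown"},
    {"id": 3, "gender": "male",   "ethnicity": "African",  "eyes": "brown"},
]

def prune(gallery, **attrs):
    """Limit the search to entries matching a semantic description."""
    return [g for g in gallery if all(g.get(k) == v for k, v in attrs.items())]

candidates = prune(gallery, gender="male", ethnicity="African")
```

Only the surviving `candidates` would then be passed to the primary face matcher, which is where the speed gain comes from.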
A novel approach of gait recognition through fusion with footstep information
Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. R. Vera-Rodríguez, J. Fiérrez, J. S. D. Mason, J. Ortega-García, "A novel approach of gait recognition through fusion with footstep information", in International Conference on Biometrics (ICB), Madrid (Spain), 2013, 1-6. This paper focuses on two closely linked biometric modes: gait and footstep biometrics. Footstep recognition is a relatively new biometric based on signals extracted from floor sensors, while gait has been researched more extensively and is based on video sequences of people walking. This paper reports a directly comparative assessment of both biometrics using the same database (SFootBD) and experimental protocols. A fusion of the two modes leads to enhanced gait recognition performance, as the information from the two modes comes from different capturing devices and is not strongly correlated. This fusion could find application in indoor scenarios where a gait recognition system is present, such as security access (e.g., security gates at airports) or smart homes. The gait and footstep systems achieve 8.4% and 10.7% EER respectively, which improves significantly to 4.8% EER with their fusion at the score level into a walking biometric. This work has been partially supported by projects Bio-Shield (TEC2012-34881), Contexts (S2009/TIC-1485), TeraSense (CSD2008-00068) and "Cátedra UAM-Telefónica".
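Score-level fusion of two weakly correlated matchers is commonly done as a weighted sum after score normalization; a minimal sketch with hypothetical scores and min-max normalization (the paper does not state that this exact scheme was used):

```python
import numpy as np

def minmax(scores):
    """Min-max normalize scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(gait_scores, footstep_scores, w=0.5):
    """Weighted-sum score-level fusion of the two normalized matchers."""
    return w * minmax(gait_scores) + (1 - w) * minmax(footstep_scores)

# Hypothetical matcher scores for the same set of trials
fused = fuse([2.1, 5.0, 3.3], [0.4, 0.9, 0.7])
```

The weight `w` can be tuned on development data to balance the two modes, e.g., favoring the lower-EER gait system.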
Polar Fusion Technique Analysis for Evaluating the Performances of Image Fusion of Thermal and Visual Images for Human Face Recognition
This paper presents a comparative study of two different methods based on fusion and polar transformation of visual and thermal images. The aim is to handle the challenges of face recognition, which include pose variations, changes in facial expression, partial occlusions, variations in illumination, rotation through different angles, changes in scale, etc. To overcome these obstacles, we have implemented and thoroughly examined two fusion techniques through rigorous experimentation. In the first method, the log-polar transformation is applied to the images obtained after fusion of the visual and thermal images, whereas in the second method fusion is applied to log-polar transformed individual visual and thermal images. In either case, Principal Component Analysis (PCA) is then applied to reduce the dimensionality of the fused images. Log-polar transformed images are capable of handling the complications introduced by scaling and rotation. The main objective of employing fusion is to produce a fused image that provides more detailed and reliable information, capable of overcoming the drawbacks present in the individual visual and thermal face images. Finally, the reduced fused images are classified using a multilayer perceptron neural network. The experiments reported here use the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database benchmark of thermal and visual face images. The second method showed better performance: a maximum correct recognition rate of 95.71%, and 93.81% on average.
Comment: Proceedings of IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (IEEE CIBIM 2011), Paris, France, April 11-15, 2011
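Why the log-polar transform absorbs rotation and scale can be seen in a minimal nearest-neighbour implementation (an illustrative sketch, not the paper's code): rotation of the input becomes a circular shift along the theta axis, and scaling becomes a shift along the rho axis, both of which later stages such as PCA plus an MLP can tolerate far more easily.

```python
import numpy as np

def log_polar(img, out_shape=(64, 64)):
    """Nearest-neighbour log-polar resampling about the image centre.
    Rows index log-radius (rho), columns index angle (theta)."""
    h, w = img.shape
    cy, cx = h / 2, w / 2
    n_rho, n_theta = out_shape
    max_rho = np.log(min(cy, cx))          # largest radius that fits
    out = np.zeros(out_shape)
    for i in range(n_rho):
        r = np.exp(max_rho * (i + 1) / n_rho)   # log-spaced radii
        for j in range(n_theta):
            t = 2 * np.pi * j / n_theta
            y, x = int(cy + r * np.sin(t)), int(cx + r * np.cos(t))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = img[y, x]
    return out

lp = log_polar(np.random.rand(128, 128))
```

The flattened `lp` images (fused either before or after this transform, matching the paper's two methods) are what PCA would then reduce before MLP classification.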
- …