Generic multimodal biometric fusion
Biometric systems utilize physiological or behavioral traits to automatically identify individuals. A unimodal biometric system relies on a single source of biometric information and suffers from a variety of problems such as noisy data, intra-class variations, restricted degrees of freedom, non-universality, spoof attacks and unacceptable error rates. Multimodal biometrics refers to systems that combine multiple biometric information sources and can overcome some of the limitations of unimodal systems. Biometric information can be combined at four different levels: (i) raw data level; (ii) feature level; (iii) match-score level; and (iv) decision level. Match-score fusion and decision fusion have received significant attention because their information representations are convenient to combine, while raw-data fusion is extremely challenging due to the large diversity of representations. Feature-level fusion provides a good trade-off between fusion complexity and the loss of information caused by subsequent processing. This work presents generic feature-information fusion techniques covering most of the commonly used feature representation schemes. A novel concept of Local Distance Kernels is introduced to transform the available information into an arbitrary common distance space where it can be easily fused. In addition, a new dynamically learnable, threshold-based noise removal scheme is used to remove shot noise in the distance vectors. Finally, we propose the use of AdaBoost and Support Vector Machines for learning the fusion rules, obtaining highly reliable final matching scores from the transformed local distance vectors. The integration of the proposed methods leads to a large performance improvement over match-score or decision-level fusion.
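As a minimal, hypothetical sketch of the core idea of learning a score-level fusion rule, the snippet below trains a simple logistic-regression fuser on synthetic two-modality match scores; it stands in for the AdaBoost/SVM fusers the abstract actually proposes, and all score distributions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical match scores from two modalities (higher = more similar).
# Genuine pairs score high on average, impostor pairs low.
genuine = rng.normal([0.8, 0.7], 0.1, size=(100, 2))
impostor = rng.normal([0.3, 0.4], 0.1, size=(100, 2))

X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(100), np.zeros(100)])

# Learn a linear fusion rule w.x + b by logistic regression (gradient
# descent), standing in for the SVM/AdaBoost fusers of the abstract.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

fused = X @ w + b           # one fused score per comparison
acc = ((fused > 0) == y).mean()
```

Any monotone combination of well-separated per-modality scores would work here; the point is only that the fusion rule is learned rather than fixed by hand.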
Multiple classifiers in biometrics. part 1: Fundamentals and review
We provide an introduction to Multiple Classifier Systems (MCS), including basic nomenclature and a description of key elements: classifier dependencies, types of classifier outputs, aggregation procedures, architecture, and types of methods. This introduction complements other existing overviews of MCS, as here we also review the most prevalent theoretical framework for MCS and discuss related theoretical developments.
The introduction to MCS is then followed by a review of the application of MCS to the particular field of multimodal biometric person authentication in the last 25 years, as a prototypical area in which MCS has resulted in important achievements. This review includes general descriptions of successful MCS methods and architectures in order to facilitate the export of them to other information fusion problems.
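As one concrete instance of the aggregation procedures such reviews cover, a plurality (majority) vote over hard classifier decisions can be sketched as follows; the vote matrix and binary labels are invented for illustration:

```python
import numpy as np

# Decisions (class labels 0/1) from three hypothetical classifiers,
# one row per input sample, one column per classifier.
votes = np.array([
    [1, 1, 0],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 1],
])

def majority_vote(v):
    # Count label occurrences per row, then pick the most frequent label.
    counts = np.apply_along_axis(np.bincount, 1, v, minlength=2)
    return counts.argmax(axis=1)

fused = majority_vote(votes)
```

With an odd number of classifiers and two classes, ties cannot occur; with more classes or even ensembles a tie-breaking rule would be needed.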
Based on the theory and framework introduced here, in the companion paper we then develop in more technical detail recent trends and developments in MCS from multimodal biometrics that incorporate context information in an adaptive way. These new MCS architectures exploit input quality measures and pattern-specific particularities that depart from general population statistics, resulting in robust multimodal biometric systems. As in the present paper, the methods in the companion paper are introduced in a general way so they can be applied to other information fusion problems as well. Finally, also in the companion paper, we discuss open challenges in biometrics and the role of MCS in advancing them. This work was funded by projects CogniMetrics (TEC2015-70627-R) from MINECO/FEDER and RiskTrakc (JUST-2015-JCOO-AG-1). Part of this work was conducted during a research visit of J.F. to Prof. Ludmila Kuncheva at Bangor University (UK) with STSM funding from COST CA16101 (MULTI-FORESEE).
Evaluation and analysis of hybrid intelligent pattern recognition techniques for speaker identification
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The rapid pace of technological progress in recent years has led to a tremendous rise in the use of biometric authentication systems. The objective of this research is to investigate the problem of identifying a speaker from voice alone, regardless of the spoken content (i.e. text-independent), and to design efficient methods of combining face and voice to produce a robust authentication system.
A novel approach to speaker identification is developed using wavelet analysis and multiple neural networks, including the Probabilistic Neural Network (PNN), General Regression Neural Network (GRNN) and Radial Basis Function Neural Network (RBF-NN), combined with an AND voting scheme. This approach is tested on the GRID and VidTIMIT corpora, and comprehensive test results have been validated against state-of-the-art approaches. The system was found to be competitive: it improved the recognition rate by 15% compared to classical Mel-Frequency Cepstral Coefficients (MFCC), and reduced the recognition time by 40% compared to the Back Propagation Neural Network (BPNN), Gaussian Mixture Models (GMM) and Principal Component Analysis (PCA).
Another novel approach using vowel formant analysis is implemented using Linear Discriminant Analysis (LDA). Vowel-formant-based speaker identification is well suited to real-time implementation and requires only a few bytes of information to be stored per speaker, making it both storage- and time-efficient. Tested on GRID and VidTIMIT, the proposed scheme was found to be 85.05% accurate when Linear Predictive Coding (LPC) is used to extract the vowel formants, which is much higher than the accuracy of BPNN and GMM. Since the proposed scheme does not require any training time other than creating a small database of vowel formants, it is faster as well. Furthermore, an increasing number of speakers makes it difficult for BPNN and GMM to sustain their accuracy, whereas the proposed score-based methodology scales almost linearly.
Finally, a novel audio-visual fusion-based identification system is implemented using GMM and MFCC for speaker identification and PCA for face recognition. The results of speaker identification and face recognition are fused at different levels, namely the feature, score and decision levels. Both the score-level and decision-level (with OR voting) fusions were shown to outperform feature-level fusion in terms of accuracy and error resilience. This result is in line with the distinct nature of the two modalities, which is lost when they are combined at the feature level. The GRID and VidTIMIT test results confirm that the proposed scheme is one of the best candidates for the fusion of face and voice due to its low computational time and high recognition accuracy.
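To make the compared fusion levels concrete, here is a toy sketch of score-level averaging versus decision-level OR voting; the scores and thresholds are assumptions for illustration, not the thesis's actual values:

```python
import numpy as np

# Hypothetical per-modality similarity scores for four verification attempts.
face_scores  = np.array([0.91, 0.40, 0.75, 0.30])
voice_scores = np.array([0.35, 0.88, 0.70, 0.25])
t_face, t_voice = 0.8, 0.8          # assumed per-modality thresholds

# Score-level fusion: average the (already commensurate) scores,
# then apply a single fused threshold.
score_fused = (face_scores + voice_scores) / 2 > 0.6

# Decision-level fusion with OR voting: accept if either modality accepts.
decision_fused = (face_scores > t_face) | (voice_scores > t_voice)
```

The third attempt shows how the two levels can disagree: both modalities score moderately high, so averaging accepts it, while neither clears its individual threshold, so OR voting rejects it.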
One-class Classification: An Approach to Handle Class Imbalance in Multimodal Biometric Authentication
Biometric verification is the process of authenticating a person's identity using his/her physiological and behavioural characteristics. It is well known that multimodal biometric systems can further improve authentication accuracy by combining information from multiple biometric traits at various levels, namely the sensor, feature, match-score and decision levels. Fusion at the match-score level is generally preferred due to the trade-off between information availability and fusion complexity. However, combining match scores poses a number of challenges when treated as a two-class classification problem, due to the highly imbalanced class distributions. Most conventional classifiers assume equally balanced classes and do not work well when samples of one class vastly outnumber those of the other. These challenges become even more significant when the fusion is based on user-specific processing, due to the limited availability of genuine samples per user. This thesis aims at exploring the paradigm of one-class classification to advance the classification performance on imbalanced biometric data sets. The contributions of the research can be enumerated as follows.
Firstly, a thorough investigation of various one-class classifiers, including the Gaussian Mixture Model, k-Nearest Neighbour, k-means clustering and Support Vector Data Description, is provided. These classifiers are applied to learning user-specific and user-independent descriptions for biometric decision inference. It is demonstrated that one-class classifiers are particularly useful in handling the imbalanced learning problem in multimodal biometric authentication. The user-specific approach is a better alternative to its user-independent counterpart because it is able to overcome the so-called within-class sub-concepts problem, which arises frequently in multimodal biometric systems due to user variation.
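A minimal sketch of the one-class idea follows, using a single Gaussian fitted to genuine samples only; this is a simplified stand-in for the GMM/SVDD models the thesis studies, and all scores and the threshold are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical genuine match-score vectors for one user (two modalities).
# A one-class model is fit on genuine samples only -- no impostor data.
genuine_train = rng.normal([0.8, 0.75], 0.05, size=(30, 2))

mu = genuine_train.mean(axis=0)
cov = np.cov(genuine_train, rowvar=False)
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    """Distance of a probe score vector from the genuine-class model."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Accept a probe if it lies close to the genuine model; threshold assumed.
threshold = 3.0
probe_genuine = np.array([0.82, 0.74])
probe_impostor = np.array([0.35, 0.40])
accept_genuine = mahalanobis(probe_genuine) < threshold
accept_impostor = mahalanobis(probe_impostor) < threshold
```

Because only the target (genuine) class is modelled, the severe genuine/impostor imbalance never enters the training step, which is exactly the appeal of one-class classification here.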
Secondly, a novel adapted score fusion scheme is proposed that consists of one-class classifiers and is trained using both genuine-user and impostor samples. This method also replaces the user-independent description with a user-specific one to learn the characteristics of the impostor class, thus reducing the degree of class imbalance in the data. Extensive experiments are conducted on the BioSecure DS2 and XM2VTS databases to illustrate the potential of the proposed adapted score fusion scheme, which provides a relative improvement in Equal Error Rate of 32% and 20% compared to the standard sum of scores and likelihood-ratio-based score fusion, respectively.
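The sum-of-scores baseline and the Equal Error Rate metric used for these comparisons can be sketched as follows; the score distributions are synthetic, not BioSecure DS2 or XM2VTS data:

```python
import numpy as np

def eer(genuine, impostor):
    """Equal Error Rate: operating point where the false-accept rate
    (impostors accepted) equals the false-reject rate (genuines rejected)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(2)
# Hypothetical scores: rows are samples, columns are modalities.
gen = rng.normal([0.7, 0.65], 0.15, size=(200, 2))
imp = rng.normal([0.4, 0.45], 0.15, size=(200, 2))

# Standard sum-of-scores fusion, the baseline the thesis compares against.
eer_fused = eer(gen.sum(axis=1), imp.sum(axis=1))
eer_single = eer(gen[:, 0], imp[:, 0])
```

On such synthetic data the fused EER typically drops below the single-modality EER, which is the effect the reported 32% and 20% relative improvements are measured against.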
Thirdly, a hybrid boosting algorithm, called r-ABOC, has been developed, which exploits the natural capabilities of both the well-known Real AdaBoost and one-class classification to further improve system performance without causing overfitting. Unlike conventional Real AdaBoost, the individual classifiers in the proposed scheme are trained on the same data set but with different parameter choices. This not only generates high diversity, which is vital to the success of r-ABOC, but also reduces the number of user-specified parameters. A comprehensive empirical study using the BioSecure DS2 and XM2VTS databases demonstrates that r-ABOC may achieve a performance gain in Half Total Error Rate of up to 28% with respect to other state-of-the-art biometric score fusion techniques.
Finally, a Robust Imputation based on the Group Method of Data Handling (RIBG) has been proposed to handle the missing-data problem in the BioSecure DS2 database. RIBG provides accurate predictions of incomplete score vectors and is observed to outperform state-of-the-art imputation techniques, including mean, median and k-NN imputation. An important feature of RIBG is that it does not require any parameter fine-tuning and is hence amenable to immediate application.
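As context for that comparison, here is a sketch of the k-NN imputation baseline on a toy score matrix (RIBG itself, based on GMDH, is not reproduced here; the matrix values are invented):

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaNs in each row with the mean of the k nearest complete rows,
    measuring distance only over the columns that row has observed."""
    X = X.copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for row in X:
        miss = np.isnan(row)
        if not miss.any():
            continue
        d = np.linalg.norm(complete[:, ~miss] - row[~miss], axis=1)
        nn = complete[np.argsort(d)[:k]]   # k closest complete rows
        row[miss] = nn[:, miss].mean(axis=0)
    return X

# Toy score matrix: three complete vectors and one with a missing entry.
scores = np.array([
    [0.80, 0.70, 0.90],
    [0.82, 0.72, 0.88],
    [0.30, 0.40, 0.20],
    [0.81, 0.71, np.nan],
])
filled = knn_impute(scores, k=2)
```

The incomplete row resembles the first two rows on its observed columns, so its missing entry is filled with the mean of their third column.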
An intelligent multimodal biometric authentication model for personalised healthcare services
With the advent of modern technologies, the healthcare industry is moving towards a more personalised smart care model. The enablers of such care models are the Internet of Things (IoT) and Artificial Intelligence (AI). These technologies collect and analyse data from persons in care to alert relevant parties if any anomaly is detected in a patient's regular pattern. However, such reliance on IoT devices to capture continuous data extends the attack surface and demands strong security measures. Both patients and devices need to be authenticated to mitigate a large number of attack vectors. Biometric authentication has been seen as a promising technique in these scenarios. To this end, this paper proposes an AI-based multimodal biometric authentication model for single and group-based users' device-level authentication that increases protection over the traditional single-modal approach. To test the efficacy of the proposed model, a series of AI models are trained and tested using physiological biometric features, namely ECG (Electrocardiogram) and PPG (Photoplethysmography) signals, from five public datasets available in the Physionet and Mendeley data repositories. The multimodal fusion authentication model shows promising results with 99.8% accuracy and an Equal Error Rate (EER) of 0.16.
Deep multimodal biometric recognition using contourlet derivative weighted rank fusion with human face, fingerprint and iris images
The goal of a multimodal biometric recognition system is to make a decision by identifying individuals from their physiological and behavioural traits. Nevertheless, the decision-making process in a biometric recognition system can be extremely complex due to high-dimensional unimodal features in the temporal domain. This paper describes a deep multimodal biometric system for human recognition using three traits: face, fingerprint and iris. With the objective of reducing the feature-vector dimension in the temporal domain, pre-processing is first performed using the Contourlet Transform Model. Next, the Local Derivative Ternary Pattern model is applied to the pre-processed features, where the feature discrimination power is improved by selecting the coefficients that have maximum variation across the pre-processed multimodal features, thereby improving recognition accuracy. Weighted Rank-Level Fusion is then applied to the extracted multimodal features, efficiently combining the biometric matching scores from the several modalities (i.e. face, fingerprint and iris). Finally, a deep learning framework is presented for improving the recognition rate of the multimodal biometric system in the temporal domain. The results of the proposed multimodal biometric recognition framework were compared with other multimodal methods; in these comparisons, the fusion of face, fingerprint and iris offers significant improvements in recognition rate.
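A weighted rank-level fusion step of the kind described can be sketched as a weighted Borda-style count; the candidate ranks and per-modality weights below are assumptions for illustration, not values from the paper:

```python
import numpy as np

# Hypothetical rank of each enrolled identity (0 = best match) as returned
# by three unimodal matchers for a single probe.
ranks = np.array([
    # identity:  A  B  C  D
    [0, 2, 1, 3],   # face matcher
    [1, 0, 2, 3],   # fingerprint matcher
    [0, 1, 3, 2],   # iris matcher
])

# Assumed per-modality weights (e.g. from validation accuracy).
# Lower fused rank score = stronger overall candidate.
weights = np.array([0.5, 0.3, 0.2])

fused = weights @ ranks          # weighted sum of ranks per identity
best = int(fused.argmin())       # index of the winning identity
```

Here identity A wins: it is ranked first by the two most heavily weighted matchers, so its weighted rank sum is lowest even though the fingerprint matcher preferred B.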