
    An Investigation of F-ratio Client-Dependent Normalisation on Biometric Authentication Tasks

    This study investigates a new \emph{client-dependent normalisation} to improve biometric authentication systems. There exist many client-dependent score normalisation techniques applied to speaker authentication, such as Z-Norm, D-Norm and T-Norm. Such normalisation is intended to adjust for the variation across different client models. We propose ``F-ratio'' normalisation, or F-Norm, applied to face and speaker authentication systems. This normalisation requires only that \emph{as few as} two client-dependent accesses are available (the more the better). Unlike previous normalisation techniques, F-Norm considers the client and impostor distributions \emph{simultaneously}. We show that the F-ratio is a natural choice because it is directly associated with the Equal Error Rate. It has the effect of centering the client and impostor distributions such that a global threshold can easily be found. Another difference is that F-Norm ``interpolates'' between client-independent and client-dependent information by introducing a mixture parameter. This parameter \emph{can be optimised} to maximise the class dispersion (the degree of separability between client and impostor distributions), whereas the aforementioned normalisation techniques cannot be optimised in this way. Unimodal experiments carried out on the XM2VTS multimodal database show that this normalisation is advantageous over Z-Norm, client-dependent threshold normalisation and no normalisation.
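    A minimal Python sketch of the F-ratio and a client-dependent F-Norm may make the idea concrete. The exact mixing rule in `f_norm` is an assumption for illustration; the abstract only specifies that the score is centred between interpolated client and impostor statistics so one global threshold works for all clients:

    ```python
    import numpy as np

    def f_ratio(client_scores, impostor_scores):
        """F-ratio = (mu_C - mu_I) / (sigma_C + sigma_I).

        Under Gaussian score assumptions this quantity is directly
        related to the Equal Error Rate: the larger the F-ratio,
        the lower the EER.
        """
        mu_c, mu_i = np.mean(client_scores), np.mean(impostor_scores)
        sd_c, sd_i = np.std(client_scores), np.std(impostor_scores)
        return (mu_c - mu_i) / (sd_c + sd_i)

    def f_norm(score, mu_c_client, mu_i_client, mu_c_global, beta=0.5):
        """Client-dependent F-Norm (sketch; parametrisation assumed).

        Centres the score so impostor scores map near 0 and client
        scores near 1 for every client, allowing one global threshold.
        `beta` interpolates between the client-dependent client mean
        (estimable from as few as two accesses) and the
        client-independent one.
        """
        mu_c = beta * mu_c_client + (1.0 - beta) * mu_c_global
        return (score - mu_i_client) / (mu_c - mu_i_client)
    ```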

    A Novel Approach to Combining Client-Dependent and Confidence Information in Multimodal Biometrics

    The issues of fusion with client-dependent and confidence information have been well studied separately in biometric authentication. In this study, we propose to take advantage of both sources of information in a discriminative framework. Initially, each source of information is processed on a per-expert basis (and additionally on a per-client basis for the first source and on a per-example basis for the second). Then, both sources of information are combined using a second-level classifier, across different experts. Although the formulation of such a two-step solution is not new, the novelty lies in the way the sources of prior knowledge are incorporated prior to fusion using the second-level classifier. Because these two sources of information are of very different natures, one often needs to devise special algorithms to combine them. Our framework, which we call ``Prior Knowledge Incorporation'', has the advantage of using standard machine learning algorithms. Based on 10 × 32 = 320 intramodal and multimodal fusion experiments carried out on the publicly available XM2VTS score-level fusion benchmark database, it is found that the generalisation performance of combining both information sources improves over using either or neither of them, thus achieving a new state-of-the-art performance on this database.
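    As an illustration (not the authors' exact implementation), the ``Prior Knowledge Incorporation'' idea can be sketched as follows: augment each expert's raw score with client-dependent and confidence-derived features, then hand the augmented vector to any standard second-level classifier. The concrete feature choices below are assumptions:

    ```python
    import numpy as np

    def pki_features(expert_scores, client_means, confidences):
        """Build an augmented feature vector for second-level fusion.

        expert_scores : one score per base expert for this access
        client_means  : per-expert statistic for the claimed client
                        (client-dependent prior knowledge)
        confidences   : per-expert, per-example confidence measure

        Because the priors become ordinary input features, a standard
        classifier (e.g. logistic regression or an SVM) can fuse them
        without a special-purpose combination algorithm.
        """
        s = np.asarray(expert_scores, dtype=float)
        m = np.asarray(client_means, dtype=float)
        c = np.asarray(confidences, dtype=float)
        # raw scores, client-adjusted scores, and confidences side by side
        return np.concatenate([s, s - m, c])
    ```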

    Improving Single Modal and Multimodal Biometric Authentication Using F-ratio Client-Dependent Normalisation

    This study investigates a new client-dependent normalisation to improve a single biometric authentication system, as well as its effects on fusion. There exist two families of client-dependent normalisation techniques, often applied to speaker authentication: client-dependent score normalisation and client-dependent threshold normalisation. Examples of the former family are Z-Norm, D-Norm and T-Norm. There is also a vast amount of literature on the latter family. Both families are surveyed in this study. Furthermore, we provide a link between these two families and show that one is a dual representation of the other. These techniques are intended to adjust for the variation across different client models. We propose ``F-ratio'' normalisation, or F-Norm, applied to face and speaker authentication systems in two contexts: single-modal authentication and fusion of multimodal biometrics. This normalisation requires only that as few as two client-dependent accesses are available (the more the better). Unlike previous normalisation techniques, F-Norm considers the client and impostor distributions simultaneously. We show that the F-ratio is a natural choice because it is directly associated with the Equal Error Rate. It has the effect of centering the client and impostor distributions such that a global threshold can easily be found. Another difference is that F-Norm ``interpolates'' between client-independent and client-dependent information by introducing two mixture parameters. These parameters can be optimised to maximise the class dispersion (the degree of separability between client and impostor distributions), whereas the aforementioned normalisation techniques cannot be optimised in this way. The results of 13 single-modal experiments and 32 fusion experiments carried out on the XM2VTS multimodal database show that, in both contexts, F-Norm is advantageous over Z-Norm, client-dependent score normalisation with EER and no normalisation.
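    The duality between the two families surveyed above can be made concrete: comparing a raw score against a client-dependent threshold is the same decision as shifting the score by that per-client quantity and comparing against one global threshold. A minimal sketch:

    ```python
    def accept_threshold_norm(score, client_threshold):
        """Client-dependent *threshold* normalisation: each client j
        keeps the raw score but gets its own decision threshold."""
        return score >= client_threshold

    def accept_score_norm(score, client_threshold, global_threshold=0.0):
        """Dual *score* normalisation: subtract the same per-client
        quantity from the score and use a single global threshold for
        every client. Both formulations accept exactly the same
        accesses, which is the sense in which one family is a dual
        representation of the other."""
        return (score - client_threshold) >= global_threshold
    ```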

    How Do Correlation and Variance of Base-Experts Affect Fusion in Biometric Authentication Tasks?

    Combining multiple information sources such as subbands, streams (with different features) and multimodal data has been shown to be a very promising trend, both in experiments and, to some extent, in real-life biometric authentication applications. Despite considerable efforts in fusion, there is a lack of understanding of the roles and effects of correlation and variance (of both the client and impostor scores of base classifiers/experts). Often, scores are assumed to be independent. In this paper, we explicitly consider this factor using a theoretical model called Variance Reduction-Equal Error Rate (VR-EER) analysis. Assuming that client and impostor scores are approximately Gaussian distributed, we show that the Equal Error Rate (EER) can be modeled as a function of the F-ratio, which itself is a function of 1) correlation, 2) variance of base experts and 3) the difference between client and impostor means. To achieve a lower EER, smaller correlation and average variance of base experts, and a larger mean difference, are desirable. Furthermore, analysing any of these factors in isolation, e.g. focusing on correlation alone, could be misleading. Experimental results on the BANCA and XM2VTS multimodal databases and the NIST 2001 speaker verification database confirm our findings using VR-EER analysis. Furthermore, the F-ratio is shown to be a valid evaluation criterion in place of EER. We analysed four commonly encountered scenarios in biometric authentication, which include fusing correlated/uncorrelated base experts of similar/different performances. The analysis explains and shows that fusing systems of different performances is not always beneficial. One of the most important findings is that positive correlation ``hurts'' fusion while negative correlation (greater ``diversity'', which measures the spread of prediction scores with respect to the fused score) improves fusion.
However, by linking the concept of ambiguity decomposition to the classification problem, it is found that diversity alone is not a sufficient evaluation criterion (to compare several fusion systems) unless measures are taken to normalise the (class-dependent) variance. Moreover, by linking the concept of bias-variance-covariance decomposition to classification using EER, it is found that if the inherent mismatch (between training and test sessions) can be learned from the data, such mismatch can be incorporated into the fusion system as part of its training parameters.
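    Under the Gaussian assumption stated above, the EER/F-ratio link and the effect of expert correlation on mean fusion can be sketched numerically (equal client and impostor variances per expert are assumed here for brevity, an assumption of this sketch rather than of the paper):

    ```python
    import math
    import numpy as np

    def eer_from_f_ratio(f_ratio):
        """Under Gaussian class-conditional scores,
        EER = 1/2 - 1/2 * erf(F-ratio / sqrt(2)):
        a larger F-ratio means a lower EER."""
        return 0.5 - 0.5 * math.erf(f_ratio / math.sqrt(2.0))

    def fused_f_ratio(mu_c, mu_i, sigmas, corr):
        """F-ratio after mean fusion of N correlated base experts.

        Variance of the average of correlated scores:
            var = (1/N^2) * sum_{i,j} corr[i][j] * sigma_i * sigma_j
        so lower correlation and lower average variance, together with
        a larger client/impostor mean difference, all reduce the EER.
        """
        sig = np.asarray(sigmas, dtype=float)
        cov = np.asarray(corr, dtype=float) * np.outer(sig, sig)
        fused_sd = math.sqrt(cov.sum()) / len(sig)
        return (mu_c - mu_i) / (2.0 * fused_sd)
    ```

    With two unit-variance experts, moving the correlation from +0.9 down to -0.5 raises the fused F-ratio and therefore lowers the predicted EER, matching the finding that positive correlation ``hurts'' fusion while diversity helps.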

    Wavelet–Based Face Recognition Schemes


    A Study of the Effects of Score Normalisation Prior to Fusion in Biometric Authentication Tasks

    Although the subject of fusion is well studied, the effects of normalisation prior to fusion are somewhat less well investigated. In this study, four normalisation techniques and six commonly used fusion classifiers were examined. Pairing each normalisation technique with each classifier gives 24 fusion classifiers; applied to 32 fusion data sets, this yields 4 × 6 × 32 = 768 fusion experiments, carried out on the XM2VTS score-level fusion benchmark database. From these it can be concluded that trainable fusion classifiers are potentially useful. Some classifiers, such as the weighted sum with weights optimised using the Fisher ratio and the Decision Template, are found to be very sensitive (in terms of Half Total Error Rate) to the choice of normalisation technique. The mean fusion operator and the user-specific linear weight combination are relatively less sensitive. Support Vector Machines and Gaussian Mixture Models are found to be the least sensitive to different normalisation techniques while achieving the best generalisation performance; for these two techniques, score normalisation prior to fusion is unnecessary.
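    Two standard pre-fusion normalisation steps and the fixed mean fusion operator can be sketched as follows. The abstract does not name its four normalisation techniques, so z-score and min-max are illustrative assumptions; statistics are taken to be estimated on a separate training set:

    ```python
    import numpy as np

    def zscore_norm(scores, mu, sigma):
        """Z-score normalisation: centre and scale an expert's scores
        using statistics estimated on a training set."""
        return (np.asarray(scores, dtype=float) - mu) / sigma

    def minmax_norm(scores, lo, hi):
        """Min-max normalisation: map the training-set range [lo, hi]
        onto [0, 1] so different experts' scores become commensurable."""
        return (np.asarray(scores, dtype=float) - lo) / (hi - lo)

    def mean_fusion(score_matrix):
        """Fixed (non-trainable) mean fusion across experts, one of the
        combiners reported as relatively insensitive to the choice of
        normalisation. Rows are experts, columns are access attempts."""
        return np.mean(np.asarray(score_matrix, dtype=float), axis=0)
    ```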

    Activity Report 2004

