
    A Generative Model for Score Normalization in Speaker Recognition

    We propose a theoretical framework for thinking about score normalization, which confirms that normalization is not needed under (admittedly fragile) ideal conditions. If, however, these conditions are not met, e.g. under data-set shift between training and runtime, our theory reveals dependencies between scores that could be exploited by strategies such as score normalization. Indeed, it has been demonstrated repeatedly in experiments that various ad-hoc score normalization recipes do work. We present a first attempt at using probability theory to design a generative score-space normalization model, which gives improvements similar to those of ZT-norm on the text-dependent RSR2015 database.
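The ZT-norm mentioned above composes two classical cohort-based normalizations. A minimal sketch (function names and the cohort arguments are illustrative, not the paper's own code):

```python
import numpy as np

def z_norm(score, impostor_scores_for_model):
    """Z-norm: standardize a trial score using scores of a cohort of
    impostor utterances against the claimed speaker's model."""
    mu = np.mean(impostor_scores_for_model)
    sigma = np.std(impostor_scores_for_model)
    return (score - mu) / sigma

def t_norm(score, cohort_scores_for_utterance):
    """T-norm: standardize using scores of the test utterance
    against a cohort of impostor models."""
    mu = np.mean(cohort_scores_for_utterance)
    sigma = np.std(cohort_scores_for_utterance)
    return (score - mu) / sigma

def zt_norm(score, z_cohort, t_cohort_z_normed):
    """ZT-norm: apply Z-norm first, then T-norm using cohort scores
    that have themselves been z-normed."""
    return t_norm(z_norm(score, z_cohort), t_cohort_z_normed)
```

Both steps are affine in the raw score, so the normalizations preserve the ranking of trials for a fixed model/utterance while aligning score distributions across them.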

    An Evaluation of Score Level Fusion Approaches for Fingerprint and Finger-vein Biometrics

    Biometric systems have to address many requirements, such as large population coverage, demographic diversity, varied deployment environments, and practical aspects like performance and spoofing attacks. Traditional unimodal biometric systems do not fully meet these requirements, making them vulnerable and susceptible to different types of attacks. In response, modern biometric systems combine multiple biometric modalities at different fusion levels, and the fused score is used to classify an unknown user as genuine or an impostor. In this paper, we evaluate combinations of score normalization and fusion techniques using two modalities (fingerprint and finger-vein), with the goal of identifying which combination achieves the best improvement rate over traditional unimodal biometric systems. The individual scores obtained from finger-veins and fingerprints are combined at score level using three score normalization techniques (min-max, z-score, hyperbolic tangent) and four score fusion approaches (minimum score, maximum score, simple sum, user weighting). The experimental results show that the combination of hyperbolic tangent score normalization with the simple sum fusion approach achieves the best improvement rate of 99.98%.
    Comment: 10 pages, 5 figures, 3 tables, conference, NISK 201
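The three normalization rules and the simple-sum fusion named in the abstract can be sketched as follows. This is a minimal illustration: the tanh rule is shown in its common Hampel-inspired form with an assumed 0.01 scale factor, which is not necessarily the parameterization used in the paper.

```python
import numpy as np

def min_max(scores):
    # Map scores linearly into [0, 1].
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def z_score(scores):
    # Shift/scale to zero mean and unit variance.
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def tanh_norm(scores):
    # Tanh estimator: squashes scores into (0, 1); robust to outliers.
    # The 0.01 scale factor is an assumed, commonly used value.
    s = np.asarray(scores, dtype=float)
    return 0.5 * (np.tanh(0.01 * (s - s.mean()) / s.std()) + 1.0)

def simple_sum(fp_scores, fv_scores, norm=tanh_norm):
    # Normalize each modality separately, then add the scores.
    return norm(fp_scores) + norm(fv_scores)
```

Because each modality is normalized to a common range before the sum, neither fingerprint nor finger-vein scores dominate the fused decision purely because of their raw scale.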

    Linear Estimating Equations for Exponential Families with Application to Gaussian Linear Concentration Models

    In many families of distributions, maximum likelihood estimation is intractable because the normalization constant for the density, which enters into the likelihood function, is not easily available. The score matching estimator of Hyvärinen (2005) provides an alternative for which this normalization constant is not required. The corresponding estimating equations become linear for an exponential family. The score matching estimator is shown to be consistent and asymptotically normally distributed for such models, although not necessarily efficient. Gaussian linear concentration models are examples of such families. For linear concentration models that are also linear in the covariance, we show that the score matching estimator is identical to the maximum likelihood estimator, so in such cases it is also efficient. Gaussian graphical models and graphical models with symmetries form particularly interesting subclasses of linear concentration models, and we investigate the potential use of the score matching estimator for these cases.
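The reason the normalization constant drops out can be seen from the score matching objective itself; the notation below is assumed for illustration and is not quoted from the paper:

```latex
% Score matching objective (Hyvarinen, 2005). The model score
% \psi(x;\theta) = \nabla_x \log p_\theta(x) does not involve the
% normalization constant: the constant is additive in \log p_\theta
% and vanishes under differentiation with respect to x.
J(\theta) = \mathbb{E}_x\!\left[\tfrac{1}{2}\,\lVert\psi(x;\theta)\rVert^2
            + \sum_i \partial_i \psi_i(x;\theta)\right]
% For an exponential family, \log p_\theta(x) = \theta^\top t(x)
% - A(\theta) + b(x), so \psi(x;\theta) is linear in \theta and
% \nabla_\theta J(\theta) = 0 yields linear estimating equations,
% as the abstract states.
```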

    Score Normalization for Keystroke Dynamics Biometrics

    Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. A. Morales, E. Luna-Garcia, J. Fierrez and J. Ortega-Garcia, "Score normalization for keystroke dynamics biometrics," Security Technology (ICCST), 2015 International Carnahan Conference on, Taipei, 2015, pp. 223-228. doi: 10.1109/CCST.2015.7389686
    This paper analyzes score normalization for keystroke dynamics authentication systems. Previous studies have shown that the performance of behavioral biometric recognition systems (e.g. voice and signature) can be largely improved with score normalization and target-dependent techniques. The main objective of this work is twofold: i) to analyze the effects of different thresholding techniques in four keystroke dynamics recognition systems under real operational scenarios; and ii) to improve the performance of keystroke dynamics on the basis of target-dependent score normalization techniques. The experiments in this work are conducted on the keystroke patterns of 114 users from two publicly available databases. The experiments show that there is large room for improvement in keystroke dynamics systems. The results suggest that score normalization techniques can improve the performance of keystroke dynamics systems by more than 20%. These results encourage researchers to explore this line of research to further improve the performance of these systems in real operational environments.
    A.M. is supported by a post-doctoral Juan de la Cierva contract by the Spanish MECD (JCI-2012-12357). This work has been partially supported by projects: Bio-Shield (TEC2012-34881) from Spanish MINECO, BEAT (FP7-SEC-284989) from EU, CECABANK and Cátedra UAM Telefónica
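A common target-dependent normalization of the kind the paper studies standardizes each test score with statistics estimated per claimed user, so a single global threshold can replace per-user thresholds. The function below is a generic sketch of that idea, not the paper's specific technique:

```python
import numpy as np

def target_znorm(score, user_dev_scores):
    """Target-dependent z-score normalization: standardize a test
    score with the mean/std of development scores for the claimed
    user, so one global decision threshold works across users."""
    mu = np.mean(user_dev_scores)
    sigma = np.std(user_dev_scores)
    # Guard against a degenerate (constant) development set.
    return (score - mu) / (sigma if sigma > 0 else 1.0)
```

After this mapping, a score of 0 means "typical for this user" for every user, regardless of how loose or tight each user's raw score distribution is.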

    Stroke Classification Comparison with KNN through Standardization and Normalization Techniques

    This study explores the impact of z-score standardization and min-max normalization on K-Nearest Neighbors (KNN) classification for strokes. Focused on managing the diverse scales of health attributes within the stroke dataset, the research aims to improve classification model accuracy and reliability. Preprocessing involves z-score standardization, min-max normalization, and no data scaling. The KNN model is trained and evaluated under each approach. The results reveal comparable performance between z-score standardization and min-max normalization, with slight variations across data split ratios. Demonstrating the importance of data scaling, both z-score and min-max achieve a best accuracy of 95.07%. Notably, normalization averages a higher accuracy (94.25%) than standardization (94.21%), highlighting the critical role of data scaling for robust machine learning performance and informed health decisions.
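Why scaling matters for KNN can be shown in a few lines: with features on very different scales, the large-range feature dominates the Euclidean distance. The feature values below are made up for illustration:

```python
import numpy as np

# Two hypothetical health features on very different scales,
# e.g. age (tens) vs. a glucose-like level (hundreds).
X = np.array([[25.0, 200.0],
              [30.0,  90.0],
              [60.0,  95.0]])

def minmax(X):
    # Rescale each column into [0, 1].
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def zscore(X):
    # Rescale each column to zero mean and unit variance.
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Without scaling, the second column dominates the distance;
# after rescaling, both features contribute comparably.
raw_d = np.linalg.norm(X[0] - X[1])
scaled_d = np.linalg.norm(minmax(X)[0] - minmax(X)[1])
```

Here `raw_d` is roughly 110 (driven almost entirely by the second column), while `scaled_d` is near 1, so neighbors found after scaling reflect both features rather than one.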

    Comparison of Min-Max normalization and Z-Score Normalization in the K-nearest neighbor (kNN) Algorithm to Test the Accuracy of Types of Breast Cancer

    The purpose of this study was to examine predictions of breast cancer classified into two types, malignant and benign. The method used in this research is the k-NN algorithm with min-max and Z-score normalization, implemented in the R language. The highest accuracy is achieved at k = 5 and k = 21, with an accuracy rate of 98%, using min-max normalization. For the Z-score method, the highest accuracy is at k = 5 and k = 15, with an accuracy rate of 97%. Thus, min-max normalization is considered better in this study than Z-score normalization. The novelty of this research lies in the comparison between min-max normalization and Z-score normalization in the k-NN algorithm.
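The k-NN classifier at the core of both studies above reduces to distance computation plus a majority vote over the k nearest labels. A minimal sketch (in Python rather than the paper's R, and with toy data):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    # Euclidean distances from the query point to every training point.
    d = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k nearest neighbours, then a majority vote.
    nearest = np.argsort(d)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Toy two-cluster data standing in for benign ('b') / malignant ('m').
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                    [5.0, 5.0], [5.0, 6.0], [6.0, 5.0]])
y_train = np.array(['b', 'b', 'b', 'm', 'm', 'm'])
```

Varying k (5, 15, 21, ...) as the paper does changes how many neighbours vote, trading noise sensitivity against decision-boundary smoothness.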

    Machine learning, unsupervised learning and stain normalization in digital nephropathology

    Chronic kidney disease (CKD) is a serious health challenge, yet the field of study still lacks awareness and funding. Improving the efficiency of diagnosing chronic disease is important; if the disease is discovered quickly, it may be possible to reverse changes. In this project, we explore techniques that can improve clustering of glomeruli images. This thesis evaluates the effects of applying stain normalization to nephropathological data in order to improve unsupervised learning clustering. An unsupervised learning pipeline was implemented to evaluate the effects of using stain normalization techniques with different reference images. The stain normalization techniques implemented are: Reinhard stain normalization, Macenko stain normalization, and structure-preserving color normalization. These methods were evaluated by measuring clustering results from the unsupervised learning pipeline using the Adjusted Rand Index metric. The results indicate that using these techniques increases the agreement between the clustering results and the true labels for the data. Six reference images were used for each stain normalization technique, and the average Adjusted Rand Index score across all reference images increased with all three stain normalization techniques. The best performing method overall was Reinhard stain normalization, which gave both the highest single-experiment and average scores. The other normalization methods each have one score close to zero (unsuccessful clustering), and structure-preserving color normalization would outperform the Reinhard method if this single clustering were more successful.
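The core of Reinhard's method is matching per-channel mean and standard deviation to a reference image. The sketch below applies the transfer per channel directly; the actual Reinhard method performs it in the decorrelated lαβ colour space, which is omitted here for brevity:

```python
import numpy as np

def reinhard_like_transfer(src, ref):
    """Simplified Reinhard-style normalization: match each channel's
    mean and std to those of a reference image. (The original method
    first converts RGB to the lab/l-alpha-beta colour space; this
    sketch skips that conversion.)"""
    src = src.astype(float)
    ref = ref.astype(float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # Standardize the source channel, then rescale to the
        # reference channel's statistics.
        out[..., c] = (src[..., c] - s_mu) / (s_sd + 1e-8) * r_sd + r_mu
    return out
```

Because every slide is mapped toward the same reference statistics, downstream clustering sees colour variation due to tissue structure rather than staining differences, which is the effect the thesis measures with the Adjusted Rand Index.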

    Performance Evaluation of User Independent Score Normalization Based Quadratic Function in Multimodal Biometric

    Normalization is an essential step in a multimodal biometric system, which must reconcile outputs of varying nature and scale from different modalities before employing any fusion technique. This paper proposes a score normalization technique based on a mapping function that increases score separation in the overlap region and reduces the effect of that region on the fusion algorithm. The effect of the proposed normalization technique on recognition performance is examined for different fusion methods. Experiments on three different NIST databases suggest that integrating the proposed normalization technique with the classical simple fusion rules (sum, min and max) and with SVM-based fusion yields significant improvement compared to the other baseline normalization techniques used in this work.
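One way a quadratic mapping can stretch the genuine/impostor overlap region is to expand scores near the overlap centre and compress them near the extremes. The function below is purely illustrative of that idea; it is not the paper's actual mapping function:

```python
import numpy as np

def quadratic_map(scores, center=0.5):
    """Illustrative quadratic mapping on min-max normalized scores
    (NOT the paper's exact function): the slope is 2 at the overlap
    centre and 0 at the extremes, so scores near the genuine/impostor
    overlap are pushed apart before fusion."""
    s = np.asarray(scores, dtype=float)
    d = s - center
    # Piecewise-quadratic, monotone on [0, 1], fixing 0, center and 1.
    return center + np.sign(d) * (2.0 * np.abs(d) - 2.0 * d * d)
```

For example, scores 0.4 and 0.6 (just either side of the overlap centre) map to about 0.32 and 0.68, widening the gap a fusion rule or SVM then operates on.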

    A Monte-Carlo Method For Score Normalization in Automatic Speaker Verification Using Kullback-Leibler Distances

    In this paper, we propose a new score normalization technique for Automatic Speaker Verification (ASV): the D-Norm. The main advantage of this score normalization is that it does not need any additional speech data or an external speaker population, as opposed to the state-of-the-art approaches. The D-Norm is based on the use of Kullback-Leibler (KL) distances in an ASV context. In a first step, we estimate the KL distances with a Monte-Carlo method and show experimentally that they are correlated with the verification scores. In a second step, we use this correlation to implement a score normalization procedure, the D-Norm. We analyse its performance and compare it to that of a conventional normalization, the Z-Norm. The results show that the performance of the D-Norm is comparable to that of the Z-Norm. We conclude by discussing the results obtained and the applications of this work.
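The Monte-Carlo step can be illustrated on a toy case where the KL distance is known in closed form. The sketch below estimates KL(p || q) for two 1-D Gaussians by sampling from p; the paper applies the same idea to speaker models rather than simple Gaussians:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_kl_gauss(mu_p, sd_p, mu_q, sd_q, n=200_000):
    """Monte-Carlo estimate of KL(p || q) for two 1-D Gaussians:
    draw x ~ p and average log p(x) - log q(x). The 1/sqrt(2*pi)
    terms cancel in the difference and are omitted."""
    x = rng.normal(mu_p, sd_p, size=n)
    log_p = -0.5 * ((x - mu_p) / sd_p) ** 2 - np.log(sd_p)
    log_q = -0.5 * ((x - mu_q) / sd_q) ** 2 - np.log(sd_q)
    return np.mean(log_p - log_q)
```

For N(0,1) versus N(1,1) the closed-form KL distance is 0.5, and the estimate converges to it at the usual 1/sqrt(n) Monte-Carlo rate; no extra speech data is needed, only samples drawn from the models themselves, which is the property the D-Norm exploits.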