
    Minutiae-based Fingerprint Extraction and Recognition


    Biometrics

    Biometrics comprises methods for uniquely recognizing humans based on one or more intrinsic physical or behavioral traits. In computer science in particular, biometrics is used as a form of identity access management and access control; it is also used to identify individuals in groups that are under surveillance. The book consists of 13 chapters, each focusing on a particular aspect of the problem, and the chapters are divided into three sections: physical biometrics, behavioral biometrics and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and identity verification from physiological, behavioral and other points of view. It aims to publish new insights into current innovations in computer systems and technology for biometrics development and its applications. The book was reviewed by the editor, Dr. Jucheng Yang, and by guest editors including Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, who also made significant contributions to the book.

    Fingerprint Recognition: A Histogram Analysis Based Fuzzy C-Means Multilevel Structural Approach

    In order to fight identity fraud, the use of a reliable personal identifier has become a necessity. Fingerprints are considered one of the best biometric measurements and are used as a universal personal identifier. Recognizing personal identity from fingerprints involves two main phases: 1) extraction of suitable fingerprint features, and 2) fingerprint matching, which uses the extracted features to find the correspondence and similarity between fingerprint images. Using global features in minutiae-based fingerprint recognition schemes enhances their recognition capability, but at the expense of substantially increased complexity. Moreover, the recognition accuracy of most fingerprint recognition schemes that rely on some form of crisp clustering of the fingerprint features is adversely affected by the problems associated with the behavioral and anatomical characteristics of fingerprints. The objective of this research is to develop efficient and cost-effective fingerprint recognition techniques that can meet the challenges arising from using both the local and global features of fingerprints, and that can effectively deal with the problems resulting from crisp clustering of the fingerprint features. To this end, the structural information of the local and global features of fingerprints is used for their decomposition, representation and matching in a multilevel hierarchical framework, and the problems associated with crisp clustering are addressed by incorporating ideas from fuzzy logic into the various stages of the proposed fingerprint recognition scheme.

In the first part of this thesis, a novel low-complexity multilevel structural scheme for fingerprint recognition (MSFR) is proposed by first decomposing fingerprint images into regions based on a crisp partitioning of some global features of the fingerprints. Multilevel feature vectors representing the structural information of the fingerprints are then formulated by employing both the global and local features, and a fast multilevel matching algorithm using this representation is devised.

Inspired by the ability of fuzzy clustering techniques to deal more effectively with natural patterns, the second part of the thesis proposes a new fuzzy clustering technique that can handle the partitioning problem posed by the behavioral and anatomical characteristics of fingerprints, and uses it to develop a fuzzy multilevel structural fingerprint recognition scheme. First, a histogram analysis fuzzy c-means (HA-FCM) clustering technique is devised for partitioning the fingerprints. The parameters of this partitioning technique, i.e., the number of clusters and the set of initial cluster centers, are determined automatically from the histogram of the fingerprint orientation field. The HA-FCM partitioning scheme is then developed further into an enhanced HA-FCM (EHA-FCM) algorithm, in which the smoothness of the fingerprint partitioning is improved through a regularization of the fingerprint orientation field, and the computational complexity is reduced by decreasing the number of operations and increasing the convergence rate of the underlying iterative process of the HA-FCM technique. Finally, a new fuzzy multilevel structural fingerprint recognition scheme (FMSFR), based on the EHA-FCM partitioning scheme and the basic ideas used in the development of the MSFR scheme, is proposed.
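To make the partitioning idea concrete, the following is a minimal Python sketch in the spirit of HA-FCM: peaks of the orientation-field histogram supply the number of clusters and the initial cluster centers, which then seed a standard fuzzy c-means iteration. The function names, parameter values and synthetic data are illustrative assumptions, not the thesis's actual implementation.

    import numpy as np
    from scipy.signal import find_peaks

    def histogram_seeded_centers(orientations, bins=36, min_prominence=0.02):
        # Histogram of the orientation field over [0, pi); its peaks give the
        # number of clusters and the initial cluster centers.
        hist, edges = np.histogram(orientations, bins=bins, range=(0.0, np.pi), density=True)
        peaks, _ = find_peaks(hist, prominence=min_prominence)
        centers = 0.5 * (edges[peaks] + edges[peaks + 1])
        return centers

    def fuzzy_c_means_1d(x, centers, m=2.0, n_iter=50, eps=1e-6):
        # Plain 1-D fuzzy c-means on orientation values, seeded with the histogram peaks.
        c = np.asarray(centers, dtype=float).copy()
        u = None
        for _ in range(n_iter):
            d = np.abs(x[:, None] - c[None, :]) + eps                       # point-to-center distances
            u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
            c_new = (u ** m).T @ x / np.sum(u ** m, axis=0)                 # membership-weighted centers
            if np.max(np.abs(c_new - c)) < eps:
                c = c_new
                break
            c = c_new
        return c, u

    # Synthetic orientation values (radians) standing in for a fingerprint orientation field.
    rng = np.random.default_rng(0)
    orientations = np.concatenate([rng.normal(0.5, 0.05, 500), rng.normal(2.0, 0.05, 500)]) % np.pi
    init = histogram_seeded_centers(orientations)
    centers, memberships = fuzzy_c_means_1d(orientations, init)

The EHA-FCM variant described above additionally regularizes the orientation field before the histogram analysis and accelerates the iteration; those refinements are omitted from this sketch.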
Extensive experiments are conducted throughout this thesis using a number of challenging benchmark databases selected from the FVC2002, FVC2004 and FVC2006 competitions, which contain a wide variety of challenges for fingerprint recognition. Simulation results demonstrate not only the effectiveness of the proposed techniques and schemes but also their superiority over some of the state-of-the-art techniques in terms of recognition accuracy and computational complexity.
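For context, recognition accuracy on FVC-style benchmarks is commonly summarized by error rates such as the equal error rate (EER), the operating point where the false accept and false reject rates coincide. The short sketch below, with synthetic matcher scores standing in for real genuine and impostor comparisons, shows one straightforward way such a figure can be computed; it is not the thesis's exact evaluation protocol.

    import numpy as np

    def equal_error_rate(genuine_scores, impostor_scores):
        # Scan thresholds over the observed scores and return the point where
        # the false accept rate (FAR) and false reject rate (FRR) are closest.
        thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
        frr = np.array([np.mean(genuine_scores < t) for t in thresholds])
        far = np.array([np.mean(impostor_scores >= t) for t in thresholds])
        idx = np.argmin(np.abs(far - frr))
        return 0.5 * (far[idx] + frr[idx])

    # Synthetic similarity scores: genuine comparisons score higher on average.
    rng = np.random.default_rng(1)
    eer = equal_error_rate(rng.normal(0.8, 0.10, 1000), rng.normal(0.4, 0.15, 1000))
    print(f"EER ~ {eer:.3%}")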

    Classification with class-independent quality information for biometric verification

    Biometric identity verification systems frequently face the challenge of non-controlled data acquisition conditions. Under such conditions, biometric signals may suffer from quality degradation due to extraneous, identity-independent factors. Numerous reports have demonstrated that degraded biometric signal quality is a frequent cause of significant deterioration in classification performance, even in multiple-classifier, multimodal systems, which systematically outperform their single-classifier counterparts. Seeking to improve the robustness of classifiers to degraded data quality, researchers have started to introduce measures of signal quality into the classification process. In the existing approaches, the role of class-independent quality information is governed by intuitive rather than mathematical notions, resulting in a clearly drawn distinction between the single-classifier, multiple-classifier and multimodal approaches. The application of quality measures in multiple-classifier systems has received far more attention, with the dominant intuitive notion being that a classifier that has higher-quality data at its disposal ought to be more credible than a classifier that operates on noisy signals. For single-classifier systems, quality-based selection of models, classifiers or thresholds has been proposed. In both cases, quality measures function as meta-information that supervises, but does not intervene in, the actual classifier or classifiers employed to assign class labels to modality-specific and class-selective features.

In this thesis we argue that in fact the very same mechanism governs the use of quality measures in single- and multiple-classifier systems alike, and we present a quantitative rather than intuitive perspective on the role of quality measures in classification. We observe that, for a given set of classification features with fixed marginal distributions, the class separation in the joint feature space changes with the statistical dependencies observed between the individual features. The same effect applies to a feature space in which some of the features are class-independent. Consequently, we demonstrate that the class separation can be improved by augmenting the feature space with class-independent quality information, provided that it exhibits statistical dependencies with the class-selective features. We discuss how to construct classifier-quality measure ensembles in which the dependence between classification scores and the quality features helps decrease classification errors below those obtained using the classification scores alone. We propose Q-stack, a novel theoretical framework for improving classification with class-independent quality measures, based on the concept of classifier stacking. In the Q-stack scheme, a classifier ensemble is used in which the first classifier layer consists of the baseline unimodal classifiers, and the second, stacked classifier operates on features composed of the normalized similarity scores and the relevant quality measures. We present Q-stack as a generalized framework for classification with quality information, and we argue that previously proposed methods of classification with quality measures are its special cases. Further in this thesis we address the problem of estimating the probability of single classification errors. We propose to employ the subjective Bayesian interpretation of single-event probability as credence in the correctness of single classification decisions.
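The core of the Q-stack idea can be illustrated with a small, self-contained sketch: a baseline classifier produces similarity scores, and a second-level (stacked) classifier is trained on those scores augmented with a quality measure that is statistically dependent on them. The synthetic data, the choice of logistic regression as the stacked classifier, and all variable names below are illustrative assumptions rather than the thesis's implementation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 4000
    labels = rng.integers(0, 2, n)                       # 1 = genuine claim, 0 = impostor claim
    quality = rng.uniform(0.0, 1.0, n)                   # class-independent quality measure

    # Baseline similarity scores whose impostor distribution drifts upward as
    # quality degrades, so the optimal decision boundary depends on quality.
    genuine_scores = rng.normal(1.0, 0.3, n)
    impostor_scores = rng.normal(0.8 * (1.0 - quality), 0.3)
    scores = np.where(labels == 1, genuine_scores, impostor_scores)

    # Baseline: a classifier that sees the similarity score alone.
    baseline = LogisticRegression().fit(scores.reshape(-1, 1), labels)
    baseline_err = np.mean(baseline.predict(scores.reshape(-1, 1)) != labels)

    # Q-stack style: the stacked classifier sees the score and the quality measure jointly.
    X = np.column_stack([scores, quality])
    stacked = LogisticRegression().fit(X, labels)
    stacked_err = np.mean(stacked.predict(X) != labels)

    print(f"score-only error: {baseline_err:.3f}  score+quality error: {stacked_err:.3f}")

The point of the sketch is only that a dependence between classification scores and a class-independent quality measure can be exploited by the second-level classifier; the quality measures and stacked classifiers studied in the thesis are richer than this toy example.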
We propose to apply the credence-based error predictor as a functional extension of the proposed Q-stack framework, in which a Bayesian stacked classifier is employed. As such, the proposed method of credence estimation and error prediction inherits the benefit of seamlessly incorporating quality information in the process of credence estimation. We propose a set of objective evaluation criteria for credence estimates, and we discuss how the proposed method can be applied together with an appropriate repair strategy to reduce classification errors to a desired target level. Finally, we demonstrate the application of Q-stack and its functional extension to single-error prediction on the task of biometric identity verification using face and fingerprint modalities, and their multimodal combinations, on a real biometric database. We show that the use of the classification and error prediction methods proposed in this thesis allows for a systematic reduction of error rates below those of the baseline classifiers.
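One hedged way to picture the credence-and-repair idea is to take a probabilistic stacked classifier's posterior for the label it assigns as confidence in that single decision, and to route low-credence decisions to a repair strategy such as re-acquisition of the sample. The helper below, including its name and the threshold value, is a hypothetical illustration of that mechanism, not the thesis's Bayesian error predictor.

    import numpy as np

    def credence_filter(posteriors, target=0.9):
        # Decisions, per-decision credence (posterior of the chosen class), and a
        # mask of decisions meeting the target credence; the remainder would be
        # handed to a repair strategy such as re-acquisition.
        decisions = posteriors.argmax(axis=1)
        credence = posteriors.max(axis=1)
        return decisions, credence, credence >= target

Applied to the previous sketch, credence_filter(stacked.predict_proba(X)) would keep only the decisions whose estimated probability of correctness reaches the target, which is the sense in which an appropriate repair strategy can push the error on accepted decisions toward a desired level.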