6 research outputs found

    Parallel versus Hierarchical Fusion of Extended Fingerprint Features

    2010 20th International Conference on Pattern Recognition (ICPR 2010), Istanbul, 23-26 August 2010. Extended fingerprint features such as pores, dots and incipient ridges have been increasingly attracting attention from researchers and engineers working on automatic fingerprint recognition systems. A variety of methods have been proposed to combine these features with the traditional minutiae features. This paper comparatively analyses the parallel and hierarchical fusion approaches on a high-resolution fingerprint image dataset. Based on the results, a novel and more effective hierarchical approach is presented for combining minutiae, pores, dots and incipient ridges. Department of Computing. Refereed conference paper.
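    The contrast between the two fusion strategies lends itself to a short illustration. The following is a minimal sketch operating on already-normalized matcher scores; the weighted-sum rule, the weights, and the thresholds are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of parallel vs. hierarchical score-level fusion over
# normalized scores in [0, 1]. Weights and thresholds are illustrative.

def parallel_fusion(minutiae, pores, dots, incipients,
                    weights=(0.6, 0.2, 0.1, 0.1)):
    """Parallel fusion: every matcher runs and all scores are combined
    at once, here with a simple weighted sum."""
    scores = (minutiae, pores, dots, incipients)
    return sum(w * s for w, s in zip(weights, scores))

def hierarchical_fusion(minutiae, pores, dots, incipients,
                        low=0.3, high=0.7):
    """Hierarchical fusion: decide on the minutiae score alone when it is
    clearly high or low, and consult the extended features only in the
    ambiguous middle band."""
    if minutiae >= high or minutiae <= low:
        return minutiae
    extended = (pores, dots, incipients)
    return 0.5 * minutiae + 0.5 * sum(extended) / len(extended)

# Example: a borderline minutiae score is refined by the extended features.
print(parallel_fusion(0.55, 0.8, 0.6, 0.7))      # 0.62
print(hierarchical_fusion(0.55, 0.8, 0.6, 0.7))  # 0.625
```

    The practical appeal of the hierarchical route is that the costlier extended-feature matchers only run for comparisons the minutiae matcher cannot settle on its own.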

    A new algorithm for minutiae extraction and matching in fingerprint

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. A novel algorithm for fingerprint template formation and matching in automatic fingerprint recognition has been developed. At present, the fingerprint is considered the dominant biometric trait among all biometrics due to its wide range of applications in security and access control. Most commercially established systems use a singularity point (SP), or ‘core’ point, for fingerprint indexing and template formation. The efficiency of these systems relies heavily on the detection of the core and on the quality of the image itself. The presence of multiple SPs, or the absence of a ‘core’ on the image, can cause anomalies in the formation of the template and may result in a high False Acceptance Rate (FAR) or False Rejection Rate (FRR). The loss of actual minutiae, or the appearance of new or spurious minutiae in the scanned image, can also contribute to errors in the matching process. A more sophisticated algorithm is therefore necessary in the formation and matching of templates in order to achieve low FAR and FRR and to make identification more accurate. The novel algorithm presented here does not rely on any ‘core’ or SP, which makes the structure invariant with respect to global rotation and translation. Moreover, it does not need the orientation of the minutiae points on which most established algorithms are based. The matching methodology is based on the local features of each minutia point, such as the distances to its nearest neighbours and their internal angle. Using a publicly available fingerprint database, the algorithm has been evaluated and compared with benchmark algorithms. It performed better than the other algorithms and achieved an equal error rate of 3.5%.
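    One plausible reading of such a neighbour-based descriptor is sketched below. The feature set (distances to the two nearest neighbours plus the angle they subtend at the minutia) follows the abstract; the exact parameterization is an assumption. Because the features depend only on relative geometry, they are unaffected by global rotation and translation.

```python
import numpy as np

def local_descriptor(minutiae, i, k=2):
    """minutiae: (n, 2) array of (x, y) positions. Returns the sorted
    distances from minutia i to its k nearest neighbours and the angle
    between the two nearest neighbours as seen from minutia i."""
    p = minutiae[i]
    others = np.delete(minutiae, i, axis=0)          # all other minutiae
    dists = np.linalg.norm(others - p, axis=1)       # distances to them
    order = np.argsort(dists)                        # nearest first
    v1 = others[order[0]] - p
    v2 = others[order[1]] - p
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))     # internal angle
    return dists[order[:k]], angle
```

    Matching would then compare these per-minutia descriptors between the query and the template and count correspondences that agree within distance and angle tolerances.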

    Latent Print Examination and Human Factors: Improving the Practice Through a Systems Approach: The Report of the Expert Working Group on Human Factors in Latent Print Analysis

    Fingerprints have provided a valuable method of personal identification in forensic science and criminal investigations for more than 100 years. Fingerprints left at crime scenes generally are latent prints: unintentional reproductions of the arrangement of ridges on the skin made by the transfer of materials (such as amino acids, proteins, polypeptides, and salts) to a surface. Palms and the soles of feet also have friction ridge skin that can leave latent prints. The examination of a latent print consists of a series of steps involving a comparison of the latent print to a known (or exemplar) print. Courts have accepted latent print evidence for the past century. However, several high-profile cases in the United States and abroad have highlighted the fact that human errors can occur, and litigation and expressions of concern over the evidentiary reliability of latent print examinations and other forensic identification procedures have increased in the last decade. “Human factors” issues can arise in any experience- and judgment-based analytical process such as latent print examination. Inadequate training, extraneous knowledge about the suspects in the case or other matters, poor judgment, health problems, limitations of vision, complex technology, and stress are but a few of the factors that can contribute to errors. A lack of standards or quality control, poor management, insufficient resources, and substandard working conditions constitute other potentially contributing factors.

    A Novel Convolutional Neural Network Pore-Based Fingerprint Recognition System

    Biometrics play an important role in security measures, such as border control and online transactions, relying on traits like uniqueness and permanence. Among the different biometrics, the fingerprint stands out for its enduring nature and individual uniqueness. Fingerprint recognition systems traditionally rely on ridge patterns (Level 1) and minutiae (Level 2). However, these systems suffer from reduced recognition accuracy with partial fingerprints. Level 3 features, such as pores, offer distinctive attributes crucial for individual identification, particularly with high-resolution acquisition devices. Moreover, the use of convolutional neural networks (CNNs) has significantly improved the accuracy of automatic feature extraction for biometric recognition. A CNN-based pore fingerprint recognition system consists of two main modules: a pore detection module, and a pore feature extraction and matching module. The first module generates pixel intensity maps to determine the pore centroids, while the second module extracts relevant features of pores to generate pore representations for matching between query and template fingerprints. However, existing CNN architectures fall short in generating deep-level discriminative features and in computational efficiency. Moreover, available knowledge about pores has not been used optimally to determine pore centroids, and metrics other than the Euclidean distance have not been explored for pore matching. The objective of this research is to develop a CNN-based pore fingerprint recognition scheme capable of providing low-complexity, high-accuracy performance. The CNN architectures of the two modules are designed to generate features at different hierarchical levels in residual frameworks and to fuse them to produce comprehensive sets of discriminative features. Depthwise and depthwise-separable convolution operations are judiciously used to keep the complexity of the networks low. The proposed pore centroid part uses knowledge of the variation of the pore characteristics. In the proposed pore matching scheme, a composite metric, encompassing the Euclidean distance, the angle, and the magnitude difference between the vectors of pore representations, is proposed to measure the similarity between the pores in the query and template images. Extensive experiments are performed on fingerprint images from the benchmark PolyU High-Resolution-Fingerprint dataset to demonstrate the effectiveness of the various strategies developed and used in the proposed scheme for fingerprint recognition.
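    The composite metric can be illustrated with a short sketch. The three terms (Euclidean distance, angle, magnitude difference) follow the abstract; the weighting is an illustrative assumption, not the thesis's actual formulation.

```python
import numpy as np

def composite_distance(u, v, weights=(0.5, 0.3, 0.2)):
    """Composite dissimilarity between two pore representation vectors
    u and v; lower values mean more similar pores. Weights are
    illustrative assumptions."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    euclid = np.linalg.norm(u - v)                 # Euclidean distance
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    cos_a = np.dot(u, v) / (nu * nv + 1e-12)       # guard against zeros
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))   # angle between vectors
    mag_diff = abs(nu - nv)                        # magnitude difference
    w1, w2, w3 = weights
    return w1 * euclid + w2 * angle + w3 * mag_diff
```

    The intuition behind the mixture is that the Euclidean term alone conflates direction and length: two representations pointing the same way but with different energies, or vice versa, score identically, whereas the angle and magnitude terms separate those two failure modes.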

    Classification with class-independent quality information for biometric verification

    Biometric identity verification systems frequently face the challenges of non-controlled conditions of data acquisition. Under such conditions biometric signals may suffer from quality degradation due to extraneous, identity-independent factors. Numerous reports have demonstrated that degradation of biometric signal quality is a frequent cause of significant deterioration in classification performance, even in multiple-classifier, multimodal systems, which systematically outperform their single-classifier counterparts. Seeking to improve the robustness of classifiers to degraded data quality, researchers started to introduce measures of signal quality into the classification process. In the existing approaches, the role of class-independent quality information is governed by intuitive rather than mathematical notions, resulting in a clearly drawn distinction between the single-classifier, multiple-classifier and multimodal approaches. The application of quality measures in a multiple-classifier system has received far more attention, with a dominant intuitive notion that a classifier that has data of higher quality at its disposal ought to be more credible than a classifier that operates on noisy signals. In the case of single-classifier systems, a quality-based selection of models, classifiers or thresholds has been proposed. In both cases, quality measures function as meta-information which supervises, but does not intervene in, the actual classifier or classifiers employed to assign class labels to modality-specific and class-selective features. In this thesis we argue that in fact the very same mechanism governs the use of quality measures in single- and multi-classifier systems alike, and we present a quantitative rather than intuitive perspective on the role of quality measures in classification. We note that for a given set of classification features and fixed marginal distributions, the class separation in the joint feature space changes with the statistical dependencies observed between the individual features. The same effect applies to a feature space in which some of the features are class-independent. Consequently, we demonstrate that the class separation can be improved by augmenting the feature space with class-independent quality information, provided that it exhibits statistical dependencies on the class-selective features. We discuss how to construct classifier-quality measure ensembles in which the dependence between classification scores and the quality features helps decrease classification errors below those obtained using the classification scores alone. We propose Q-stack, a novel theoretical framework for improving classification with class-independent quality measures, based on the concept of classifier stacking. In the Q-stack scheme, a classifier ensemble is used in which the first layer consists of the baseline unimodal classifiers, and the second, stacked classifier operates on features composed of the normalized similarity scores and the relevant quality measures. We present Q-stack as a generalized framework of classification with quality information and we argue that previously proposed methods of classification with quality measures are its special cases. Further in this thesis we address the problem of estimating the probability of single classification errors. We propose to employ the subjective Bayesian interpretation of single-event probability as credence in the correctness of single classification decisions.
We propose to apply the credence-based error predictor as a functional extension of the proposed Q-stack framework, in which a Bayesian stacked classifier is employed. As such, the proposed method of credence estimation and error prediction inherits the benefit of seamless incorporation of quality information into the process of credence estimation. We propose a set of objective evaluation criteria for credence estimates, and we discuss how the proposed method can be applied together with an appropriate repair strategy to reduce classification errors to a desired target level. Finally, we demonstrate the application of Q-stack and its functional extension to single error prediction on the task of biometric identity verification using face and fingerprint modalities, and their multimodal combinations, using a real biometric database. We show that the use of the classification and error prediction methods proposed in this thesis allows for a systematic reduction of error rates below those of the baseline classifiers.
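    A minimal sketch may help fix the Q-stack idea: the stacked classifier is trained on the baseline classifiers' normalized similarity scores augmented with class-independent quality measures, so any statistical dependence between scores and quality can be exploited, and its posterior for the genuine class doubles as a credence estimate. The choice of logistic regression for the stacked classifier is an assumption made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_q_stack(scores, qualities, labels):
    """scores: (n, c) normalized scores from c baseline classifiers;
    qualities: (n, q) quality measures per trial; labels: (n,) 0/1
    (impostor/genuine). Returns the fitted stacked classifier."""
    X = np.hstack([scores, qualities])   # evidence + quality features
    return LogisticRegression().fit(X, labels)

def credence(stacked, score_vec, quality_vec):
    """Posterior probability of a genuine match for a single trial,
    usable as credence in the correctness of an accept decision."""
    x = np.hstack([score_vec, quality_vec]).reshape(1, -1)
    return stacked.predict_proba(x)[0, 1]
```

    Note that the quality features carry no class information on their own; they help only through their dependence on the score features, which is exactly the quantitative mechanism the thesis argues for.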

    CONTACTLESS FINGERPRINT BIOMETRICS: ACQUISITION, PROCESSING, AND PRIVACY PROTECTION

    Biometrics is defined by the International Organization for Standardization (ISO) as “the automated recognition of individuals based on their behavioral and biological characteristics”. Examples of distinctive features evaluated by biometrics, called biometric traits, are behavioral characteristics like the signature, gait, voice, and keystroke, and biological characteristics like the fingerprint, face, iris, retina, hand geometry, palmprint, ear, and DNA. Biometric recognition is the process that establishes the identity of a person, and it can be performed in two modalities: verification and identification. The verification modality evaluates whether the identity declared by an individual corresponds to the acquired biometric data. In the identification modality, by contrast, the recognition application has to determine a person's identity by comparing the acquired biometric data with the information related to a set of individuals. Compared with traditional techniques used to establish the identity of a person, biometrics offers greater confidence that the authenticated individual is not being impersonated by someone else. Traditional techniques, in fact, are based on surrogate representations of identity, like tokens, smart cards, and passwords, which, unlike biometric traits, can easily be stolen or copied. This characteristic has permitted a wide diffusion of biometrics in different scenarios, like physical access control, government applications, forensic applications, and logical access control to data, networks, and services. Most biometric applications, also called biometric systems, require the acquisition process to be performed in a highly controlled and cooperative manner. In order to obtain good-quality biometric samples, the acquisition procedures of these systems require users to perform deliberate actions, assume specific poses, and stay still for a period of time. Limitations on the application scenarios can also be present, for example the necessity of specific lighting and environmental conditions. Examples of biometric technologies that traditionally require constrained acquisitions are based on the face, iris, fingerprint, and hand characteristics. Traditional face recognition systems require users to take a neutral pose and stay still for a period of time. Moreover, the acquisitions are based on a frontal camera and performed in controlled lighting conditions. Iris acquisitions are usually performed at a distance of less than 30 cm from the camera, and require the user to assume a defined pose and stay still watching the camera. Moreover, they use near-infrared illumination techniques, which can be perceived as dangerous for one's health. Fingerprint recognition systems and systems based on hand characteristics require users to touch the sensor surface, applying a proper and uniform pressure. The contact with the sensor is often perceived as unhygienic and/or associated with a police procedure. These constrained acquisition techniques can drastically reduce the usability and social acceptance of biometric technologies, thereby decreasing the number of possible applicative contexts in which biometric systems could be used.
In traditional fingerprint recognition systems, usability and user acceptance are not the only negative aspects of the acquisition procedures: the contact of the finger with the sensor platen introduces a security weakness due to the release of a latent fingerprint on the touched surface; the presence of dirt on the surface of the finger can reduce the accuracy of the recognition process; and different pressures applied to the sensor platen can introduce non-linear distortions and low-contrast regions in the captured samples. Other crucial aspects that influence the social acceptance of biometric systems are associated with privacy and with the risks related to misuses of biometric information acquired, stored, and transmitted by the systems. One of the most important perceived risks is related to the fact that people consider the acquisition of biometric traits to be an exact, permanent record of their activities and behaviors, and the idea that biometric systems can guarantee recognition accuracy equal to 100% is very common. Other perceived risks are the use of the collected biometric data for malicious purposes, for tracing all the activities of individuals, or for operating proscription lists. In order to increase the usability and social acceptance of biometric systems, researchers are studying less-constrained biometric recognition techniques based on different biometric traits, for example face recognition systems in surveillance applications, iris recognition techniques based on images captured at a great distance and on the move, and contactless technologies based on fingerprint and hand characteristics. Other recent studies aim to reduce the real and perceived privacy risks, and consequently to increase the social acceptance of biometric technologies. In this context, many studies regard methods that perform the identity comparison in the encrypted domain in order to prevent possible thefts and misuses of biometric data. The objective of this thesis is to research approaches able to increase the usability and social acceptance of biometric systems by performing less-constrained and highly accurate biometric recognition in a privacy-compliant manner. In particular, approaches designed for high-security contexts are studied in order to improve the existing technologies adopted in border control, investigative, and governmental applications. Approaches based on low-cost hardware configurations are also researched, with the aim of increasing the number of possible applicative scenarios of biometric systems. Privacy compliance is considered a crucial aspect in all the studied applications. The fingerprint is specifically considered in this thesis, since this biometric trait is characterized by high distinctiveness and durability, is the most widely studied trait in the literature, and is adopted in a wide range of applicative contexts. The studied contactless biometric systems are based on one or more CCD cameras, can use two-dimensional or three-dimensional samples, and include privacy protection methods. The main goal of these systems is to perform accurate and privacy-compliant recognition in less-constrained applicative contexts with respect to traditional fingerprint biometric systems. Other important goals are the use of a wider fingerprint area with respect to traditional techniques, compatibility with existing databases, usability, social acceptance, and scalability.
The main contribution of this thesis consists in the realization of novel biometric systems based on contactless fingerprint acquisition. In particular, different techniques for every step of the recognition process, based on two-dimensional and three-dimensional samples, have been researched. Novel techniques for the privacy protection of fingerprint data have also been designed. The studied approaches are multidisciplinary, since their design and realization involved optical acquisition systems, multiple-view geometry, image processing, pattern recognition, computational intelligence, statistics, and cryptography. The implemented biometric systems and algorithms have been applied to different biometric datasets describing a heterogeneous set of applicative scenarios. The results proved the feasibility of the studied approaches. In particular, the realized contactless biometric systems have been compared with traditional fingerprint recognition systems, obtaining positive results in terms of accuracy, usability, user acceptability, scalability, and security. Moreover, the developed techniques for the privacy protection of fingerprint biometric systems showed satisfactory performance in terms of security, accuracy, speed, and memory usage.
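    As a side note on the two recognition modalities described in this abstract, the following sketch shows the 1:1 verification and 1:N identification comparisons in schematic form; the matcher function and the threshold are hypothetical placeholders, not components of the thesis's systems.

```python
def verify(matcher, probe, claimed_template, threshold=0.7):
    """1:1 verification: accept if the probe matches the template
    enrolled for the claimed identity."""
    return matcher(probe, claimed_template) >= threshold

def identify(matcher, probe, gallery):
    """1:N identification: return the enrolled identity whose template
    best matches the probe. gallery maps identity -> template."""
    return max(gallery, key=lambda ident: matcher(probe, gallery[ident]))
```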