
    Robust Minutiae Extractor: Integrating Deep Networks and Fingerprint Domain Knowledge

    We propose a fully automatic minutiae extractor, called MinutiaeNet, based on deep neural networks with a compact feature representation for fast comparison of minutiae sets. Specifically, a first network, called CoarseNet, estimates the minutiae score map and minutiae orientation using a convolutional neural network and fingerprint domain knowledge (enhanced image, orientation field, and segmentation map). Subsequently, a second network, called FineNet, refines the candidate minutiae locations based on the score map. We demonstrate the effectiveness of combining fingerprint domain knowledge with deep networks. Experimental results on both latent (NIST SD27) and plain (FVC 2004) public-domain fingerprint datasets provide comprehensive empirical support for the merits of our method. Further, our method finds minutiae sets that are better in terms of precision and recall than the state of the art on these two datasets. Given the lack of annotated fingerprint datasets with minutiae ground truth, the proposed approach to robust minutiae detection will be useful for training network-based fingerprint matching algorithms as well as for evaluating fingerprint individuality at scale. MinutiaeNet is implemented in TensorFlow: https://github.com/luannd/MinutiaeNet. Accepted to the International Conference on Biometrics (ICB 2018).
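    The coarse-to-fine pipeline the abstract describes can be sketched in miniature: a coarse score map is thresholded and non-maximum suppressed into candidate locations, and a fine stage then re-scores each candidate. This is a minimal illustration of the idea, not the authors' implementation; the names nms_candidates, refine, and fine_score_fn are hypothetical.

    ```python
    import numpy as np

    def nms_candidates(score_map, threshold=0.5, radius=8):
        """Coarse stage: pick local maxima of a minutiae score map via
        greedy non-maximum suppression -- take the highest-scoring pixel,
        suppress everything within `radius`, and repeat until all
        remaining scores fall below `threshold`."""
        scores = score_map.astype(float).copy()
        candidates = []
        while True:
            idx = np.unravel_index(np.argmax(scores), scores.shape)
            if scores[idx] < threshold:
                break
            candidates.append(idx)
            r0, c0 = max(0, idx[0] - radius), max(0, idx[1] - radius)
            scores[r0:idx[0] + radius + 1, c0:idx[1] + radius + 1] = -np.inf
        return candidates

    def refine(candidates, fine_score_fn, keep=0.7):
        """Fine stage: re-score each candidate location with a second,
        more expensive scorer and keep only the confident ones."""
        return [c for c in candidates if fine_score_fn(c) >= keep]
    ```

    In MinutiaeNet both stages are learned networks (CoarseNet produces the score map, FineNet plays the role of fine_score_fn on image patches); the sketch only shows how their outputs compose.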

    Biometrics — Developments and Potential

    This article describes the use of biometric technology in forensic science: the development of new methods and tools, the improvement of current forensic biometric applications, and the creation of new ones. The article begins with a definition and a summary of the development of the field. It then describes the data and automated biometric modalities of interest in forensic science, and the forensic applications embedding biometric technology. On this basis, it describes the solutions and limitations of current practice regarding the data, the technology, and the inference models. Finally, it proposes research orientations for improving current forensic biometric applications and suggests ideas for developing new ones.

    Toward An Efficient Fingerprint Classification


    The Proficiency of Experts

    Expert evidence plays a crucial role in civil and criminal litigation. Changes in the rules concerning expert admissibility, following the Supreme Court's Daubert ruling, strengthened judicial review of the reliability and validity of an expert's methods. Judges and scholars, however, have neglected the threshold question for expert evidence: whether a person should be qualified as an expert in the first place. Judges traditionally focus on credentials or experience when qualifying experts, without regard to whether those criteria are good proxies for true expertise. We argue that credentials and experience are often poor proxies for proficiency. Qualification of an expert presumes that the witness can perform in a particular domain with a proficiency that non-experts cannot achieve, yet many experts cannot provide empirical evidence that they do in fact perform at high levels of proficiency. To demonstrate the importance of proficiency data, we collect and analyze two decades of proficiency testing of latent fingerprint examiners. In this important domain, we found surprisingly high rates of false positive identifications for the period 1995 to 2016. These data would qualify the claims of many fingerprint examiners regarding their near infallibility, but unfortunately, judges do not seek out such information. We survey the federal and state case law and show how judges typically accept expert credentials as a proxy for proficiency in lieu of direct proof of proficiency. Indeed, judges often reject parties' attempts to obtain and introduce at trial empirical data on an expert's actual proficiency. We argue that any expert who purports to give falsifiable opinions can be subjected to proficiency testing, and that proficiency testing is the only objective means of assessing the accuracy and reliability of experts who rely on subjective judgments to formulate their opinions (so-called "black-box experts").
    Judges should use proficiency data to make expert qualification decisions when the data is available, should demand proof of proficiency before qualifying black-box experts, and should admit at trial proficiency data for any qualified expert. We seek to revitalize the standard for qualifying experts: expertise should equal proficiency.
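    The article's central measurement, a false positive rate from proficiency test outcomes, is a straightforward computation. The sketch below is illustrative only; the function name and the toy records are ours, not the article's 1995 to 2016 data.

    ```python
    def false_positive_rate(records):
        """records: (decision, same_source) pairs from proficiency tests.
        A false positive is an 'identification' call on a comparison pair
        that in fact came from different sources."""
        different_source = [d for d, same in records if not same]
        if not different_source:
            return 0.0  # no different-source trials; rate is undefined
        false_positives = sum(1 for d in different_source
                              if d == "identification")
        return false_positives / len(different_source)
    ```

    The point of the article is that this number, unlike credentials or years of experience, directly measures the performance that qualification as an expert is supposed to guarantee.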

    Psychometric Analysis of Forensic Examiner Behavior

    Forensic science often involves the comparison of crime-scene evidence to a known-source sample to determine whether the evidence and the reference sample came from the same source. Even as forensic analysis tools become increasingly objective and automated, final source identifications are often left to individual examiners' interpretation of the evidence. Each source identification relies on judgments about the features and quality of the crime-scene evidence that may vary from one examiner to the next. The current approach to characterizing uncertainty in examiners' decision-making has largely centered on error rates aggregated across examiners and identification tasks, without taking these variations in behavior into account. We propose a new approach using item response theory (IRT) and IRT-like models to account for differences among examiners and, additionally, for the varying difficulty among source identification tasks. In particular, we survey some recent advances (Luby, 2019a) in the application of Bayesian psychometric models, including simple Rasch models as well as more elaborate decision tree models, to fingerprint examiner behavior.
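    As a concrete reference point, the simple Rasch model mentioned above scores the probability of a correct response as a logistic function of examiner proficiency minus task difficulty. The symbols theta and b follow common IRT notation; this sketch is ours, not code from the surveyed work.

    ```python
    import math

    def rasch_p_correct(theta, b):
        """Rasch (one-parameter logistic) model: probability that an
        examiner with proficiency `theta` responds correctly to an
        identification task of difficulty `b`."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))
    ```

    A proficient examiner on an easy task (theta well above b) approaches probability 1, while the same examiner on a hard task may be near chance; aggregating a single error rate across examiners and tasks, as criticized above, discards exactly this theta-versus-b structure.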

    A Survey of Fingerprint Classification Part I: Taxonomies on Feature Extraction Methods and Learning Models

    This paper reviews the fingerprint classification literature, looking at the problem from a double perspective. We first deal with feature extraction methods, including the different models considered for singular point detection and for orientation map extraction. Then, we focus on the different learning models considered to build the classifiers used to label new fingerprints. Taxonomies and classifications for the feature extraction, singular point detection, orientation extraction, and learning methods are presented. A critical view of the existing literature has led us to present a discussion of the existing methods and their drawbacks, such as difficulty in their reimplementation, lack of details, or major differences in their evaluation procedures. On this account, an experimental analysis of the most relevant methods is carried out in the second part of this paper, and a new method based on their combination is presented. This work was supported by Research Projects CAB(CDTI), TIN2011-28488, and TIN2013-40765-P, and by Spanish Government grant FPU12/0490.

    Fast computation of the performance evaluation of biometric systems: application to multibiometrics

    The performance evaluation of biometric systems is a crucial step when designing and evaluating such systems. The evaluation process uses the Equal Error Rate (EER) metric proposed by the International Organization for Standardization (ISO/IEC). The EER is a powerful metric that allows biometric systems to be compared and evaluated easily. However, computing the EER is, most of the time, very intensive. In this paper, we propose a fast method that computes an approximated value of the EER. We illustrate the benefit of the proposed method on two applications: the computation of non-parametric confidence intervals and the use of genetic algorithms to compute the parameters of fusion functions. Experimental results show the superiority of the proposed EER approximation method in terms of computation time, and the interest of its use in speeding up the learning of parameters with genetic algorithms. The proposed method opens new perspectives for the development of secure multibiometric systems by speeding up their computation time. Published in Future Generation Computer Systems (2012).
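    For context, the quantity being approximated can be computed exactly by sweeping every observed score as a decision threshold and finding the point where the false accept rate (FAR) and false reject rate (FRR) meet. The sketch below shows that brute-force baseline, whose cost on large score sets motivates the paper's fast approximation; it is not the paper's method, and the function name is hypothetical.

    ```python
    import numpy as np

    def eer_bruteforce(genuine, impostor):
        """Baseline EER: try every observed score as a threshold and
        return the operating point where FAR and FRR are closest.
        This full sweep is what makes naive EER computation expensive."""
        genuine = np.asarray(genuine, dtype=float)
        impostor = np.asarray(impostor, dtype=float)
        best_gap, best_eer = float("inf"), 1.0
        for t in np.sort(np.concatenate([genuine, impostor])):
            far = float(np.mean(impostor >= t))  # impostors wrongly accepted
            frr = float(np.mean(genuine < t))    # genuines wrongly rejected
            if abs(far - frr) < best_gap:
                best_gap, best_eer = abs(far - frr), (far + frr) / 2.0
        return best_eer
    ```

    With perfectly separated score distributions the EER is 0; with overlap it lies strictly between 0 and 0.5, and applications such as confidence intervals or genetic-algorithm fusion tuning must evaluate it many times, which is where a fast approximation pays off.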

    A Framework for Biometric and Interaction Performance Assessment of Automated Border Control Processes

    Automated Border Control (ABC) at airports and land crossings uses automated technology to verify passenger identity claims. Accuracy, interaction stability, low user error, and a harmonized approach to implementation are all required. Two models proposed in this paper establish a global path through ABC processes. The first, the generic model, separately maps the enrolment and verification phases of an ABC scenario. This allows a standardization of the process and an exploration of variances and similarities between configurations across implementations. The second, the identity claim process, decomposes the verification phase of the generic model to give an enhanced resolution of ABC implementations. Harnessing a human-biometric sensor interaction framework allows errors within the system's use to be identified and quantified, attributing them to either system performance or human interaction. Data from a live operational scenario are used to analyze behaviors and establish their effect on system performance. The proposed method will complement already established methods in improving the performance assessment of a system. Through analyzing interactions and possible behavioral scenarios from the live trial, it was observed that 30.96% of interactions included some major user error. Future development using our proposed framework will enable biometric systems that can categorize interaction errors and provide appropriate feedback.