
    Bayesian analysis of fingerprint, face and signature evidences with automatic biometric systems

    This is the author’s version of a work accepted for publication in Forensic Science International; the definitive version was published in Forensic Science International, Vol. 155, Issue 2 (20 December 2005), DOI: 10.1016/j.forsciint.2004.11.007. The Bayesian approach provides a unified and logical framework for the analysis of evidence and for reporting results in the form of likelihood ratios (LR) from the forensic laboratory to the court. In this contribution we clarify how the biometric scientist or laboratory can adapt conventional biometric systems or technologies to work according to this Bayesian approach. Forensic systems providing their results in the form of LRs will be assessed through Tippett plots, which give a clear representation of LR-based performance both for targets (the suspect is the author/source of the test pattern) and for non-targets. However, the procedures for computing LR values, especially with biometric evidence, are still an open issue. Reliable estimation techniques with good generalization properties are required for estimating the between- and within-source variabilities of the test pattern, as are variance-restriction techniques in the within-source density estimation to account for the variability of the source over time. Fingerprint, face and on-line signature recognition systems will be adapted to work according to this Bayesian approach, showing both the range of likelihood ratios obtained in each application and the adequacy of these biometric techniques for daily forensic work. This work has been partially supported under MCYT Projects TIC2000-1683, TIC2000-1669, TIC2003-09068, TIC2003-08382 and the Spanish Police Force "Guardia Civil" Research Program.
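
    The score-to-LR transformation described above boils down to a ratio of two score densities: the within-source (same-source) distribution and the between-source (different-source) distribution, evaluated at the score of the questioned comparison. Below is a minimal sketch, assuming Gaussian models fitted to calibration scores; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def score_to_lr(score, same_source_scores, diff_source_scores):
    """Minimal sketch of a score-to-LR transformation.

    Fits one Gaussian to same-source (within-source) calibration scores
    and one to different-source (between-source) scores, then returns
    the ratio of the two densities at the questioned score.
    """
    mu_ss, sd_ss = np.mean(same_source_scores), np.std(same_source_scores, ddof=1)
    mu_ds, sd_ds = np.mean(diff_source_scores), np.std(diff_source_scores, ddof=1)
    num = norm.pdf(score, mu_ss, sd_ss)  # p(score | same source)
    den = norm.pdf(score, mu_ds, sd_ds)  # p(score | different sources)
    return num / den

# Illustrative use with synthetic calibration scores
rng = np.random.default_rng(0)
ss = rng.normal(3.0, 1.0, 500)   # hypothetical same-source scores
ds = rng.normal(0.0, 1.0, 5000)  # hypothetical different-source scores
print(score_to_lr(2.5, ss, ds))  # LR > 1 supports the same-source hypothesis
```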

    Predictive biometrics: A review and analysis of predicting personal characteristics from biometric data

    Interest in the exploitation of soft biometrics information has continued to develop over the last decade or so. In comparison with traditional biometrics, which focuses principally on person identification, the idea of soft biometrics processing is to study the utilisation of more general information regarding a system user, which is not necessarily unique. There are increasing indications that this type of data will have great value in providing complementary information for user authentication. However, the authors have also seen a growing interest in broadening the predictive capabilities of biometric data, encompassing both easily definable characteristics such as subject age and, most recently, 'higher level' characteristics such as emotional or mental states. This study presents a selective review of the predictive capabilities, in the widest sense, of biometric data processing, providing an analysis of the key issues still to be adequately addressed if this concept of predictive biometrics is to be fully exploited in the future.

    A response to “Likelihood ratio as weight of evidence: a closer look” by Lund and Iyer

    Recently, Lund and Iyer (L&I) raised an argument regarding the use of likelihood ratios in court. In our view, their argument is based on a lack of understanding of the paradigm. L&I argue that the decision maker should not accept the expert’s likelihood ratio without further consideration. This is agreed by all parties. In normal practice, there is often considerable and proper exploration in court of the basis for any probabilistic statement. We conclude that L&I argue against a practice that does not exist and which no one advocates. Further, we conclude that the most informative summary of evidential weight is the likelihood ratio. We state that this is the summary that should be presented to a court in every scientific assessment of evidential weight, with supporting information about how it was constructed and on what it was based.

    Validation of likelihood ratio methods for forensic evidence evaluation handling multimodal score distributions

    This paper is a postprint of a paper submitted to and accepted for publication in IET Biometrics and is subject to Institution of Engineering and Technology Copyright; the copy of record is available at the IET Digital Library. This article presents a method for computing Likelihood Ratios (LR) from multimodal score distributions produced by an Automated Fingerprint Identification System (AFIS) feature extraction and comparison algorithm. The AFIS algorithm used to compare fingermarks and fingerprints was primarily developed for forensic investigation rather than for forensic evaluation. The computation of the scores is speed-optimized and performed in three different stages, each of which outputs discriminating scores of different magnitudes, together forming a multimodal score distribution. It is worth mentioning that each fingermark-to-fingerprint comparison performed by the AFIS algorithm results in one single similarity score (i.e. one score per comparison). The multimodal nature of the similarity scores can be typical for other biometric systems, and the method proposed in this work can be applied in similar cases where a multimodal nature in the similarity scores is observed. In this work we address some of the problems related to modelling such distributions and propose solutions to issues such as data sparsity, dataset shift and over-fitting. These issues affect the methods traditionally used when a multimodal nature in the similarity scores is observed (a Kernel Density Function (KDF) was used to illustrate these issues in our case). Furthermore, the proposed method produces interpretable results in situations where the similarity scores are sparse and traditional approaches lead to erroneous LRs of huge magnitudes. The research was conducted in the scope of the BBfor2 – Marie Curie Initial Training Network (FP7-PEOPLE-ITN-2008 under Grant Agreement 238803) at the Netherlands Forensic Institute in cooperation with the ATVS Biometric Recognition Group at the Universidad Autonoma de Madrid and the National Police Services Agency of the Netherlands.
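
    One way to handle multimodal score distributions of the kind described above, shown here purely as an illustration rather than as the paper's actual method, is to replace a single kernel density estimate with a small Gaussian mixture per hypothesis and form the LR as the ratio of the two mixture densities; the names and synthetic scores below are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_mixture(scores, n_components=3):
    """Fit a small Gaussian mixture to a set of 1-D comparison scores."""
    gm = GaussianMixture(n_components=n_components, random_state=0)
    gm.fit(np.asarray(scores).reshape(-1, 1))
    return gm

def multimodal_lr(score, gm_same, gm_diff):
    """LR as the ratio of the two mixture densities at the questioned score."""
    x = np.array([[score]])
    log_num = gm_same.score_samples(x)[0]  # log p(score | same source)
    log_den = gm_diff.score_samples(x)[0]  # log p(score | different sources)
    return np.exp(log_num - log_den)

# Synthetic multimodal scores standing in for scores of different magnitudes
rng = np.random.default_rng(1)
same = np.concatenate([rng.normal(m, 0.5, 300) for m in (2.0, 5.0, 9.0)])
diff = np.concatenate([rng.normal(m, 0.5, 3000) for m in (0.0, 1.5, 3.0)])
print(multimodal_lr(6.0, fit_mixture(same), fit_mixture(diff)))
```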

    Automated dental identification: A micro-macro decision-making approach

    Identification of deceased individuals based on dental characteristics is receiving increased attention, especially with the large volume of victims encountered in mass disasters. In this work we consider three important problems in automated dental identification beyond the basic approach of tooth-to-tooth matching.

    The first problem is the automatic classification of teeth into incisors, canines, premolars and molars as part of creating a data structure that guides tooth-to-tooth matching, thus avoiding illogical comparisons that inefficiently consume the limited computational resources and may also mislead the decision-making. We tackle this problem using principal component analysis and string matching techniques. We reconstruct the segmented teeth using the eigenvectors of the image subspaces of the four teeth classes, and then assign each tooth to the class that achieves the least energy discrepancy between the novel tooth and its approximation. We exploit teeth-neighborhood rules in validating teeth classes and hence assign each tooth a number corresponding to its location in a dental chart. Our approach achieves 82% teeth-labeling accuracy on a large test dataset of bitewing films.

    Because dental radiographic films capture projections of distinct teeth, and often multiple views of each distinct tooth, in the second problem we look for a scheme that exploits teeth multiplicity to achieve more reliable match decisions when we compare the dental records of a subject and a candidate match. Hence, we propose a hierarchical fusion scheme that utilizes both aspects of teeth multiplicity for improving teeth-level (micro) and case-level (macro) decision-making. We achieve a genuine accept rate in excess of 85%.

    In the third problem we study the performance limits of dental identification due to the capabilities of the features. We consider two types of features used in dental identification, namely teeth contours and appearance features. We propose a methodology for determining the number of degrees of freedom possessed by a feature set, as a figure of merit, based on modeling joint distributions using copulas under less stringent assumptions on the dependence between feature dimensions. We also offer workable approximations of this approach.
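
    The eigenspace classification step in the first problem (reconstruct each segmented tooth in each class's PCA subspace and pick the class with the smallest reconstruction error) can be sketched as follows; this is an illustrative outline under assumed array shapes and names, not the authors' implementation.

```python
import numpy as np

def fit_class_subspace(images, n_components=20):
    """Build a PCA subspace (mean + top eigenvectors) from training images
    of one tooth class; `images` is an (n_samples, n_pixels) array."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Right singular vectors give the principal directions of the class subspace
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(image, mean, components):
    """Energy discrepancy between a tooth image and its projection onto a
    class subspace; the class with the smallest error is selected."""
    centered = image - mean
    coeffs = components @ centered
    approx = components.T @ coeffs
    return np.linalg.norm(centered - approx)

def classify_tooth(image, subspaces):
    """`subspaces` maps class name -> (mean, components), e.g. four entries
    for incisor, canine, premolar and molar."""
    errors = {cls: reconstruction_error(image, m, c)
              for cls, (m, c) in subspaces.items()}
    return min(errors, key=errors.get)
```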

    Information-theoretical comparison of evidence evaluation methods for score-based biometric systems

    Paper presented at the Seventh International Conference on Forensic Inference and Statistics, The University of Lausanne, Switzerland, August 2008. Biometric systems are a powerful tool in many forensic disciplines for aiding scientists in evaluating the weight of the evidence. However, rising requirements of admissibility in forensic science demand scientific methods for testing the accuracy of the forensic evidence evaluation process. In this work we analyze and compare several evidence evaluation methods for score-based biometric systems. For all of them, the score given by the system is transformed into a likelihood ratio (LR) which expresses the weight of the evidence. The accuracy of each LR computation method is assessed by classical Tippett plots. We also propose measuring accuracy in terms of the average information given by the evidence evaluation process, by means of Empirical Cross-Entropy (ECE) plots. Preliminary results are presented using a voice biometric system and the NIST SRE 2006 experimental protocol.
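
    Empirical Cross-Entropy averages, over target and non-target comparisons, the logarithmic cost of the posterior probabilities implied by the reported LRs at a given prior. The sketch below computes single points of an ECE curve from sets of target and non-target LRs; the example LR values are hypothetical.

```python
import numpy as np

def empirical_cross_entropy(target_lrs, nontarget_lrs, prior):
    """One point of an ECE plot: average information loss (in bits) of the
    posteriors implied by the LRs at prior probability P(H_p) = prior."""
    target_lrs = np.asarray(target_lrs, dtype=float)
    nontarget_lrs = np.asarray(nontarget_lrs, dtype=float)
    prior_odds = prior / (1.0 - prior)
    # Penalty for target comparisons whose posterior odds (LR * prior odds) are low
    ece_tar = np.mean(np.log2(1.0 + 1.0 / (target_lrs * prior_odds)))
    # Penalty for non-target comparisons whose posterior odds are high
    ece_non = np.mean(np.log2(1.0 + nontarget_lrs * prior_odds))
    return prior * ece_tar + (1.0 - prior) * ece_non

# Sweep priors to trace an ECE curve for a hypothetical set of LRs
priors = np.linspace(0.01, 0.99, 50)
tar_lrs = np.array([20.0, 8.0, 150.0, 3.0])
non_lrs = np.array([0.05, 0.3, 0.01, 0.8])
curve = [empirical_cross_entropy(tar_lrs, non_lrs, p) for p in priors]
print(min(curve), max(curve))
```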

    Classification of dental x-ray images

    Forensic dentistry is concerned with identifying people based on their dental records. Forensic specialists have a large number of cases to investigate and hence it has become important to automate forensic identification systems. The radiographs acquired after a person is deceased are called post-mortem (PM) radiographs, and the radiographs acquired while the person is alive are called ante-mortem (AM) radiographs. Dental biometrics automatically analyzes dental radiographs to identify deceased individuals. While ante-mortem (AM) identification is usually possible through comparison of many biometric identifiers, post-mortem (PM) identification is impossible using behavioral biometrics (e.g. speech, gait). Moreover, under severe circumstances, such as those encountered in mass disasters (e.g. airplane crashes and natural disasters such as a tsunami), most physiological biometrics may not be employed for identification because the soft tissues of the body decay to unidentifiable states. Under such circumstances, the best candidates for post-mortem biometric identification are the dental features, because of their survivability and diversity.

    In this work, I present two techniques to classify periapical images as maxilla (upper jaw) or mandible (lower jaw) images, and a third technique to classify dental bitewing images as horizontally flipped/rotated or un-flipped/un-rotated. In the first technique I present an algorithm to classify whether a given dental periapical image is of a maxilla (upper jaw) or a mandible (lower jaw) using texture analysis of the jaw bone. While the bone-analysis method is manual, in the second technique I propose an automated approach for the identification of dental periapical images using a crown curve detection algorithm. The third proposed algorithm works in an automated manner on a large database of dental bitewing images. Each dental bitewing image in the database can be classified as a horizontally flipped or un-flipped image in a time-efficient manner.
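
    As a rough illustration of the kind of texture-based maxilla/mandible decision the first technique describes, the sketch below computes simple intensity and gradient statistics of a jaw-bone region and assigns the nearest class centroid; the features and the nearest-centroid rule are assumptions for illustration, not the thesis's actual algorithm.

```python
import numpy as np

def texture_features(region):
    """Simple texture descriptors of a jaw-bone region (2-D grayscale array):
    mean intensity, intensity variance, and mean gradient magnitude."""
    gy, gx = np.gradient(region.astype(float))
    grad_mag = np.hypot(gx, gy)
    return np.array([region.mean(), region.var(), grad_mag.mean()])

def fit_centroids(maxilla_regions, mandible_regions):
    """Per-class mean feature vectors from labeled training regions."""
    max_feats = np.array([texture_features(r) for r in maxilla_regions])
    man_feats = np.array([texture_features(r) for r in mandible_regions])
    return {"maxilla": max_feats.mean(axis=0), "mandible": man_feats.mean(axis=0)}

def classify_jaw(region, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    f = texture_features(region)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```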