
    Fast computation of the performance evaluation of biometric systems: application to multibiometric

    The performance evaluation of biometric systems is a crucial step when designing and evaluating such systems. The evaluation process uses the Equal Error Rate (EER) metric proposed by the International Organization for Standardization (ISO/IEC). The EER is a powerful metric that makes it easy to compare and evaluate biometric systems; however, its computation is often very time-consuming. In this paper, we propose a fast method that computes an approximated value of the EER. We illustrate the benefit of the proposed method on two applications: the computation of non-parametric confidence intervals and the use of genetic algorithms to learn the parameters of fusion functions. Experimental results show the superiority of the proposed EER approximation method in terms of computing time, and the interest of its use to reduce the parameter-learning time with genetic algorithms. The proposed method opens new perspectives for the development of secure multibiometric systems by speeding up their computation time.
    Comment: Future Generation Computer Systems (2012)
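
    The abstract does not reproduce the authors' fast approximation; as a point of reference, the following is a minimal sketch of the conventional EER computation from sets of genuine and impostor comparison scores, i.e. the quantity the proposed method approximates. The function name, the synthetic score distributions, and the NumPy-based implementation are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def eer(genuine, impostor):
        """Estimate the Equal Error Rate from two sets of comparison scores.

        genuine  : similarity scores from matching (same-person) comparisons
        impostor : similarity scores from non-matching comparisons
        Returns the EER estimate and the threshold where FAR and FRR cross.
        """
        genuine = np.sort(np.asarray(genuine, dtype=float))
        impostor = np.sort(np.asarray(impostor, dtype=float))
        thresholds = np.unique(np.concatenate([genuine, impostor]))

        # False Reject Rate: fraction of genuine scores below each threshold.
        frr = np.searchsorted(genuine, thresholds, side="left") / genuine.size
        # False Accept Rate: fraction of impostor scores at or above each threshold.
        far = 1.0 - np.searchsorted(impostor, thresholds, side="left") / impostor.size

        # The EER is read off where |FAR - FRR| is smallest.
        i = np.argmin(np.abs(far - frr))
        return (far[i] + frr[i]) / 2.0, thresholds[i]

    # Illustrative usage with synthetic scores (higher = more similar).
    rng = np.random.default_rng(0)
    gen = rng.normal(0.7, 0.1, 5_000)
    imp = rng.normal(0.4, 0.1, 50_000)
    print(eer(gen, imp))
    ```

    Even this sorting-based version touches every score, which is why repeating it inside confidence-interval resampling or a genetic-algorithm fitness loop becomes expensive and why a cheap approximation is attractive.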

    The Role of Test Administrator and Error

    This study created a framework to quantify and mitigate the amount of error that test administrators introduced to a biometric system during data collection. Prior research has focused only on the subject and the errors they make when interacting with biometric systems, while ignoring the test administrator. This study used a longitudinal data collection, focusing on demographics in government identification forms such as driver's licenses, fingerprint metadata such as moisture and skin temperature, and face image compliance with an ISO best-practice standard. Error was quantified from the first visit, and baseline test administrator error rates were measured. Additional training, software development, and error mitigation techniques were introduced before a second visit, in which the error rates were measured again. The new system greatly reduced the amount of test administrator error and improved the integrity of the data collected. Findings from this study show how to measure test administrator error and how to reduce it in future data collections.
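
    As an illustration of how per-visit administrator error rates of this kind could be tabulated, the sketch below assumes a hypothetical record layout with one flag per error source (demographics, fingerprint metadata, face-image compliance); none of the field names comes from the study itself.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Record:
        """One captured record, flagged where the test administrator made an error."""
        visit: int                 # 1 = baseline collection, 2 = after training/mitigation
        demographic_error: bool    # e.g. an ID-document field transcribed incorrectly
        metadata_error: bool       # e.g. missing moisture or skin-temperature reading
        face_noncompliant: bool    # face image fails the ISO best-practice checks

    def error_rate(records, visit):
        """Fraction of a visit's records containing at least one administrator error."""
        subset = [r for r in records if r.visit == visit]
        flagged = sum(r.demographic_error or r.metadata_error or r.face_noncompliant
                      for r in subset)
        return flagged / len(subset) if subset else 0.0

    # Comparing error_rate(records, 1) with error_rate(records, 2) quantifies how much
    # the added training and software checks reduced administrator-introduced error.
    ```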

    A framework for forensic face recognition based on recognition performance calibrated for the quality of image pairs

    Recently, it has been shown that the performance of a face recognition system depends on the quality of both face images participating in the recognition process: the reference image and the test image. In the context of forensic face recognition, this observation has two implications: a) the quality of the trace (extracted from CCTV footage) constrains the performance achievable with a particular face recognition system; b) the quality of the suspect reference set (against which the trace is matched) can be judiciously chosen to approach optimal recognition performance under that constraint. Motivated by these recent findings, we propose a framework for forensic face recognition that is based on calibrating the recognition performance for the quality of pairs of images. The application of this framework to several mock-up forensic cases, created entirely from the MultiPIE dataset, shows that optimal recognition performance under such a constraint can be achieved by matching the quality (pose, illumination, and imaging device) of the reference set to that of the trace. This improvement in recognition performance helps reduce the rate of misleading interpretation of the evidence.
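
    The abstract does not spell out the calibration step. One common way to realise quality-conditioned calibration in forensic evaluation is to map raw comparison scores to log-likelihood ratios with logistic regression, fitted separately for each quality condition (pose, illumination, imaging device). The sketch below is a hedged illustration of that idea, not the authors' procedure; the function name and the use of scikit-learn are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_llr_calibrator(scores_same, scores_diff):
        """Fit a score-to-log-likelihood-ratio calibration for one quality condition.

        scores_same : comparison scores from same-source pairs of this quality
        scores_diff : comparison scores from different-source pairs of this quality
        Returns a function mapping raw scores to calibrated LLRs.
        """
        X = np.concatenate([scores_same, scores_diff]).reshape(-1, 1)
        y = np.concatenate([np.ones(len(scores_same)), np.zeros(len(scores_diff))])
        clf = LogisticRegression().fit(X, y)

        # decision_function gives the fitted log posterior odds; removing the
        # training-set log prior odds turns it into a log-likelihood ratio.
        prior_log_odds = np.log(len(scores_same) / len(scores_diff))
        return lambda s: clf.decision_function(np.asarray(s, dtype=float).reshape(-1, 1)) - prior_log_odds

    # Hypothetical usage: fit one calibrator per (pose, illumination, device) condition,
    # then apply the calibrator whose condition matches the trace / reference-set pair.
    ```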

    Verification, Analytical Validation, and Clinical Validation (V3): The Foundation of Determining Fit-for-Purpose for Biometric Monitoring Technologies (BioMeTs)

    Digital medicine is an interdisciplinary field, drawing together stakeholders with expertise in engineering, manufacturing, clinical science, data science, biostatistics, regulatory science, ethics, patient advocacy, and healthcare policy, to name a few. Although this diversity is undoubtedly valuable, it can lead to confusion regarding terminology and best practices. There are many instances, as we detail in this paper, where a single term is used by different groups to mean different things, as well as cases where multiple terms are used to describe essentially the same concept. Our intent is to clarify core terminology and best practices for the evaluation of Biometric Monitoring Technologies (BioMeTs), without unnecessarily introducing new terms. We focus on the evaluation of BioMeTs as fit-for-purpose for use in clinical trials. However, our intent is for this framework to be instructional to all users of digital measurement tools, regardless of setting or intended use. We propose and describe a three-component framework intended to provide a foundational evaluation framework for BioMeTs. This framework includes (1) verification, (2) analytical validation, and (3) clinical validation. We aim for this common vocabulary to enable more effective communication and collaboration, generate a common and meaningful evidence base for BioMeTs, and improve the accessibility of the digital medicine field.

    Hand data interchange format, standardization

    To provide interoperability in storing and transmitting hand-geometry-related biometric information, one international standard has been developed. Beyond this International Standard, other standards deal with conformance and quality control, as well as interfaces or performance evaluation and reporting (see relevant entries in this Encyclopaedia for further information).