
    The Role of Test Administrator and Error

    This study created a framework to quantify and mitigate the amount of error that test administrators introduced into a biometric system during data collection. Prior research has focused only on test subjects and the errors they make when interacting with biometric systems, while ignoring the test administrator. This study used a longitudinal data collection, focusing on demographics in government identification forms such as driver's licenses, fingerprint metadata such as moisture and skin temperature, and face image compliance with an ISO best practice standard. Error was quantified from the first visit, and baseline test administrator error rates were measured. Additional training, software development, and error mitigation techniques were introduced before a second visit, in which the error rates were measured again. The new system greatly reduced the amount of test administrator error and improved the integrity of the data collected. Findings from this study show how to measure test administrator error and how to reduce it in future data collections.
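    The visit-over-visit error-rate comparison described above can be illustrated with a minimal sketch; the field names and counts below are hypothetical and are not taken from the study itself.

```python
# Hypothetical sketch: per-field test administrator error rates for two visits.
# Field names and counts are illustrative, not the study's actual data.

def error_rate(errors: int, records: int) -> float:
    """Fraction of records in which the administrator introduced an error."""
    return errors / records if records else 0.0

# (errors, total records) per metadata field, visit 1 vs. visit 2
visits = {
    "visit_1": {"id_demographics": (42, 500), "finger_moisture": (30, 500), "face_iso_compliance": (55, 500)},
    "visit_2": {"id_demographics": (9, 500), "finger_moisture": (7, 500), "face_iso_compliance": (12, 500)},
}

for visit, fields in visits.items():
    for field_name, (errs, total) in fields.items():
        print(f"{visit} {field_name}: {error_rate(errs, total):.1%}")
```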

    The development of a test harness for biometric data collection and validation

    Biometric test reports are an important tool in the evaluation of biometric systems, and therefore the data entered into the system need to be of the highest integrity. Data collection, especially across multiple modalities, can be a challenging experience for test administrators. They have to ensure that the data are collected properly, the test subjects are treated appropriately, and the test plan is followed. Tests become more complex as the number of sensors increases, and therefore it becomes increasingly important that a test harness be developed to improve the accuracy of the data collection. This paper describes the development of a test harness for a complex multi-sensor, multi-visit data collection and explains the processes for the development of such a harness. The applicability of such a software package for the broader biometric community is also considered.
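    A minimal sketch of the kind of automated check a test harness like this might run on each capture record is shown below; the required fields and value ranges are assumptions for illustration, not the paper's actual test plan.

```python
# Hypothetical sketch of a harness-style validation pass over one collected record.
# Required fields and plausibility ranges are illustrative assumptions.

REQUIRED_FIELDS = {"subject_id", "visit", "modality", "sensor_id", "capture_time"}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in a single capture record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("modality") == "fingerprint":
        temp = record.get("skin_temperature_c")
        if temp is not None and not (20.0 <= temp <= 40.0):
            problems.append(f"implausible skin temperature: {temp}")
    return problems

record = {"subject_id": "S001", "visit": 1, "modality": "fingerprint",
          "sensor_id": "FP-A", "capture_time": "2012-03-01T10:15:00",
          "skin_temperature_c": 55.0}
print(validate_record(record))  # flags the out-of-range temperature
```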

    Finger data interchange format, standardization

    To provide interoperability in storing and transmitting finger-related biometric information, four standards have already been developed to define the formats needed for raw images, minutiae-based feature vectors, spectral information, and skeletal representation of a fingerprint. Beyond that, other standards deal with conformance and quality control, as well as with interfaces and with performance evaluation and reporting (see the relevant entries in this Encyclopaedia for further information).

    Toward a new data standard for combined marine biological and environmental datasets - expanding OBIS beyond species occurrences

    The Ocean Biogeographic Information System (OBIS) is the world's most comprehensive online, open-access database of marine species distributions. OBIS grows with millions of new species observations every year. Contributions come from a network of hundreds of institutions, projects and individuals with common goals: to build a scientific knowledge base that is open to the public for scientific discovery and exploration and to detect trends and changes that inform society as essential elements in conservation management and sustainable development. Until now, OBIS has focused solely on the collection of biogeographic data (the presence of marine species in space and time) and operated with optimized data flows, quality control procedures and data standards specifically targeted to these data. Based on requirements from the growing OBIS community to manage datasets that combine biological, physical and chemical measurements, the OBIS-ENV-DATA pilot project was launched to develop a proposed standard and guidelines to make sure these combined datasets can stay together and are not, as is often the case, split and sent to different repositories. The proposal in this paper allows for the management of sampling methodology, animal tracking and telemetry data, biological measurements (e.g., body length, percent live cover, ...) as well as environmental measurements such as nutrient concentrations, sediment characteristics or other abiotic parameters measured during sampling to characterize the environment from which biogeographic data was collected. The recommended practice builds on the Darwin Core Archive (DwC-A) standard and on practices adopted by the Global Biodiversity Information Facility (GBIF). It consists of a DwC Event Core in combination with a DwC Occurrence Extension and a proposed enhancement to the DwC MeasurementOrFact Extension. This new structure enables the linkage of measurements or facts - quantitative and qualitative properties - to both sampling events and species occurrences, and includes additional fields for property standardization. We also embrace the use of the new parentEventID DwC term, which enables the creation of a sampling event hierarchy. We believe that the adoption of this recommended practice as a new data standard for managing and sharing biological and associated environmental datasets by IODE and the wider international scientific community would be key to improving the effectiveness of the knowledge base, and will enhance integration and management of critical data needed to understand ecological and biological processes in the ocean, and on land.
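    The Event Core / Occurrence Extension / extended MeasurementOrFact layout and the parentEventID hierarchy described above can be sketched as follows; all identifiers, species, and values are hypothetical and only mirror the structure, not any real OBIS dataset.

```python
# Minimal sketch of the OBIS-ENV-DATA layout: a Darwin Core Event core, an
# Occurrence extension, and an extended MeasurementOrFact table that links
# measurements to either events or occurrences. All values are hypothetical.

event_core = [
    {"eventID": "cruise01",          "parentEventID": None,       "eventDate": "2015-06-01"},
    {"eventID": "cruise01_station1", "parentEventID": "cruise01", "eventDate": "2015-06-02"},
]

occurrence_ext = [
    {"occurrenceID": "occ1", "eventID": "cruise01_station1",
     "scientificName": "Calanus finmarchicus", "occurrenceStatus": "present"},
]

measurement_or_fact_ext = [
    # environmental measurement attached to the sampling event
    {"eventID": "cruise01_station1", "occurrenceID": None,
     "measurementType": "temperature", "measurementValue": 8.4, "measurementUnit": "degC"},
    # biological measurement attached to the occurrence
    {"eventID": "cruise01_station1", "occurrenceID": "occ1",
     "measurementType": "body length", "measurementValue": 2.6, "measurementUnit": "mm"},
]

# Resolve the event hierarchy (via parentEventID) for each occurrence.
events = {e["eventID"]: e for e in event_core}
for occ in occurrence_ext:
    chain, ev = [], events[occ["eventID"]]
    while ev:
        chain.append(ev["eventID"])
        ev = events.get(ev["parentEventID"])
    print(occ["scientificName"], "sampled under events:", " -> ".join(chain))
```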

    Generation of Non-Deterministic Synthetic Face Datasets Guided by Identity Priors

    Enabling highly secure applications (such as border crossing) with face recognition requires extensive biometric performance tests through large-scale data. However, using real face images raises concerns about privacy, as the laws do not allow the images to be used for other purposes than originally intended. Using limited or unrepresentative subsets of face data can also lead to unwanted demographic biases and cause an imbalance in datasets. One possible solution to overcome these issues is to replace real face images with synthetically generated samples. While generating synthetic images has benefited from recent advancements in computer vision, generating multiple samples of the same synthetic identity (i.e., mated samples) that resemble real-world variations is still unaddressed. This work proposes a non-deterministic method for generating mated face images by exploiting the well-structured latent space of StyleGAN. Mated samples are generated by manipulating latent vectors; more precisely, we exploit Principal Component Analysis (PCA) to define semantically meaningful directions in the latent space and control the similarity between the original and the mated samples using a pre-trained face recognition system. We create a new dataset of synthetic face images (SymFace) consisting of 77,034 samples including 25,919 synthetic IDs. Through our analysis using well-established face image quality metrics, we demonstrate the differences in the biometric quality of synthetic samples mimicking characteristics of real biometric data. The analysis and results indicate that synthetic samples created using the proposed approach are a viable alternative to real biometric data.
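    A hedged sketch of the general idea described above follows: perturb a StyleGAN latent along PCA directions and keep only samples whose face recognition similarity to the original stays above a threshold. The `generator` and `embed_face` callables are placeholders for a pre-trained StyleGAN and a pre-trained face recognition network, and the threshold and scale are illustrative assumptions, not the paper's settings.

```python
# Sketch of mated-sample generation under the assumptions stated above.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mated_latents(w, latent_bank, generator, embed_face, n=5, scale=1.5, thr=0.6):
    """Return up to n perturbed latents judged to keep the identity of w."""
    # PCA via SVD over a bank of latent vectors yields the principal directions.
    centered = latent_bank - latent_bank.mean(axis=0)
    _, _, components = np.linalg.svd(centered, full_matrices=False)
    ref = embed_face(generator(w))
    rng = np.random.default_rng(0)
    keep = []
    for _ in range(100 * n):                  # cap attempts so the loop terminates
        if len(keep) == n:
            break
        direction = components[rng.integers(0, min(10, len(components)))]
        w_mated = w + scale * rng.standard_normal() * direction
        if cosine(ref, embed_face(generator(w_mated))) >= thr:
            keep.append(w_mated)              # same identity, new intra-class variation
    return keep
```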

    Avoiding terminological confusion between the notions of 'biometrics' and 'biometric data': An investigation into the meanings of the terms from a European data protection and a scientific perspective

    This article has been motivated by an observation: the lack of rigor by European bodies when they use scientific terms to address data protection and privacy issues raised by biometric technologies and biometric data. In particular, they improperly use the term ‘biometrics’ to mean at the same time ‘biometric data’, ‘identification method’, or ‘biometric technologies’. Based on this observation, there is a need to clarify what ‘biometrics’ means for the biometric community and whether and how the legal community should use the term in a data protection and privacy context. In parallel to that exercise of clarification, there is also a need to investigate the current legal definition of ‘biometric data’ as framed by European bodies at the level of the European Union and the Council of Europe. The comparison of the regulatory and scientific definitions of the term ‘biometric data’ reveals that the term is used in two different contexts. However, it is legitimate to question the influence that the scientific definition could exercise on the regulatory definition. More precisely, the question is whether the technical process through which biometric information is extracted and transformed into a biometric template should be reflected in the regulatory definition of the term.

    Verification, Analytical Validation, and Clinical Validation (V3): The Foundation of Determining Fit-for-Purpose for Biometric Monitoring Technologies (BioMeTs)

    Digital medicine is an interdisciplinary field, drawing together stakeholders with expertise in engineering, manufacturing, clinical science, data science, biostatistics, regulatory science, ethics, patient advocacy, and healthcare policy, to name a few. Although this diversity is undoubtedly valuable, it can lead to confusion regarding terminology and best practices. There are many instances, as we detail in this paper, where a single term is used by different groups to mean different things, as well as cases where multiple terms are used to describe essentially the same concept. Our intent is to clarify core terminology and best practices for the evaluation of Biometric Monitoring Technologies (BioMeTs), without unnecessarily introducing new terms. We focus on the evaluation of BioMeTs as fit-for-purpose for use in clinical trials. However, our intent is for this framework to be instructional to all users of digital measurement tools, regardless of setting or intended use. We propose and describe a three-component framework intended to provide a foundational evaluation framework for BioMeTs. This framework includes (1) verification, (2) analytical validation, and (3) clinical validation. We aim for this common vocabulary to enable more effective communication and collaboration, generate a common and meaningful evidence base for BioMeTs, and improve the accessibility of the digital medicine field.
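    As a purely illustrative sketch, the three V3 components named above can be represented as a simple evaluation checklist for a hypothetical BioMeT; the field names and example device are assumptions, not part of the framework itself.

```python
# Illustrative sketch only: the three V3 components as a fit-for-purpose checklist.
from dataclasses import dataclass

@dataclass
class V3Evaluation:
    biomet: str
    verification: bool = False            # sensor-level output checked against specifications
    analytical_validation: bool = False   # algorithm output validated against a reference measure
    clinical_validation: bool = False     # measure shown meaningful in the intended clinical context

    def fit_for_purpose(self) -> bool:
        return self.verification and self.analytical_validation and self.clinical_validation

wearable = V3Evaluation("hypothetical wrist-worn heart-rate sensor",
                        verification=True, analytical_validation=True)
print(wearable.fit_for_purpose())  # False until clinical validation evidence exists
```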

    On the Applicability of Synthetic Data for Face Recognition

    Face verification has come into increasing focus in various applications, including the European Entry/Exit System, which integrates face recognition mechanisms. At the same time, the rapid advancement of biometric authentication requires extensive performance tests in order to inhibit the discriminatory treatment of travellers due to their demographic background. However, the use of face images collected as part of border controls is restricted by the European General Data Protection Regulation to be processed for no other reason than its original purpose. Therefore, this paper investigates the suitability of synthetic face images generated with StyleGAN and StyleGAN2 to compensate for the urgent lack of publicly available large-scale test data. Specifically, two deep learning-based (SER-FIQ, FaceQnet v1) and one standard-based (ISO/IEC TR 29794-5) face image quality assessment algorithms are utilized to compare the applicability of synthetic face images with that of real face images extracted from the FRGC dataset. Finally, based on the analysis of impostor score distributions and utility score distributions, our experiments reveal negligible differences between StyleGAN and StyleGAN2, and only minor discrepancies compared to real face images.
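    One simple way to compare quality or utility score distributions between synthetic and real face images, as the comparison above describes, is sketched below; the score arrays are placeholders for the output of a quality assessment algorithm (e.g., SER-FIQ), and the two-sample Kolmogorov-Smirnov test is an illustrative choice rather than necessarily the statistic used in the paper.

```python
# Hedged sketch: comparing placeholder quality-score distributions for real
# and synthetic face images with summary statistics and a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
real_scores = rng.normal(0.70, 0.08, 1000)       # placeholder: real-image quality scores
synthetic_scores = rng.normal(0.68, 0.09, 1000)  # placeholder: synthetic-image quality scores

stat, p_value = ks_2samp(real_scores, synthetic_scores)
print(f"mean real={real_scores.mean():.3f}, mean synthetic={synthetic_scores.mean():.3f}")
print(f"KS statistic={stat:.3f}, p-value={p_value:.3f}")
```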