
    Cross-Spectral Periocular Recognition with Conditional Adversarial Networks

    This work addresses the challenge of comparing periocular images captured in different spectra, which is known to produce significant drops in performance compared to operating in a single spectrum. We propose the use of Conditional Generative Adversarial Networks, trained to convert periocular images between the visible and near-infrared spectra, so that biometric verification is carried out in the same spectrum. The proposed setup allows the use of existing feature methods typically optimized to operate in a single spectrum. Recognition experiments are done using a number of off-the-shelf periocular comparators based on both hand-crafted features and CNN descriptors. Using the Hong Kong Polytechnic University Cross-Spectral Iris Images Database (PolyU) as benchmark dataset, our experiments show that cross-spectral performance is substantially improved if both images are converted to the same spectrum, in comparison to matching features extracted from images in different spectra. In addition, we fine-tune a CNN based on the ResNet50 architecture, obtaining a cross-spectral periocular performance of EER = 1% and GAR > 99% @ FAR = 1%, which is comparable to the state of the art on the PolyU database. Comment: Accepted for publication at the 2020 International Joint Conference on Biometrics (IJCB 2020).
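    As a concrete illustration of the pipeline this abstract describes, the hedged Python sketch below converts an NIR probe into a pseudo-VIS image and then compares CNN descriptors in the same spectrum. The generator is a toy encoder-decoder standing in for the paper's conditional GAN; all names (ToyGenerator, verify), layer sizes, and the cosine-similarity matching are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of same-spectrum matching after cross-spectral conversion.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import resnet50

    class ToyGenerator(nn.Module):
        """Toy encoder-decoder mapping NIR periocular crops to pseudo-VIS images."""
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            )
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
            )
        def forward(self, x):
            return self.dec(self.enc(x))

    def verify(nir_img, vis_img, generator, backbone):
        """Convert the NIR probe to the VIS domain, then compare CNN descriptors."""
        with torch.no_grad():
            pseudo_vis = generator(nir_img)           # NIR -> pseudo-VIS translation
            f1 = backbone(pseudo_vis).flatten(1)      # descriptor of the converted probe
            f2 = backbone(vis_img).flatten(1)         # descriptor of the VIS gallery image
        return F.cosine_similarity(f1, f2).item()     # higher score = more likely a match

    gen = ToyGenerator().eval()
    feat = nn.Sequential(*list(resnet50(weights=None).children())[:-1]).eval()
    nir = torch.randn(1, 3, 224, 224)  # placeholder NIR periocular crop
    vis = torch.randn(1, 3, 224, 224)  # placeholder VIS periocular crop
    print(f"similarity: {verify(nir, vis, gen, feat):.3f}")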

    Periocular Region-Based Biometric Identification

    As biometrics become more prevalent in society, the research area is expected to address an ever-widening field of problems and conditions. Traditional biometric modalities and approaches are reaching a state of maturity, and their limits are clearly defined. Since the needs of a biometric system administrator might extend beyond those limits, new modalities and techniques must address such concerns. The goal of the work presented here is to explore the periocular region, the region surrounding the eye, and evaluate its usability and limitations in addressing these concerns. First, a study of the periocular region was performed to examine its feasibility in addressing problems that affect traditional face- and iris-based biometric systems. Second, the physical structure of the periocular region was analyzed to determine the kinds of features found there and how they influence the performance of a biometric recognition system. Third, the use of local appearance-based approaches in periocular recognition was explored. Lastly, the knowledge gained from the previous experiments was used to develop a novel feature representation technique specific to the periocular region. This work is significant because it provides a novel analysis of the features found in the periocular region and produces a feature extraction method that achieves higher recognition performance than traditional techniques.
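    The local appearance-based approaches this abstract mentions typically tile the image into blocks and concatenate per-block texture histograms. The hedged sketch below uses block-wise uniform Local Binary Patterns as a generic example of that family; the block grid and LBP parameters are assumptions, and the dissertation's own region-specific representation is not reproduced here.

    # Generic local-appearance descriptor: per-block uniform-LBP histograms.
    import numpy as np
    from skimage.feature import local_binary_pattern

    def periocular_lbp_descriptor(gray, blocks=(4, 4), P=8, R=1.0):
        """Concatenate per-block uniform-LBP histograms into one feature vector."""
        lbp = local_binary_pattern(gray, P, R, method="uniform")
        n_bins = P + 2                    # P+1 uniform codes plus one non-uniform bin
        bh, bw = gray.shape[0] // blocks[0], gray.shape[1] // blocks[1]
        hists = []
        for i in range(blocks[0]):
            for j in range(blocks[1]):
                patch = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins), density=True)
                hists.append(hist)
        return np.concatenate(hists)

    # Descriptors are then compared with a simple distance, e.g. Euclidean or chi-squared.
    a = periocular_lbp_descriptor(np.random.randint(0, 256, (128, 160), dtype=np.uint8))
    b = periocular_lbp_descriptor(np.random.randint(0, 256, (128, 160), dtype=np.uint8))
    print("distance:", np.linalg.norm(a - b))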

    DeepMetricEye: Metric Depth Estimation in Periocular VR Imagery

    Despite the enhanced realism and immersion provided by VR headsets, users frequently encounter adverse effects such as digital eye strain (DES), dry eye, and potential long-term visual impairment due to excessive eye stimulation from VR displays and pressure from the mask. Recent VR headsets are increasingly equipped with eye-oriented monocular cameras to segment ocular feature maps. Yet, to compute the incident light stimulus and observe periocular condition alterations, it is imperative to transform these relative measurements into metric dimensions. To bridge this gap, we propose a lightweight framework, derived from a re-optimised U-Net 3+ deep learning backbone, to estimate measurable periocular depth maps. Compatible with any VR headset equipped with an eye-oriented monocular camera, our method reconstructs three-dimensional periocular regions, providing a metric basis for related light-stimulus calculation protocols and medical guidelines. To navigate the complexities of data collection, we introduce a Dynamic Periocular Data Generation (DPDG) environment based on UE MetaHuman, which synthesises thousands of training images from a small quantity of human facial scan data. Evaluated on a sample of 36 participants, our method exhibited notable efficacy in both the periocular global precision evaluation experiment and the pupil diameter measurement experiment.
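    The relative-to-metric step this abstract refers to can be illustrated with a common alignment trick: fitting a global scale and shift that map a scale-ambiguous depth map onto a few known metric anchor points by least squares. The sketch below shows only this generic step under made-up anchor values; it is not the paper's U-Net 3+ pipeline or its calibration protocol.

    # Align a relative depth map to metric units via least-squares scale/shift.
    import numpy as np

    def align_to_metric(rel_depth, anchor_idx, anchor_mm):
        """Solve min over (s, t) of ||s * d + t - z||^2 at anchors, apply globally."""
        d = rel_depth.flat[anchor_idx]               # relative depths at anchor pixels
        A = np.stack([d, np.ones_like(d)], axis=1)   # design matrix [d, 1]
        (s, t), *_ = np.linalg.lstsq(A, anchor_mm, rcond=None)
        return s * rel_depth + t                     # metric depth map (mm)

    rel = np.random.rand(240, 320)              # stand-in for a network's relative depth
    idx = np.array([1000, 5000, 20000, 60000])  # hypothetical anchor pixel indices
    z_mm = np.array([18.0, 22.5, 25.0, 30.0])   # hypothetical metric anchors (mm)
    metric = align_to_metric(rel, idx, z_mm)
    print("depth range (mm):", metric.min(), metric.max())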