
    UFPR-Periocular: A Periocular Dataset Collected by Mobile Devices in Unconstrained Scenarios

    Recently, ocular biometrics in unconstrained environments using images obtained in the visible wavelength has gained researchers' attention, especially with images captured by mobile devices. Periocular recognition has been demonstrated to be an alternative when the iris trait is not available due to occlusions or low image resolution. However, the periocular trait does not present the high uniqueness of the iris trait. Thus, datasets containing many subjects are essential to assess a biometric system's capacity to extract discriminating information from the periocular region. Also, to address the within-class variability caused by lighting and attributes in the periocular region, it is of paramount importance to use datasets with images of the same subject captured in distinct sessions. As the datasets available in the literature do not present all these factors, in this work we present a new periocular dataset containing samples from 1,122 subjects, acquired in 3 sessions by 196 different mobile devices. The images were captured in unconstrained environments with just a single instruction to the participants: to place their eyes in a region of interest. We also performed an extensive benchmark with several Convolutional Neural Network (CNN) architectures and models employed in state-of-the-art approaches based on Multi-class Classification, Multitask Learning, Pairwise Filters Networks, and Siamese Networks. The results achieved in the closed- and open-world protocols, considering both the identification and verification tasks, show that this area still needs research and development.
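    Among the benchmarked families, a Siamese network verifies a pair of periocular images by comparing embeddings produced by a shared encoder. As an illustrative sketch only (the vectors below are toy stand-ins for CNN features, not the paper's models), the verification step reduces to thresholding a distance:

```python
import numpy as np

def verify(emb_a, emb_b, threshold=0.5):
    """Decide whether two embeddings belong to the same subject
    using cosine distance (1 - cosine similarity)."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    distance = 1.0 - float(np.dot(a, b))
    return distance <= threshold, distance

# Toy embeddings standing in for a CNN's output feature vectors.
same = verify(np.array([1.0, 0.9, 0.1]), np.array([0.9, 1.0, 0.1]))
diff = verify(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```

    The threshold itself would be tuned on validation pairs; in the closed- and open-world protocols it is this kind of pairwise decision that drives the reported verification rates.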

    Cross-Spectral Full and Partial Face Recognition: Preprocessing, Feature Extraction and Matching

    Cross-spectral face recognition remains a challenge in the area of biometrics. The problem arises in real-world application scenarios such as surveillance at night or in harsh environments, where traditional face recognition techniques are unsuitable or limited because they rely on imagery obtained in the visible light spectrum. This motivates the study conducted in this dissertation, which focuses on matching infrared facial images against visible light images. The study spans aspects of face recognition from preprocessing to feature extraction and matching.

    We address the problem of cross-spectral face recognition by proposing several new operators and algorithms based on advanced concepts such as composite operators, multi-level data fusion, image quality parity, and levels of measurement. Specifically, we experiment with and fuse several popular individual operators to construct a higher-performing compound operator named GWLH, which exhibits the complementary advantages of the individual operators involved. We also combine a Gaussian function with LBP, generalized LBP, WLD and/or HOG, modifying them into multi-lobe operators with smoothed neighborhoods, to obtain a new type of operator named Composite Multi-Lobe Descriptors. We further design a novel operator termed Gabor Multi-Levels of Measurement, based on the theory of levels of measurement, which benefits from taking into consideration the complementary edge and feature information at different levels of measurement.

    The issue of image quality disparity is also studied in the dissertation due to its common occurrence in cross-spectral face recognition tasks. By bringing the quality of the heterogeneous imagery closer together, we achieve an improvement in recognition performance. We further study the problem of cross-spectral recognition using partial faces, since this is also common in practical usage. We begin by matching heterogeneous periocular regions and generalize the topic by considering all three facial regions, defined both in a characteristic way and in a mixture way.

    In the experiments we employ datasets covering all the sub-bands within the infrared spectrum: near-infrared, short-wave infrared, mid-wave infrared, and long-wave infrared. Standoff distances varying from short to intermediate and long are considered as well. Our methods are compared with other popular and state-of-the-art methods and are shown to be advantageous.
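    For reference, the basic 8-neighbour LBP code, one building block of the composite descriptors above (not the dissertation's GWLH or Multi-Lobe operators themselves), can be sketched for a single pixel as:

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour Local Binary Pattern code for the centre
    pixel of a 3x3 patch: threshold each neighbour against the
    centre and read the resulting bits clockwise from the
    top-left corner."""
    center = patch[1, 1]
    # Clockwise neighbour order starting at the top-left corner.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << bit
    return code

patch = np.array([[9, 1, 9],
                  [1, 5, 1],
                  [9, 1, 9]])
# Corners (9) exceed the centre (5); edges (1) do not.
```

    Composite operators of the kind proposed here fuse such per-pixel codes with other responses (Gabor, WLD, HOG) rather than using any single code in isolation.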

    Biometric presentation attack detection: beyond the visible spectrum

    The increased need for unattended authentication in multiple scenarios has motivated a wide deployment of biometric systems in the last few years. This has in turn led to the disclosure of security concerns specifically related to biometric systems. Among them, presentation attacks (PAs, i.e., attempts to log into the system with a fake biometric characteristic or presentation attack instrument) pose a severe threat to the security of the system: any person could eventually fabricate or order a gummy finger or face mask to impersonate someone else. In this context, we present a novel fingerprint presentation attack detection (PAD) scheme based on i) a new capture device able to acquire images within the short wave infrared (SWIR) spectrum, and ii) an in-depth analysis of several state-of-the-art techniques based on both handcrafted and deep learning features. The approach is evaluated on a database comprising over 4,700 samples, stemming from 562 different subjects and 35 different presentation attack instrument (PAI) species. The results show the soundness of the proposed approach, with a detection equal error rate (D-EER) as low as 1.35% even in a realistic scenario where five different PAI species are considered only for testing purposes (i.e., unknown attacks).
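    The reported D-EER is the operating point where the rate of attacks accepted as bona fide equals the rate of bona-fide presentations rejected as attacks. A minimal sketch of that computation on synthetic scores (the score values below are invented for illustration, not taken from the paper's database):

```python
import numpy as np

def detection_eer(bona_fide_scores, attack_scores):
    """Find the threshold where the attack classification error
    rate (APCER) best equals the bona-fide classification error
    rate (BPCER); higher score = more likely bona fide."""
    thresholds = np.sort(np.concatenate([bona_fide_scores, attack_scores]))
    best = (2.0, None)  # (smallest |APCER - BPCER| seen, EER there)
    for t in thresholds:
        apcer = np.mean(attack_scores >= t)      # attacks accepted
        bpcer = np.mean(bona_fide_scores < t)    # bona fide rejected
        gap = abs(apcer - bpcer)
        if gap < best[0]:
            best = (gap, (apcer + bpcer) / 2.0)
    return best[1]

bona = np.array([0.9, 0.8, 0.85, 0.7, 0.95])
attack = np.array([0.1, 0.2, 0.3, 0.75, 0.15])
eer = detection_eer(bona, attack)
```

    In an unknown-attack evaluation like the one described, the attack scores at test time come from PAI species the detector never saw during training, which is what makes a low D-EER in that setting notable.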

    One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations

    One weakness of machine-learning algorithms is the need to train the models for a new task. This presents a specific challenge for biometric recognition due to the dynamic nature of databases and, in some instances, the reliance on subject collaboration for data collection. In this paper, we investigate the behavior of deep representations in widely used CNN models under extreme data scarcity for One-Shot periocular recognition, a biometric recognition task. We analyze the outputs of CNN layers as identity-representing feature vectors. We examine the impact of Domain Adaptation on the network layers' output for unseen data, and evaluate the method's robustness concerning data normalization and generalization of the best-performing layer. Using out-of-the-box CNNs trained for the ImageNet Recognition Challenge together with standard computer vision algorithms, we improved on state-of-the-art results obtained with networks trained on biometric datasets of millions of images and fine-tuned for the target periocular dataset. For example, for the Cross-Eyed dataset, we could reduce the EER by 67% and 79% (from 1.70% and 3.41% to 0.56% and 0.71%) in the Closed-World and Open-World protocols, respectively, for the periocular case. We also demonstrate that traditional algorithms like SIFT can outperform CNNs in situations with limited data, or in scenarios where the network has not been trained with the test classes, such as the Open-World mode. SIFT alone reduced the EER by 64% and 71.6% (from 1.7% and 3.41% to 0.6% and 0.97%) for Cross-Eyed in the Closed-World and Open-World protocols, respectively, and by 4.6% (from 3.94% to 3.76%) on the PolyU database for the Open-World, single-biometric case. Comment: Submitted preprint to IEEE Access.
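    SIFT-based matching of the kind referenced above is conventionally paired with Lowe's nearest-neighbour ratio test. A sketch of that test on toy descriptor arrays (real use would extract the descriptors from images with a feature library; the 2-D vectors here are stand-ins):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Lowe's ratio test: keep a match only when the nearest
    descriptor in desc_b is clearly closer than the second
    nearest. Returns (index_in_a, index_in_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:
            matches.append((i, int(order[0])))
    return matches

# Toy 2-D "descriptors": each vector in a has one unambiguous
# counterpart in b, so both survive the ratio test.
a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[9.0, 9.0], [0.1, 0.0], [5.0, 5.1]])
```

    Because this matching needs no class-specific training at all, it is unaffected by the Open-World condition that penalizes CNNs evaluated on unseen classes.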

    Cross-Spectral Periocular Recognition with Conditional Adversarial Networks

    This work addresses the challenge of comparing periocular images captured in different spectra, which is known to produce significant drops in performance compared to operating in the same spectrum. We propose the use of Conditional Generative Adversarial Networks, trained to convert periocular images between the visible and near-infrared spectra, so that biometric verification is carried out in the same spectrum. The proposed setup allows the use of existing feature methods typically optimized to operate in a single spectrum. Recognition experiments are done using a number of off-the-shelf periocular comparators based on both hand-crafted features and CNN descriptors. Using the Hong Kong Polytechnic University Cross-Spectral Iris Images Database (PolyU) as the benchmark dataset, our experiments show that cross-spectral performance is substantially improved if both images are converted to the same spectrum, compared to matching features extracted from images in different spectra. In addition, we fine-tune a CNN based on the ResNet50 architecture, obtaining a cross-spectral periocular performance of EER=1% and GAR>99% @ FAR=1%, which is comparable to the state of the art on the PolyU database. Comment: Accepted for publication at the 2020 International Joint Conference on Biometrics (IJCB 2020).
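    The GAR>99% @ FAR=1% operating point is read off the comparator's score distributions: fix the threshold so at most 1% of impostor comparisons are accepted, then measure how many genuine comparisons still pass. A minimal sketch on synthetic scores (the distributions are invented for illustration, not PolyU results):

```python
import numpy as np

def gar_at_far(genuine, impostor, far_target=0.01):
    """Genuine acceptance rate at the threshold whose false
    acceptance rate does not exceed far_target (higher score
    = better match)."""
    # Threshold = smallest score that rejects 1 - far_target
    # of the impostor comparisons.
    threshold = np.quantile(impostor, 1.0 - far_target)
    return float(np.mean(genuine >= threshold))

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.05, 1000)   # synthetic genuine scores
impostor = rng.normal(0.2, 0.05, 1000)  # synthetic impostor scores
gar = gar_at_far(genuine, impostor)
```

    Spectrum conversion helps precisely by pushing the genuine and impostor score distributions further apart, so that a 1%-FAR threshold rejects far fewer genuine pairs.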