UFPR-Periocular: A Periocular Dataset Collected by Mobile Devices in Unconstrained Scenarios
Recently, ocular biometrics in unconstrained environments, using images
obtained at visible wavelengths, has gained researchers' attention,
especially with images captured by mobile devices. Periocular recognition has
been demonstrated to be an alternative when the iris trait is not available due
to occlusions or low image resolution. However, the periocular trait does not
offer the high uniqueness of the iris trait. Thus, the use of datasets
containing many subjects is essential to assess biometric systems' capacity to
extract discriminating information from the periocular region. Also, to address
the within-class variability caused by lighting and attributes in the
periocular region, it is of paramount importance to use datasets with images of
the same subject captured in distinct sessions. As the datasets available in
the literature do not combine all of these factors, in this work we introduce a new
periocular dataset containing samples from 1,122 subjects, acquired in 3
sessions by 196 different mobile devices. The images were captured under
unconstrained environments with just a single instruction to the participants:
to place their eyes on a region of interest. We also performed an extensive
benchmark with several Convolutional Neural Network (CNN) architectures and
models that have been employed in state-of-the-art approaches based on
Multi-class Classification, Multitask Learning, Pairwise Filters Network, and
Siamese Network. The results achieved under the closed- and open-world protocols,
considering both the identification and verification tasks, show that this area
still calls for further research and development.
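As a rough sketch of one of the verification baselines listed above, the following PyTorch snippet pairs a CNN embedding network with a contrastive loss, the core of a Siamese verification setup. The ResNet-50 backbone, embedding size, and margin are illustrative assumptions, not the exact configurations benchmarked in the paper.

import torch
import torch.nn as nn
import torchvision.models as models

class SiameseEmbedder(nn.Module):
    """Maps a periocular image to an L2-normalized embedding vector."""
    def __init__(self, embedding_dim=256):
        super().__init__()
        backbone = models.resnet50(weights=None)  # backbone choice is an assumption
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone

    def forward(self, x):
        return nn.functional.normalize(self.backbone(x), dim=1)

def contrastive_loss(z1, z2, same_identity, margin=1.0):
    """Pulls genuine pairs together and pushes impostor pairs beyond the margin."""
    d = torch.norm(z1 - z2, dim=1)
    return torch.mean(same_identity * d.pow(2)
                      + (1 - same_identity) * torch.clamp(margin - d, min=0).pow(2))

# Verification: embed two images and threshold the distance between them.
model = SiameseEmbedder().eval()
with torch.no_grad():
    img_a, img_b = torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)
    distance = torch.norm(model(img_a) - model(img_b), dim=1)  # small => same subject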
One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations
One weakness of machine-learning algorithms is the need to train the models
for a new task. This presents a specific challenge for biometric recognition
due to the dynamic nature of databases and, in some instances, the reliance on
subject collaboration for data collection. In this paper, we investigate the
behavior of deep representations in widely used CNN models under extreme data
scarcity for One-Shot periocular recognition, a biometric recognition task. We
analyze the outputs of CNN layers as identity-representing feature vectors. We
examine the impact of Domain Adaptation on the network layers' output for
unseen data and evaluate the method's robustness concerning data normalization
and generalization of the best-performing layer. By utilizing out-of-the-box
CNNs trained for the ImageNet Recognition Challenge together with standard
computer vision algorithms, we improve on state-of-the-art results that relied
on networks trained with biometric datasets of millions of images and fine-tuned
for the target periocular dataset. For example, for the Cross-Eyed dataset, we could
reduce the EER by 67% and 79% (from 1.70% and 3.41% to 0.56% and 0.71%) in the
Closed-World and Open-World protocols, respectively, for the periocular case. We
also demonstrate that traditional algorithms like SIFT can outperform CNNs in
situations with limited data or scenarios where the network has not been
trained with the test classes, such as in the Open-World mode. SIFT alone was able to
reduce the EER by 64% and 71.6% (from 1.7% and 3.41% to 0.6% and 0.97%) for
Cross-Eyed in the Closed-World and Open-World protocols, respectively, and a
reduction of 4.6% (from 3.94% to 3.76%) in the PolyU database for the
Open-World and single-biometric case.
Comment: Submitted preprint to IEEE Access.
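As a concrete illustration of the out-of-the-box CNN idea described above, the sketch below reads an intermediate layer of an ImageNet-pretrained network as an identity descriptor and verifies a pair of images by cosine similarity. The backbone, the chosen layer, and the decision threshold are assumptions for illustration; the paper itself searches over layers for the best-performing one.

import torch
from torchvision import models
from torchvision.models.feature_extraction import create_feature_extractor

# ImageNet-pretrained backbone used as-is, with no biometric fine-tuning.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
extractor = create_feature_extractor(backbone, return_nodes={"avgpool": "feat"})

def describe(img):
    """img: (1, 3, 224, 224) normalized tensor -> L2-normalized identity descriptor."""
    with torch.no_grad():
        feat = extractor(img)["feat"].flatten(1)
    return torch.nn.functional.normalize(feat, dim=1)

def verify(img_a, img_b, threshold=0.6):  # threshold is a placeholder value
    score = torch.nn.functional.cosine_similarity(describe(img_a), describe(img_b))
    return score.item() >= threshold  # accept the pair as genuine if similar enough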
BiOcularGAN: Bimodal Synthesis and Annotation of Ocular Images
Current state-of-the-art segmentation techniques for ocular images are
critically dependent on large-scale annotated datasets, which are
labor-intensive to gather and often raise privacy concerns. In this paper, we
present a novel framework, called BiOcularGAN, capable of generating synthetic
large-scale datasets of photorealistic (visible light and near-infrared) ocular
images, together with corresponding segmentation labels to address these
issues. At its core, the framework relies on a novel Dual-Branch StyleGAN2
(DB-StyleGAN2) model that facilitates bimodal image generation, and a Semantic
Mask Generator (SMG) component that produces semantic annotations by exploiting
latent features of the DB-StyleGAN2 model. We evaluate BiOcularGAN through
extensive experiments across five diverse ocular datasets and analyze the
effects of bimodal data generation on image quality and the produced
annotations. Our experimental results show that BiOcularGAN is able to produce
high-quality matching bimodal images and annotations (with minimal manual
intervention) that can be used to train highly competitive (deep) segmentation
models (in a privacy-aware manner) that perform well across multiple real-world
datasets. The source code for the BiOcularGAN framework is publicly available
at https://github.com/dariant/BiOcularGAN.
Comment: 13 pages, 14 figures.
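The Semantic Mask Generator described above derives labels from the generator's latent features; a heavily simplified sketch of that general idea, a per-pixel classifier trained on generator feature maps, is given below. The feature shape, class set, and classifier layout are assumptions, not the components actually used in BiOcularGAN.

import torch
import torch.nn as nn

NUM_CLASSES = 4  # e.g. background, skin, sclera, iris -- an assumed label set

class PixelClassifier(nn.Module):
    """Hypothetical per-pixel classifier over (upsampled) generator feature maps."""
    def __init__(self, feat_channels=512, num_classes=NUM_CLASSES):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(feat_channels, 128, kernel_size=1), nn.ReLU(),
            nn.Conv2d(128, num_classes, kernel_size=1),
        )

    def forward(self, feats):    # feats: (B, C, H, W) generator features
        return self.head(feats)  # (B, num_classes, H, W) per-pixel logits

# A few annotated synthetic images can suffice because generator features are
# already strongly tied to semantic structure.
feats = torch.randn(2, 512, 64, 64)                  # placeholder generator features
labels = torch.randint(0, NUM_CLASSES, (2, 64, 64))  # placeholder annotations
loss = nn.CrossEntropyLoss()(PixelClassifier()(feats), labels)
loss.backward()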
Cross-Spectral Periocular Recognition with Conditional Adversarial Networks
This work addresses the challenge of comparing periocular images captured in
different spectra, which is known to produce significant drops in performance
in comparison to operating in the same spectrum. We propose the use of
Conditional Generative Adversarial Networks, trained to convert periocular
images between visible and near-infrared spectra, so that biometric
verification is carried out in the same spectrum. The proposed setup allows the
use of existing feature methods typically optimized to operate in a single
spectrum. Recognition experiments are done using a number of off-the-shelf
periocular comparators based both on hand-crafted features and CNN descriptors.
Using the Hong Kong Polytechnic University Cross-Spectral Iris Images Database
(PolyU) as benchmark dataset, our experiments show that cross-spectral
performance is substantially improved if both images are converted to the same
spectrum, in comparison to matching features extracted from images in different
spectra. In addition to this, we fine-tune a CNN based on the ResNet50
architecture, obtaining a cross-spectral periocular performance of EER=1%, and
GAR>99% @ FAR=1%, which is comparable to the state-of-the-art with the PolyU
database.
Comment: Accepted for publication at the 2020 International Joint Conference on
Biometrics (IJCB 2020).
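For reference, the verification figures reported above (EER, and GAR at a fixed FAR) can be computed from genuine and impostor comparison scores as in the generic sketch below; this is a standard evaluation recipe, not the authors' evaluation code, and the score distributions in the usage example are synthetic.

import numpy as np

def eer_and_gar_at_far(genuine, impostor, target_far=0.01):
    """Returns (EER, GAR at the target FAR); higher scores mean more similar."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate
    eer_idx = np.argmin(np.abs(far - frr))
    eer = (far[eer_idx] + frr[eer_idx]) / 2
    idx = np.argmax(far <= target_far)  # lowest threshold meeting the target FAR
    gar = 1.0 - frr[idx]                # GAR = 1 - FRR at that threshold
    return eer, gar

# Toy usage with synthetic score distributions:
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.4, 0.1, 10000)
print(eer_and_gar_at_far(genuine, impostor))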