171 research outputs found
UFPR-Periocular: A Periocular Dataset Collected by Mobile Devices in Unconstrained Scenarios
Recently, ocular biometrics in unconstrained environments using images
obtained at visible wavelengths has gained researchers' attention,
especially with images captured by mobile devices. Periocular recognition has
been demonstrated to be an alternative when the iris trait is not available due
to occlusions or low image resolution. However, the periocular trait does not
offer the same high uniqueness as the iris trait. Thus, the use of datasets
containing many subjects is essential to assess biometric systems' capacity to
extract discriminating information from the periocular region. Also, to address
the within-class variability caused by lighting and attributes in the
periocular region, it is of paramount importance to use datasets with images of
the same subject captured in distinct sessions. As the datasets available in
the literature do not present all these factors, in this work, we present a new
periocular dataset containing samples from 1,122 subjects, acquired in 3
sessions using 196 different mobile devices. The images were captured in
unconstrained environments with just a single instruction to the participants:
to place their eyes on a region of interest. We also performed an extensive
benchmark with several Convolutional Neural Network (CNN) architectures and
models that have been employed in state-of-the-art approaches based on
Multi-class Classification, Multitask Learning, Pairwise Filters Network, and
Siamese Network. The results achieved under the closed- and open-world
protocols, considering both the identification and verification tasks, show
that this area still requires further research and development.
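As an illustration of one of the benchmarked verification approaches, the sketch below shows what a minimal Siamese-style periocular verification model could look like, assuming PyTorch; the architecture, layer sizes, and names are illustrative assumptions, not the benchmark code evaluated with the dataset.

    # Minimal sketch of a Siamese-style periocular verification model (PyTorch
    # assumed). Architecture and names are illustrative, not the paper's benchmark code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PeriocularEmbedder(nn.Module):
        """Small CNN that maps a periocular image to a unit-length embedding."""
        def __init__(self, embedding_dim=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(128, embedding_dim)

        def forward(self, x):
            x = self.features(x).flatten(1)
            return F.normalize(self.fc(x), dim=1)

    def verification_score(model, img_a, img_b):
        """Cosine similarity between the two embeddings; higher means 'same subject'."""
        with torch.no_grad():
            emb_a, emb_b = model(img_a), model(img_b)
        return (emb_a * emb_b).sum(dim=1)

    # Example: score a pair of 3x128x128 periocular crops.
    model = PeriocularEmbedder().eval()
    a, b = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
    print(verification_score(model, a, b))

In a verification protocol, scores from genuine and impostor pairs would then be thresholded to compute metrics such as the EER.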
A Reminiscence of "Mastermind": Iris/Periocular Biometrics by "In-Set" CNN Iterative Analysis
Convolutional neural networks (CNNs) have
emerged as the most popular classification models in biometrics
research. Under the discriminative paradigm of pattern
recognition, CNNs are typically used in one of two ways: 1)
verification mode ("are samples from the same person?"), where
pairs of images are provided to the network to distinguish
between genuine and impostor instances; and 2) identification
mode ("whom is this sample from?"), where appropriate feature
representations that map images to identities are found. This
paper postulates a novel mode for using CNNs in biometric
identification, by learning models that answer the question "is
the query's identity among this set?". The insight is reminiscent
of the classical Mastermind game: by iteratively analysing the
network responses when multiple random samples of k gallery
elements are compared to the query, we obtain weakly correlated
matching scores that - altogether - provide solid cues to infer
the most likely identity. In this setting, identification is regarded
as a variable selection and regularization problem, with sparse
linear regression techniques being used to infer the matching
probability with respect to each gallery identity. As its main strength,
this strategy is highly robust to outlier matching scores, which
are known to be a primary error source in biometric recognition.
Our experiments were carried out on the full versions of two
well-known datasets, one of near-infrared iris images (CASIA-IrisV4-Thousand)
and one of visible-wavelength periocular images (UBIRIS.v2), and confirm
that recognition performance can be solidly boosted by the
proposed algorithm when compared to the traditional working
modes of CNNs in biometrics.
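To make the iterative "in-set" idea concrete, the following sketch simulates the pipeline end to end: random k-element gallery subsets are scored against the query, and a sparse (Lasso) regression over the subset-membership matrix recovers the most likely identity. The in_set_score() function is a noisy stand-in for the CNN responses described above; all names, sizes, and the noise model are assumptions for illustration, not the authors' implementation.

    # Sketch of 'in-set' iterative identification with sparse regression.
    # in_set_score() is a simulated placeholder for the CNN's response.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_gallery, k, n_trials = 100, 10, 400
    true_identity = 42  # simulated ground truth for the query

    def in_set_score(subset, query_id):
        """Placeholder for the "is the query's identity among this set?" response."""
        signal = 1.0 if query_id in subset else 0.0
        return signal + rng.normal(scale=0.5)  # weakly informative, noisy score

    # Design matrix: one row per random subset, one column per gallery identity.
    X = np.zeros((n_trials, n_gallery))
    y = np.zeros(n_trials)
    for t in range(n_trials):
        subset = rng.choice(n_gallery, size=k, replace=False)
        X[t, subset] = 1.0
        y[t] = in_set_score(set(subset), true_identity)

    # Sparse regression: identities with large coefficients explain the high scores.
    reg = Lasso(alpha=0.01, positive=True).fit(X, y)
    print("most likely identity:", int(np.argmax(reg.coef_)))

Because each identity appears in many different random subsets, an occasional outlier score has little influence on its regression coefficient, which is the robustness property highlighted in the abstract.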
PatchBMI-Net: Lightweight Facial Patch-based Ensemble for BMI Prediction
Due to an alarming trend related to obesity affecting 93.3 million adults in
the United States alone, body mass index (BMI) and body weight have drawn
significant interest in various health monitoring applications. Consequently,
several studies have proposed self-diagnostic facial image-based BMI prediction
methods for healthy weight monitoring. These methods have mostly used
convolutional neural network (CNN) based regression baselines, such as VGG19,
ResNet50, and EfficientNet-B0, for BMI prediction from facial images. However,
the high computational requirements of these heavyweight CNN models limit
their deployment on resource-constrained mobile devices, thus deterring weight
monitoring using smartphones. This paper aims to develop a lightweight facial
patch-based ensemble (PatchBMI-Net) for BMI prediction to facilitate the
deployment and weight monitoring using smartphones. Extensive experiments on
BMI-annotated facial image datasets suggest that our proposed PatchBMI-Net
model can obtain Mean Absolute Error (MAE) in the range [3.58, 6.51] with a
size of about 3.3 million parameters. On cross-comparison with heavyweight
models, such as ResNet-50 and Xception, trained for BMI prediction from facial
images, our proposed PatchBMI-Net obtains an equivalent MAE along with a model
size reduction of about 5.4x and an average inference-time reduction of about
3x when deployed on an Apple-14 smartphone, thus demonstrating performance
efficiency as well as low latency for on-device deployment and weight
monitoring using smartphone applications.
Comment: 7 pages, 3 figures
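A minimal sketch of the patch-based ensemble idea follows, assuming PyTorch: one lightweight regressor per facial patch, with the per-patch BMI predictions fused by averaging. The patch layout, backbone, and fusion rule are illustrative assumptions rather than the exact PatchBMI-Net design.

    # Sketch of a facial patch-based ensemble for BMI regression (PyTorch assumed).
    import torch
    import torch.nn as nn

    class PatchRegressor(nn.Module):
        """Lightweight CNN that regresses a BMI value from one facial patch."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
            )

        def forward(self, x):
            return self.net(x)

    class PatchEnsemble(nn.Module):
        """Runs one regressor per patch and averages the BMI predictions."""
        def __init__(self, patch_boxes):
            super().__init__()
            self.patch_boxes = patch_boxes  # (top, left, height, width) per patch
            self.regressors = nn.ModuleList(PatchRegressor() for _ in patch_boxes)

        def forward(self, face):
            preds = []
            for (t, l, h, w), reg in zip(self.patch_boxes, self.regressors):
                preds.append(reg(face[:, :, t:t + h, l:l + w]))
            return torch.stack(preds, dim=0).mean(dim=0)  # fused BMI estimate

    # Example: hypothetical eye/nose/mouth regions of a 3x224x224 aligned face crop.
    boxes = [(40, 30, 60, 164), (90, 70, 70, 84), (150, 60, 60, 104)]
    model = PatchEnsemble(boxes).eval()
    print(model(torch.rand(1, 3, 224, 224)))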
Cross-Spectral Periocular Recognition with Conditional Adversarial Networks
This work addresses the challenge of comparing periocular images captured in
different spectra, which is known to produce significant drops in performance
in comparison to operating in the same spectrum. We propose the use of
Conditional Generative Adversarial Networks, trained to convert periocular
images between visible and near-infrared spectra, so that biometric
verification is carried out in the same spectrum. The proposed setup allows the
use of existing feature methods typically optimized to operate in a single
spectrum. Recognition experiments are done using a number of off-the-shelf
periocular comparators based on both hand-crafted features and CNN descriptors.
Using the Hong Kong Polytechnic University Cross-Spectral Iris Images Database
(PolyU) as benchmark dataset, our experiments show that cross-spectral
performance is substantially improved if both images are converted to the same
spectrum, in comparison to matching features extracted from images in different
spectra. In addition to this, we fine-tune a CNN based on the ResNet50
architecture, obtaining a cross-spectral periocular performance of EER=1%, and
GAR>99% @ FAR=1%, which is comparable to the state-of-the-art with the PolyU
database.
Comment: Accepted for publication at the 2020 International Joint Conference on
Biometrics (IJCB 2020).
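The sketch below outlines the verification pipeline described above, assuming PyTorch: a conditional-GAN generator converts the near-infrared probe to the visible spectrum, and both images are then compared with a single-spectrum feature extractor. The generator and extractor here are untrained placeholders used only to show the data flow, not the trained models or off-the-shelf comparators from the paper.

    # Sketch of cross-spectral verification via spectrum conversion (PyTorch assumed).
    # Both networks are untrained placeholders standing in for the trained models.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Placeholder NIR -> VIS generator (stands in for the trained cGAN generator).
    nir_to_vis = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
    )

    # Placeholder single-spectrum periocular feature extractor (CNN descriptor).
    extractor = nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
    )

    def cross_spectral_score(vis_img, nir_img):
        """Convert the NIR probe to VIS, then match both images in the VIS domain."""
        with torch.no_grad():
            nir_as_vis = nir_to_vis(nir_img)
            f_a = F.normalize(extractor(vis_img), dim=1)
            f_b = F.normalize(extractor(nir_as_vis), dim=1)
        return (f_a * f_b).sum(dim=1)  # cosine similarity

    vis = torch.rand(1, 3, 128, 128)   # visible-wavelength enrolment image
    nir = torch.rand(1, 1, 128, 128)   # near-infrared probe image
    print(cross_spectral_score(vis, nir))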
- …