Biometric presentation attack detection: beyond the visible spectrum
The increased need for unattended authentication in
multiple scenarios has motivated a wide deployment of biometric
systems in the last few years. This has in turn led to the
disclosure of security concerns specifically related to biometric
systems. Among them, presentation attacks (PAs, i.e., attempts
to log into the system with a fake biometric characteristic or
presentation attack instrument) pose a severe threat to the
security of the system: any person could eventually fabricate
or order a gummy finger or face mask to impersonate someone
else. In this context, we present a novel fingerprint presentation
attack detection (PAD) scheme based on i) a new capture device
able to acquire images within the short wave infrared (SWIR)
spectrum, and ii) an in-depth analysis of several state-of-the-art
techniques based on both handcrafted and deep learning
features. The approach is evaluated on a database comprising
over 4700 samples, stemming from 562 different subjects and
35 different presentation attack instrument (PAI) species. The
results show the soundness of the proposed approach with a
detection equal error rate (D-EER) as low as 1.35% even in a
realistic scenario where five different PAI species are considered
only for testing purposes (i.e., unknown attacks).
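The reported metric, the detection equal error rate (D-EER), is the operating point at which the attack classification error rate (APCER) equals the bona fide classification error rate (BPCER). A minimal sketch of how it can be computed from detector scores follows; the toy score distributions are illustrative only (not the paper's data), and the convention that higher scores indicate bona fide presentations is an assumption:

```python
import numpy as np

def detection_eer(attack_scores, bonafide_scores):
    """Find the D-EER: the threshold where the proportion of attacks
    accepted as bona fide (APCER) matches the proportion of bona fide
    presentations rejected (BPCER). Assumes higher score = more bona fide."""
    thresholds = np.sort(np.concatenate([attack_scores, bonafide_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        apcer = np.mean(attack_scores >= t)   # attacks classified as bona fide
        bpcer = np.mean(bonafide_scores < t)  # bona fides classified as attacks
        gap = abs(apcer - bpcer)
        if gap < best_gap:
            best_gap, eer = gap, (apcer + bpcer) / 2
    return eer

# Toy, well-separated score distributions (hypothetical):
rng = np.random.default_rng(0)
attacks = rng.normal(0.3, 0.1, 1000)
bonafides = rng.normal(0.7, 0.1, 1000)
print(f"D-EER: {detection_eer(attacks, bonafides):.2%}")
```

With these toy distributions the two error rates cross near a threshold of 0.5, giving a D-EER of a few percent; on real PAD scores the same procedure applies unchanged.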
A Differential Approach for Gaze Estimation
Non-invasive gaze estimation methods usually regress gaze directions directly
from a single face or eye image. However, due to important variabilities in eye
shapes and inner eye structures amongst individuals, universal models achieve
limited accuracy, and their outputs usually exhibit high variance as well as
subject-dependent biases. Therefore, increasing accuracy is usually
done through calibration, allowing gaze predictions for a subject to be mapped
to his/her actual gaze. In this paper, we introduce a novel image differential
method for gaze estimation. We propose to directly train a differential
convolutional neural network to predict the gaze differences between two eye
input images of the same subject. Then, given a set of subject specific
calibration images, we can use the inferred differences to predict the gaze
direction of a novel eye sample. The assumption is that by allowing the
comparison between two eye images, nuisance factors (alignment, eyelid
closure, illumination perturbations) that usually plague single-image
prediction methods can be greatly reduced, allowing better predictions altogether.
Experiments on 3 public datasets validate our approach, which consistently
outperforms state-of-the-art methods even when using only one calibration
sample, or when the latter methods are followed by subject-specific gaze
adaptation.
Comment: Extension of our paper "A differential approach for gaze estimation
with calibration" (BMVC 2018). Submitted to PAMI on Aug. 7th, 2018; accepted by
PAMI (short) in Dec. 2019, in IEEE Transactions on Pattern Analysis and Machine
Intelligence.
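The inference step the abstract describes — adding the network's predicted gaze difference to each calibration gaze and aggregating — can be sketched as below. The differential network is replaced by a hypothetical stand-in `diff_fn`; in this toy example the "images" are already gaze vectors, so the true difference function is plain subtraction:

```python
import numpy as np

def predict_gaze(diff_fn, novel_img, calib_imgs, calib_gazes):
    """Differential inference sketch: estimate the gaze of a novel eye
    image as the average, over the calibration set, of each calibration
    gaze plus the predicted difference toward the novel image."""
    estimates = [g + diff_fn(c, novel_img)
                 for c, g in zip(calib_imgs, calib_gazes)]
    return np.mean(estimates, axis=0)

# Toy stand-in for the trained differential CNN: since "images" here
# are gaze vectors themselves, the difference is exact subtraction.
toy_diff = lambda a, b: b - a
calib_gazes = [np.array([0.1, -0.2]), np.array([0.3, 0.0])]
calib_imgs = calib_gazes            # image == gaze vector in this toy setup
novel = np.array([0.2, 0.1])
print(predict_gaze(toy_diff, novel, calib_imgs, calib_gazes))
```

Averaging over several calibration samples is one plausible aggregation choice; it reduces the variance contributed by any single calibration image, which matches the abstract's point that even one calibration sample already suffices.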