On the Learning of Deep Local Features for Robust Face Spoofing Detection
Biometrics emerged as a robust solution for security systems. However, given
the dissemination of biometric applications, criminals are developing
techniques to circumvent them by simulating physical or behavioral traits of
legal users (spoofing attacks). Despite face being a promising characteristic
due to its universality, acceptability and presence of cameras almost
everywhere, face recognition systems are extremely vulnerable to such frauds
since they can be easily fooled with common printed facial photographs.
State-of-the-art approaches, based on Convolutional Neural Networks (CNNs),
present good results in face spoofing detection. However, these methods do not
consider the importance of learning deep local features from each facial
region, even though it is known from face recognition that each facial region
presents different visual aspects, which can also be exploited for face
spoofing detection. In this work we propose a novel CNN architecture trained in
two steps for this task. Initially, each part of the neural network learns
features from a given facial region. Afterwards, the whole model is fine-tuned
on the whole facial images. Results show that such pre-training step allows the
CNN to learn different local spoofing cues, improving the performance and the
convergence speed of the final model, outperforming the state-of-the-art
approaches.
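The two-step scheme described above (pre-train each part on its own facial region, then fine-tune jointly on whole faces) can be illustrated with a toy numpy sketch using logistic-regression "branches" in place of CNN parts; the data, region split, and learning rates are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def train_logreg(X, y, lr=0.5, steps=300, w=None, b=0.0):
    """Plain gradient-descent logistic regression."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(0)
n = 200
# Toy "faces": 4 features, where features 0-1 play the role of an upper
# facial region and features 2-3 a lower region.
y = rng.integers(0, 2, n).astype(float)
X = rng.normal(size=(n, 4)) + y[:, None] * np.array([2.0, 1.0, 1.5, 0.8])

# Step 1: pre-train one branch per facial region on its own features.
w_up, _ = train_logreg(X[:, :2], y)
w_lo, _ = train_logreg(X[:, 2:], y)

# Step 2: fine-tune the whole model on full faces, initialized from the branches.
w0 = np.concatenate([w_up, w_lo])
w, b = train_logreg(X, y, w=w0)
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = np.mean((p > 0.5) == (y == 1))
print(acc > 0.8)  # → True on this separable toy data
```

The pre-training step gives the joint model an initialization in which each branch already encodes region-specific cues, which is the intuition behind the reported faster convergence.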
Deep convolutional neural networks for face and iris presentation attack detection: Survey and case study
Biometric presentation attack detection is gaining increasing attention.
Users of mobile devices find it more convenient to unlock their smart
applications with finger, face or iris recognition instead of passwords. In
this paper, we survey the approaches presented in the recent literature to
detect face and iris presentation attacks. Specifically, we investigate the
effectiveness of fine-tuning very deep convolutional neural networks to the
task of face and iris anti-spoofing. We compare two different fine-tuning
approaches on six publicly available benchmark datasets. Results show the
effectiveness of these deep models in learning discriminative features that can
tell apart real from fake biometric images with very low error rate.
Cross-dataset evaluation on face PAD showed better generalization than state of
the art. We also performed cross-dataset testing on iris PAD datasets in terms
of equal error rate which was not reported in literature before. Additionally,
we propose the use of a single deep network trained to detect both face and
iris attacks. We have not noticed accuracy degradation compared to networks
trained for only one biometric separately. Finally, we analyzed the learned
features by the network, in correlation with the image frequency components, to
justify its prediction decision.
Comment: A preprint of a paper accepted by the IET Biometrics journal and is
subject to Institution of Engineering and Technology Copyright.
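The cross-dataset iris results above are reported in terms of equal error rate (EER), the operating point where the false accept and false reject rates coincide. A minimal numpy sketch of how EER is computed from genuine and attack scores (the function name and threshold sweep are illustrative, not from the paper):

```python
import numpy as np

def equal_error_rate(genuine_scores, attack_scores):
    """Sweep thresholds and return the rate where FAR and FRR are closest."""
    thresholds = np.sort(np.concatenate([genuine_scores, attack_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(attack_scores >= t)   # attacks wrongly accepted
        frr = np.mean(genuine_scores < t)   # genuine samples wrongly rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2.0
    return eer

genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95])
attack = np.array([0.1, 0.3, 0.2, 0.75, 0.6])
print(round(equal_error_rate(genuine, attack), 2))  # → 0.2
```

One genuine score (0.7) and one attack score (0.75) overlap, so both error rates are 1/5 = 0.2 at the crossing threshold.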
Improving Face Anti-Spoofing by 3D Virtual Synthesis
Face anti-spoofing is crucial for the security of face recognition systems.
Learning based methods especially deep learning based methods need large-scale
training samples to reduce overfitting. However, acquiring spoof data is very
expensive since the live faces should be re-printed and re-captured in many
views. In this paper, we present a method to synthesize virtual spoof data in
3D space to alleviate this problem. Specifically, we consider a printed photo
as a flat surface and mesh it into a 3D object, which is then randomly bent and
rotated in 3D space. Afterward, the transformed 3D photo is rendered through
perspective projection as a virtual sample. The synthetic virtual samples can
significantly boost the anti-spoofing performance when combined with a proposed
data balancing strategy. Our promising results open up new possibilities for
advancing face anti-spoofing using cheap and large-scale synthetic data.
Comment: Accepted to ICB 201
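The synthesis pipeline above (mesh a flat photo, bend it in 3D, rotate it, project it back to the image plane) can be sketched geometrically with numpy; the sinusoidal bend, rotation axis, and camera parameters are illustrative assumptions, not the paper's actual rendering setup.

```python
import numpy as np

def synthesize_bent_photo(h, w, bend_amp=0.1, angle_deg=15.0, focal=2.0):
    """Mesh a flat photo, bend it in depth, rotate it, and project to 2D."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    # Step 1: bend the flat surface with a sinusoidal depth displacement.
    zs = bend_amp * np.sin(np.pi * xs)
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    # Step 2: rotate the mesh about the y-axis.
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(a), 0.0, np.cos(a)]])
    pts = pts @ R.T
    # Step 3: perspective projection with the camera in front of the mesh.
    z_cam = pts[:, 2] + focal
    proj = pts[:, :2] * (focal / z_cam)[:, None]
    return proj.reshape(h, w, 2)

warped = synthesize_bent_photo(64, 64)
print(warped.shape)  # → (64, 64, 2) grid of projected sample positions
```

The returned grid gives, for every source pixel, its projected 2D position; sampling the photo at these positions yields one virtual spoof sample, and randomizing the bend and rotation yields many.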
Discriminative Representation Combinations for Accurate Face Spoofing Detection
Three discriminative representations for face presentation attack detection
are introduced in this paper. Firstly we design a descriptor called spatial
pyramid coding micro-texture (SPMT) feature to characterize local appearance
information. Secondly we utilize the SSD, which is a deep learning framework
for detection, to excavate context cues and conduct end-to-end face
presentation attack detection. Finally we design a descriptor called template
face matched binocular depth (TFBD) feature to characterize stereo structures
of real and fake faces. For accurate presentation attack detection, we also
design two kinds of representation combinations. Firstly, we propose a
decision-level cascade strategy to combine SPMT with SSD. Secondly, we use a
simple score fusion strategy to combine face structure cues (TFBD) with local
micro-texture features (SPMT). To demonstrate the effectiveness of our design,
we evaluate the representation combination of SPMT and SSD on three public
datasets, which outperforms all other state-of-the-art methods. In addition, we
evaluate the representation combination of SPMT and TFBD on our dataset and
excellent performance is also achieved.
Comment: To be published in Pattern Recognition.
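The two combination strategies above (a decision-level cascade of SPMT and SSD, and score-level fusion of SPMT and TFBD) can be sketched in a few lines; the thresholds and fusion weight here are illustrative assumptions, not values from the paper.

```python
def fuse_scores(spmt_score, tfbd_score, w=0.5):
    """Score-level fusion of micro-texture (SPMT) and depth (TFBD) scores."""
    return w * spmt_score + (1 - w) * tfbd_score

def cascade_decision(spmt_score, ssd_score, hi=0.9, lo=0.1):
    """Decision-level cascade: accept SPMT's verdict only when it is
    confident, otherwise defer to the SSD detector."""
    if spmt_score >= hi:
        return "live"
    if spmt_score <= lo:
        return "spoof"
    return "live" if ssd_score >= 0.5 else "spoof"

print(round(fuse_scores(0.8, 0.9), 2))  # → 0.85
print(cascade_decision(0.5, 0.2))       # SPMT is uncertain, SSD says spoof
```

A cascade keeps the cheap texture descriptor on the fast path and only pays for the deep detector on ambiguous inputs, while score fusion lets complementary cues (texture vs. stereo structure) vote jointly.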
Face De-Spoofing: Anti-Spoofing via Noise Modeling
Many prior face anti-spoofing works develop discriminative models for
recognizing the subtle differences between live and spoof faces. Those
approaches often regard the image as an indivisible unit, and process it
holistically, without explicit modeling of the spoofing process. In this work,
motivated by the noise modeling and denoising algorithms, we identify a new
problem of face de-spoofing, for the purpose of anti-spoofing: inversely
decomposing a spoof face into a spoof noise and a live face, and then utilizing
the spoof noise for classification. A CNN architecture with proper constraints
and supervisions is proposed to overcome the problem of having no ground truth
for the decomposition. We evaluate the proposed method on multiple face
anti-spoofing databases. The results show promising improvements due to our
spoof noise modeling. Moreover, the estimated spoof noise provides a
visualization which helps to understand the added spoof noise by each spoof
medium.
Comment: To appear in ECCV 2018. The first two authors contributed equally to
this work.
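The de-spoofing idea above, decomposing a spoof image into a live face plus a spoof noise component and classifying from the noise, can be sketched with a toy high-pass decomposition; the local-mean filter stands in for the paper's learned CNN and is purely illustrative.

```python
import numpy as np

def decompose(img, kernel=3):
    """Toy decomposition: a local-mean estimate plays the 'live' part,
    the residual plays the spoof noise."""
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    live = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            live[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return live, img - live

xs = np.linspace(0, np.pi, 16)
live_face = np.outer(np.sin(xs), np.sin(xs))          # smooth "live" image
rng = np.random.default_rng(0)
spoof = live_face + 0.3 * rng.normal(size=(16, 16))   # spoof medium adds noise
_, n_spoof = decompose(spoof)
_, n_live = decompose(live_face)
print(np.mean(n_spoof**2) > np.mean(n_live**2))  # → True
```

Even this crude residual separates the two classes by noise energy, which is the intuition behind using the estimated spoof noise itself as the classification signal.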
How far did we get in face spoofing detection?
The growing use of access control systems based on face recognition sheds
light on the need for even more accurate systems to detect face spoofing
attacks. In this paper, an extensive analysis on face spoofing detection works
published in the last decade is presented. The analyzed works are categorized
by their fundamental parts, i.e., descriptors and classifiers. This structured
survey also brings the temporal evolution of the face spoofing detection field,
as well as a comparative analysis of the works considering the most important
public data sets in the field. The methodology followed in this work is
particularly relevant to observe trends in the existing approaches, to discuss
still open issues, and to propose new perspectives for the future of face
spoofing detection.
Deep Transfer Across Domains for Face Anti-spoofing
A practical face recognition system demands not only high recognition
performance, but also the capability of detecting spoofing attacks. While
emerging approaches of face anti-spoofing have been proposed in recent years,
most of them do not generalize well to new databases. The generalization ability
of face anti-spoofing needs to be significantly improved before they can be
adopted by practical application systems. The main reason for the poor
generalization of current approaches is the variety of materials among the
spoofing devices. As the attacks are produced by putting a spoofing display
(e.g., paper, electronic screen, forged mask) in front of a camera, the variety
of spoofing materials can make the spoofing attacks quite different.
Furthermore, the background/lighting condition of a new environment can make
both the real accesses and spoofing attacks different. Another reason for the
poor generalization is that limited labeled data is available for training in
face anti-spoofing. In this paper, we focus on improving the generalization
ability across different kinds of datasets. We propose a CNN framework using
sparsely labeled data from the target domain to learn features that are
invariant across domains for face anti-spoofing. Experiments on public-domain
face spoofing databases show that the proposed method significantly improves
the cross-dataset testing performance with only a small number of labeled
samples from the target domain.
Comment: 8 pages; 3 figures; 2 tables
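A common way to make features "invariant across domains", as the paper aims to, is to penalize the statistical distance between source- and target-domain feature batches. A maximum mean discrepancy (MMD) sketch with a Gaussian kernel follows; MMD is my choice of invariance penalty for illustration, and the paper's exact loss may differ.

```python
import numpy as np

def mmd(x, y, gamma=1.0):
    """Squared maximum mean discrepancy between feature batches, RBF kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, (64, 8))
shifted = rng.normal(2.0, 1.0, (64, 8))   # simulated target-domain shift
aligned = rng.normal(0.0, 1.0, (64, 8))   # features already domain-invariant
print(mmd(source, shifted) > mmd(source, aligned))  # → True
```

Minimizing such a penalty alongside the spoof classification loss pulls the feature distributions of different datasets together, which is the mechanism behind the improved cross-dataset performance the abstract describes.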
FaceSpoof Buster: a Presentation Attack Detector Based on Intrinsic Image Properties and Deep Learning
Nowadays, the adoption of face recognition for biometric authentication
systems is usual, mainly because this is one of the most accessible biometric
modalities. Techniques that rely on bypassing these kinds of systems by using
a forged biometric sample, such as a printed paper or a recorded video of a
genuine access, are known as presentation attacks, but may also be referred to
in the literature as face spoofing. Presentation attack detection is a crucial
step for preventing this kind of unauthorized accesses into restricted areas
and/or devices. In this paper, we propose a novel approach which relies on a
combination of intrinsic image properties and deep neural networks to
detect presentation attack attempts. Our method explores depth, salience and
illumination maps, associated with a pre-trained Convolutional Neural Network
in order to produce robust and discriminative features. Each of these
properties is individually classified and, at the end of the process, the
individual classifications are combined by a meta-learning classifier, which
achieves outstanding results on the most popular datasets for PAD. Results
show that the proposed method is able to surpass state-of-the-art results in
an inter-dataset protocol, which is defined as the most challenging in the
literature.
Comment: 7 pages, 1 figure, 7 tables
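Combining per-property scores with a meta classifier is a standard stacking setup; a logistic-regression-style sketch over depth, salience, and illumination scores follows (the toy data, the learner, and its hyperparameters are illustrative assumptions, as the paper does not specify its meta learner here).

```python
import numpy as np

def train_meta(scores, labels, lr=0.5, steps=500):
    """Logistic-regression meta classifier over per-property scores."""
    w = np.zeros(scores.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(scores @ w + b)))
        grad = p - labels
        w -= lr * scores.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Columns: depth, salience, illumination classifier scores (illustrative).
scores = np.array([[0.9, 0.8, 0.7],   # live
                   [0.8, 0.9, 0.6],   # live
                   [0.2, 0.3, 0.4],   # spoof
                   [0.1, 0.2, 0.3]])  # spoof
labels = np.array([1.0, 1.0, 0.0, 0.0])
w, b = train_meta(scores, labels)
p = 1.0 / (1.0 + np.exp(-(scores @ w + b)))
print((p > 0.5).astype(int).tolist())  # → [1, 1, 0, 0]
```

The meta learner weighs how much each intrinsic property should contribute, so a property that is unreliable on a given dataset is automatically down-weighted.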
Deep Anomaly Detection for Generalized Face Anti-Spoofing
Face recognition has achieved unprecedented results, surpassing human
capabilities in certain scenarios. However, these automatic solutions are not
ready for production because they can be easily fooled by simple identity
impersonation attacks. Although much effort has been devoted to developing
face anti-spoofing models, their generalization capacity still remains a
challenge in real scenarios. In this paper, we introduce a novel approach that
reformulates the Generalized Presentation Attack Detection (GPAD) problem from
an anomaly detection perspective. Technically, a deep metric learning model is
proposed, where a triplet focal loss is used as a regularization for a novel
loss coined "metric-softmax", which is in charge of guiding the learning
process towards more discriminative feature representations in an embedding
space. Finally, we demonstrate the benefits of our deep anomaly detection
architecture, by introducing a few-shot a posteriori probability estimation
that does not need any classifier to be trained on the learned features. We
conduct extensive experiments using the GRAD-GPAD framework that provides the
largest aggregated dataset for face GPAD. Results confirm that our approach is
able to outperform all the state-of-the-art methods by a considerable margin.
Comment: To appear at CVPR19 (workshop)
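The "metric-softmax" idea above turns distances in the embedding space into match probabilities. One plausible reading, sketched below, is a softmax over negated squared distances, so the closest embedding gets the highest probability; this is my interpretation for illustration, and the paper's exact formulation may differ.

```python
import numpy as np

def metric_softmax_prob(anchor, positive, negatives):
    """Probability that the anchor matches the positive, via a softmax
    over negated squared embedding distances."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_negs = np.array([np.sum((anchor - n) ** 2) for n in negatives])
    logits = -np.concatenate([[d_pos], d_negs])  # smaller distance, larger logit
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e[0] / e.sum()

anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negatives = [np.array([-1.0, 0.0]), np.array([0.0, 1.0])]
p = metric_softmax_prob(anchor, positive, negatives)
print(p > 0.8)  # → True: the positive is far closer than both negatives
```

Because the probability depends only on distances, a classifier-free a posteriori estimate of this kind can be computed directly from a few labeled embeddings, matching the few-shot evaluation described in the abstract.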
Learning Generalized Spoof Cues for Face Anti-spoofing
Many existing face anti-spoofing (FAS) methods focus on modeling the decision
boundaries for some predefined spoof types. However, the diversity of the spoof
samples including the unknown ones hinders the effective decision boundary
modeling and leads to weak generalization capability. In this paper, we
reformulate FAS from an anomaly detection perspective and propose a
residual-learning framework to learn the discriminative live-spoof differences
which are defined as the spoof cues. The proposed framework consists of a spoof
cue generator and an auxiliary classifier. The generator minimizes the spoof
cues of live samples while imposing no explicit constraint on those of spoof
samples to generalize well to unseen attacks. In this way, anomaly detection is
implicitly used to guide spoof cue generation, leading to discriminative
feature learning. The auxiliary classifier serves as a spoof cue amplifier and
makes the spoof cues more discriminative. We conduct extensive experiments and
the experimental results show the proposed method consistently outperforms the
state-of-the-art methods. The code will be publicly available at
https://github.com/vis-var/lgsc-for-fas.
Comment: 16 pages
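The asymmetric objective described above, shrink the generated spoof cue for live samples while leaving spoof samples unconstrained, can be sketched as a simple masked regression loss (a toy numpy formulation of the described idea, not the paper's full residual-learning framework).

```python
import numpy as np

def spoof_cue_loss(cues, labels):
    """L1 loss applied only to live samples' generated cues.
    labels: 1 = live, 0 = spoof. Spoof cues are left unconstrained."""
    live_mask = labels == 1
    if not live_mask.any():
        return 0.0
    return float(np.abs(cues[live_mask]).mean())

cues = np.array([[0.02, -0.01],   # live: cue is pushed toward zero
                 [0.50, 0.80]])   # spoof: cue carries the live-spoof residue
labels = np.array([1, 0])
print(round(spoof_cue_loss(cues, labels), 3))  # → 0.015
```

Because only live cues are penalized, anything the generator leaves in a spoof sample's cue is, by construction, an anomaly relative to live data, which is how the framework generalizes to unseen attack types.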