Face De-Spoofing: Anti-Spoofing via Noise Modeling
Many prior face anti-spoofing works develop discriminative models for
recognizing the subtle differences between live and spoof faces. Those
approaches often regard the image as an indivisible unit, and process it
holistically, without explicit modeling of the spoofing process. In this work,
motivated by the noise modeling and denoising algorithms, we identify a new
problem of face de-spoofing, for the purpose of anti-spoofing: inversely
decomposing a spoof face into a spoof noise and a live face, and then utilizing
the spoof noise for classification. A CNN architecture with proper constraints
and supervisions is proposed to overcome the problem of having no ground truth
for the decomposition. We evaluate the proposed method on multiple face
anti-spoofing databases. The results show promising improvements due to our
spoof noise modeling. Moreover, the estimated spoof noise provides a
visualization which helps to understand the added spoof noise by each spoof
medium.
Comment: To appear in ECCV 2018. The first two authors contributed equally to
this work.
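The decomposition above can be sketched concretely. This is a minimal illustration only: the live-face estimate is a made-up placeholder (the paper trains a CNN with dedicated constraints to produce it), and the pixel values and threshold are assumed for the toy example.

```python
# Sketch of the de-spoofing idea: a spoof image is modeled as a live face plus
# additive spoof noise, I_spoof = I_live + N, and the estimated noise is then
# used for classification. Images are flattened to plain lists here.

def estimate_noise(image, estimated_live):
    """Residual between the input and the estimated live face."""
    return [p - q for p, q in zip(image, estimated_live)]

def classify_by_noise(noise, threshold=0.1):
    """Declare 'spoof' if the mean absolute noise exceeds a threshold."""
    energy = sum(abs(n) for n in noise) / len(noise)
    return "spoof" if energy > threshold else "live"

# Toy example: a live image whose reconstruction matches it closely,
# and a spoof image with a visible residual.
live_img = [0.5, 0.6, 0.4]
spoof_img = [0.9, 0.2, 0.8]
recon = [0.5, 0.58, 0.41]          # hypothetical live-face estimate

print(classify_by_noise(estimate_noise(live_img, recon)))   # live
print(classify_by_noise(estimate_noise(spoof_img, recon)))  # spoof
```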
Learning Generalizable and Identity-Discriminative Representations for Face Anti-Spoofing
Face anti-spoofing (a.k.a. presentation attack detection) has drawn growing
attention due to the high-security demand in face authentication systems.
Existing CNN-based approaches usually recognize spoofing faces well when
training and testing spoofing samples display similar patterns, but their
performance would drop drastically on testing spoofing faces of unseen scenes.
In this paper, we try to boost the generalizability and applicability of these
methods by designing a CNN model with two major novelties. First, we propose a
simple yet effective Total Pairwise Confusion (TPC) loss for CNN training,
which enhances the generalizability of the learned Presentation Attack (PA)
representations. Secondly, we incorporate a Fast Domain Adaptation (FDA)
component into the CNN model to alleviate negative effects brought by domain
changes. Besides, our proposed model, which is named Generalizable Face
Authentication CNN (GFA-CNN), works in a multi-task manner, performing face
anti-spoofing and face recognition simultaneously. Experimental results show
that GFA-CNN outperforms previous face anti-spoofing approaches and also well
preserves the identity information of input face images.
Comment: 8 pages; 8 figures; 4 tables.
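One plausible reading of the TPC idea is a penalty on pairwise distances between presentation-attack feature vectors, discouraging the network from encoding sample-specific spoof patterns. The exact formulation in the paper may differ, so treat this as an interpretive sketch rather than the authors' loss:

```python
# Pairwise-confusion style loss: average squared distance over all distinct
# pairs of feature vectors in a batch. Minimizing it pulls PA representations
# toward each other, suppressing sample-specific cues.

def pairwise_confusion_loss(features):
    """Mean squared distance over all distinct feature pairs."""
    n = len(features)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += sum((a - b) ** 2 for a, b in zip(features[i], features[j]))
            pairs += 1
    return total / pairs

feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(pairwise_confusion_loss(feats))  # 4/3: mean over the three pairs
```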
Deep Tree Learning for Zero-shot Face Anti-Spoofing
Face anti-spoofing is designed to keep face recognition systems from
recognizing fake faces as genuine users. While advanced face anti-spoofing
methods are developed, new types of spoof attacks are also being created and
becoming a threat to all existing systems. We define the detection of unknown
spoof attacks as Zero-Shot Face Anti-spoofing (ZSFA). Previous works of ZSFA
only study 1-2 types of spoof attacks, such as print/replay attacks, which
limits the insight of this problem. In this work, we expand the ZSFA problem to
a wide range of 13 types of spoof attacks, including print attacks, replay
attacks, 3D mask attacks, and so on. A novel Deep Tree Network (DTN) is proposed
to tackle ZSFA. The tree is learned to partition the spoof samples into
semantic sub-groups in an unsupervised fashion. When a data sample arrives,
whether a known or unknown attack, DTN routes it to the most similar spoof
cluster and makes the binary decision. In addition, to enable the study of ZSFA, we
introduce the first face anti-spoofing database that contains diverse types of
spoof attacks. Experiments show that our proposed method achieves the state of
the art on multiple testing protocols of ZSFA.
Comment: To appear at CVPR 2019 as an oral presentation.
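The routing step can be sketched as descending a binary tree by the sign of a node response. The tree shape and node projections below are invented for illustration; in DTN they are learned, and each leaf additionally makes the live/spoof decision.

```python
# Illustrative tree routing in the spirit of DTN: each internal node holds a
# projection vector, and a sample goes left or right depending on the sign of
# its response, ending in a leaf cluster of semantically similar spoofs.

def route(sample, node):
    """Recursively route a feature vector to a leaf cluster id."""
    if "leaf" in node:
        return node["leaf"]
    response = sum(w * x for w, x in zip(node["proj"], sample))
    branch = "left" if response < 0 else "right"
    return route(sample, node[branch])

tree = {
    "proj": [1.0, -1.0],
    "left": {"leaf": "cluster_A"},
    "right": {"proj": [0.0, 1.0],
              "left": {"leaf": "cluster_B"},
              "right": {"leaf": "cluster_C"}},
}

print(route([0.2, 0.9], tree))  # negative root response -> cluster_A
print(route([0.9, 0.2], tree))  # routed right twice -> cluster_C
```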
Deep convolutional neural networks for face and iris presentation attack detection: Survey and case study
Biometric presentation attack detection is gaining increasing attention.
Users of mobile devices find it more convenient to unlock their smart
applications with finger, face or iris recognition instead of passwords. In
this paper, we survey the approaches presented in the recent literature to
detect face and iris presentation attacks. Specifically, we investigate the
effectiveness of fine-tuning very deep convolutional neural networks to the
task of face and iris anti-spoofing. We compare two different fine-tuning
approaches on six publicly available benchmark datasets. Results show the
effectiveness of these deep models in learning discriminative features that can
tell apart real from fake biometric images with very low error rate.
Cross-dataset evaluation on face PAD showed better generalization than state of
the art. We also performed cross-dataset testing on iris PAD datasets in terms
of equal error rate which was not reported in literature before. Additionally,
we propose the use of a single deep network trained to detect both face and
iris attacks. We have not noticed accuracy degradation compared to networks
trained for only one biometric separately. Finally, we analyzed the learned
features by the network, in correlation with the image frequency components, to
justify its prediction decision.
Comment: A preprint of a paper accepted by the IET Biometrics journal, subject
to Institution of Engineering and Technology Copyright.
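The iris cross-dataset results are reported as equal error rate (EER): the operating point where the false accept rate (FAR) and false reject rate (FRR) coincide. A simple threshold sweep over observed scores approximates it:

```python
# Approximate EER: sweep candidate thresholds, track the point where FAR and
# FRR are closest, and report their average there.

def eer(genuine, impostor):
    """Approximate equal error rate from genuine and impostor score lists."""
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(set(genuine + impostor)):
        frr = sum(s < t for s in genuine) / len(genuine)
        far = sum(s >= t for s in impostor) / len(impostor)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

print(eer([0.9, 0.8], [0.2, 0.1]))  # perfectly separable scores -> 0.0
print(eer([0.9, 0.4], [0.6, 0.1]))  # one score from each side crosses -> 0.5
```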
Learning Generalized Spoof Cues for Face Anti-spoofing
Many existing face anti-spoofing (FAS) methods focus on modeling the decision
boundaries for some predefined spoof types. However, the diversity of the spoof
samples including the unknown ones hinders the effective decision boundary
modeling and leads to weak generalization capability. In this paper, we
reformulate FAS in an anomaly detection perspective and propose a
residual-learning framework to learn the discriminative live-spoof differences
which are defined as the spoof cues. The proposed framework consists of a spoof
cue generator and an auxiliary classifier. The generator minimizes the spoof
cues of live samples while imposing no explicit constraint on those of spoof
samples to generalize well to unseen attacks. In this way, anomaly detection is
implicitly used to guide spoof cue generation, leading to discriminative
feature learning. The auxiliary classifier serves as a spoof cue amplifier and
makes the spoof cues more discriminative. We conduct extensive experiments and
the experimental results show the proposed method consistently outperforms the
state-of-the-art methods. The code will be publicly available at
https://github.com/vis-var/lgsc-for-fas.
Comment: 16 pages.
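The asymmetric constraint on the generator can be sketched as a loss that penalizes the magnitude of cue maps for live samples only. Shapes and label conventions below are illustrative assumptions, not the authors' exact implementation:

```python
# Asymmetric regression sketch: the generator's spoof-cue output is pushed
# toward zero for live samples (label 1), while spoof samples (label 0)
# contribute nothing, leaving their cues unconstrained.

def spoof_cue_loss(cues, labels):
    """Mean absolute cue magnitude, averaged over live samples only."""
    live_cues = [c for c, y in zip(cues, labels) if y == 1]
    if not live_cues:
        return 0.0
    return sum(sum(abs(v) for v in c) / len(c) for c in live_cues) / len(live_cues)

cues = [[0.0, 0.1], [0.9, 0.8]]   # one live cue map, one spoof cue map
labels = [1, 0]
print(spoof_cue_loss(cues, labels))  # only the live map contributes
```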
Learn Convolutional Neural Network for Face Anti-Spoofing
Though some progress has been made, hand-crafted texture features,
e.g., LBP [23], LBP-TOP [11] are still unable to capture the most
discriminative cues between genuine and fake faces. In this paper, instead of
designing feature by ourselves, we rely on the deep convolutional neural
network (CNN) to learn features of high discriminative ability in a supervised
manner. Combined with some data pre-processing, the face anti-spoofing
performance improves drastically. In the experiments, over 70% relative
decrease of Half Total Error Rate (HTER) is achieved on two challenging
datasets, CASIA [36] and REPLAY-ATTACK [7] compared with the state-of-the-art.
Meanwhile, the experimental results from inter-tests between the two datasets
indicate that the CNN can obtain features with better generalization ability.
Moreover, the nets trained using combined data from both datasets show less
bias between them.
Comment: 8 pages, 9 figures, 7 tables.
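The HTER figure quoted above is the mean of the false acceptance rate (FAR) and the false rejection rate (FRR) at a fixed decision threshold:

```python
# Half Total Error Rate at a given threshold: scores below the threshold are
# rejected, scores at or above it are accepted.

def hter(genuine, impostor, threshold):
    """HTER = (FAR + FRR) / 2 at the chosen threshold."""
    frr = sum(s < threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return (far + frr) / 2

# One genuine score rejected and one impostor score accepted out of three each.
print(hter([0.9, 0.8, 0.3], [0.7, 0.2, 0.1], threshold=0.5))
```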
Discriminative Representation Combinations for Accurate Face Spoofing Detection
Three discriminative representations for face presentation attack detection
are introduced in this paper. Firstly we design a descriptor called spatial
pyramid coding micro-texture (SPMT) feature to characterize local appearance
information. Secondly we utilize the SSD, which is a deep learning framework
for detection, to excavate context cues and conduct end-to-end face
presentation attack detection. Finally we design a descriptor called template
face matched binocular depth (TFBD) feature to characterize stereo structures
of real and fake faces. For accurate presentation attack detection, we also
design two kinds of representation combinations. Firstly, we propose a
decision-level cascade strategy to combine SPMT with SSD. Secondly, we use a
simple score fusion strategy to combine face structure cues (TFBD) with local
micro-texture features (SPMT). To demonstrate the effectiveness of our design,
we evaluate the representation combination of SPMT and SSD on three public
datasets, which outperforms all other state-of-the-art methods. In addition, we
evaluate the representation combination of SPMT and TFBD on our dataset and
excellent performance is also achieved.
Comment: To be published in Pattern Recognition.
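The two combinations can be sketched directly: a decision-level cascade that accepts on a confident SPMT score before consulting SSD, and a score-level fusion mixing the SPMT and TFBD scores. All thresholds and the fusion weight are illustrative assumptions, not tuned values from the paper.

```python
# Decision-level cascade and score-level fusion, the two combination schemes
# described above, in minimal form.

def cascade_decision(spmt_score, ssd_score, t_early=0.9, t_final=0.5):
    """Cascade: early accept on a confident SPMT score, else defer to SSD."""
    if spmt_score >= t_early:
        return "live"
    return "live" if ssd_score >= t_final else "spoof"

def fuse_scores(spmt_score, tfbd_score, w=0.6):
    """Fusion: convex combination of texture and depth liveness scores."""
    return w * spmt_score + (1 - w) * tfbd_score

print(cascade_decision(0.95, 0.1))  # confident texture score -> live
print(cascade_decision(0.4, 0.2))   # both stages weak -> spoof
print(fuse_scores(0.8, 0.3))        # weighted mix of the two scores
```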
Federated Face Presentation Attack Detection
Face presentation attack detection (fPAD) plays a critical role in the modern
face recognition pipeline. A face presentation attack detection model with good
generalization can be obtained when it is trained with face images from
different input distributions and different types of spoof attacks. In reality,
training data (both real face images and spoof images) are not directly shared
between data owners due to legal and privacy issues. In this paper, with the
motivation of circumventing this challenge, we propose the Federated Face
Presentation Attack Detection (FedPAD) framework. FedPAD simultaneously takes
advantage of rich fPAD information available at different data owners while
preserving data privacy. In the proposed framework, each data owner (referred
to as a "data center") locally trains its own fPAD model. A server learns
a global fPAD model by iteratively aggregating model updates from all data
centers without accessing private data in each of them. Once the learned global
model converges, it is used for fPAD inference. We introduce the experimental
setting to evaluate the proposed FedPAD framework and carry out extensive
experiments to provide various insights about federated learning for fPAD.
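The server-side aggregation can be sketched in the style of federated averaging: each data center trains locally and sends only model weights, which the server averages without seeing any private images. Weight vectors below are toy values.

```python
# FedAvg-style aggregation: element-wise mean of per-client weight vectors.
# No raw face images ever leave the data centers; only parameters are shared.

def federated_average(client_weights):
    """Element-wise mean of the clients' model weight vectors."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Three hypothetical data centers report their local fPAD model weights.
updates = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]
print(federated_average(updates))  # [1.0, 2.0]
```

In the full loop, this average is broadcast back to the data centers as the starting point for the next round of local training, repeating until the global model converges.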
Deep Anomaly Detection for Generalized Face Anti-Spoofing
Face recognition has achieved unprecedented results, surpassing human
capabilities in certain scenarios. However, these automatic solutions are not
ready for production because they can be easily fooled by simple identity
impersonation attacks. Although much effort has been devoted to developing
face anti-spoofing models, their generalization capacity still remains a
challenge in real scenarios. In this paper, we introduce a novel approach that
reformulates the Generalized Presentation Attack Detection (GPAD) problem from
an anomaly detection perspective. Technically, a deep metric learning model is
proposed, where a triplet focal loss is used as a regularization for a novel
loss coined "metric-softmax", which is in charge of guiding the learning
process towards more discriminative feature representations in an embedding
space. Finally, we demonstrate the benefits of our deep anomaly detection
architecture, by introducing a few-shot a posteriori probability estimation
that does not need any classifier to be trained on the learned features. We
conduct extensive experiments using the GRAD-GPAD framework that provides the
largest aggregated dataset for face GPAD. Results confirm that our approach is
able to outperform all the state-of-the-art methods by a considerable margin.
Comment: To appear at CVPR19 (workshop).
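One classifier-free posterior consistent with that description is a softmax over negative distances to per-class prototypes built from a few support embeddings. The prototypes and the squared-distance choice here are assumptions for illustration, not the paper's exact "metric-softmax" formulation.

```python
import math

# Few-shot posterior in an embedding space: distances from a query embedding
# to each class prototype are converted to probabilities via a softmax, so no
# separate classifier needs to be trained on the learned features.

def posterior(query, prototypes):
    """Softmax over negative squared distances to each class prototype."""
    dists = {c: sum((q - p) ** 2 for q, p in zip(query, proto))
             for c, proto in prototypes.items()}
    exps = {c: math.exp(-d) for c, d in dists.items()}
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

protos = {"live": [0.0, 0.0], "spoof": [1.0, 1.0]}
probs = posterior([0.1, 0.0], protos)
print(max(probs, key=probs.get))  # query sits closer to the live prototype
```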
Face Presentation Attack Detection in Learned Color-liked Space
Face presentation attack detection (PAD) has become a thorny problem for
biometric systems and numerous countermeasures have been proposed to address
it. However, the majority of them directly extract feature descriptors and
distinguish fake faces from the real ones in existing color spaces (e.g. RGB,
HSV and YCbCr). Unfortunately, it is unclear which color space is the
best or how different spaces should be combined. To make matters worse, the
real and fake faces are overlapped in existing color spaces. So, in this paper,
a learned distinguishable color-liked space is generated to deal with the
problem of face PAD. More specifically, we present an end-to-end deep learning
network that can map existing color spaces to a new learned color-liked space.
Inspired by the generator of generative adversarial network (GAN), the proposed
network consists of a space generator and a feature extractor. When training
the color-liked space, a new triplet combination mechanism of points-to-center
is explored to maximize interclass distance and minimize intraclass distance,
while keeping a safe margin between the real and presented fake faces.
Extensive experiments on two standard face PAD databases, i.e., Replay-Attack
and OULU-NPU, indicate that our proposed color-liked space analysis based
countermeasure significantly outperforms the state-of-the-art methods and show
excellent generalization capability.
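The points-to-center objective with a safety margin can be sketched as a hinge loss over distances to the two class centers. Center locations and the margin value are illustrative assumptions, not values from the paper.

```python
# Points-to-center triplet sketch: each sample is pulled toward its own class
# center and pushed from the other center, with a hinge enforcing margin m.

def points_to_center_loss(x, own_center, other_center, m=1.0):
    """max(0, d(x, own) - d(x, other) + m) with squared distances."""
    d_own = sum((a - b) ** 2 for a, b in zip(x, own_center))
    d_other = sum((a - b) ** 2 for a, b in zip(x, other_center))
    return max(0.0, d_own - d_other + m)

# A well-separated sample incurs no loss; a sample midway between the real
# and fake centers still pays the margin penalty.
print(points_to_center_loss([0.1, 0.1], [0.0, 0.0], [2.0, 2.0]))  # 0.0
print(points_to_center_loss([1.0, 1.0], [0.0, 0.0], [2.0, 2.0]))  # 1.0
```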