How far did we get in face spoofing detection?
The growing use of access control systems based on face recognition has shed
light on the need for ever more accurate systems to detect face spoofing
attacks. In this paper, an extensive analysis of face spoofing detection works
published in the last decade is presented. The analyzed works are categorized
by their fundamental parts, i.e., descriptors and classifiers. This structured
survey also traces the temporal evolution of the face spoofing detection field
and provides a comparative analysis of the works on the most important
public datasets in the field. The methodology followed in this work is
particularly relevant for observing trends in the existing approaches, discussing
still-open issues, and proposing new perspectives for the future of face
spoofing detection.
IriTrack: Liveness Detection Using Irises Tracking for Preventing Face Spoofing Attacks
Face liveness detection has become a widely used technique with a growing
importance in various authentication scenarios to withstand spoofing attacks.
Existing methods that perform liveness detection generally focus on designing
intelligent classifiers or customized hardware to differentiate between the
image or video samples of a real legitimate user and the imitated ones.
Although effective, they can be resource-consuming and detection results may be
sensitive to environmental changes. In this paper, we take iris movement as a
significant liveness sign and propose a simple and efficient liveness detection
system named IriTrack. Users are required to move their eyes along with a
randomly generated poly-line, and the trajectories of their irises are then used
as evidence for liveness detection. IriTrack checks liveness using data
collected during user-device interactions. We implemented a prototype and
conducted extensive experiments to evaluate the performance of the proposed
system. The results show that IriTrack can fend off spoofing attacks with a
moderate and adjustable time overhead.
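The abstract does not spell out how the recorded trajectory is matched against the on-screen challenge; a minimal sketch of one plausible scoring rule, assuming normalized screen coordinates and a nearest-point deviation score (`random_polyline`, `mean_deviation`, and the threshold are illustrative, not from the paper):

```python
import math
import random

def random_polyline(n_points=5, seed=None):
    """Generate a random poly-line of normalized screen coordinates
    (the on-screen challenge the user must follow with their eyes)."""
    rng = random.Random(seed)
    return [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(n_points)]

def mean_deviation(challenge, trajectory):
    """Mean Euclidean distance from each challenge vertex to the nearest
    recorded iris position; a low deviation suggests the user actually
    followed the poly-line."""
    total = 0.0
    for cx, cy in challenge:
        total += min(math.hypot(cx - tx, cy - ty) for tx, ty in trajectory)
    return total / len(challenge)

def is_live(challenge, trajectory, threshold=0.05):
    """Accept as live when the iris trajectory tracks the challenge closely."""
    return mean_deviation(challenge, trajectory) < threshold
```

A static photograph or replayed video cannot anticipate the randomly generated poly-line, so its iris positions (if any) produce a large deviation and fail the check.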
Discriminative Representation Combinations for Accurate Face Spoofing Detection
Three discriminative representations for face presentation attack detection
are introduced in this paper. First, we design a descriptor called the spatial
pyramid coding micro-texture (SPMT) feature to characterize local appearance
information. Second, we utilize SSD, a deep learning framework for object
detection, to excavate context cues and perform end-to-end face
presentation attack detection. Finally, we design a descriptor called the template
face matched binocular depth (TFBD) feature to characterize the stereo structure
of real and fake faces. For accurate presentation attack detection, we also
design two kinds of representation combinations. First, we propose a
decision-level cascade strategy to combine SPMT with SSD. Second, we use a
simple score fusion strategy to combine face structure cues (TFBD) with local
micro-texture features (SPMT). To demonstrate the effectiveness of our design,
we evaluate the combination of SPMT and SSD on three public datasets, where it
outperforms all other state-of-the-art methods. In addition, we evaluate the
combination of SPMT and TFBD on our own dataset, where excellent performance
is also achieved.
Comment: To be published in Pattern Recognition.
Deep convolutional neural networks for face and iris presentation attack detection: Survey and case study
Biometric presentation attack detection is gaining increasing attention.
Users of mobile devices find it more convenient to unlock their smart
applications with finger, face or iris recognition instead of passwords. In
this paper, we survey the approaches presented in the recent literature to
detect face and iris presentation attacks. Specifically, we investigate the
effectiveness of fine-tuning very deep convolutional neural networks for the
task of face and iris anti-spoofing. We compare two different fine-tuning
approaches on six publicly available benchmark datasets. Results show the
effectiveness of these deep models in learning discriminative features that can
tell apart real from fake biometric images with a very low error rate.
Cross-dataset evaluation on face PAD showed better generalization than the
state of the art. We also performed cross-dataset testing on iris PAD datasets
in terms of equal error rate, which had not been reported in the literature
before. Additionally, we propose the use of a single deep network trained to
detect both face and iris attacks. We observed no accuracy degradation compared
to networks trained for only one biometric separately. Finally, we analyzed the
features learned by the network, in correlation with the image frequency
components, to justify its prediction decisions.
Comment: A preprint of a paper accepted by the IET Biometrics journal and
subject to Institution of Engineering and Technology Copyright.
3D Face Mask Presentation Attack Detection Based on Intrinsic Image Analysis
Face presentation attacks have become a major threat to face recognition
systems and many countermeasures have been proposed in the past decade.
However, most of them are devoted to 2D face presentation attacks, rather than
3D face masks. Unlike the real face, the 3D face mask is usually made of resin
materials and has a smooth surface, resulting in reflectance differences. So,
we propose a novel detection method for 3D face mask presentation attack by
modeling reflectance differences based on intrinsic image analysis. In the
proposed method, the face image is first processed with intrinsic image
decomposition to compute its reflectance image. Then, the intensity
distribution histograms are extracted from three orthogonal planes to represent
the intensity differences of reflectance images between the real face and 3D
face mask. After that, a 1D convolutional network is used to capture how
different materials and surfaces react to changes in illumination. Extensive
experiments on the 3DMAD database demonstrate the effectiveness of the proposed
method in distinguishing a face mask from a real face, with detection
performance that outperforms other state-of-the-art methods.
Federated Face Presentation Attack Detection
Face presentation attack detection (fPAD) plays a critical role in the modern
face recognition pipeline. A face presentation attack detection model with good
generalization can be obtained when it is trained with face images from
different input distributions and different types of spoof attacks. In reality,
training data (both real face images and spoof images) are not directly shared
between data owners due to legal and privacy issues. In this paper, with the
motivation of circumventing this challenge, we propose Federated Face
Presentation Attack Detection (FedPAD) framework. FedPAD simultaneously takes
advantage of rich fPAD information available at different data owners while
preserving data privacy. In the proposed framework, each data owner (referred
to as a data center) locally trains its own fPAD model. A server learns
a global fPAD model by iteratively aggregating model updates from all data
centers without accessing the private data in any of them. Once the learned
global model converges, it is used for fPAD inference. We introduce an
experimental setting to evaluate the proposed FedPAD framework and carry out
extensive experiments to provide various insights about federated learning
for fPAD.
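The server-side aggregation described above can be sketched as one round of FedAvg-style weighted averaging; the paper's exact aggregation rule may differ, and `federated_average` over flat weight lists is an illustrative simplification:

```python
def federated_average(client_weights, client_sizes):
    """One server-side aggregation round: average the per-client model
    parameters, weighted by each client's dataset size. Only model
    weights travel to the server; no raw face images leave a data center.

    client_weights: list of per-client parameter vectors (lists of floats)
    client_sizes:   number of training samples held by each client
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (size / total)
    return global_weights
```

In each federation round the server broadcasts the averaged weights back to the data centers, which resume local training from them; iterating this loop is what drives the global model toward convergence.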
Generalized Presentation Attack Detection: a face anti-spoofing evaluation proposal
Over the past few years, Presentation Attack Detection (PAD) has become a
fundamental part of facial recognition systems. Although much effort has been
devoted to anti-spoofing research, generalization in real scenarios remains a
challenge. In this paper we present a new open-source evaluation framework to
study the generalization capacity of face PAD methods, coined here as
face-GPAD. This framework facilitates the creation of new protocols focused on
the generalization problem, establishing fair procedures for evaluating and
comparing PAD solutions. We also introduce a large aggregated and
categorized dataset to address the problem of incompatibility between publicly
available datasets. Finally, we propose a benchmark with two novel evaluation
protocols: one for measuring the effect introduced by variations in face
resolution, and a second for evaluating the influence of adversarial
operating conditions.
Comment: 8 pages, to appear at the International Conference on Biometrics (ICB19).
On the Learning of Deep Local Features for Robust Face Spoofing Detection
Biometrics emerged as a robust solution for security systems. However, given
the dissemination of biometric applications, criminals are developing
techniques to circumvent them by simulating the physical or behavioral traits
of legitimate users (spoofing attacks). Although the face is a promising
characteristic due to its universality, acceptability, and the presence of
cameras almost everywhere, face recognition systems are extremely vulnerable to
such frauds, since they can be easily fooled with common printed facial
photographs.
State-of-the-art approaches, based on Convolutional Neural Networks (CNNs),
present good results in face spoofing detection. However, these methods do not
consider the importance of learning deep local features from each facial
region, even though it is known from face recognition that each facial region
presents different visual aspects, which can also be exploited for face
spoofing detection. In this work we propose a novel CNN architecture trained in
two steps for this task. Initially, each part of the neural network learns
features from a given facial region. Afterwards, the whole model is fine-tuned
on whole facial images. Results show that this pre-training step allows the
CNN to learn different local spoofing cues, improving the performance and
convergence speed of the final model and outperforming state-of-the-art
approaches.
Face Presentation Attack Detection in Learned Color-liked Space
Face presentation attack detection (PAD) has become a thorny problem for
biometric systems and numerous countermeasures have been proposed to address
it. However, the majority of them directly extract feature descriptors and
distinguish fake faces from real ones in existing color spaces (e.g., RGB,
HSV, and YCbCr). Unfortunately, it is unknown which color space is the
best or how to combine different spaces together. To make matters worse,
real and fake faces overlap in existing color spaces. So, in this paper,
a learned, distinguishable color-liked space is generated to deal with the
problem of face PAD. More specifically, we present an end-to-end deep learning
network that can map existing color spaces to a new learned color-liked space.
Inspired by the generator of generative adversarial network (GAN), the proposed
network consists of a space generator and a feature extractor. When training
the color-liked space, a new triplet combination mechanism of points-to-center
is explored to maximize interclass distance and minimize intraclass distance,
and also keep a safe margin between the real and presented fake faces.
Extensive experiments on two standard face PAD databases, i.e., Replay-Attack
and OULU-NPU, indicate that the proposed color-liked space analysis based
countermeasure significantly outperforms state-of-the-art methods and shows
excellent generalization capability.
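The points-to-center triplet mechanism can be sketched as a per-sample hinge loss that pulls each point toward its own class center while pushing it at least a margin farther from the other class center; this is a plausible reading of the idea, not the paper's exact formulation (`points_to_center_loss` and the Euclidean metric are assumptions):

```python
import math

def points_to_center_loss(points, center_same, center_other, margin=1.0):
    """Hinge-style points-to-center triplet loss (sketch).

    For each point, penalize it unless its distance to its own class
    center is smaller than its distance to the opposite class center by
    at least `margin`. Minimizing this shrinks intraclass distance,
    grows interclass distance, and enforces a safety margin between
    real and fake faces in the learned space.
    """
    def dist(p, c):
        return math.sqrt(sum((pi - ci) ** 2 for pi, ci in zip(p, c)))

    loss = 0.0
    for p in points:
        loss += max(0.0, dist(p, center_same) - dist(p, center_other) + margin)
    return loss / len(points)
```

In the paper's setting this loss would be applied to the space generator's embeddings, with one center per class (real vs. fake), rather than per-sample triplet mining.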
Deep Tree Learning for Zero-shot Face Anti-Spoofing
Face anti-spoofing is designed to keep face recognition systems from
recognizing fake faces as genuine users. While advanced face anti-spoofing
methods are being developed, new types of spoof attacks are also being created
and becoming a threat to all existing systems. We define the detection of
unknown spoof attacks as Zero-Shot Face Anti-spoofing (ZSFA). Previous ZSFA
works study only one or two types of spoof attacks, such as print/replay
attacks, which limits insight into this problem. In this work, we expand the
ZSFA problem to
a wide range of 13 types of spoof attacks, including print attack, replay
attack, 3D mask attacks, and so on. A novel Deep Tree Network (DTN) is proposed
to tackle ZSFA. The tree learns to partition the spoof samples into
semantic sub-groups in an unsupervised fashion. When a data sample arrives,
whether a known or unknown attack, DTN routes it to the most similar spoof
cluster and makes the binary decision. In addition, to enable the study of
ZSFA, we introduce the first face anti-spoofing database that contains diverse
types of spoof attacks. Experiments show that our proposed method achieves the
state of the art on multiple testing protocols of ZSFA.
Comment: To appear at CVPR 2019 as an oral presentation.
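The routing step can be sketched with a flat, depth-1 stand-in for the tree: assign a sample to its nearest spoof cluster, then apply that cluster's binary live/spoof decision (`route_and_decide` and the per-cluster `score` functions are hypothetical, not DTN's actual learned node classifiers):

```python
import math

def route_and_decide(sample, clusters, threshold=0.5):
    """Route a feature vector to its most similar spoof cluster, then
    apply that cluster's binary spoof score (a flat stand-in for DTN's
    unsupervised tree routing).

    clusters: list of dicts with a "name", a "center" feature vector,
              and a "score" function returning a spoof probability.
    Returns (cluster_name, is_spoof).
    """
    best = min(clusters, key=lambda c: math.dist(sample, c["center"]))
    return best["name"], best["score"](sample) >= threshold
```

The zero-shot property comes from the routing: an unseen attack type still lands in whichever semantic sub-group it most resembles, so the decision borrows from the closest known spoof cluster rather than requiring training data for that attack.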