30 research outputs found
Learning Meta Model for Zero- and Few-shot Face Anti-spoofing
Face anti-spoofing is crucial to the security of face recognition systems.
Most previous methods formulate face anti-spoofing as a supervised learning
problem to detect various predefined presentation attacks, which requires
large-scale training data to cover as many attacks as possible. However, the
trained model tends to overfit to a few common attacks and remains vulnerable
to unseen attacks. To overcome this challenge, the detector should: 1) learn
discriminative features that can generalize to unseen spoofing types from
predefined presentation attacks; 2) quickly adapt to new spoofing types by
learning from both the predefined attacks and a few examples of the new
spoofing types. Therefore, we define face anti-spoofing as a zero- and few-shot
learning problem. In this paper, we propose a novel Adaptive Inner-update Meta
Face Anti-Spoofing (AIM-FAS) method to tackle this problem through
meta-learning. Specifically, AIM-FAS trains a meta-learner focusing on the task
of detecting unseen spoofing types by learning from predefined living and
spoofing faces and a few examples of new attacks. To assess the proposed
approach, we develop several benchmarks for zero- and few-shot FAS.
Experiments show that AIM-FAS outperforms existing methods both on the
presented benchmarks and under existing zero-shot FAS protocols.
Comment: Accepted by AAAI 2020
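The inner-update idea can be illustrated on a toy few-shot task. The following is a minimal numpy sketch, not the paper's implementation: a MAML-style inner loop adapts meta-initialised weights to a new spoof type from a few support examples, with per-step inner learning rates standing in for the adaptive inner-update (the logistic model, data, and all names are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, X, y):
    """Binary cross-entropy loss and its gradient for logistic regression."""
    p = sigmoid(X @ w)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

# Toy "new spoof type" task: a few labeled support faces, more query faces.
w_true = np.array([1.0, -2.0, 0.5, 0.0])
X_support = rng.normal(size=(20, 4))
y_support = (X_support @ w_true > 0).astype(float)
X_query = rng.normal(size=(100, 4))
y_query = (X_query @ w_true > 0).astype(float)

w = np.zeros(4)             # meta-initialised weights (here simply zeros)
inner_lr = np.full(3, 0.5)  # adaptive per-step inner learning rates (assumed learned)

loss_before, _ = loss_and_grad(w, X_query, y_query)
for step_lr in inner_lr:    # inner-update loop on the support set only
    _, g = loss_and_grad(w, X_support, y_support)
    w = w - step_lr * g
loss_after, _ = loss_and_grad(w, X_query, y_query)
print(loss_after < loss_before)  # query loss improves after few-shot adaptation
```

The query loss drops after only a few gradient steps on the support set, which is the behaviour the meta-learner is trained to produce for unseen spoofing types.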
Learning Domain Invariant Information to Enhance Presentation Attack Detection in Visible Face Recognition Systems
Face signatures, including size, shape, texture, skin tone, eye color, appearance, and scars/marks, are widely used as discriminative, biometric information for access control. Despite recent advancements in facial recognition systems, presentation attacks on facial recognition systems have become increasingly sophisticated. The ability to detect presentation attacks or spoofing attempts is a pressing concern for the integrity, security, and trust of facial recognition systems. Multi-spectral imaging has been previously introduced as a way to improve presentation attack detection by utilizing sensors that are sensitive to different regions of the electromagnetic spectrum (e.g., visible, near infrared, long-wave infrared). Although multi-spectral presentation attack detection systems may be discriminative, the need for additional sensors and computational resources substantially increases complexity and costs. Instead, we propose a method that exploits information from infrared imagery during training to increase the discriminability of visible-based presentation attack detection systems. We introduce (1) a new cross-domain presentation attack detection framework that increases the separability of bonafide and presentation attacks using only visible spectrum imagery, (2) an inverse domain regularization technique for added training stability when optimizing our cross-domain presentation attack detection framework, and (3) a dense domain adaptation subnetwork to transform representations between visible and non-visible domains.
Adviser: Benjamin Riggan
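The abstract does not give the exact losses, so the sketch below is an illustrative assumption of the kind of objective such a cross-domain framework might optimise: a standard PAD classification loss combined with an "inverse domain regularization" term that pushes a domain classifier's predictions toward chance, so the learned features carry little visible-vs-infrared information.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy between predictions p and labels y."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def inverse_domain_regularizer(domain_probs):
    """Penalise confident domain predictions: cross-entropy against uniform."""
    p = np.clip(domain_probs, 1e-9, 1 - 1e-9)
    return -np.mean(0.5 * np.log(p) + 0.5 * np.log(1 - p))

def total_loss(pad_probs, pad_labels, domain_probs, lam=0.1):
    """Assumed combined objective: PAD loss plus domain-confusion penalty."""
    return bce(pad_probs, pad_labels) + lam * inverse_domain_regularizer(domain_probs)

# Confident domain predictions are penalised more than chance-level ones,
# driving the feature extractor toward domain-invariant representations.
confident = np.array([0.99, 0.01, 0.98])
chance = np.array([0.5, 0.5, 0.5])
pad_p = np.array([0.9, 0.2, 0.8])
pad_y = np.array([1.0, 0.0, 1.0])
print(total_loss(pad_p, pad_y, confident) > total_loss(pad_p, pad_y, chance))  # True
```

The regularizer is minimised exactly when the domain classifier outputs 0.5 everywhere, which is one common way to formalise "the features contain no domain information".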
Regularized Fine-grained Meta Face Anti-spoofing
Face presentation attacks have become an increasingly critical concern when
face recognition is widely applied. Many face anti-spoofing methods have been
proposed, but most of them ignore the generalization ability to unseen attacks.
To overcome the limitation, this work casts face anti-spoofing as a domain
generalization (DG) problem, and attempts to address this problem by developing
a new meta-learning framework called Regularized Fine-grained Meta-learning. To
let our face anti-spoofing model generalize well to unseen attacks, the
proposed framework trains our model to perform well in the simulated domain
shift scenarios, which is achieved by finding generalized learning directions
in the meta-learning process. Specifically, the proposed framework incorporates
the domain knowledge of face anti-spoofing as the regularization so that
meta-learning is conducted in the feature space regularized by the supervision
of domain knowledge. This makes our model more likely to find generalized
learning directions through regularized meta-learning for the face
anti-spoofing task. In addition, to further enhance the generalization
ability of our model, the
proposed framework adopts a fine-grained learning strategy that simultaneously
conducts meta-learning in a variety of domain shift scenarios in each
iteration. Extensive experiments on four public datasets validate the
effectiveness of the proposed method.
Comment: Accepted by AAAI 2020. Code is available at
https://github.com/rshaojimmy/AAAI2020-RFMetaFA
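The fine-grained strategy of aggregating several simulated domain shifts per iteration can be sketched on a toy problem. Everything below is an assumed simplification (quadratic per-domain losses, a plain gradient aggregation), not the paper's architecture: each iteration forms every ordered (meta-train, meta-test) pair of source domains, adapts on meta-train, and averages the meta-test gradients.

```python
import numpy as np
from itertools import permutations

def domain_shift_scenarios(domains):
    """All ordered (meta-train, meta-test) pairs of distinct source domains."""
    return list(permutations(domains, 2))

def aggregated_update(w, domains, grad_fn, inner_lr=0.1):
    """One meta-iteration: average meta-test gradients after inner updates."""
    meta_grads = []
    for meta_train, meta_test in domain_shift_scenarios(domains):
        w_inner = w - inner_lr * grad_fn(w, meta_train)  # adapt on meta-train
        meta_grads.append(grad_fn(w_inner, meta_test))   # evaluate under shift
    return np.mean(meta_grads, axis=0)

# Toy quadratic "losses": each domain d pulls w toward its own target t_d.
targets = {"A": np.array([1.0, 0.0]),
           "B": np.array([0.0, 1.0]),
           "C": np.array([0.5, 0.5])}
grad_fn = lambda w, d: w - targets[d]

w = np.zeros(2)
for _ in range(50):
    w = w - 0.2 * aggregated_update(w, ["A", "B", "C"], grad_fn)
print(np.round(w, 2))  # settles near the average of the domain targets
```

Because every domain appears as both meta-train and meta-test within one iteration, no single domain shift dominates the update direction, which is the intuition behind conducting meta-learning in a variety of shift scenarios simultaneously.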
Learning One Class Representations for Face Presentation Attack Detection using Multi-channel Convolutional Neural Networks
Face recognition has evolved as a widely used biometric modality. However,
its vulnerability against presentation attacks poses a significant security
threat. Though presentation attack detection (PAD) methods try to address this
issue, they often fail to generalize to unseen attacks. In this work, we
propose a new framework for PAD using a one-class classifier, where the
representation used is learned with a Multi-Channel Convolutional Neural
Network (MCCNN). A novel loss function is introduced, which forces the network
to learn a compact embedding for bonafide class while being far from the
representation of attacks. A one-class Gaussian Mixture Model is used on top of
these embeddings for the PAD task. The proposed framework introduces a novel
approach to learn a robust PAD system from bonafide and available (known)
attack classes. This is particularly important, as collecting bonafide data
and simpler attacks is much easier than collecting a wide variety of
expensive attacks. The proposed system is evaluated on the publicly available WMCA
multi-channel face PAD database, which contains a wide variety of 2D and 3D
attacks. Further, we have performed experiments with MLFP and SiW-M datasets
using RGB channels only. Superior performance in unseen attack protocols shows
the effectiveness of the proposed approach. Software, data, and protocols to
reproduce the results are made publicly available.
Comment: 15 pages
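The scoring stage can be sketched with numpy. This is an assumed simplification of the described pipeline (a single Gaussian rather than a full mixture, and synthetic embeddings in place of MCCNN outputs): the model is fitted on bonafide embeddings only, test samples are scored by log-likelihood, and attacks whose embeddings lie far from the compact bonafide cluster receive low scores.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in embeddings: the compactness loss makes bonafide samples cluster
# tightly, while an unseen attack lands far from that cluster.
bonafide = rng.normal(loc=0.0, scale=0.3, size=(200, 8))
attack = rng.normal(loc=2.0, scale=0.3, size=(50, 8))

# Fit the one-class model on bonafide embeddings only.
mu = bonafide.mean(axis=0)
cov = np.cov(bonafide, rowvar=False) + 1e-6 * np.eye(8)  # regularised covariance
cov_inv = np.linalg.inv(cov)
_, logdet = np.linalg.slogdet(cov)

def log_likelihood(x):
    """Gaussian log-density of embedding x under the bonafide model."""
    d = x - mu
    return -0.5 * (d @ cov_inv @ d + logdet + len(mu) * np.log(2 * np.pi))

bona_scores = np.array([log_likelihood(x) for x in bonafide])
attack_scores = np.array([log_likelihood(x) for x in attack])
print(bona_scores.mean() > attack_scores.mean())  # True: attacks score lower
```

Because only bonafide statistics are modelled, a spoof type never seen in training is still flagged whenever its embedding falls outside the bonafide density, which is the core appeal of the one-class formulation.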
Self-Domain Adaptation for Face Anti-Spoofing
Although current face anti-spoofing methods achieve promising results under
intra-dataset testing, they suffer from poor generalization to unseen attacks.
Most existing works adopt domain adaptation (DA) or domain generalization (DG)
techniques to address this problem. However, the target domain is often unknown
during training, which limits the applicability of DA methods. DG methods can
overcome this by learning domain-invariant features without seeing any target
data. However, they fail to exploit the information in the target data. In this
paper, we propose a self-domain adaptation framework to leverage the unlabeled
test domain data at inference. Specifically, a domain adaptor is designed to
adapt the model to the test domain. To learn a better adaptor, a
meta-learning based adaptor learning algorithm is proposed using the data of
multiple source domains at the training step. At test time, the adaptor is
updated using only the test domain data according to the proposed unsupervised
adaptor loss to further improve performance. Extensive experiments on four
public datasets validate the effectiveness of the proposed method.
Comment: Camera Ready, AAAI 2021
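A test-time update of this kind can be sketched as follows. The abstract does not specify the unsupervised adaptor loss, so prediction-entropy minimisation is used here as an assumed stand-in: the adaptor weights are tuned on unlabeled test-domain data so that the model's predictions become more confident, with no labels involved.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_entropy(p):
    """Mean binary prediction entropy over a batch."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.mean(p * np.log(p) + (1 - p) * np.log(1 - p))

rng = np.random.default_rng(2)
X_test = rng.normal(size=(100, 4))  # unlabeled test-domain features
w = 0.1 * np.ones(4)                # adaptor weights (meta-learned init assumed)

ent_before = mean_entropy(sigmoid(X_test @ w))
for _ in range(100):                # unsupervised test-time updates
    z = X_test @ w
    p = sigmoid(z)
    grad = X_test.T @ (-z * p * (1 - p)) / len(X_test)  # d(entropy)/dw
    w = w - 0.2 * grad              # descend on prediction entropy
ent_after = mean_entropy(sigmoid(X_test @ w))
print(ent_after < ent_before)  # True: predictions become more confident
```

In the paper's setting the adaptor would be meta-learned across multiple source domains so that this kind of label-free update reliably helps rather than drifting; the sketch shows only the inference-time mechanics.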