357 research outputs found
A Review on Face Anti-Spoofing
A biometric system is a security technology that uses information based on a living person's characteristics, such as facial features, to verify or recognize an identity. Face recognition has numerous real-world applications, such as access control and surveillance, but it is vulnerable to spoofing. Face anti-spoofing, the task of preventing fraudulent authorization attempts that target face recognition systems with a photo, video, mask, or other substitute for an authorized person's face, is used to overcome this challenge. There is also growing research into new datasets that add new attack types or greater diversity in pursuit of better generalization. This paper reviews recent developments, covering a general understanding of face spoofing, anti-spoofing methods, and the latest work on countering various spoof types.
How to Construct Perfect and Worse-than-Coin-Flip Spoofing Countermeasures: A Word of Warning on Shortcut Learning
Shortcut learning, or the 'Clever Hans' effect, refers to situations where a
learning agent (e.g., a deep neural network) learns spurious correlations
present in the data, resulting in biased models. We focus on finding shortcuts in
deep learning based spoofing countermeasures (CMs) that predict whether a given
utterance is spoofed or not. While prior work has addressed specific data
artifacts, such as silence, no general normative framework has been explored
for analyzing shortcut learning in CMs. In this study, we propose a generic
approach to identifying shortcuts by introducing systematic interventions on
the training and test sides, including the boundary cases of 'near-perfect' and
'worse than coin flip' (label flip). Using three different models, ranging
from classic to state-of-the-art, we demonstrate the presence of shortcut
learning in five simulated conditions. We analyze the results using a
regression model to understand how biases affect the class-conditional score
statistics.
Comment: Interspeech 202
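The label-flip intervention described above can be illustrated with a toy experiment (a minimal sketch on synthetic data, not the paper's actual setup): a logistic regression is trained on data where a spurious feature tracks the label, then evaluated on a test set where that correlation is inverted. The model latches onto the shortcut, yielding near-perfect training accuracy but worse-than-coin-flip test accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_split(n, flip_spurious):
    # one weak genuine cue plus one strong spurious cue;
    # on the test side the spurious correlation can be inverted (label flip)
    y = rng.integers(0, 2, n)
    informative = 0.3 * y + rng.normal(0.0, 1.0, n)   # weak true signal
    cue = (1 - y) if flip_spurious else y
    spurious = 1.0 * cue + rng.normal(0.0, 0.1, n)    # near-perfect shortcut
    return np.column_stack([informative, spurious]), y

def train_logreg(X, y, lr=0.1, steps=2000):
    # plain batch gradient descent on logistic loss
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return (((X @ w + b) > 0).astype(int) == y).mean()

Xtr, ytr = make_split(2000, flip_spurious=False)
Xte, yte = make_split(2000, flip_spurious=True)
w, b = train_logreg(Xtr, ytr)
print(accuracy(w, b, Xtr, ytr))  # near-perfect on the training distribution
print(accuracy(w, b, Xte, yte))  # worse than coin flip once the shortcut flips
```

The intervention isolates the shortcut: only the spurious feature's relationship to the label changes between the two splits.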
Domain Generalization in Vision: A Survey
Generalization to out-of-distribution (OOD) data is a capability natural to
humans yet challenging for machines to reproduce. This is because most learning
algorithms strongly rely on the i.i.d.~assumption on source/target data, which
is often violated in practice due to domain shift. Domain generalization (DG)
aims to achieve OOD generalization by using only source data for model
learning. Since it was first introduced in 2011, research in DG has made great
progress. In particular, intensive research on this topic has led to a broad
spectrum of methodologies, e.g., those based on domain alignment,
meta-learning, data augmentation, or ensemble learning, just to name a few; and
has covered various vision applications such as object recognition,
segmentation, action recognition, and person re-identification. In this paper,
for the first time a comprehensive literature review is provided to summarize
the developments in DG for computer vision over the past decade. Specifically,
we first cover the background by formally defining DG and relating it to other
research fields like domain adaptation and transfer learning. Second, we
conduct a thorough review into existing methods and present a categorization
based on their methodologies and motivations. Finally, we conclude this survey
with insights and discussions on future research directions.
Comment: v4: includes the word "vision" in the title; improves the
organization and clarity in Sections 2-3; adds future directions; and more
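The evaluation protocol underlying most DG benchmarks, leave-one-domain-out, can be sketched as follows (a hypothetical toy example with synthetic Gaussian domains and a nearest-centroid "model"; the names and parameter values are illustrative, not from the survey). Each domain is held out once as the unseen target while the remaining domains serve as sources.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_domain(shift, n=200):
    # two classes whose feature distributions are shifted per domain
    # (a crude stand-in for domain shift)
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 0.5, (n, 2)) + np.column_stack([y, y]) * 2.0 + shift
    return X, y

def train_fn(sources):
    # toy model: per-class centroids pooled over all source domains
    X = np.vstack([d[0] for d in sources.values()])
    y = np.concatenate([d[1] for d in sources.values()])
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def eval_fn(model, target):
    X, y = target
    d0 = np.linalg.norm(X - model[0], axis=1)
    d1 = np.linalg.norm(X - model[1], axis=1)
    return ((d1 < d0).astype(int) == y).mean()

def leave_one_domain_out(domains):
    # hold out each domain once as the unseen OOD target
    results = {}
    for name in domains:
        sources = {k: v for k, v in domains.items() if k != name}
        results[name] = eval_fn(train_fn(sources), domains[name])
    return results

domains = {"photo": make_domain(0.0), "print": make_domain(0.3),
           "screen": make_domain(0.6)}
print(leave_one_domain_out(domains))
```

The target domain contributes nothing to training, so the reported accuracy measures OOD generalization rather than in-distribution fit.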
Deep Learning for Face Anti-Spoofing: A Survey
Face anti-spoofing (FAS) has lately attracted increasing attention due to its
vital role in securing face recognition systems from presentation attacks
(PAs). As more and more realistic PAs with novel types spring up, traditional
FAS methods based on handcrafted features become unreliable due to their
limited representation capacity. With the emergence of large-scale academic
datasets in the recent decade, deep learning based FAS achieves remarkable
performance and dominates this area. However, existing reviews of this field
mainly focus on handcrafted features, which are outdated and uninspiring
for the progress of the FAS community. In this paper, to stimulate future research,
we present the first comprehensive review of recent advances in deep learning
based FAS. It covers several novel and insightful components: 1) besides
supervision with binary labels (e.g., '0' for bonafide vs. '1' for PAs), we also
investigate recent methods with pixel-wise supervision (e.g., pseudo depth
map); 2) in addition to traditional intra-dataset evaluation, we collect and
analyze the latest methods specially designed for domain generalization and
open-set FAS; and 3) besides the commercial RGB camera, we summarize the deep
learning applications under multi-modal (e.g., depth and infrared) or
specialized (e.g., light field and flash) sensors. We conclude this survey by
emphasizing current open issues and highlighting potential prospects.
Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
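The contrast in point 1) between binary and pixel-wise supervision can be sketched in a few lines (a minimal illustration; the function names are hypothetical, and the all-zero depth target for attacks reflects the common pseudo-depth convention that flat spoof media carry no facial depth):

```python
import numpy as np

def binary_loss(logit, label):
    # classic FAS supervision: one cross-entropy term per image
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

def pixelwise_depth_loss(pred_depth, label, pseudo_depth):
    # pixel-wise supervision: regress a depth map instead of one scalar.
    # label follows the abstract's convention: 0 = bonafide, 1 = PA.
    # presentation attacks (photos, screens) are flat, so their target
    # is an all-zero map; bonafide faces use a pseudo depth map.
    target = np.zeros_like(pred_depth) if label == 1 else pseudo_depth
    return np.mean((pred_depth - target) ** 2)
```

The pixel-wise loss supervises every spatial location, while the binary loss only ever sees one scalar per image; this denser signal is one motivation the survey cites for pseudo-depth supervision.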
Face Liveness Detection under Processed Image Attacks
Face recognition is a mature and reliable technology for identifying people. Thanks
to high-definition cameras and supporting devices, it is considered the fastest and
the least intrusive biometric recognition modality. Nevertheless, effective spoofing
attempts on face recognition systems were found to be possible. As a result, various anti-spoofing algorithms were developed to counteract these attacks; they are
commonly referred to in the literature as liveness detection tests. In this research we highlight the effectiveness of some simple, direct spoofing attacks, and test one of
the current robust liveness detection algorithms, i.e. the logistic regression based face liveness detection from a single image proposed by Tan et al. in 2010, against malicious attacks using processed imposter images. In particular, we study experimentally the effect of common image processing operations, such as sharpening and smoothing, as well as corruption with salt-and-pepper noise, on the face liveness detection algorithm, and we find that it is especially vulnerable to spoofing attempts using processed imposter images. We design and present a new facial database, the Durham Face Database, which is the first, to the best of our knowledge, to contain client, imposter, and processed imposter images. Finally, we evaluate our claim about the effectiveness of the proposed imposter image attacks using transfer learning on Convolutional Neural Networks. We verify that such attacks are more difficult to detect even when using high-end, expensive machine learning techniques.
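Two of the image processing operations studied above can be reproduced with a few lines of NumPy (a minimal sketch; parameter values such as amount=0.02 and k=3 are illustrative, not the paper's settings):

```python
import numpy as np

def salt_and_pepper(img, amount=0.02, rng=None):
    # corrupt a random fraction of pixels with pure black/white values
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    mask = rng.random(img.shape) < amount
    out[mask] = rng.choice([0, 255], size=mask.sum())
    return out

def box_blur(img, k=3):
    # simple k x k mean-filter smoothing via edge padding and summation
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(img.dtype)
```

Applying such operations to an imposter photo before presenting it is the kind of processed-image attack the paper finds hard for the liveness detector to flag.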