Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks
This work shows that it is possible to fool or attack recent state-of-the-art
face detectors based on single-stage networks. Successfully attacking face
detectors could be a serious security vulnerability when deploying a smart
surveillance system that relies on them. We show that existing adversarial
perturbation methods are not effective for performing such an attack,
especially when there are multiple faces in the input image, because the
adversarial perturbation generated specifically for one face may disrupt the
adversarial perturbation for another face. In this paper, we call this problem
the Instance Perturbation Interference (IPI) problem. We address the IPI
problem by studying the relationship between the receptive field of a deep
neural network and the adversarial perturbation. Accordingly, we propose the
Localized Instance Perturbation (LIP), which constrains the adversarial
perturbation to the Effective Receptive Field (ERF) of a target when
performing the attack. Experimental results show that the LIP method
substantially outperforms existing adversarial perturbation generation
methods, often by a factor of 2 to 10.
Comment: to appear in ECCV 2018 (accepted version)
Semi-Adversarial Networks: Convolutional Autoencoders for Imparting Privacy to Face Images
In this paper, we design and evaluate a convolutional autoencoder that
perturbs an input face image to impart privacy to a subject. Specifically, the
proposed autoencoder transforms an input face image such that the transformed
image can be successfully used for face recognition but not for gender
classification. In order to train this autoencoder, we propose a novel training
scheme, referred to as semi-adversarial training in this work. The training is
facilitated by attaching a semi-adversarial module consisting of a pseudo
gender classifier and a pseudo face matcher to the autoencoder. The objective
function utilized for training this network has three terms: one to ensure that
the perturbed image is a realistic face image; another to ensure that the
gender attributes of the face are confounded; and a third to ensure that
biometric recognition performance using the perturbed image is not impacted.
Extensive experiments confirm the efficacy of the proposed architecture in
extending gender privacy to face images.
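The three-term objective described above can be sketched as follows. This is a minimal illustrative reading, not the paper's exact formulation: the function and weight names are hypothetical, realism is approximated by a reconstruction term, gender confounding by pushing the pseudo-classifier's posterior toward chance, and matcher preservation by a hinge on the match score.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def semi_adversarial_loss(x, x_pert, gender_logits, match_score,
                          w_rec=1.0, w_gender=1.0, w_match=1.0):
    """Sketch of a three-term semi-adversarial objective."""
    # 1) realism: perturbed image stays close to a valid face image
    loss_rec = np.mean((x_pert - x) ** 2)
    # 2) privacy: push the pseudo gender posterior toward chance (0.5)
    loss_gender = np.mean((sigmoid(gender_logits) - 0.5) ** 2)
    # 3) utility: keep the pseudo face-matcher score high
    loss_match = np.mean(np.maximum(0.0, 1.0 - match_score))
    return w_rec * loss_rec + w_gender * loss_gender + w_match * loss_match
```

The "semi-adversarial" aspect is that only the second term works against an attached module (the pseudo gender classifier), while the other two terms cooperate with the matcher and the image prior.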
Robust Ensemble Morph Detection with Domain Generalization
Although a substantial amount of studies is dedicated to morph detection,
most of them fail to generalize for morph faces outside of their training
paradigm. Moreover, recent morph detection methods are highly vulnerable to
adversarial attacks. In this paper, we intend to learn a morph detection model
with high generalization to a wide range of morphing attacks and high
robustness against different adversarial attacks. To this aim, we develop an
ensemble of convolutional neural networks (CNNs) and Transformer models to
benefit from their capabilities simultaneously. To improve the robust accuracy
of the ensemble model, we employ multi-perturbation adversarial training and
generate adversarial examples with high transferability for several single
models. Our exhaustive evaluations demonstrate that the proposed robust
ensemble model generalizes to several morphing attacks and face datasets. In
addition, we validate that our robust ensemble model gain better robustness
against several adversarial attacks while outperforming the state-of-the-art
studies.Comment: Accepted in IJCB 202
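One plausible reading of the ensemble described above is score-level fusion: each member model (CNN or Transformer) produces class logits, and the ensemble averages the resulting posteriors. The sketch below illustrates only that fusion step; the function names and the uniform-weight default are assumptions, not the paper's design.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerically stable
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(logit_list, weights=None):
    """Average the class posteriors of several models, e.g. a few
    CNNs and a Transformer (score-level fusion sketch)."""
    probs = np.stack([softmax(l) for l in logit_list])
    if weights is None:
        weights = np.ones(len(logit_list)) / len(logit_list)
    # weighted average over the model axis
    return np.tensordot(np.asarray(weights), probs, axes=1)
```

Averaging posteriors is one reason an ensemble can be harder to attack: an adversarial example must simultaneously shift the decision of heterogeneous members, which is what the multi-perturbation adversarial training above further reinforces.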
Trace and detect adversarial attacks on CNNs using feature response maps
The existence of adversarial attacks on convolutional neural networks (CNNs) calls into question the fitness of such models for serious applications. The attacks manipulate an input image such that misclassification is evoked while the image still looks normal to a human observer; they are thus not easily detectable. In a different context, backpropagated activations of CNN hidden layers ("feature responses" to a given input) have been helpful for visualizing for a human "debugger" what the CNN "looks at" while computing its output. In this work, we propose a novel detection method for adversarial examples to prevent attacks. We do so by tracking adversarial perturbations in feature responses, allowing for automatic detection using average local spatial entropy. The method does not alter the original network architecture and is fully human-interpretable. Experiments confirm the validity of our approach for state-of-the-art attacks on large-scale models trained on ImageNet.
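The detection statistic named above, average local spatial entropy of a feature-response map, can be sketched as follows: tile the map into patches, histogram each patch, compute its Shannon entropy, and average. The patch and bin sizes here are illustrative choices, not the paper's settings.

```python
import numpy as np

def average_local_spatial_entropy(fmap, patch=8, bins=16):
    """Average local spatial entropy of a 2D feature-response map.
    Perturbed (noisy) responses tend to score higher than the smooth
    responses of clean inputs, which enables automatic detection."""
    h, w = fmap.shape
    entropies = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = fmap[y:y + patch, x:x + patch]
            hist, _ = np.histogram(block, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]  # avoid log2(0)
            entropies.append(-np.sum(p * np.log2(p)))
    return float(np.mean(entropies))

# toy usage: a flat map vs. a noisy (perturbed-looking) map
rng = np.random.default_rng(0)
smooth = np.zeros((32, 32))
noisy = rng.random((32, 32))
```

A detector would then threshold this statistic (or feed it to a simple classifier) per layer; since it only reads activations, the original network architecture is left unchanged.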
Handbook of Digital Face Manipulation and Detection
This open access book provides the first comprehensive collection of studies dealing with the hot topic of digital face manipulation such as DeepFakes, Face Morphing, or Reenactment. It combines the research fields of biometrics and media forensics including contributions from academia and industry. Appealing to a broad readership, introductory chapters provide a comprehensive overview of the topic, which address readers wishing to gain a brief overview of the state-of-the-art. Subsequent chapters, which delve deeper into various research challenges, are oriented towards advanced readers. Moreover, the book provides a good starting point for young researchers as well as a reference guide pointing at further literature. Hence, the primary readership is academic institutions and industry currently involved in digital face manipulation and detection. The book could easily be used as a recommended text for courses in image processing, machine learning, media forensics, biometrics, and the general security area
Deep Learning based Fingerprint Presentation Attack Detection: A Comprehensive Survey
The vulnerabilities of fingerprint authentication systems have raised
security concerns when adapting them to highly secure access-control
applications. Therefore, Fingerprint Presentation Attack Detection (FPAD)
methods are essential for ensuring reliable fingerprint authentication. Owing
to the limited generalization capacity of traditional handcrafted approaches,
deep learning-based FPAD has become mainstream and has achieved remarkable
performance over the past decade. Existing reviews, which focus more on
handcrafted than on deep learning-based methods, are outdated. To stimulate
future research, we concentrate only on recent deep learning-based FPAD
methods. In this paper, we first briefly introduce the most common
Presentation Attack Instruments (PAIs) and publicly available fingerprint
Presentation Attack (PA) datasets. We then describe existing deep
learning-based FPAD methods by categorizing them into contact-based,
contactless, and smartphone-based approaches. Finally, we conclude the paper
by discussing the open challenges at the current stage and emphasizing
potential future perspectives.
Comment: 29 pages, submitted to ACM Computing Surveys