Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study
Deep learning-based systems have been shown to be vulnerable to adversarial
attacks in both digital and physical domains. While feasible, digital attacks
have limited applicability in attacking deployed systems, including face
recognition systems, where an adversary typically has access to the input and
not the transmission channel. In such a setting, physical attacks that directly
provide a malicious input through the input channel pose a bigger threat. We
investigate the feasibility of conducting real-time physical attacks on face
recognition systems using adversarial light projections. A setup comprising a
commercially available web camera and a projector is used to conduct the
attack. The adversary uses a transformation-invariant adversarial pattern
generation method to generate a digital adversarial pattern using one or more
images of the target available to the adversary. The digital adversarial
pattern is then projected onto the adversary's face in the physical domain to
either impersonate a target (impersonation) or evade recognition (obfuscation).
We conduct preliminary experiments using two open-source and one commercial
face recognition system on a pool of 50 subjects. Our experimental results
demonstrate the vulnerability of face recognition systems to light projection
attacks in both white-box and black-box attack settings.Comment: To appear in the proceedings of the IEEE Computer Vision and Pattern
Recognition (CVPR) Biometrics Workshop 2020 - 9 pages, 8 figure
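The transformation-invariant pattern generation described above is in the spirit of Expectation over Transformation (EOT): the pattern is optimized to remain adversarial under random input transformations approximating projector and camera variability. Below is a minimal PyTorch sketch under that assumption; the embedding `model`, the transformation sampler, and all budgets are illustrative placeholders, not the paper's components.

```python
import torch
import torch.nn.functional as F

def random_transform(x):
    # Crude stand-in for projector/camera variability: random
    # brightness scale and offset (illustrative only).
    scale = 1.0 + 0.2 * (torch.rand(1, device=x.device) - 0.5)
    shift = 0.1 * (torch.rand(1, device=x.device) - 0.5)
    return (x * scale + shift).clamp(0.0, 1.0)

def eot_adversarial_pattern(model, face_img, target_emb,
                            steps=200, eps=0.08, lr=0.01):
    # `model` maps an image batch to identity embeddings; `target_emb`
    # is the impersonation target's embedding (both placeholders). The
    # additive pattern is optimized to stay adversarial in expectation
    # over random transformations (EOT-style).
    delta = torch.zeros_like(face_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = model(random_transform(face_img + delta))
        # Impersonation: maximize cosine similarity to the target.
        loss = 1.0 - F.cosine_similarity(emb, target_emb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)  # bounded additive pattern
    return delta.detach()
```

For obfuscation rather than impersonation, the same loop would instead minimize similarity to the adversary's own enrolled embedding.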
Disentangling Adversarial Robustness and Generalization
Obtaining deep networks that are robust against adversarial examples and
generalize well is an open problem. A recent hypothesis even states that both
robust and accurate models are impossible, i.e., adversarial robustness and
generalization are conflicting goals. In an effort to clarify the relationship
between robustness and generalization, we assume an underlying, low-dimensional
data manifold and show that: 1. regular adversarial examples leave the
manifold; 2. adversarial examples constrained to the manifold, i.e.,
on-manifold adversarial examples, exist; 3. on-manifold adversarial examples
are generalization errors, and on-manifold adversarial training boosts
generalization; 4. regular robustness and generalization are not necessarily
contradicting goals. These findings imply that both robust and accurate
models are possible. However, different models (architectures, training
strategies, etc.) can exhibit different robustness and generalization
characteristics. To confirm our claims, we present extensive experiments on
synthetic data (with known manifold) as well as on EMNIST, Fashion-MNIST and
CelebA.
Comment: Conference on Computer Vision and Pattern Recognition (CVPR) 2019.
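The on-manifold construction amounts to searching for the perturbation in the latent space of a generative model rather than in pixel space, so every iterate stays on (an approximation of) the data manifold. A minimal sketch under that assumption; the pretrained decoder `g`, classifier `f`, latent code `z`, and step sizes are placeholders, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def on_manifold_attack(f, g, z, label, steps=50, eta=0.01, eps=0.3):
    # f: image classifier; g: decoder of a generative model whose range
    # approximates the data manifold (both assumed pretrained). Because
    # the search is over the latent code, each iterate g(z + dz) stays
    # on the model's approximation of the manifold.
    dz = torch.zeros_like(z, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(f(g(z + dz)), label)
        grad, = torch.autograd.grad(loss, dz)
        with torch.no_grad():
            dz += eta * grad.sign()  # ascend the classification loss
            dz.clamp_(-eps, eps)     # bounded latent perturbation
    return g(z + dz).detach()
```

A regular (off-manifold) attack would instead perturb the pixels of `g(z)` directly, which is exactly the distinction the abstract draws.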
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Learning-based pattern classifiers, including deep networks, have shown
impressive performance in several application domains, ranging from computer
vision to cybersecurity. However, it has also been shown that adversarial input
perturbations carefully crafted either at training or at test time can easily
subvert their predictions. The vulnerability of machine learning to such wild
patterns (also referred to as adversarial examples), along with the design of
suitable countermeasures, has been investigated in the research field of
adversarial machine learning. In this work, we provide a thorough overview of
the evolution of this research area over the last ten years and beyond,
starting from pioneering, earlier work on the security of non-deep learning
algorithms up to more recent work aimed at understanding the security properties
of deep learning algorithms, in the context of computer vision and
cybersecurity tasks. We report interesting connections between these
apparently different lines of work, highlighting common misconceptions related
to the security evaluation of machine-learning algorithms. We review the main
threat models and attacks defined to this end, and discuss the main limitations
of current work, along with the corresponding future challenges towards the
design of more secure learning algorithms.
Comment: Accepted for publication in Pattern Recognition, 2018.
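As a concrete instance of the test-time evasion attacks the survey reviews, here is a minimal sketch of the fast gradient sign method (FGSM), one canonical way to craft such perturbations; `model`, `x`, and `y` are a placeholder classifier, input batch, and labels.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, eps=0.03):
    # One-step test-time evasion (FGSM): perturb the input in the
    # direction that maximally increases the classification loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```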
3D Adversarial Face Targets
The present disclosure relates to adversarial face targets that may be used to test the performance of a face recognition system. A plurality of images is received by a system and processed to detect faces, and a set of 3D target faces is synthesized. Further, a set of 2D viewpoint configurations corresponding to each 3D target face is captured based on a projection function, and adversarial perturbations are generated for each 2D viewpoint configuration. Thereafter, a set of 3D digital adversarial face targets is generated by perturbing the original texture of each 3D target face based on the set of 2D viewpoint configurations and the adversarial perturbations. The set of adversarial face targets is manufactured using a 3D printer based on the set of 3D digital adversarial face targets, and the performance of the face recognition system is evaluated using the set of adversarial face targets.
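A rough sketch of the per-viewpoint perturbation step, assuming a hypothetical differentiable projection function `render(texture, view)` and an identity-embedding model `f` (neither is specified by the disclosure); the shared texture perturbation is optimized jointly over all viewpoint configurations.

```python
import torch
import torch.nn.functional as F

def perturb_texture(f, render, texture, views, target_emb,
                    steps=100, eps=0.05, lr=0.01):
    # `render(texture, view)` stands in for the disclosure's projection
    # function: a differentiable renderer producing a 2D image of the
    # 3D face target under one viewpoint configuration (hypothetical).
    # `f` maps rendered images to identity embeddings.
    delta = torch.zeros_like(texture, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for view in views:
            emb = f(render(texture + delta, view))
            # Push each rendered viewpoint toward the target identity.
            loss = loss + (1.0 - F.cosine_similarity(emb, target_emb).mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)  # keep the texture change small
    return (texture + delta).detach()
```

Optimizing one texture over many viewpoints plays the same role as the transformation invariance in the light projection work above: the adversarial effect must survive the 3D-to-2D projection from any capture angle.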