MeshAdv: Adversarial Meshes for Visual Recognition
Highly expressive models such as deep neural networks (DNNs) have been widely
applied across many domains. However, recent studies show that DNNs are
vulnerable to adversarial examples, which are carefully crafted inputs aiming
to mislead the predictions. Currently, the majority of these studies have
focused on perturbation added to image pixels, while such manipulation is not
physically realistic. Some works have tried to overcome this limitation by
attaching printable 2D patches or painting patterns onto surfaces, but these
attacks can potentially be defended against because the 3D shape features
remain intact. In this paper, we
propose meshAdv to generate "adversarial 3D meshes" from objects that have rich
shape features but minimal textural variation. To manipulate the shape or
texture of the objects, we make use of a differentiable renderer to compute
accurate shading on the shape and propagate the gradient. Extensive experiments
show that the generated 3D meshes are effective in attacking both classifiers
and object detectors. We also evaluate the attack from different viewpoints. In
addition, we design a pipeline to perform a black-box attack on a photorealistic
renderer with unknown rendering parameters.

Comment: Published in IEEE CVPR 2019
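The core mechanism described above, optimizing mesh vertices by propagating a loss gradient back through a differentiable renderer, can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the "renderer" here is a single Lambertian-shaded triangle, the "classifier loss" is a placeholder on the rendered intensity, and gradients are taken numerically rather than by autodiff.

```python
import numpy as np

# Fixed directional light for the toy renderer (illustrative only).
LIGHT = np.array([0.0, 0.0, 1.0])

def render(verts):
    """Shaded intensity of one triangle: max(normal . light, 0)."""
    n = np.cross(verts[1] - verts[0], verts[2] - verts[0])
    n = n / np.linalg.norm(n)
    return max(float(np.dot(n, LIGHT)), 0.0)

def loss(verts, target=0.0):
    # Stand-in for a classifier's loss on the rendered image: push the
    # rendered shading toward an adversary-chosen target value.
    return (render(verts) - target) ** 2

def numerical_grad(verts, eps=1e-5):
    # Central-difference gradient of the loss w.r.t. every vertex coord.
    g = np.zeros_like(verts)
    for i in range(verts.shape[0]):
        for j in range(verts.shape[1]):
            vp = verts.copy(); vp[i, j] += eps
            vm = verts.copy(); vm[i, j] -= eps
            g[i, j] = (loss(vp) - loss(vm)) / (2 * eps)
    return g

# Start from a slightly tilted triangle and run gradient descent on the
# vertex positions themselves -- the "adversarial mesh" update.
verts = np.array([[0.0, 0.0, 0.2], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
initial = render(verts)
for _ in range(300):
    verts -= 0.05 * numerical_grad(verts)
final = render(verts)
print(initial, "->", final)  # shading driven toward the target
```

In the paper's setting, the numerical gradient is replaced by backpropagation through a differentiable renderer, and the scalar loss by the victim classifier's or detector's objective; the vertex-update loop is the same idea.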
Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks
This work shows that it is possible to fool/attack recent state-of-the-art
face detectors that are based on single-stage networks. Successfully
attacking face detectors could pose a serious security vulnerability when
deploying a smart surveillance system that relies on them. We show that
existing adversarial perturbation methods are not effective at performing such
an attack, especially when there are multiple faces in the input image. This is
because the adversarial perturbation specifically generated for one face may
disrupt the adversarial perturbation for another face. In this paper, we call
this problem the Instance Perturbation Interference (IPI) problem. This IPI
problem is addressed by studying the relationship between the deep neural
network receptive field and the adversarial perturbation. As such, we propose
the Localized Instance Perturbation (LIP) that uses adversarial perturbation
constrained to the Effective Receptive Field (ERF) of a target to perform the
attack. Experimental results show that the LIP method significantly outperforms
existing adversarial perturbation generation methods -- often by a factor of 2
to 10.

Comment: to appear in ECCV 2018 (accepted version)
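The localization idea can be sketched as follows. This is a minimal illustration under assumed stand-ins, not the paper's implementation: the ERF is approximated by a fixed-radius disk around each instance, and the per-instance perturbations are random placeholders rather than detector gradients. The key property shown is that masking each perturbation to its own ERF keeps the instances from interfering.

```python
import numpy as np

np.random.seed(0)
H, W = 32, 32  # toy image size (illustrative)

def erf_mask(center, radius, shape=(H, W)):
    # Binary mask approximating the Effective Receptive Field of one
    # instance as a disk around its location.
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    d2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    return (d2 <= radius ** 2).astype(float)

def localized_perturbation(raw_deltas, centers, radius):
    # LIP-style combination: each instance's perturbation is zeroed
    # outside that instance's ERF before the perturbations are summed,
    # so one face's perturbation cannot disrupt another's.
    total = np.zeros((H, W))
    for delta, c in zip(raw_deltas, centers):
        total += delta * erf_mask(c, radius)
    return total

centers = [(8, 8), (24, 24)]                    # two face locations
raw = [np.random.randn(H, W) for _ in centers]  # placeholder per-face deltas
delta = localized_perturbation(raw, centers, radius=5)

# Far from both ERFs the combined perturbation is exactly zero.
print(delta[0, 31])
```

In the actual method, each `raw` perturbation would come from the detector's gradient for that face, and the masking radius from an estimate of the network's effective receptive field; the masking step is what resolves the IPI problem described above.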