Isometric 3D Adversarial Examples in the Physical World
3D deep learning models are shown to be as vulnerable to adversarial examples
as 2D models. However, existing attack methods are still far from stealthy and
suffer from severe performance degradation in the physical world. Although 3D
data is highly structured, it is difficult to bound the perturbations with
simple metrics in the Euclidean space. In this paper, we propose a novel
ε-isometric (ε-ISO) attack to generate natural and robust 3D
adversarial examples in the physical world by considering the geometric
properties of 3D objects and the invariance to physical transformations. For
naturalness, we constrain the adversarial example to be ε-isometric to
the original one by adopting the Gaussian curvature as a surrogate metric
guaranteed by a theoretical analysis. For invariance to physical
transformations, we propose a maxima over transformation (MaxOT) method that
actively searches for the most harmful transformations rather than random ones
to make the generated adversarial example more robust in the physical world.
Experiments on typical point cloud recognition models validate that our
approach significantly improves the attack success rate and naturalness of
the generated 3D adversarial examples compared with state-of-the-art attack methods.
Comment: NeurIPS 202
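The contrast between averaging over random transformations (as in Expectation over Transformation) and actively searching for the most harmful one (MaxOT) can be sketched as below. This is a minimal toy illustration in 2D with an assumed linear scoring function standing in for a point cloud classifier; none of the names correspond to the paper's actual code.

```python
import numpy as np

# Toy stand-in for the 3D setting: a linear "classifier" scores a point
# cloud after rotation. All names here are illustrative assumptions.
rng = np.random.default_rng(0)
points = rng.normal(size=(32, 2))
w = np.array([1.0, -0.5])  # toy classifier weights


def rotate(pts, theta):
    c, s = np.cos(theta), np.sin(theta)
    return pts @ np.array([[c, -s], [s, c]]).T


def loss(pts, theta):
    # Higher loss = closer to misclassification (toy surrogate).
    return float(np.mean(rotate(pts, theta) @ w))


# Expectation over Transformation (EOT): average loss over random angles.
thetas = rng.uniform(0, 2 * np.pi, size=64)
eot_loss = float(np.mean([loss(points, t) for t in thetas]))

# Maxima over Transformation (MaxOT): search for the most harmful angle,
# then optimize the perturbation against that worst case instead.
grid = np.linspace(0, 2 * np.pi, 360)
losses = [loss(points, t) for t in grid]
worst_theta = float(grid[int(np.argmax(losses))])
maxot_loss = max(losses)

# The worst-case transformation is at least as harmful as the average one,
# which is why optimizing against it yields more robust physical attacks.
assert maxot_loss >= eot_loss
```

In the paper's setting, the perturbation update step would then backpropagate through the worst-case transformation rather than through an average over random ones.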
ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing
We address the problem of finding realistic geometric corrections to a
foreground object such that it appears natural when composited into a
background image. To achieve this, we propose a novel Generative Adversarial
Network (GAN) architecture that utilizes Spatial Transformer Networks (STNs) as
the generator, which we call Spatial Transformer GANs (ST-GANs). ST-GANs seek
image realism by operating in the geometric warp parameter space. In
particular, we exploit an iterative STN warping scheme and propose a sequential
training strategy that achieves better results compared to naive training of a
single generator. One of the key advantages of ST-GAN is its applicability to
high-resolution images indirectly since the predicted warp parameters are
transferable between reference frames. We demonstrate our approach in two
applications: (1) visualizing how indoor furniture (e.g. from product images)
might be perceived in a room, (2) hallucinating how accessories like glasses
would look when matched with real portraits.
Comment: Accepted to CVPR 2018 (website & code: https://chenhsuanlin.bitbucket.io/spatial-transformer-GAN/)
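The resolution-independence claim follows from working in warp parameter space: successive corrections compose as homogeneous matrices, and the final composed warp can be applied to a pixel grid of any size. A minimal sketch with assumed affine parameters (not the paper's actual parameterization or code):

```python
import numpy as np

# Each generator step predicts a small correction dp to the current warp;
# warps compose as 3x3 homogeneous matrices, so the final warp is
# independent of the image resolution it is eventually applied to.
# Parameter layout below is an illustrative assumption.


def affine_matrix(p):
    # p = [a, b, tx, c, d, ty] -> 3x3 homogeneous affine matrix
    a, b, tx, c, d, ty = p
    return np.array([[1 + a, b, tx],
                     [c, 1 + d, ty],
                     [0.0, 0.0, 1.0]])


def compose(warp_matrix, dp):
    # New total warp = correction applied after the current warp.
    return affine_matrix(dp) @ warp_matrix


warp = np.eye(3)
corrections = [np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.0]),    # shift right
               np.array([0.05, 0.0, 0.0, 0.0, 0.05, 0.0])]  # slight scale-up
for dp in corrections:
    warp = compose(warp, dp)

# The composed warp maps homogeneous coordinates, so it can be evaluated
# on corner points of a grid at any resolution.
corner = warp @ np.array([1.0, 1.0, 1.0])
```

This mirrors the iterative scheme in spirit: each step refines the warp rather than the pixels, so only the final composed transform touches the high-resolution image.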
MeshAdv: Adversarial Meshes for Visual Recognition
Highly expressive models such as deep neural networks (DNNs) have been widely
applied to various applications. However, recent studies show that DNNs are
vulnerable to adversarial examples, which are carefully crafted inputs aiming
to mislead the predictions. Currently, the majority of these studies have
focused on perturbation added to image pixels, while such manipulation is not
physically realistic. Some works have tried to overcome this limitation by
attaching printable 2D patches or painting patterns onto surfaces, but such
attacks can potentially be defended against because the 3D shape features
remain intact. In this paper, we
propose meshAdv to generate "adversarial 3D meshes" from objects that have rich
shape features but minimal textural variation. To manipulate the shape or
texture of the objects, we make use of a differentiable renderer to compute
accurate shading on the shape and propagate the gradient. Extensive experiments
show that the generated 3D meshes are effective in attacking both classifiers
and object detectors. We evaluate the attack under different viewpoints. In
addition, we design a pipeline to perform black-box attack on a photorealistic
renderer with unknown rendering parameters.
Comment: Published in IEEE CVPR201
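The key mechanism here is that shading is a differentiable function of the mesh vertices, so gradients of a pixel intensity can flow back to vertex positions. A toy illustration with one Lambertian-shaded triangle, using finite differences as a stand-in for the analytic gradients a differentiable renderer would provide (all values and names are illustrative, not from the paper):

```python
import numpy as np

# Directional light, normalized; tilted so the gradient is nonzero.
light = np.array([0.0, 0.5, 1.0])
light = light / np.linalg.norm(light)


def shade(v0, v1, v2):
    # Face normal from the cross product of two edges, then the
    # clamped Lambertian term n . l (a one-pixel "render").
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    return max(0.0, float(n @ light))


v0 = np.array([0.0, 0.0, 0.0])
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

# d(intensity)/d(v2) by central differences: perturbing a vertex changes
# the shading, and this gradient is what drives adversarial vertex updates.
eps = 1e-5
grad = np.zeros(3)
for i in range(3):
    d = np.zeros(3)
    d[i] = eps
    grad[i] = (shade(v0, v1, v2 + d) - shade(v0, v1, v2 - d)) / (2 * eps)
```

In an actual differentiable renderer the same gradient is computed analytically over all pixels and faces, which is what lets shape perturbations be optimized directly against a classifier or detector loss.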