Classification of conformal minimal immersions from S^2 to complex Grassmann manifolds with parallel second fundamental form
In this paper, we determine all conformal minimal immersions of 2-spheres in
complex Grassmann manifolds with parallel second fundamental form.
Comment: 28 pages
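For reference, an immersion has parallel second fundamental form when its second fundamental form is covariantly constant. A standard way to state the condition (notation ours, not taken from the abstract) is:

```latex
% \sigma is the second fundamental form of the immersion and
% \bar{\nabla} the connection induced on the bundle carrying \sigma.
\[
  (\bar{\nabla}_X \sigma)(Y, Z) = 0
  \qquad \text{for all tangent vector fields } X, Y, Z.
\]
```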
MeshAdv: Adversarial Meshes for Visual Recognition
Highly expressive models such as deep neural networks (DNNs) have been widely
adopted across a variety of applications. However, recent studies show that
DNNs are vulnerable to adversarial examples: inputs carefully crafted to
mislead their predictions. To date, the majority of these studies have focused
on perturbations added to image pixels, a manipulation that is not physically
realistic. Some works have tried to overcome this limitation by attaching
printable 2D patches or painting patterns onto surfaces, but such attacks can
potentially be defended against because the 3D shape features of the objects
remain intact. In this paper, we propose meshAdv to generate "adversarial 3D
meshes" from objects that have rich shape features but minimal textural
variation. To manipulate the shape or texture of the objects, we use a
differentiable renderer to compute accurate shading on the shape and to
propagate the gradient. Extensive experiments show that the generated 3D
meshes are effective in attacking both classifiers and object detectors, and
we evaluate the attack under different viewpoints. In addition, we design a
pipeline to perform a black-box attack on a photorealistic
renderer with unknown rendering parameters.
Comment: Published in IEEE CVPR 2019
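As a rough illustration of the attack described above, here is a minimal sketch of gradient-based vertex perturbation through a differentiable renderer. The `render` function, `model`, and all parameter names are hypothetical stand-ins for this sketch, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def mesh_attack(vertices, faces, camera, model, render,
                target_class, steps=200, lr=1e-3, lam=0.01):
    """Perturb mesh vertices so the rendered image is classified as
    `target_class`, while an L2 term keeps the perturbation small."""
    delta = torch.zeros_like(vertices, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Differentiable rendering: shading is a function of the
        # perturbed geometry, so gradients flow back into `delta`.
        image = render(vertices + delta, faces, camera)
        logits = model(image.unsqueeze(0))
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss = loss + lam * delta.norm()  # penalize large mesh changes
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (vertices + delta).detach()
```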
Generating Adversarial Examples with Adversarial Networks
Deep neural networks (DNNs) have been found to be vulnerable to adversarial
examples resulting from adding small-magnitude perturbations to inputs. Such
adversarial examples can mislead DNNs to produce adversary-selected results.
Different attack strategies have been proposed to generate adversarial
examples, but how to produce them with high perceptual quality and more
efficiently requires more research efforts. In this paper, we propose AdvGAN to
generate adversarial examples with generative adversarial networks (GANs),
which can learn and approximate the distribution of original instances. For
AdvGAN, once the generator is trained, it can generate adversarial
perturbations efficiently for any instance, so as to potentially accelerate
adversarial training as defenses. We apply AdvGAN in both semi-whitebox and
black-box attack settings. In semi-whitebox attacks, there is no need to access
the original target model after the generator is trained, in contrast to
traditional white-box attacks. In black-box attacks, we dynamically train a
distilled model for the black-box model and optimize the generator accordingly.
Adversarial examples generated by AdvGAN on different target models have high
attack success rate under state-of-the-art defenses compared to other attacks.
Our attack placed first, with 92.76% accuracy, on a public MNIST black-box
attack challenge.
Comment: Accepted to IJCAI 2018
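For illustration, a minimal sketch of an AdvGAN-style generator objective follows (an untargeted variant). `G`, `D`, and `f` are hypothetical torch modules, and the loss weights and perturbation budget are placeholders rather than the paper's settings:

```python
import torch
import torch.nn.functional as F

def advgan_generator_loss(G, D, f, x, y, c=0.3, alpha=1.0, beta=1.0):
    """Generator loss on a batch (x, y): GAN realism + misclassification
    by the target model f + a hinge penalty bounding the perturbation."""
    perturbation = G(x)
    x_adv = torch.clamp(x + perturbation, 0.0, 1.0)

    # GAN loss: the adversarial example should look real to D.
    d_logits = D(x_adv)
    loss_gan = F.binary_cross_entropy_with_logits(
        d_logits, torch.ones_like(d_logits))

    # Adversarial loss: push f's prediction away from the true label y.
    loss_adv = -F.cross_entropy(f(x_adv), y)

    # Hinge loss: only penalize perturbations exceeding the budget c.
    loss_hinge = torch.clamp(
        perturbation.flatten(1).norm(dim=1) - c, min=0).mean()

    return loss_gan + alpha * loss_adv + beta * loss_hinge
```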