On the Robustness of Vision Transformers to Adversarial Examples
Recent advances in attention-based networks have shown that Vision
Transformers can achieve state-of-the-art or near state-of-the-art results on
many image classification tasks. This puts transformers in the unique position
of being a promising alternative to traditional convolutional neural networks
(CNNs). While CNNs have been carefully studied with respect to adversarial
attacks, the same cannot be said of Vision Transformers. In this paper, we
study the robustness of Vision Transformers to adversarial examples. Our
analysis of transformer security is divided into three parts. First, we test
the transformer under standard white-box and black-box attacks. Second, we
study the transferability of adversarial examples between CNNs and
transformers. We show that adversarial examples do not readily transfer between
CNNs and transformers. Based on this finding, we analyze the security of a
simple ensemble defense of CNNs and transformers. By creating a new attack, the
self-attention blended gradient attack, we show that such an ensemble is not
secure under a white-box adversary. However, under a black-box adversary, we
show that an ensemble can achieve unprecedented robustness without sacrificing
clean accuracy. Our analysis for this work is done using six types of white-box
attacks and two types of black-box attacks. Our study encompasses multiple
Vision Transformers, Big Transfer Models and CNN architectures trained on
CIFAR-10, CIFAR-100 and ImageNet.
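A rough sketch of the gradient-blending idea behind the white-box ensemble attack described above, assuming two standard PyTorch classifiers cnn_model and vit_model and a hypothetical blending coefficient alpha; the paper's self-attention blended gradient attack additionally uses attention-based weighting, which this illustration omits.

```python
import torch
import torch.nn.functional as F

def blended_gradient_step(x, y, cnn_model, vit_model, alpha=0.5, step=2.0 / 255):
    """One signed ascent step against a CNN + ViT ensemble using a blended gradient.

    Hypothetical sketch: `alpha` is an assumed blending coefficient, and the
    attention-based weighting of the paper's self-attention blended gradient
    attack is omitted.
    """
    x_adv = x.clone().detach().requires_grad_(True)

    # Gradient of the classification loss w.r.t. the input, per ensemble member.
    grad_cnn = torch.autograd.grad(F.cross_entropy(cnn_model(x_adv), y), x_adv)[0]
    grad_vit = torch.autograd.grad(F.cross_entropy(vit_model(x_adv), y), x_adv)[0]

    # Blend the two gradients so the perturbation degrades both members at once.
    blended = alpha * grad_cnn + (1.0 - alpha) * grad_vit
    return (x_adv + step * blended.sign()).clamp(0.0, 1.0).detach()
```

The intuition is that a perturbation built from either model alone transfers poorly to the other, so the attack must follow a direction that raises the loss of both members simultaneously.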
Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks
Transfer-based adversarial attacks can effectively evaluate model robustness
in the black-box setting. Though several methods have demonstrated impressive
transferability of untargeted adversarial examples, targeted adversarial
transferability is still challenging. The existing methods either have low
targeted transferability or sacrifice computational efficiency. In this paper,
we develop a simple yet practical framework to efficiently craft targeted
transfer-based adversarial examples. Specifically, we propose a conditional
generative attacking model, which can generate adversarial examples
targeted at different classes simply by altering the class embedding, while
sharing a single backbone. Extensive experiments demonstrate that our method improves
the success rates of targeted black-box attacks by a significant margin over
the existing methods -- it reaches an average success rate of 29.6% against
six diverse models based on only one substitute white-box model in the standard
testing of the NeurIPS 2017 competition, outperforming the state-of-the-art
gradient-based attack methods (with an average success rate of 2%) by a
large margin. Moreover, the proposed method is more than an order of magnitude
more efficient than gradient-based methods.
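As a rough illustration of the conditional generative attacking model described above, the sketch below conditions a single shared backbone on a learned target-class embedding; the class ConditionalPerturbationGenerator, its layer sizes, and the perturbation budget eps are assumptions for illustration, not the paper's hierarchical generative network.

```python
import torch
import torch.nn as nn

class ConditionalPerturbationGenerator(nn.Module):
    """Toy conditional generator: one shared backbone, class-conditioned
    via a learned target-class embedding. Illustrative sketch only."""

    def __init__(self, num_classes=1000, embed_dim=64, eps=16.0 / 255):
        super().__init__()
        self.eps = eps
        self.class_embed = nn.Embedding(num_classes, embed_dim)
        # Shared convolutional backbone; the class embedding is broadcast
        # spatially and concatenated to the image as extra input channels.
        self.backbone = nn.Sequential(
            nn.Conv2d(3 + embed_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x, target_class):
        b, _, h, w = x.shape
        emb = self.class_embed(target_class)                  # (B, embed_dim)
        emb_map = emb[:, :, None, None].expand(b, -1, h, w)   # broadcast over H, W
        delta = torch.tanh(self.backbone(torch.cat([x, emb_map], dim=1)))
        # Bound the perturbation and keep the adversarial image in [0, 1].
        return (x + self.eps * delta).clamp(0.0, 1.0)
```

Switching the attack target then only requires passing a different target_class index at inference time; the backbone weights stay shared across all target classes, which is what avoids training one generator per class.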