Adversarial Edit Attacks for Tree Data
Many machine learning models can be attacked with adversarial examples, i.e.,
inputs that lie close to correctly classified examples but are themselves
classified incorrectly.
However, most research on adversarial attacks to date is limited to vectorial
data, in particular image data. In this contribution, we extend the field by
introducing adversarial edit attacks for tree-structured data with potential
applications in medicine and automated program analysis. Our approach solely
relies on the tree edit distance and a logarithmic number of black-box queries
to the attacked classifier without any need for gradient information. We
evaluate our approach on two programming and two biomedical data sets and show
that many established tree classifiers, like tree-kernel-SVMs and recursive
neural networks, can be attacked effectively.

Comment: accepted at the 20th International Conference on Intelligent Data
Engineering and Automated Learning (IDEAL 2019)
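
The abstract's pairing of the tree edit distance with a logarithmic number of black-box queries suggests a binary search along an edit script. Below is a minimal Python sketch of that idea; it is one plausible reading of the abstract, not the paper's exact algorithm, and `edit_script`, `apply_edits`, and `classify` are assumed helpers (the script could come, e.g., from a Zhang-Shasha tree edit distance backtrace).

```python
def edit_attack(tree, reference, classify, edit_script, apply_edits):
    """Binary-search the shortest prefix of an edit script that flips
    the classifier's label, using O(log n) black-box queries."""
    script = edit_script(tree, reference)      # edits turning tree into reference
    original_label = classify(tree)
    if classify(reference) == original_label:
        return None                            # reference must be of another class
    lo, hi = 0, len(script)                    # invariant: applying hi edits flips
                                               # the label, applying lo edits does not
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        candidate = apply_edits(tree, script[:mid])
        if classify(candidate) == original_label:
            lo = mid                           # prefix too short, label unchanged
        else:
            hi = mid                           # label already flipped at mid edits
    return apply_edits(tree, script[:hi])      # adversarial tree closest to the input
```

Each loop iteration costs exactly one classifier query, so a script of length n is searched with roughly log2(n) queries, matching the logarithmic query budget the abstract claims.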
Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization
Solving for adversarial examples with projected gradient descent has been
demonstrated to be highly effective at fooling neural-network-based
classifiers. However, in the black-box setting the attacker is limited to
query access to the network, and finding a successful adversarial example
becomes much more difficult. To this end, recent methods aim to estimate the
true gradient signal from input queries, but at the cost of excessive
queries. We propose an efficient discrete surrogate for the optimization
problem that does not require estimating the gradient and is consequently
free of first-order update hyperparameters to tune.
Our experiments on CIFAR-10 and ImageNet show state-of-the-art black-box
attack performance with a significant reduction in the required queries
compared to a number of recently proposed methods. The source code is
available at https://github.com/snu-mllab/parsimonious-blackbox-attack.

Comment: Accepted and to appear at ICML 2019
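
As a rough illustration of optimizing a discrete surrogate instead of estimating gradients, the sketch below runs a greedy local search over per-block perturbations fixed to +eps or -eps (vertices of the l-infinity ball), keeping a sign flip only when a query shows the loss increased. This is a simplified stand-in, not the paper's accelerated algorithm; `loss_fn`, the block size, eps, and the assumption that the input is an HxWxC float image in [0, 1] with side lengths divisible by the block size are all illustrative.

```python
import numpy as np

def greedy_block_attack(x, y, loss_fn, eps=8 / 255, block=8, sweeps=3):
    """Greedy sign-flip search over square pixel blocks, each perturbed
    uniformly by +eps or -eps. `loss_fn(x_adv, y)` is the only access to
    the model, i.e., one black-box query per candidate."""
    h, w = x.shape[:2]
    signs = np.ones((h // block, w // block))        # one sign per block

    def perturb(s):
        # Expand block signs to pixel resolution, add, and clip to [0, 1].
        noise = eps * np.kron(s, np.ones((block, block)))[..., None]
        return np.clip(x + noise, 0.0, 1.0)

    best = loss_fn(perturb(signs), y)
    for _ in range(sweeps):                          # a few passes over all blocks
        for i in range(signs.shape[0]):
            for j in range(signs.shape[1]):
                signs[i, j] *= -1                    # tentatively flip this block
                candidate = loss_fn(perturb(signs), y)
                if candidate > best:
                    best = candidate                 # flip helped: keep it
                else:
                    signs[i, j] *= -1                # flip hurt: revert it
    return perturb(signs)                            # best perturbed image found
```

The query count is one per tentative flip, so the parsimony comes from searching a small discrete set of candidates rather than from estimating a high-dimensional gradient.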