Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization
Solving for adversarial examples with projected gradient descent has been
demonstrated to be highly effective in fooling neural network based
classifiers. However, in the black-box setting, the attacker is limited to
query access to the network, and solving for a successful adversarial
example becomes much more difficult. To this end, recent methods aim to
estimate the true gradient signal from input queries, but at the cost
of excessive queries. We propose an efficient discrete surrogate to the
optimization problem which does not require estimating the gradient and
is consequently free of first-order update hyperparameters to tune.
Our experiments on CIFAR-10 and ImageNet show state-of-the-art black-box
attack performance with a significant reduction in the required queries compared
to a number of recently proposed methods. The source code is available at
https://github.com/snu-mllab/parsimonious-blackbox-attack.
Comment: Accepted and to appear at ICML 201
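The abstract contrasts its discrete surrogate with the white-box projected gradient descent baseline. As context, a minimal NumPy sketch of PGD on a toy logistic classifier (all parameter names and values here are illustrative, not from the paper): ascend the loss with sign-of-gradient steps and project each iterate back onto the L-infinity ball around the clean input.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=40):
    """PGD attack on a logistic classifier with score w.x + b.

    Repeatedly takes a sign-of-gradient ascent step on the logistic
    loss and projects the iterate back onto the L-infinity ball of
    radius eps around the original input x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Gradient of -log sigmoid(y * (w.x + b)) with respect to x
        margin = y * (x_adv @ w + b)
        grad = -y * w / (1.0 + np.exp(margin))
        # Ascent step on the loss, using only the gradient sign
        x_adv = x_adv + alpha * np.sign(grad)
        # Projection: clip back into [x - eps, x + eps] coordinate-wise
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy example: a 2-D point correctly classified with margin 1.0
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])   # w.x + b = 1.0 > 0, true label y = +1
y = 1.0
x_adv = pgd_attack(x, y, w, b)
print(x @ w + b, x_adv @ w + b)  # the adversarial score is driven down
```

In the black-box setting the abstract describes, the gradient in the loop above is unavailable and must either be estimated from queries or, as the paper proposes, avoided entirely via a discrete surrogate.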
When Are Linear Stochastic Bandits Attackable?
We study adversarial attacks on linear stochastic bandits: by manipulating
the rewards, an adversary aims to control the behaviour of the bandit
algorithm. Perhaps surprisingly, we first show that some attack goals can never
be achieved. This is in sharp contrast to context-free stochastic bandits, and
is intrinsically due to the correlation among arms in linear stochastic
bandits. Motivated by this finding, this paper studies the attackability of a
k-armed linear bandit environment. We first provide a complete necessary and
sufficient characterization of attackability based on the geometry of the
arms' context vectors. We then propose a two-stage attack method against LinUCB
and Robust Phase Elimination. The method first assesses whether the given
environment is attackable; if so, it poisons the rewards to force the
algorithm to pull a target arm a linear number of times at only a sublinear cost.
Numerical experiments further validate the effectiveness and cost-efficiency of
the proposed attack method.
Comment: 27 pages, 3 figures, ICML 202
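For readers unfamiliar with the algorithm this attack targets, a minimal sketch of one round of LinUCB arm selection (variable names and the example statistics below are illustrative, not taken from the paper): the learner keeps ridge-regression statistics, estimates the reward parameter, and pulls the arm whose upper confidence bound is largest. Reward poisoning works by corrupting the rewards that feed these statistics.

```python
import numpy as np

def linucb_choose(arms, A, b, alpha=1.0):
    """One round of LinUCB arm selection.

    arms : list of d-dimensional context vectors
    A, b : ridge-regression statistics accumulated from past rounds
           (A = lambda*I + sum of x x^T, b = sum of reward * x)
    alpha: width of the confidence bonus
    """
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b  # regularized least-squares estimate of the reward parameter
    # UCB score: estimated reward plus an exploration bonus per arm
    ucb = [x @ theta_hat + alpha * np.sqrt(x @ A_inv @ x) for x in arms]
    return int(np.argmax(ucb))

# Illustrative state after a few pulls of a 2-armed environment
arms = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
A = 2.0 * np.eye(2)            # accumulated design matrix (plus regularizer)
b = np.array([1.0, 0.0])       # accumulated reward-weighted contexts
chosen = linucb_choose(arms, A, b)
print(chosen)  # arm 0 has the higher UCB with these statistics
```

Because the chosen arm depends on the rewards only through the statistic b, an adversary that perturbs rewards perturbs theta_hat, which is the lever the abstract's two-stage attack exploits when the arms' geometry permits it.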