Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Transferability captures the ability of an attack against a machine-learning
model to be effective against a different, potentially unknown, model.
Empirical evidence for transferability has been shown in previous work, but the
underlying reasons why an attack transfers or not are not yet well understood.
In this paper, we present a comprehensive analysis aimed to investigate the
transferability of both test-time evasion and training-time poisoning attacks.
We provide a unifying optimization framework for evasion and poisoning attacks,
and a formal definition of transferability of such attacks. We highlight two
main factors contributing to attack transferability: the intrinsic adversarial
vulnerability of the target model, and the complexity of the surrogate model
used to optimize the attack. Based on these insights, we define three metrics
that impact an attack's transferability. Interestingly, our results derived
from theoretical analysis hold for both evasion and poisoning attacks, and are
confirmed experimentally using a wide range of linear and non-linear
classifiers and datasets.
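For intuition, the following is a minimal, self-contained sketch of the transferability setting described above, not the paper's actual framework: an FGSM-style evasion example is crafted against a surrogate linear model and then evaluated against a separately trained target model. The synthetic data, the two logistic-regression models, and the budget `eps` are all illustrative assumptions.

```python
# Sketch: craft adversarial examples on a surrogate model, test them on a target model.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data: two Gaussian blobs centered at -0.5 and +0.5.
n, d = 2000, 20
X = np.vstack([rng.normal(-0.5, 1.0, (n // 2, d)), rng.normal(+0.5, 1.0, (n // 2, d))])
y = np.hstack([np.zeros(n // 2), np.ones(n // 2)])

def train_logreg(X, y, lam, steps=500, lr=0.1):
    """Plain gradient-descent logistic regression; `lam` controls model complexity."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y) + lam * w)
    return w

# Surrogate (attacker's model) and target (victim's model) are trained on
# disjoint halves of the data with different regularization strengths.
w_sur = train_logreg(X[::2], y[::2], lam=0.01)
w_tgt = train_logreg(X[1::2], y[1::2], lam=0.1)

# FGSM-style evasion against the surrogate: move each point in the direction that
# increases the surrogate's loss, within an L-infinity budget eps.
eps = 0.5
p_sur = 1.0 / (1.0 + np.exp(-X @ w_sur))
X_adv = X + eps * np.sign(np.outer(p_sur - y, w_sur))

def acc(w, X, y):
    return np.mean((X @ w > 0).astype(float) == y)

print("target accuracy, clean inputs:      ", acc(w_tgt, X, y))
print("target accuracy, transferred attack:", acc(w_tgt, X_adv, y))
```

The drop in target accuracy on inputs perturbed against the surrogate is what the paper's transferability metrics aim to predict.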
Data Poisoning Attacks in Contextual Bandits
We study offline data poisoning attacks in contextual bandits, a class of
reinforcement learning problems with important applications in online
recommendation and adaptive medical treatment, among others. We provide a
general attack framework based on convex optimization and show that by slightly
manipulating rewards in the data, an attacker can force the bandit algorithm to
pull a target arm for a target contextual vector. The target arm and target
contextual vector are both chosen by the attacker. That is, the attacker can
hijack the behavior of a contextual bandit. We also investigate the feasibility
and the side effects of such attacks, and identify future directions for
defense. Experiments on both synthetic and real-world data demonstrate the
efficiency of the attack algorithm.
Comment: GameSec 201
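The paper's attack framework is a general convex program over the logged rewards; the sketch below is a deliberately simplified special case assumed here for illustration, in which the victim fits per-arm ridge regression on offline data and only the target arm's rewards are perturbed. With that restriction, the one-constraint convex problem has a closed-form minimum-norm solution. The margin `eps` and all variable names are assumptions, not the paper's notation.

```python
# Sketch: minimally perturb logged rewards so a target context prefers a target arm.
import numpy as np

rng = np.random.default_rng(1)
d, n_per_arm, n_arms = 5, 200, 3
lam, eps = 1.0, 0.1                      # ridge parameter, attack margin

# Logged data: contexts X[a] and rewards r[a] for each arm a.
theta_true = rng.normal(size=(n_arms, d))
X = [rng.normal(size=(n_per_arm, d)) for _ in range(n_arms)]
r = [X[a] @ theta_true[a] + 0.1 * rng.normal(size=n_per_arm) for a in range(n_arms)]

def ridge(Xa, ra):
    """Victim's per-arm estimate: ridge regression on the logged data."""
    A = Xa.T @ Xa + lam * np.eye(d)
    return np.linalg.solve(A, Xa.T @ ra)

x_star, target_arm = rng.normal(size=d), 0        # attacker's chosen context and arm

# Predicted rewards at the target context before the attack.
preds = np.array([x_star @ ridge(X[a], r[a]) for a in range(n_arms)])
best_other = max(p for a, p in enumerate(preds) if a != target_arm)

# The ridge estimate is linear in the rewards, so the constraint
#   x*^T theta_hat(r + delta) >= best_other + eps
# is a single linear inequality v^T (r + delta) >= c; its minimum-L2-norm
# solution is a rescaling of v.
A = X[target_arm].T @ X[target_arm] + lam * np.eye(d)
v = X[target_arm] @ np.linalg.solve(A, x_star)     # shape (n_per_arm,)
c = best_other + eps
gap = c - v @ r[target_arm]
delta = max(gap, 0.0) * v / (v @ v)

r_poisoned = r[target_arm] + delta
print("perturbation L2 norm:", np.linalg.norm(delta))
print("target arm now best :",
      x_star @ ridge(X[target_arm], r_poisoned) >= best_other + eps - 1e-9)
```

The paper's framework generalizes this idea, optimizing over perturbations to all rewards while keeping the total manipulation small.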
The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure
Many modern machine learning classifiers are shown to be vulnerable to
adversarial perturbations of the instances. Despite a massive amount of work
focusing on making classifiers robust, the task seems quite challenging. In
this work, through a theoretical study, we investigate the adversarial risk and
robustness of classifiers and draw a connection to the well-known phenomenon of
concentration of measure in metric measure spaces. We show that if the metric
probability space of the test instance is concentrated, any classifier with
some initial constant error is inherently vulnerable to adversarial
perturbations.
One class of concentrated metric probability spaces are the so-called Levy
families that include many natural distributions. In this special case, our
attacks only need to perturb the test instance by at most $O(\sqrt{n})$ to make
it misclassified, where $n$ is the data dimension. Using our general result
about Levy instance spaces, we first recover as special cases some of the
previously proved results about the existence of adversarial examples. However,
many more Levy families are known (e.g., product distribution under the Hamming
distance) for which we immediately obtain new attacks that find adversarial
examples of distance $O(\sqrt{n})$.
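For concreteness, one standard way such a bound arises on the Boolean cube under the (unnormalized) Hamming distance is via McDiarmid's inequality; the statement below is a hedged sketch of that argument, not the paper's exact theorem, and the symbols $\mathcal{E}$, $\delta$, $t_0$ are notation introduced here.

```latex
% Hamming cube {0,1}^n with a product measure; E is the classifier's error region,
% with Pr[E] >= delta for some constant delta > 0. Since x -> d_H(x, E) changes by
% at most 1 per coordinate, McDiarmid's inequality gives, for every t >= t_0,
\[
  \Pr\big[\, d_H(x, \mathcal{E}) > t \,\big]
  \;\le\; \exp\!\Big( -\tfrac{2}{n}\,(t - t_0)^2 \Big),
  \qquad
  t_0 = \sqrt{\tfrac{n}{2}\,\ln\tfrac{1}{\delta}} .
\]
% With delta constant, t_0 = Theta(sqrt(n)), so taking t = c*sqrt(n) for a modest
% constant c makes the right-hand side vanish: all but a negligible fraction of
% test instances can be pushed into E by flipping O(sqrt(n)) coordinates.
```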
Finally, we show that concentration of measure for product spaces implies the
existence of forms of "poisoning" attacks in which the adversary tampers with
the training data with the goal of degrading the classifier. In particular, we
show that for any learning algorithm that uses $m$ training examples, there is
an adversary who can increase the probability of any "bad property" (e.g.,
failing on a particular test instance) that initially happens with
non-negligible probability to $\approx 1$ by substituting only $\tilde{O}(\sqrt{m})$ of the examples with other (still correctly labeled) examples.
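The poisoning claim follows the same concentration template, now over the product space of the $m$ training examples measured by how many examples are substituted; the sketch below is a hedged illustration of that reduction (the guarantee that the substitutes can be kept correctly labeled is the paper's additional contribution), with $\mathcal{B}$, $\delta$, $t_0$ introduced here for illustration.

```latex
% Training sets S ~ D^m under the substitution (Hamming) distance between m-tuples;
% B is any "bad property" of training sets with Pr[S in B] >= delta. The same
% bounded-differences argument over the m-fold product space gives
\[
  \Pr_{S \sim D^m}\big[\, d_{\mathrm{sub}}(S, \mathcal{B}) > t \,\big]
  \;\le\; \exp\!\Big( -\tfrac{2}{m}\,(t - t_0)^2 \Big),
  \qquad
  t_0 = \sqrt{\tfrac{m}{2}\,\ln\tfrac{1}{\delta}} ,
\]
% so for all but a vanishing fraction of training sets, substituting roughly
% sqrt(m * log(1/delta)) examples suffices to land inside B, i.e., to force the
% bad property with probability close to 1.
```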
