Introducing Competition to Boost the Transferability of Targeted Adversarial Examples through Clean Feature Mixup
Deep neural networks are widely known to be susceptible to adversarial
examples, which can cause incorrect predictions through subtle input
modifications. These adversarial examples tend to be transferable between
models, but targeted attacks still achieve relatively low success rates because
decision boundaries vary significantly across models. To enhance the transferability
of targeted adversarial examples, we propose introducing competition into the
optimization process. Our idea is to craft adversarial perturbations in the
presence of two new types of competitor noises: adversarial perturbations
towards different target classes and friendly perturbations towards the correct
class. With these competitors, even if an adversarial example deceives a
network into extracting specific features that lead to the target class, this
disturbance can be suppressed by the other competitors. Therefore, within this
competition, adversarial examples should adopt different attack strategies,
leveraging more diverse features to overwhelm the competitors' interference,
which improves their transferability to different models. To keep the
computational cost manageable, we efficiently simulate the interference from
these two types of competitors in feature space by randomly mixing stored
clean features into the model's activations during inference; we name this
method Clean Feature Mixup (CFM). Our extensive experimental results on the
ImageNet-Compatible and CIFAR-10 datasets show that the proposed method
outperforms the existing baselines by a clear margin. Our code is available at
https://github.com/dreamflake/CFM.
Comment: CVPR 2023 camera-ready
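The core mechanism can be approximated with a forward hook that blends shuffled clean-batch features into the intermediate activations of the attacked model. The sketch below is only a minimal illustration under assumed choices (a toy CNN, an arbitrary mixup layer, and an ad-hoc mixing-ratio range); the authors' actual implementation is in the linked repository.

```python
# Illustrative sketch of clean-feature mixing via a forward hook (assumptions noted inline).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy feature extractor standing in for a real classifier backbone.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)

clean_images = torch.rand(8, 3, 32, 32)                       # unperturbed batch
adv_images = clean_images + 0.03 * torch.randn_like(clean_images)

# Step 1: run the clean batch once and store the features at a chosen layer.
stored = {}
layer = model[2]                                               # hypothetical mixup layer

def store_hook(module, inp, out):
    stored["clean"] = out.detach()

handle = layer.register_forward_hook(store_hook)
with torch.no_grad():
    model(clean_images)
handle.remove()

# Step 2: in later forward passes, randomly shuffle the stored clean features
# across the batch and blend them into the current activations, simulating
# interference from "competitor" perturbations in feature space.
def mixup_hook(module, inp, out):
    perm = torch.randperm(out.size(0))
    alpha = torch.rand(out.size(0), 1, 1, 1) * 0.5             # assumed mixing-ratio range
    return (1 - alpha) * out + alpha * stored["clean"][perm]

handle = layer.register_forward_hook(mixup_hook)
logits = model(adv_images)            # an attack loop would compute its targeted loss on these
handle.remove()
print(logits.shape)
```

In an actual attack, the mixed-up logits would feed the targeted loss at every optimization step, so each perturbation update has to succeed despite randomly injected clean-feature interference.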
On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems
In this work we present a formal theoretical framework for assessing and
analyzing two classes of malevolent action towards generic Artificial
Intelligence (AI) systems. Our results apply to general multi-class classifiers
that map from an input space into a decision space, including artificial neural
networks used in deep learning applications. Two classes of attacks are
considered. The first class involves adversarial examples and concerns the
introduction of small perturbations of the input data that cause
misclassification. The second class, introduced here for the first time and
named stealth attacks, involves small perturbations to the AI system itself.
Here the perturbed system produces whatever output is desired by the attacker
on a specific small data set, perhaps even a single input, but performs as
normal on a validation set (which is unknown to the attacker). We show that in
both cases, i.e., in the case of an attack based on adversarial examples and in
the case of a stealth attack, the dimensionality of the AI's decision-making
space is a major contributor to the AI's susceptibility. For attacks based on
adversarial examples, a second crucial parameter is the absence of local
concentrations in the data probability distribution, a property known as
Smeared Absolute Continuity. According to our findings, robustness to
adversarial examples requires either (a) the data distributions in the AI's
feature space to have concentrated probability density functions or (b) the
dimensionality of the AI's decision variables to be sufficiently small. We also
show how to construct stealth attacks on high-dimensional AI systems that are
hard to spot unless the validation set is made exponentially large.
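To make the stealth-attack notion concrete, the sketch below perturbs the model rather than the input: one extra ReLU unit, aligned with a single trigger input, adds a large bonus to the attacker's target logit while staying silent on typical high-dimensional inputs. The one-neuron construction, threshold, and scale here are illustrative assumptions, not the paper's exact construction.

```python
# Conceptual sketch of a stealth attack: modify the model so that one trigger
# input is misclassified while ordinary inputs are unaffected (assumed constants inline).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n_classes = 64, 10

# Original (victim) model: a single linear layer for simplicity.
W = torch.randn(n_classes, d) * 0.1
b = torch.zeros(n_classes)

def original(x):
    return x @ W.T + b

# The attacker picks one trigger input and a desired target class.
trigger = F.normalize(torch.randn(d), dim=0)
target_class = 3

# Stealth modification: a hidden ReLU unit that fires only when the input is
# nearly collinear with the trigger, then boosts the target-class logit.
threshold = 0.95   # assumed: random high-dimensional inputs rarely reach this cosine similarity
scale = 100.0      # assumed boost magnitude

def attacked(x):
    logits = original(x)
    activation = F.relu(x @ trigger - threshold * x.norm(dim=-1))
    logits[..., target_class] += scale * activation
    return logits

# "Validation" inputs behave as before; the trigger flips to the target class.
val = torch.randn(1000, d)
print("validation predictions changed:",
      (original(val).argmax(-1) != attacked(val).argmax(-1)).sum().item())
print("trigger prediction before/after:",
      original(trigger).argmax().item(), attacked(trigger).argmax().item())
```

The printout typically shows zero changed validation predictions while the trigger's prediction flips, echoing the abstract's point that high dimensionality makes such small system-level modifications hard to detect.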
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
We investigate conditions under which test statistics exist that can reliably
detect examples that have been adversarially manipulated in a white-box
attack. These statistics can be easily computed and calibrated by randomly
corrupting inputs. They exploit certain anomalies that adversarial attacks
introduce, in particular if they follow the paradigm of choosing perturbations
optimally under p-norm constraints. Access to the log-odds is the only
requirement to defend models. We justify our approach empirically, but also
provide conditions under which detection via the suggested test statistics
is guaranteed to be effective. In our experiments, we show that it is even
possible to correct test-time predictions for adversarial attacks with high
accuracy.
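A rough sketch of the kind of statistic this abstract describes: compare a model's logit gaps on a given input with the same gaps after random input corruption, and flag inputs whose gaps shift anomalously. The stand-in linear model, noise level, sample count, and the idea of thresholding the shift are assumptions for illustration, not the paper's calibrated test.

```python
# Sketch of a noise-perturbed log-odds statistic (stand-in model and parameters assumed).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in classifier
model.eval()

def log_odds_shift(x, n_samples=64, sigma=0.1):
    """Mean change in (logit_z - logit_y) under Gaussian corruption, where y is
    the predicted class and z ranges over the classes (the entry at y is zero)."""
    with torch.no_grad():
        logits = model(x)                                      # (1, 10)
        y = logits.argmax(dim=-1)
        gap_clean = logits - logits[:, y]                      # (1, 10)
        noisy = x + sigma * torch.randn(n_samples, *x.shape[1:])
        noisy_logits = model(noisy)                            # (n_samples, 10)
        gap_noisy = noisy_logits - noisy_logits[:, y]
        return gap_noisy.mean(dim=0) - gap_clean.squeeze(0)    # per-class shift

x = torch.rand(1, 1, 28, 28)
shift = log_odds_shift(x)
# A real detector would calibrate per-class thresholds on held-out clean data;
# here we just report the largest shift toward a competing class.
print("max log-odds shift toward another class:", shift.max().item())
```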