56,386 research outputs found
Robust Audio Adversarial Example for a Physical Attack
We propose a method to generate audio adversarial examples that can attack a
state-of-the-art speech recognition model in the physical world. Previous work
assumes that generated adversarial examples are fed directly to the recognition
model and therefore cannot mount such a physical attack, since playback
environments introduce reverberation and noise. In contrast, our method
obtains robust adversarial examples by simulating transformations caused by
playback or recording in the physical world and incorporating the
transformations into the generation process. Evaluation and a listening
experiment demonstrated that our adversarial examples can attack the model
without being noticed by humans. This result suggests that audio adversarial
examples generated by the proposed method may become a real threat.
Comment: Accepted to IJCAI 201
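The core idea described above, simulating playback and recording distortions inside the adversarial optimization so that the example survives over-the-air transmission, can be sketched roughly as follows. This is an illustrative NumPy sketch, not the authors' implementation: the impulse-response model, noise level, and the toy loss function are hypothetical stand-ins for measured room responses and a real speech recognition model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_playback(audio, rng):
    """Apply a random impulse response (reverb) plus background noise,
    mimicking distortions from loudspeaker playback and re-recording."""
    ir_len = int(rng.integers(2, 8))
    impulse = rng.exponential(scale=0.3, size=ir_len)
    impulse[0] = 1.0  # direct sound path
    reverbed = np.convolve(audio, impulse)[: len(audio)]
    noise = rng.normal(scale=0.01, size=len(audio))
    return reverbed + noise

def expected_loss(audio, loss_fn, n_samples=16):
    """Average the attack loss over many simulated playback conditions,
    so a perturbation optimized against it stays effective over the air."""
    return float(np.mean([loss_fn(simulate_playback(audio, rng))
                          for _ in range(n_samples)]))

# Toy stand-in for a speech recognizer's loss on the attacker's target
# transcription (hypothetical; a real attack would query the model itself).
w = rng.normal(size=16000)
toy_loss = lambda x: float(np.dot(w, x) ** 2)

audio = rng.normal(scale=0.1, size=16000)
avg = expected_loss(audio, toy_loss)
```

In a full attack, a gradient-based optimizer would minimize this expectation over the perturbation, so the adversarial audio remains effective after physical playback rather than only when fed digitally to the model.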
Adversarial Examples in the Physical World: A Survey
Deep neural networks (DNNs) have demonstrated high vulnerability to
adversarial examples. Besides the attacks in the digital world, the practical
implications of adversarial examples in the physical world present significant
challenges and safety concerns. However, current research on physical
adversarial examples (PAEs) lacks a comprehensive understanding of their unique
characteristics, leaving their practical significance only partially understood. In this
paper, we address this gap by thoroughly examining the characteristics of PAEs
within a practical workflow encompassing training, manufacturing, and
re-sampling processes. By analyzing the links between physical adversarial
attacks, we identify manufacturing and re-sampling as the primary sources of
distinct attributes and particularities in PAEs. Leveraging this knowledge, we
develop a comprehensive analysis and classification framework for PAEs based on
their specific characteristics, covering over 100 studies on physical-world
adversarial examples. Furthermore, we investigate defense strategies against
PAEs and identify open challenges and opportunities for future research. We aim
to provide a fresh, thorough, and systematic understanding of PAEs, thereby
promoting the development of robust adversarial learning and its application in
open-world scenarios.
Comment: Adversarial examples, physical-world scenarios, attacks and defense
Isometric 3D Adversarial Examples in the Physical World
3D deep learning models are shown to be as vulnerable to adversarial examples
as 2D models. However, existing attack methods are still far from stealthy and
suffer from severe performance degradation in the physical world. Although 3D
data is highly structured, it is difficult to bound the perturbations with
simple metrics in the Euclidean space. In this paper, we propose a novel
ε-isometric (ε-ISO) attack to generate natural and robust 3D
adversarial examples in the physical world by considering the geometric
properties of 3D objects and the invariance to physical transformations. For
naturalness, we constrain the adversarial example to be ε-isometric to
the original one by adopting the Gaussian curvature as a surrogate metric
guaranteed by a theoretical analysis. For invariance to physical
transformations, we propose a maxima over transformation (MaxOT) method that
actively searches for the most harmful transformations rather than random ones
to make the generated adversarial example more robust in the physical world.
Experiments on typical point cloud recognition models validate that our
approach significantly improves the attack success rate and naturalness of
the generated 3D adversarial examples compared with state-of-the-art attack methods.
Comment: NeurIPS 202
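The MaxOT objective described above, searching for the most harmful transformation rather than averaging over random ones, can be sketched roughly as follows. This is an illustrative NumPy sketch, not the paper's method: the rotation family, the grid search, and the toy loss are hypothetical stand-ins for the physical transformations and the optimization procedure the authors actually use.

```python
import numpy as np

rng = np.random.default_rng(1)

def apply_rotation(points, angle):
    """Rotate an (N, 3) point cloud about the z-axis, a stand-in for
    the pose/viewpoint changes a fabricated 3D object can undergo."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

def max_over_transform(points, loss_fn, n_angles=36):
    """MaxOT idea: rather than averaging the objective over random
    transformations, actively search for the transformation on which
    the attack currently does worst, then optimize against that worst
    case.  Here: a simple grid search over rotation angles."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    losses = [loss_fn(apply_rotation(points, a)) for a in angles]
    i = int(np.argmax(losses))
    return float(angles[i]), float(losses[i])

# Toy point cloud and a toy loss (hypothetical stand-in for a
# point-cloud classifier's loss on the adversarial target).
points = rng.normal(size=(64, 3))
worst_angle, worst_loss = max_over_transform(points,
                                             lambda p: float(p[:, 0].sum()))
```

In the full attack, an outer loop would then update the adversarial perturbation to reduce the loss at this worst-case transformation, alternating the inner search with the outer optimization, which is what makes the example robust in the physical world.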