Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Artificial Intelligence (AI) systems such as autonomous vehicles, facial
recognition, and speech recognition systems are increasingly integrated into
our daily lives. However, despite their utility, these AI systems are
vulnerable to a wide range of attacks such as adversarial, backdoor, data
poisoning, membership inference, model inversion, and model stealing attacks.
Notably, although many attacks are designed to target a particular model or
system, their effects can spread to additional targets; such attacks are
referred to as transferable attacks. Although considerable efforts have been directed toward
developing transferable attacks, a holistic understanding of the advancements
in transferable attacks remains elusive. In this paper, we comprehensively
explore learning-based attacks from the perspective of transferability,
particularly within the context of cyber-physical security. We delve into
different domains -- the image, text, graph, audio, and video domains -- to
highlight the ubiquitous and pervasive nature of transferable attacks. This
paper categorizes and reviews the architecture of existing attacks from various
viewpoints: data, process, model, and system. We further examine the
implications of transferable attacks in practical scenarios such as autonomous
driving, speech recognition, and large language models (LLMs). Additionally, we
outline the potential research directions to encourage efforts in exploring the
landscape of transferable attacks. This survey offers a holistic understanding
of the prevailing transferable attacks and their impacts across different
domains.
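To make transferability concrete, here is a minimal sketch (our illustration, not a method from the survey) of how transfer between models is typically measured: an adversarial example is crafted on a surrogate model and then evaluated against a separate victim model. It assumes PyTorch image classifiers and uses FGSM purely as an example attack; `surrogate`, `victim`, and the data tensors are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(surrogate, x, y, eps=8 / 255):
    """Craft adversarial examples on the surrogate with one FGSM step."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x), y)
    loss.backward()
    # Single signed-gradient step, clipped back to the valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

@torch.no_grad()
def transfer_rate(victim, x_adv, y):
    """Fraction of surrogate-crafted examples that also fool the victim."""
    return (victim(x_adv).argmax(dim=1) != y).float().mean().item()
```

A high `transfer_rate` on a model that was never queried during crafting is precisely what makes an attack "transferable" in the survey's sense.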
Enabling Fast and Universal Audio Adversarial Attack Using Generative Model
Recently, the vulnerability of DNN-based audio systems to adversarial attacks
has attracted increasing attention. However, existing audio adversarial
attacks assume that the adversary possesses the user's entire audio input and
has a sufficient time budget to generate the adversarial perturbations.
These idealized assumptions make existing audio adversarial attacks largely
impossible to launch in a timely fashion in practice (e.g., playing
unnoticeable adversarial perturbations along with a user's streaming input).
To overcome these limitations, in this paper we propose the fast audio
adversarial perturbation generator (FAPG), which uses a generative model to
produce adversarial perturbations for the audio input in a single forward
pass, thereby drastically improving the perturbation generation speed. Built
on top of FAPG, we further propose the universal audio adversarial
perturbation generator (UAPG), a scheme that crafts a universal adversarial
perturbation which can be imposed on arbitrary benign audio input to cause
misclassification.
Extensive experiments show that our proposed FAPG achieves up to a 167x
speedup over state-of-the-art audio adversarial attack methods, and our
proposed UAPG generates universal adversarial perturbations that achieve
much better attack performance than state-of-the-art solutions.
Comment: Published at AAAI 2021
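The abstract describes FAPG only at a high level; the following is a minimal sketch of the single-forward-pass idea, with a hypothetical convolutional generator standing in for FAPG's actual (unspecified) architecture, and an assumed perturbation budget `eps`.

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Hypothetical stand-in for FAPG: maps an audio waveform to a
    bounded perturbation in one forward pass. The real architecture
    and training objective are not given in the abstract."""
    def __init__(self, hidden=64, eps=0.002):
        super().__init__()
        self.eps = eps  # assumed imperceptibility budget
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=9, padding=4),
            nn.Tanh(),  # squash to [-1, 1] before scaling to +/- eps
        )

    def forward(self, waveform):  # waveform: (batch, 1, samples)
        return self.eps * self.net(waveform)

# One forward pass replaces the iterative optimization of prior attacks.
gen = PerturbationGenerator()
audio = torch.randn(4, 1, 16000)  # batch of 1-second clips at 16 kHz
adv_audio = (audio + gen(audio)).clamp(-1.0, 1.0)
```

The speed claim follows directly from this structure: generation cost is one network evaluation rather than hundreds of gradient steps per input.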
Universal Adversarial Perturbations for Speech Recognition Systems
In this work, we demonstrate the existence of universal adversarial audio
perturbations that cause mis-transcription of audio signals by automatic speech
recognition (ASR) systems. We propose an algorithm to find a single
quasi-imperceptible perturbation which, when added to any arbitrary speech
signal, will most likely fool the victim speech recognition model. Our
experiments demonstrate the application of our proposed technique by crafting
audio-agnostic universal perturbations for the state-of-the-art ASR system --
Mozilla DeepSpeech. Additionally, we show that such perturbations generalize to
a significant extent across models that are not available during training, by
performing a transferability test on a WaveNet-based ASR system.
Comment: Published as a conference paper at INTERSPEECH 2019
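The abstract gives the algorithm only at a high level; a generic universal-perturbation loop consistent with it might look like the sketch below. The ASR model, a differentiable transcription loss `asr_loss` (e.g., CTC), fixed-length clips, and the budget `eps` are all assumptions for illustration.

```python
import torch

def universal_perturbation(model, dataset, asr_loss,
                           clip_len=16000, eps=0.01, lr=1e-3, epochs=5):
    """Optimize one audio-agnostic perturbation so that audio + delta
    is mis-transcribed across the whole dataset (illustrative sketch,
    not the paper's exact algorithm)."""
    delta = torch.zeros(1, clip_len, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for audio, transcript in dataset:  # audio: (1, clip_len)
            # Maximize the transcription loss, i.e. gradient ascent on delta.
            loss = -asr_loss(model, audio + delta, transcript)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # project back into the budget
    return delta.detach()
```

The transferability test then amounts to adding the returned `delta` to held-out audio and feeding it to a model (here, a WaveNet-based ASR system) that was never queried during optimization.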