
    Enabling Fast and Universal Audio Adversarial Attack Using Generative Model

    Recently, the vulnerability of DNN-based audio systems to adversarial attacks has attracted increasing attention. However, existing audio adversarial attacks assume that the adversary possesses the user's entire audio input and has a sufficient time budget to generate the adversarial perturbations. These idealized assumptions make existing audio adversarial attacks largely impossible to launch in a timely fashion in practice (e.g., playing unnoticeable adversarial perturbations along with the user's streaming input). To overcome these limitations, in this paper we propose the fast audio adversarial perturbation generator (FAPG), which uses a generative model to produce adversarial perturbations for an audio input in a single forward pass, thereby drastically improving perturbation generation speed. Built on top of FAPG, we further propose the universal audio adversarial perturbation generator (UAPG), a scheme that crafts a universal adversarial perturbation which can be imposed on arbitrary benign audio input to cause misclassification. Extensive experiments show that our proposed FAPG achieves up to a 167X speedup over state-of-the-art audio adversarial attack methods. Moreover, our proposed UAPG generates universal adversarial perturbations that achieve much better attack performance than state-of-the-art solutions.
    Comment: Published at AAAI 2021
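    The abstract above describes generating a perturbation in a single forward pass of a trained generator. Purely as an illustration of that idea (the architecture, layer sizes, and epsilon bound below are assumptions, not the paper's FAPG implementation), a minimal PyTorch sketch:

        import torch
        import torch.nn as nn

        class PerturbationGenerator(nn.Module):
            """Toy 1-D convolutional generator: one forward pass maps a benign
            waveform to an additive adversarial perturbation."""
            def __init__(self, channels=32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(1, channels, kernel_size=31, padding=15),
                    nn.ReLU(),
                    nn.Conv1d(channels, channels, kernel_size=31, padding=15),
                    nn.ReLU(),
                    nn.Conv1d(channels, 1, kernel_size=31, padding=15),
                    nn.Tanh(),  # bound the raw perturbation to [-1, 1]
                )

            def forward(self, waveform, epsilon=0.005):
                # waveform: (batch, 1, samples); epsilon scales perturbation magnitude
                return epsilon * self.net(waveform)

        # Attack time: a single forward pass, no per-input iterative optimization.
        generator = PerturbationGenerator()
        benign = torch.randn(1, 1, 16000)  # placeholder: 1 s of 16 kHz audio
        adversarial = torch.clamp(benign + generator(benign), -1.0, 1.0)

    The speedup claimed in the abstract comes from replacing per-input iterative optimization with this single inference call; training such a generator is a separate, offline step not shown here.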

    Universal Adversarial Perturbations for Speech Recognition Systems

    In this work, we demonstrate the existence of universal adversarial audio perturbations that cause mis-transcription of audio signals by automatic speech recognition (ASR) systems. We propose an algorithm to find a single quasi-imperceptible perturbation which, when added to any arbitrary speech signal, will most likely fool the victim speech recognition model. Our experiments demonstrate the application of the proposed technique by crafting audio-agnostic universal perturbations for the state-of-the-art ASR system Mozilla DeepSpeech. Additionally, we show that such perturbations generalize to a significant extent across models that are not available during training, by performing a transferability test on a WaveNet-based ASR system.
    Comment: Published as a conference paper at INTERSPEECH 2019
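    As a hedged sketch of what "audio-agnostic" means operationally (the perturbation vector, file name, and tiling/cropping policy below are assumptions; the paper's optimization procedure for finding the perturbation is not reproduced), the same precomputed perturbation is simply added to any input utterance:

        import numpy as np

        def apply_universal_perturbation(waveform: np.ndarray,
                                         universal_delta: np.ndarray) -> np.ndarray:
            """Add one fixed perturbation to a speech signal of arbitrary length:
            tile or crop the perturbation to the signal length, add, and clip."""
            n = len(waveform)
            reps = int(np.ceil(n / len(universal_delta)))
            delta = np.tile(universal_delta, reps)[:n]
            return np.clip(waveform + delta, -1.0, 1.0)

        # One precomputed perturbation is reused for every utterance.
        universal_delta = np.load("universal_delta.npy")   # hypothetical file
        utterance = np.random.uniform(-1, 1, 48000)        # placeholder: 3 s at 16 kHz
        adversarial = apply_universal_perturbation(utterance, universal_delta)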

    On the human evaluation of universal audio adversarial perturbations

    Human-machine interaction is increasingly dependent on speech communication, mainly due to the remarkable performance of machine learning models in speech recognition tasks. However, these models can be fooled by adversarial examples, which are inputs intentionally perturbed to produce a wrong prediction without the changes being noticeable to humans. While much research has focused on developing new techniques to generate adversarial perturbations, less attention has been given to the aspects that determine whether and how the perturbations are noticed by humans. This question is relevant, since high fooling rates of proposed adversarial perturbation strategies are only valuable if the perturbations are not detectable. In this paper we investigate to what extent the distortion metrics proposed in the literature for audio adversarial examples, which are commonly applied to evaluate the effectiveness of methods for generating these attacks, are a reliable measure of the human perception of the perturbations. Using an analytical framework, and an experiment in which 36 subjects evaluate audio adversarial examples according to different factors, we demonstrate that the metrics employed by convention are not a reliable measure of the perceptual similarity of adversarial examples in the audio domain.
    This work was supported by the Basque Government (PRE_2019_1_0128 predoctoral grant, IT1244-19, and project KK-2020/00049 through the ELKARTEK program); the Spanish Ministry of Economy and Competitiveness MINECO (projects TIN2016-78365-R and PID2019-104966GB-I00); and the Spanish Ministry of Science, Innovation and Universities (FPU19/03231 predoctoral grant). The authors would also like to thank the Intelligent Systems Group (University of the Basque Country UPV/EHU, Spain) for providing the computational resources needed to develop the project, as well as all the participants who took part in the experiments.
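    For context, one distortion metric conventionally reported for audio adversarial examples is the relative loudness of the perturbation with respect to the clean signal, measured in decibels. The sketch below is a minimal illustration of such a metric and is not claimed to be the exact set of metrics evaluated in the paper:

        import numpy as np

        def db(x: np.ndarray) -> float:
            """Peak amplitude of a signal expressed in decibels."""
            return 20.0 * np.log10(np.max(np.abs(x)) + 1e-12)

        def relative_loudness_db(original: np.ndarray, adversarial: np.ndarray) -> float:
            """dB_x(delta) = dB(delta) - dB(original): how much quieter the
            perturbation is than the clean signal (more negative = quieter)."""
            delta = adversarial - original
            return db(delta) - db(original)

    The paper's point is precisely that low values of such metrics do not guarantee that human listeners fail to notice the perturbation.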

    Analysis of Dominant Classes in Universal Adversarial Perturbations

    The reasons why Deep Neural Networks are susceptible to being fooled by adversarial examples remain an open discussion. Indeed, many different strategies can be employed to efficiently generate adversarial attacks, some of them relying on different theoretical justifications. Among these strategies, universal (input-agnostic) perturbations are of particular interest, due to their capability to fool a network independently of the input to which the perturbation is applied. In this work, we investigate an intriguing phenomenon of universal perturbations, which has been reported previously in the literature, yet without a proven justification: universal perturbations change the predicted classes for most inputs into one particular (dominant) class, even if this behavior is not specified during the creation of the perturbation. In order to justify the cause of this phenomenon, we propose a number of hypotheses and experimentally test them using a speech command classification problem in the audio domain as a testbed. Our analyses reveal interesting properties of universal perturbations, suggest new methods to generate such attacks, and provide an explanation of dominant classes from both a geometric and a data-feature perspective.
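    A hedged sketch of how the dominant-class effect described above could be quantified empirically (the model interface and predict() call are hypothetical; the paper's own hypotheses and experiments are not reproduced here):

        from collections import Counter
        import numpy as np

        def dominant_class_share(model, inputs, universal_delta):
            """Apply one universal perturbation to every input and measure how
            concentrated the resulting predictions are on a single class."""
            predictions = []
            for x in inputs:
                x_adv = np.clip(x + universal_delta, -1.0, 1.0)
                predictions.append(model.predict(x_adv))  # hypothetical predict() API
            counts = Counter(predictions)
            dominant_class, dominant_count = counts.most_common(1)[0]
            return dominant_class, dominant_count / len(predictions)

    A share close to 1.0 would indicate that most perturbed inputs collapse into one dominant class, matching the phenomenon analyzed in the paper.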