1,036 research outputs found
Adversarial Black-Box Attacks on Automatic Speech Recognition Systems using Multi-Objective Evolutionary Optimization
Fooling deep neural networks with adversarial inputs has exposed a
significant vulnerability in current state-of-the-art systems across multiple
domains. Both black-box and white-box approaches have been used to either
replicate the model itself or to craft examples which cause the model to fail.
In this work, we propose a framework which uses multi-objective evolutionary
optimization to perform both targeted and un-targeted black-box attacks on
Automatic Speech Recognition (ASR) systems. We apply this framework to two ASR
systems, Deepspeech and Kaldi-ASR, increasing their Word Error Rate (WER) by up
to 980%, indicating the potency of our approach. During both un-targeted and
targeted attacks, the adversarial samples maintain a high acoustic similarity
of 0.98 and 0.97, respectively, with the original audio. Comment: Published in
Interspeech 201
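The abstract gives no implementation details, so the following is only a rough illustrative sketch of the idea of a query-only (black-box) evolutionary attack. The real ASR systems are replaced here by a toy black-box classifier, and the two objectives (changing the model's output and preserving acoustic similarity) are scalarized into one fitness value rather than handled by a true multi-objective evolutionary algorithm such as NSGA-II; every name, weight, and setting below is an assumption for illustration, not the authors' method.

```python
import numpy as np

def black_box_label(x):
    # Stand-in for a real black-box system (e.g. an ASR model): we can only
    # query its output, never its gradients. Toy rule: 1 if mean amplitude > 0.
    return int(np.mean(x) > 0)

def cosine_sim(a, b):
    # Proxy for "acoustic similarity" between original and adversarial audio.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def evolve_attack(x, pop_size=30, gens=60, sigma=0.02, seed=0):
    """Greedy (1+lambda)-style evolutionary search for an un-targeted perturbation."""
    rng = np.random.default_rng(seed)
    orig = black_box_label(x)

    def fitness(delta):
        adv = x + delta
        flipped = black_box_label(adv) != orig
        margin = abs(np.mean(adv))   # distance from the toy decision boundary
        sim = cosine_sim(x, adv)
        # Scalarized two-objective fitness: flipping the label dominates;
        # among flipped candidates, similarity to the original is maximized.
        return (100.0 + sim) if flipped else (sim - 10.0 * margin)

    best = np.zeros_like(x)
    best_f = fitness(best)
    for _ in range(gens):
        for _ in range(pop_size):
            cand = best + rng.normal(0.0, sigma, size=x.shape)
            f = fitness(cand)
            if f > best_f:
                best, best_f = cand, f
    return x + best

rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.1, 100)
x = x - x.mean() + 0.02   # toy "audio": mean slightly positive, so label is 1
adv = evolve_attack(x)    # adversarial sample: label flips, similarity stays high
```

The key property mirrored from the abstract is that only input-output queries of the model are used, and the fitness rewards both attack success and closeness to the original signal.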
Privacy-preserving and Privacy-attacking Approaches for Speech and Audio -- A Survey
In contemporary society, voice-controlled devices, such as smartphones and
home assistants, have become pervasive due to their advanced capabilities and
functionality. The always-on nature of their microphones offers users the
convenience of readily accessing these devices. However, recent research and
events have revealed that such voice-controlled devices are prone to various
forms of malicious attacks, hence making it a growing concern for both users
and researchers to safeguard against such attacks. Despite the numerous studies
that have investigated adversarial attacks and privacy preservation for images,
a conclusive study of this nature has not been conducted for the audio domain.
Therefore, this paper aims to examine existing approaches for
privacy-preserving and privacy-attacking strategies for audio and speech. To
achieve this goal, we classify the attack and defense scenarios into several
categories and provide detailed analysis of each approach. We also interpret
the dissimilarities between the various approaches, highlight their
contributions, and examine their limitations. Our investigation reveals that
voice-controlled devices based on neural networks are inherently susceptible to
specific types of attacks. Although it is possible to enhance the robustness of
such models to certain forms of attack, more sophisticated approaches are
required to comprehensively safeguard user privacy.
A Survey on Physical Adversarial Attack in Computer Vision
Over the past decade, deep learning's strong feature-learning capability has
revolutionized conventional tasks that relied on hand-crafted feature
extraction, yielding substantial performance gains. However,
deep neural networks (DNNs) have been demonstrated to be vulnerable to
adversarial examples crafted with tiny malicious noise, which is imperceptible
to human observers but can make DNNs output wrong results. Existing adversarial
attacks can be categorized into digital and physical adversarial attacks. The
former pursues strong attack performance in laboratory settings but hardly
remains effective when applied to the physical world. In contrast, the latter
focuses on developing physically deployable attacks, which exhibit more
robustness under complex physical environmental conditions.
Recently, with the increasing deployment of DNN-based systems in the real
world, strengthening the robustness of these systems has become urgent, and an
exhaustive exploration of physical adversarial attacks is a precondition. To
this end, this paper reviews the evolution of physical adversarial attacks
against DNN-based computer vision tasks, expecting to provide beneficial
information for developing stronger physical adversarial attacks. Specifically,
we first propose a taxonomy that categorizes and groups the current physical
adversarial attacks. Then, we discuss the existing physical attacks and
focus on techniques for improving the robustness of physical attacks under
complex physical environmental conditions. Finally, we discuss the open issues
of current physical adversarial attacks and suggest promising research
directions.
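As a concrete instance of the digital attack category this abstract contrasts with physical attacks, here is a minimal sketch of the fast gradient sign method (FGSM), the canonical white-box digital attack, applied to a toy logistic-regression "model". The weights, input, and epsilon are illustrative assumptions for a deliberately simplified stand-in, not any system from the surveyed papers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    # FGSM: perturb the input by eps in the direction of the sign of the
    # gradient of the loss w.r.t. the input. For logistic regression with
    # binary cross-entropy, that gradient is (p - y) * w in closed form.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (assumed values for illustration only).
w = np.array([0.5, -0.5, 0.5, -0.5])
b = 0.0
x = np.array([0.4, -0.2, 0.3, -0.1])   # logit w @ x = 0.5 -> classified as 1
y = 1
x_adv = fgsm(x, y, w, b, eps=0.6)      # bounded perturbation flips the class
```

Each coordinate moves by at most eps, yet the logit shifts by eps times the L1 norm of the weights, which is enough to cross the decision boundary; physical attacks, by contrast, must survive printing, lighting, and viewpoint changes on top of fooling the model.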