Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems
Voice Processing Systems (VPSes), now widely deployed, have been made
significantly more accurate through the application of recent advances in
machine learning. However, adversarial machine learning has similarly advanced
and has been used to demonstrate that VPSes are vulnerable to the injection of
hidden commands - audio obscured by noise that is correctly recognized by a VPS
but not by human beings. Such attacks, however, often depend on white-box
knowledge of a specific machine learning model and are tied to specific
microphones and speakers, which limits their use across acoustic hardware
platforms and thus their practicality. In this paper, we
break these dependencies and make hidden command attacks more practical through
model-agnostic (black-box) attacks, which exploit knowledge of the signal
processing algorithms commonly used by VPSes to generate the data fed into
machine learning systems. Specifically, we exploit the fact that multiple
source audio samples have similar feature vectors when transformed by acoustic
feature extraction algorithms (e.g., FFTs). We develop four classes of
perturbations that create unintelligible audio and test them against 12 machine
learning models, including 7 proprietary models (e.g., the Google Speech, Bing
Speech, IBM Speech, and Azure Speaker APIs), and demonstrate successful
attacks against all targets. Moreover, we successfully use our maliciously
generated audio samples in multiple hardware configurations, demonstrating
effectiveness across both models and real systems. In so doing, we demonstrate
that domain-specific knowledge of audio signal processing represents a
practical means of generating successful hidden voice command attacks.
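As a hedged illustration of the feature-collision idea above (my own sketch, not the paper's perturbation code): scrambling the phase of a frame's spectrum yields a waveform that sounds nothing like the original, yet its magnitude-FFT features, the kind a VPS front end extracts, are essentially unchanged.

```python
# Hedged sketch (my illustration, not the paper's code): phase-scramble one
# audio frame and compare magnitude-FFT features before and after.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.standard_normal(512)                   # stand-in for one audio frame

spectrum = np.fft.rfft(frame)
phases = rng.uniform(0.0, 2.0 * np.pi, spectrum.shape)
phases[0] = phases[-1] = 0.0                       # keep DC/Nyquist bins real
scrambled = np.abs(spectrum) * np.exp(1j * phases)
perturbed = np.fft.irfft(scrambled, n=frame.size)  # unintelligible to a listener

# The magnitude features downstream ML would see are essentially identical.
feat_orig = np.abs(np.fft.rfft(frame))
feat_pert = np.abs(np.fft.rfft(perturbed))
print(np.allclose(feat_orig, feat_pert))           # True (up to float error)
```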
On the Detection of Adaptive Adversarial Attacks in Speaker Verification Systems
Speaker verification systems have been widely used in smartphones and
Internet of Things devices to identify legitimate users. In recent work, it has
been shown that adversarial attacks, such as FAKEBOB, can work effectively
against speaker verification systems. The goal of this paper is to design a
detector that can distinguish original audio from audio contaminated by
adversarial attacks. Specifically, our designed detector, called MEH-FEST,
calculates the minimum energy in high frequencies from the short-time Fourier
transform of an audio signal and uses it as a detection metric. Through both
analysis and experiments, we show that our proposed detector is easy to
implement, fast at processing input audio, and effective in determining
whether an audio signal has been corrupted by FAKEBOB attacks. The
experimental results indicate that the detector is extremely effective, with
near-zero false positive and false negative rates for detecting FAKEBOB
attacks in Gaussian mixture model (GMM)
and i-vector speaker verification systems. Moreover, adaptive adversarial
attacks against our proposed detector and their countermeasures are discussed
and studied, illustrating the ongoing game between attackers and defenders.
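A minimal sketch of the metric described above; the frame size, hop, and high-frequency cutoff are illustrative assumptions, not the paper's exact settings:

```python
# Sketch of an MEH-FEST-style metric: minimum per-frame high-frequency
# energy computed over the short-time Fourier transform of the audio.
import numpy as np

def min_high_freq_energy(audio, sr=16000, n_fft=512, hop=256, cutoff_hz=6000):
    """Minimum high-frequency energy across STFT frames of `audio`."""
    window = np.hanning(n_fft)
    cutoff_bin = int(cutoff_hz * n_fft / sr)
    energies = []
    for start in range(0, len(audio) - n_fft + 1, hop):
        mag = np.abs(np.fft.rfft(audio[start:start + n_fft] * window))
        energies.append(np.sum(mag[cutoff_bin:] ** 2))
    return min(energies)

# Adversarial perturbations such as FAKEBOB tend to raise this floor, so a
# simple threshold (calibrated on clean audio) can flag contaminated inputs:
# is_adversarial = min_high_freq_energy(x) > threshold
```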
Voice Spoofing Countermeasures: Taxonomy, State-of-the-Art, Experimental Analysis of Generalizability, Open Challenges, and the Way Forward
Malicious actors may seek to use different voice-spoofing attacks to fool
automatic speaker verification (ASV) systems and even use them for spreading
misinformation. Various countermeasures have been proposed to detect these
spoofing attacks. Given the extensive work on spoofing detection in ASV
systems over the last 6-7 years, there is a need to classify the research and
perform qualitative and quantitative comparisons of state-of-the-art
countermeasures.
Additionally, no existing survey paper has reviewed integrated solutions to
voice spoofing evaluation and speaker verification, adversarial/antiforensics
attacks on spoofing countermeasures, and ASV itself, or unified solutions to
detect multiple attacks using a single model. Further, no work has been done to
provide an apples-to-apples comparison of published countermeasures in order to
assess their generalizability by evaluating them across corpora. In this work,
we conduct a review of the literature on spoofing detection using hand-crafted
features, deep learning, end-to-end, and universal spoofing countermeasure
solutions to detect speech synthesis (SS), voice conversion (VC), and replay
attacks. We also review integrated solutions to voice spoofing
evaluation and speaker verification, adversarial and anti-forensics attacks on
voice countermeasures, and ASV. The limitations and challenges of the existing
spoofing countermeasures are also presented. We report the performance of these
countermeasures on several datasets and evaluate them across corpora. For the
experiments, we employ the ASVspoof2019 and VSDC datasets along with GMM, SVM,
CNN, and CNN-GRU classifiers. (For reproducibility of the results, the code of
the test bed can be found in our GitHub repository.)
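A hedged sketch of the cross-corpus evaluation protocol described above; the feature loader is a random-data placeholder (the real test bed extracts features from ASVspoof2019 and VSDC), and the equal-error-rate computation is the standard ROC-based recipe:

```python
# Cross-corpus generalizability sketch: train a countermeasure on one corpus,
# evaluate on another, and report the equal error rate (EER).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

def load_features(corpus, n=200, d=20):
    """Placeholder: real features (e.g., hand-crafted CQCCs) come from `corpus`."""
    return rng.standard_normal((n, d)), rng.integers(0, 2, n)

X_train, y_train = load_features("ASVspoof2019")   # train on one corpus
X_eval, y_eval = load_features("VSDC")             # evaluate on an unseen one

clf = SVC(probability=True).fit(X_train, y_train)
scores = clf.predict_proba(X_eval)[:, 1]

# EER: the operating point where false-accept and false-reject rates meet;
# cross-corpus EERs are typically much worse than in-corpus ones.
fpr, tpr, _ = roc_curve(y_eval, scores)
eer = fpr[np.nanargmin(np.abs(fpr - (1.0 - tpr)))]
print(f"cross-corpus EER: {eer:.3f}")
```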
V-Cloak: Intelligibility-, Naturalness- & Timbre-Preserving Real-Time Voice Anonymization
Voice data generated on instant messaging or social media applications
contains unique user voiceprints that may be abused by malicious adversaries
for identity inference or identity theft. Existing voice anonymization
techniques, e.g., signal processing and voice conversion/synthesis, suffer from
degradation of perceptual quality. In this paper, we develop a voice
anonymization system, named V-Cloak, which attains real-time voice
anonymization while preserving the intelligibility, naturalness and timbre of
the audio. Our designed anonymizer features a one-shot generative model that
modulates the features of the original audio at different frequency levels. We
train the anonymizer with a carefully-designed loss function. Apart from the
anonymity loss, we further incorporate the intelligibility loss and the
psychoacoustics-based naturalness loss. The anonymizer can realize untargeted
and targeted anonymization to achieve the anonymity goals of unidentifiability
and unlinkability.
We have conducted extensive experiments on four datasets, i.e., LibriSpeech
(English), AISHELL (Chinese), CommonVoice (French) and CommonVoice (Italian),
five Automatic Speaker Verification (ASV) systems (including two DNN-based, two
statistical and one commercial ASV), and eleven Automatic Speech Recognition
(ASR) systems (for different languages). Experiment results confirm that
V-Cloak outperforms five baselines in terms of anonymity performance. We also
demonstrate that V-Cloak trained only on the VoxCeleb1 dataset against
ECAPA-TDNN ASV and DeepSpeech2 ASR has transferable anonymity against other
ASVs and cross-language intelligibility for other ASRs. Furthermore, we verify
the robustness of V-Cloak against various de-noising techniques and adaptive
attacks. Hopefully, V-Cloak may provide a cloak for us in a prism world.
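A speculative sketch of how the three loss terms described above might be combined during training; the term definitions and weights are illustrative assumptions, not the paper's exact formulations:

```python
# Speculative sketch: one way an anonymity, intelligibility, and naturalness
# loss could be combined into a single training objective.
import torch

def anonymizer_loss(asv_similarity, asr_ctc_loss, masking_excess,
                    w_anon=1.0, w_intel=1.0, w_nat=1.0):
    loss_anon = asv_similarity.clamp(min=0).mean()  # push speaker similarity down
    loss_intel = asr_ctc_loss.mean()                # keep ASR transcription faithful
    loss_nat = masking_excess.clamp(min=0).mean()   # penalize audible perturbation
    return w_anon * loss_anon + w_intel * loss_intel + w_nat * loss_nat

# Toy tensors standing in for per-utterance statistics from a training batch.
print(anonymizer_loss(torch.rand(8), torch.rand(8), torch.randn(8)))
```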
CFAD: A Chinese Dataset for Fake Audio Detection
Fake audio detection is a growing concern and some relevant datasets have
been designed for research. However, there is no standard public Chinese
dataset under complex conditions. In this paper, we aim to fill this gap and
design a Chinese fake audio detection dataset (CFAD) for studying more
generalized detection methods. Twelve mainstream speech-generation techniques
are used to generate fake audio. To simulate real-life scenarios, three
noise datasets are selected for noise adding at five different signal-to-noise
ratios, and six codecs are considered for audio transcoding (format
conversion). The CFAD dataset can be used not only for fake audio detection
but also for identifying the algorithms behind fake utterances, which is
useful for audio forensics. Baseline results are presented with analysis. The
results show that generalizable fake audio detection remains challenging. The
CFAD dataset is
publicly available at: https://zenodo.org/record/8122764. (The dataset,
formerly named FAD, has been renamed CFAD.)
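As an illustration of the noise-adding step described above (my own sketch, not the CFAD pipeline), mixing noise into speech at a target signal-to-noise ratio works as follows:

```python
# Mix noise into speech at a requested SNR, as done for the five SNR conditions.
import numpy as np

def add_noise(speech, noise, snr_db):
    """Scale `noise` so the mixture has the requested SNR, then mix."""
    noise = np.resize(noise, speech.shape)          # loop/trim noise to length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)                  # 1 s stand-in for speech
noisy = add_noise(clean, rng.standard_normal(8000), snr_db=5)
```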
Dictionary Attacks on Speaker Verification
In this paper, we propose dictionary attacks against speaker verification - a novel attack vector that aims to match a large fraction of the speaker population by chance. We introduce a generic formulation of the attack that can be used with various speech representations and threat models. The attacker uses adversarial optimization to maximize the raw similarity of speaker embeddings between a seed speech sample and a proxy population. The resulting master voice successfully matches a non-trivial fraction of people in an unknown population. Adversarial waveforms obtained with our approach can match on average 69% of females and 38% of males enrolled in the target system at a strict decision threshold calibrated to yield a false alarm rate of 1%. By using the attack with a black-box voice cloning system, we obtain master voices that are effective in the most challenging conditions and transferable between speaker encoders. We also show that, combined with multiple attempts, this attack raises even more serious concerns about the security of these systems.
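A hedged sketch of the adversarial optimization described above; the linear "encoder" and optimizer settings are toy stand-ins for a real speaker encoder, not the paper's setup:

```python
# Master-voice sketch: optimize one waveform to maximize its average embedding
# similarity against a proxy population of speaker embeddings.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
encoder = torch.nn.Linear(16000, 192)                  # toy speaker encoder
population = F.normalize(torch.randn(64, 192), dim=1)  # proxy-population embeddings

wave = torch.randn(1, 16000, requires_grad=True)       # seed waveform to optimize
opt = torch.optim.Adam([wave], lr=1e-2)

for _ in range(100):
    emb = F.normalize(encoder(wave), dim=1)
    loss = -(emb @ population.T).mean()  # maximize mean cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
# `wave` now scores high against many proxy speakers; the paper shows such
# waveforms transfer to a non-trivial fraction of an unknown population.
```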
- …