Adversarial Reprogramming of Text Classification Neural Networks
Adversarial Reprogramming has demonstrated success in utilizing pre-trained
neural network classifiers for alternative classification tasks without
modification to the original network. An adversary in such an attack scenario
trains an additive contribution to the inputs to repurpose the neural network
for the new classification task. While this reprogramming approach works for
neural networks with a continuous input space such as that of images, it is not
directly applicable to neural networks trained for tasks such as text
classification, where the input space is discrete. Repurposing such
classification networks would require the attacker to learn an adversarial
program that maps inputs from one discrete space to the other. In this work, we
introduce a context-based vocabulary remapping model to reprogram neural
networks trained on a specific sequence classification task, for a new sequence
classification task desired by the adversary. We propose training procedures
for this adversarial program in both white-box and black-box settings. We
demonstrate the application of our model by adversarially repurposing various
text-classification models, including LSTM, bi-directional LSTM, and CNN models,
for alternate classification tasks.
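The core idea, mapping tokens from the adversary's task vocabulary into the victim model's vocabulary conditioned on surrounding context, can be illustrated with a minimal sketch. The `remap` function and the mapping table it consumes are illustrative assumptions; the paper learns this context-based remapping adversarially rather than fixing it by hand.

```python
# Minimal sketch of context-based vocabulary remapping (assumption: a toy,
# hand-fixed mapping table; the paper trains this mapping end-to-end).

def remap(tokens, table, context=1):
    """Map each token of the adversary's task into the victim vocabulary,
    conditioning on up to `context` preceding tokens. Falls back to a
    context-free entry, then to the token itself."""
    out = []
    for i, tok in enumerate(tokens):
        ctx = tuple(tokens[max(0, i - context):i])
        out.append(table.get((ctx, tok), table.get(((), tok), tok)))
    return out
```

The remapped sequence is then fed unchanged to the victim classifier, whose labels the adversary reinterprets for the new task.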
Expediting TTS Synthesis with Adversarial Vocoding
Recent approaches in text-to-speech (TTS) synthesis employ neural network
strategies to vocode perceptually-informed spectrogram representations directly
into listenable waveforms. Such vocoding procedures create a computational
bottleneck in modern TTS pipelines. We propose an alternative approach which
utilizes generative adversarial networks (GANs) to learn mappings from
perceptually-informed spectrograms to simple magnitude spectrograms which can
be heuristically vocoded. Through a user study, we show that our approach
significantly outperforms naïve vocoding strategies while being hundreds of
times faster than neural network vocoders used in state-of-the-art TTS systems.
We also show that our method can be used to achieve state-of-the-art results in
unsupervised synthesis of individual words of speech.
Comment: Published as a conference paper at INTERSPEECH 201
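The problem the GAN solves can be seen in a toy form: a perceptually-informed (mel-like) spectrogram is a lossy linear projection of the magnitude spectrogram, so recovering the magnitudes requires inverting that projection. The tiny random "filterbank" below is an assumption for illustration, and the least-squares inversion shown is the naive baseline the paper's adversarial mapping improves upon.

```python
import numpy as np

# Toy illustration of mel-to-magnitude recovery (assumption: a tiny random
# stand-in filterbank; the paper learns this inversion with a GAN instead of
# the least-squares baseline shown here).
rng = np.random.default_rng(0)
n_fft_bins, n_mel = 8, 4
mel_fb = np.abs(rng.normal(size=(n_mel, n_fft_bins)))  # filterbank M

mag = np.abs(rng.normal(size=(n_fft_bins, 10)))        # true magnitude frames
mel = mel_fb @ mag                                     # perceptual spectrogram

# Naive baseline: least-squares inversion of the lossy mel projection.
mag_hat = np.linalg.pinv(mel_fb) @ mel
```

The recovered magnitude spectrogram can then be vocoded heuristically (e.g., with phase-retrieval methods such as Griffin-Lim), which is the cheap final step the paper exploits.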
Universal Adversarial Perturbations for Speech Recognition Systems
In this work, we demonstrate the existence of universal adversarial audio
perturbations that cause mis-transcription of audio signals by automatic speech
recognition (ASR) systems. We propose an algorithm to find a single
quasi-imperceptible perturbation, which when added to any arbitrary speech
signal, will most likely fool the victim speech recognition model. Our
experiments demonstrate the application of our proposed technique by crafting
audio-agnostic universal perturbations for the state-of-the-art ASR system --
Mozilla DeepSpeech. Additionally, we show that such perturbations generalize to
a significant extent across models that are not available during training, by
performing a transferability test on a WaveNet-based ASR system.
Comment: Published as a conference paper at INTERSPEECH 201
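The general shape of such an algorithm, accumulating one perturbation across many signals while projecting it back into an imperceptibility budget, can be sketched as follows. The `grad_fn` stand-in and the specific step rule are assumptions for illustration; the paper optimizes against a real ASR model's transcription loss.

```python
import numpy as np

# Sketch of building a single universal perturbation over a set of signals
# (assumptions: a caller-supplied gradient oracle `grad_fn` and an L-inf
# budget `eps`; the paper attacks a real speech recognition model).

def universal_perturbation(signals, grad_fn, eps=0.05, lr=0.01, epochs=3):
    delta = np.zeros_like(signals[0])
    for _ in range(epochs):
        for x in signals:
            delta += lr * np.sign(grad_fn(x + delta))  # ascend the ASR loss
            delta = np.clip(delta, -eps, eps)          # keep it quasi-imperceptible
    return delta
```

Because the same `delta` is updated on every signal, the result is audio-agnostic: it is added to arbitrary speech rather than crafted per utterance.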
FastWave: Accelerating Autoregressive Convolutional Neural Networks on FPGA
Autoregressive convolutional neural networks (CNNs) have been widely
exploited for sequence generation tasks such as audio synthesis, language
modeling and neural machine translation. WaveNet is a deep autoregressive CNN
composed of several stacked layers of dilated convolution that is used for
sequence generation. While WaveNet produces state-of-the-art audio generation
results, the naive inference implementation is quite slow; it takes a few
minutes to generate just one second of audio on a high-end GPU. In this work,
we develop the first accelerator platform, FastWave, for autoregressive
convolutional neural networks, and address the associated design challenges. We
design the Fast-Wavenet inference model in Vivado HLS and perform a wide range
of optimizations including fixed-point implementation, array partitioning and
pipelining. Our model uses a fully parameterized parallel architecture for fast
matrix-vector multiplication that enables per-layer customized latency
fine-tuning for further throughput improvement. Our experiments comparatively
assess the trade-off between throughput and resource utilization for various
optimizations. Our best WaveNet design on the Xilinx XCVU13P FPGA, which uses
only on-chip memory, achieves 66x faster generation than the CPU implementation
and 11x faster generation than the GPU implementation.
Comment: Published as a conference paper at ICCAD 201
REMARK-LLM: A Robust and Efficient Watermarking Framework for Generative Large Language Models
We present REMARK-LLM, a novel, efficient, and robust watermarking framework
designed for texts generated by large language models (LLMs). Synthesizing
human-like content using LLMs necessitates vast computational resources and
extensive datasets, encapsulating critical intellectual property (IP). However,
the generated content is prone to malicious exploitation, including spamming
and plagiarism. To address these challenges, REMARK-LLM proposes three new
components: (i) a learning-based message encoding module to infuse binary
signatures into LLM-generated texts; (ii) a reparameterization module to
transform the dense distributions from the message encoding to the sparse
distribution of the watermarked textual tokens; (iii) a decoding module
dedicated to signature extraction. Furthermore, we introduce an optimized beam
search algorithm to guarantee the coherence and consistency of the generated
content. REMARK-LLM is rigorously trained to encourage the preservation of
semantic integrity in watermarked content, while ensuring effective watermark
retrieval. Extensive evaluations on multiple unseen datasets highlight
REMARK-LLM's proficiency and transferability in inserting 2 times more signature
bits into the same texts when compared to prior art, all while maintaining
semantic integrity. Furthermore, REMARK-LLM exhibits better resilience against
a spectrum of watermark detection and removal attacks.
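A drastically simplified version of message encoding and extraction conveys the idea: hide one signature bit per position by steering which of several plausible tokens is emitted, then recover the bits from the emitted tokens alone. The parity rule and candidate lists below are toy assumptions; REMARK-LLM learns the encoding, the sparse-token reparameterization, and the decoder jointly.

```python
# Toy sketch of signature embedding in token choices (assumption: one bit
# per position via token-id parity; REMARK-LLM learns this end-to-end with
# a beam search that preserves coherence).

def embed_bits(candidates, bits):
    """candidates: per-position lists of plausible token ids, best first.
    Pick the first candidate whose parity matches the signature bit."""
    return [next(t for t in cands if t % 2 == bit)
            for cands, bit in zip(candidates, bits)]

def extract_bits(tokens):
    """Blind extraction: recover the signature from the tokens alone."""
    return [t % 2 for t in tokens]
```

Because each position still emits a plausible candidate, the watermarked text can remain fluent while carrying the hidden signature.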
ACE-VC: Adaptive and Controllable Voice Conversion using Explicitly Disentangled Self-supervised Speech Representations
In this work, we propose a zero-shot voice conversion method using speech
representations trained with self-supervised learning. First, we develop a
multi-task model to decompose a speech utterance into features such as
linguistic content, speaker characteristics, and speaking style. To disentangle
content and speaker representations, we propose a training strategy based on
Siamese networks that encourages similarity between the content representations
of the original and pitch-shifted audio. Next, we develop a synthesis model
with pitch and duration predictors that can effectively reconstruct the speech
signal from its decomposed representation. Our framework allows controllable
and speaker-adaptive synthesis to perform zero-shot any-to-any voice conversion
achieving state-of-the-art results on metrics evaluating speaker similarity,
intelligibility, and naturalness. Using just 10 seconds of data for a target
speaker, our framework can perform voice swapping and achieves a speaker
verification EER of 5.5% for seen speakers and 8.4% for unseen speakers.
Comment: Published as a conference paper at ICASSP 202
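The Siamese disentanglement objective reduces to penalizing disagreement between the content embeddings of an utterance and its pitch-shifted copy, since a pitch shift changes speaker-like attributes but not linguistic content. The stand-in embedding vectors and the exact cosine form below are illustrative assumptions about how such a similarity term can be written.

```python
import numpy as np

# Sketch of a Siamese content-similarity objective (assumption: stand-in
# embedding vectors; in the paper these come from a content encoder applied
# to the original and pitch-shifted versions of the same utterance).

def content_similarity_loss(z_orig, z_shifted):
    """1 - cosine similarity: zero when the two content embeddings agree,
    so minimizing it pushes pitch-invariant (content-only) representations."""
    cos = np.dot(z_orig, z_shifted) / (
        np.linalg.norm(z_orig) * np.linalg.norm(z_shifted))
    return 1.0 - cos
```

Minimizing this term while other heads predict speaker and style encourages the content branch to discard speaker characteristics, which is what enables the zero-shot any-to-any conversion described above.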