Supervised Speaker Embedding De-Mixing in Two-Speaker Environment
Separating different speaker properties from a multi-speaker environment is
challenging. Instead of separating a two-speaker signal in signal space like
speech source separation, a speaker embedding de-mixing approach is proposed.
The proposed approach separates different speaker properties from a two-speaker
signal in embedding space. The proposed approach contains two steps. In step
one, the clean speaker embeddings are learned and collected by a residual
TDNN-based network. In step two, the two-speaker signal and the embedding of one of
the speakers are both input to a speaker embedding de-mixing network. The
de-mixing network is trained with a reconstruction loss to generate the
embedding of the other speaker. Speaker identification accuracy and the cosine similarity
score between the clean embeddings and the de-mixed embeddings are used to
evaluate the quality of the obtained embeddings. Experiments are done on two
kinds of data: artificially augmented two-speaker data (TIMIT) and real-world
recordings of two-speaker data (MC-WSJ). Six different speaker embedding
de-mixing architectures are investigated. Compared with the performance on the
clean speaker embeddings, the results show that one of the proposed
architectures comes close, reaching 96.9% identification accuracy and a cosine
similarity of 0.89.
Comment: Published at SLT202
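As a concrete illustration of the evaluation described above, here is a minimal sketch in Python of the cosine similarity score between a clean reference embedding and a de-mixed embedding; the embedding dimension and all names are illustrative, not taken from the paper:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical example: a de-mixed embedding that is a noisy copy of the
# clean reference should score close to 1.0 (the paper reports 0.89).
rng = np.random.default_rng(0)
clean = rng.standard_normal(512)                  # clean embedding of speaker B
demixed = clean + 0.3 * rng.standard_normal(512)  # de-mixed estimate of speaker B
print(f"cosine similarity: {cosine_similarity(clean, demixed):.3f}")
```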
Deep Learning for Audio Signal Processing
Given the recent surge in developments of deep learning, this article
provides a review of the state-of-the-art deep learning techniques for audio
signal processing. Speech, music, and environmental sound processing are
considered side-by-side, in order to point out similarities and differences
between the domains, highlighting general methods, problems, key references,
and potential for cross-fertilization between areas. The dominant feature
representations (in particular, log-mel spectra and raw waveform) and deep
learning models are reviewed, including convolutional neural networks, variants
of the long short-term memory architecture, as well as more audio-specific
neural network models. Subsequently, prominent deep learning application areas
are covered, i.e., audio recognition (automatic speech recognition, music
information retrieval, environmental sound detection, localization and
tracking) and synthesis and transformation (source separation, audio
enhancement, generative models for speech, sound, and music synthesis).
Finally, key issues and future questions regarding deep learning applied to
audio signal processing are identified.
Comment: 15 pages, 2 pdf figure
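For readers unfamiliar with the log-mel representation named above, here is a minimal sketch of its computation, assuming the librosa library is available; the parameter values (n_fft, hop_length, n_mels) are typical choices, not prescribed by the article:

```python
import numpy as np
import librosa

# One second of a 440 Hz tone stands in for a real recording.
sr = 16000
y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)

# Mel-filterbank energies over short windows, then log compression.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                     hop_length=160, n_mels=64)
log_mel = librosa.power_to_db(mel)  # shape: (64 mel bands, n_frames)
print(log_mel.shape)
```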
Unsupervised Learning of Semantic Audio Representations
Even in the absence of any explicit semantic annotation, vast collections of
audio recordings provide valuable information for learning the categorical
structure of sounds. We consider several class-agnostic semantic constraints
that apply to unlabeled nonspeech audio: (i) noise and translations in time do
not change the underlying sound category, (ii) a mixture of two sound events
inherits the categories of the constituents, and (iii) the categories of events
in close temporal proximity are likely to be the same or related. Without
labels to ground them, these constraints are incompatible with classification
loss functions. However, they may still be leveraged to identify geometric
inequalities needed for triplet loss-based training of convolutional neural
networks. The result is low-dimensional embeddings of the input spectrograms
that recover 41% and 84% of the performance of their fully-supervised
counterparts when applied to downstream query-by-example sound retrieval and
sound event classification tasks, respectively. Moreover, in
limited-supervision settings, our unsupervised embeddings double the
state-of-the-art classification performance.
Comment: Submitted to ICASSP 201
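The triplet-loss training named above can be sketched as follows: a generic hinge-style triplet loss in numpy, here paired with constraint (i) (a noisy or time-shifted view of a clip keeps its category); all names and dimensions are illustrative:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.1):
    """Hinge-style triplet loss on embeddings: the positive (e.g. a noisy
    or time-shifted version of the anchor clip) must sit closer to the
    anchor than the negative (an unrelated clip) by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Illustrative embeddings (in practice, CNN outputs for spectrograms).
rng = np.random.default_rng(0)
anchor = rng.standard_normal(128)
positive = anchor + 0.1 * rng.standard_normal(128)  # augmented view, same category
negative = rng.standard_normal(128)                 # unrelated clip
print(triplet_loss(anchor, positive, negative))
```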
Tune-In: Training Under Negative Environments with Interference for Attention Networks Simulating Cocktail Party Effect
We study the cocktail party problem and propose a novel attention network
called Tune-In, short for training under negative environments with
interference. It first learns two separate spaces of speaker-knowledge and
speech-stimuli based on a shared feature space, where a new block structure is
designed as the building block for all spaces, and then cooperatively solves
different tasks. Information is passed between the two spaces via a novel
cross- and dual-attention mechanism, mimicking the bottom-up and top-down
processes of the human cocktail party effect. It turns out that
substantially discriminative and generalizable speaker representations can be
learnt in severely interfered conditions via our self-supervised training. The
experimental results verify this seeming paradox. The learnt speaker embedding
has greater discriminative power than a standard speaker verification method;
meanwhile, Tune-In consistently achieves remarkably better speech separation
performance, in terms of SI-SNRi and SDRi, than state-of-the-art benchmark
systems across all test modes, and at notably lower memory and computational
cost.
Comment: Accepted in AAAI 202
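For reference, the SI-SNR underlying the SI-SNRi metric quoted above can be computed as below; this is the standard definition, not code from the paper, and SI-SNRi is this value for the separated signal minus the value for the unprocessed mixture:

```python
import numpy as np

def si_snr(est: np.ndarray, ref: np.ndarray) -> float:
    """Scale-invariant signal-to-noise ratio in dB."""
    est = est - est.mean()
    ref = ref - ref.mean()
    # Project the estimate onto the reference to isolate the target part.
    s_target = (np.dot(est, ref) / np.dot(ref, ref)) * ref
    e_noise = est - s_target
    return float(10 * np.log10(np.dot(s_target, s_target)
                               / np.dot(e_noise, e_noise)))
```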
Online Speaker Separation Using Deep Clustering
In this thesis, a low-latency variant of the speaker-independent deep clustering
method is proposed for speaker separation. Compared to the offline deep
clustering separation system, bidirectional long short-term memory networks
(BLSTMs) are replaced with unidirectional long short-term memory networks
(LSTMs). The reason is that data has to be fed to a BLSTM network in both the
forward and backward directions, and the final outputs depend on both
directions, which makes online processing impossible.
Also, the 32 ms synthesis window is replaced with an 8 ms window to accommodate
low-latency applications such as hearing aids, since the algorithmic latency
depends on the length of the synthesis window. Furthermore, the beginning of the
audio mixture, here referred to as the buffer, is used to obtain the cluster
centers for the constituent speakers in the mixture, serving as initialization.
Those centers are then used to assign clusters for the rest of the mixture,
achieving speaker separation with a latency of 8 ms. The algorithm is evaluated
on the Wall Street Journal corpus (WSJ0).
Changing the networks from BLSTM to LSTM while keeping the same window length
degrades the separation performance, measured by signal-to-distortion ratio
(SDR), by 1.0 dB, which implies that future information is important for the
separation. To investigate the effect of window length with the same network
structure (LSTM), the window length is changed from 32 ms to 8 ms, and another
1.1 dB drop in SDR is found. For the low-latency deep clustering speaker
separation system, different buffer durations are studied. It is observed that
the separation performance initially increases with the buffer length; however,
beyond a buffer length of 0.3 s it stays steady even as the buffer grows.
Compared to the offline deep clustering separation system, a degradation of
2.8 dB in SDR is observed for the online system.
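The buffer-based initialization described above can be sketched as follows; a generic k-means in numpy stands in for the clustering of time-frequency (T-F) bin embeddings, and all shapes and names are illustrative rather than from the thesis:

```python
import numpy as np

def init_centers(buffer_emb: np.ndarray, n_speakers: int = 2, iters: int = 10):
    """Plain k-means over the T-F bin embeddings (N, D) of the initial buffer."""
    rng = np.random.default_rng(0)
    centers = buffer_emb[rng.choice(len(buffer_emb), n_speakers, replace=False)]
    for _ in range(iters):
        dists = ((buffer_emb[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        # Keep the old center if a cluster happens to receive no points.
        centers = np.stack([buffer_emb[assign == k].mean(0)
                            if np.any(assign == k) else centers[k]
                            for k in range(n_speakers)])
    return centers

def frame_masks(frame_emb: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """Assign each T-F bin of an incoming 8 ms frame to its nearest
    center, yielding one binary separation mask per speaker."""
    dists = ((frame_emb[:, None, :] - centers[None]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)
    return np.stack([(assign == k).astype(float) for k in range(len(centers))])
```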
Self-supervised learning for robust voice cloning
Voice cloning is a difficult task which requires robust and informative
features incorporated in a high quality TTS system in order to effectively copy
an unseen speaker's voice. In our work, we utilize features learned in a
self-supervised framework via the Bootstrap Your Own Latent (BYOL) method,
which is shown to produce high quality speech representations when specific
audio augmentations are applied to the vanilla algorithm. We further extend the
augmentations in the training procedure to aid the resulting features to
capture the speaker identity and to make them robust to noise and acoustic
conditions. The learned features are used as pre-trained utterance-level
embeddings and as inputs to a Non-Attentive Tacotron-based architecture, aiming
to achieve multispeaker speech synthesis without utilizing additional speaker
features. This method enables us to train our model on an unlabeled
multispeaker dataset, as well as to use unseen speaker embeddings to copy a
speaker's voice. Subjective and objective evaluations are used to validate the
proposed model, as well as the robustness to the acoustic conditions of the
target utterance.
Comment: Accepted to INTERSPEECH 202
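As an illustration of the BYOL objective mentioned above, here is the standard normalized regression loss in numpy, not code from the paper; the two inputs would be the online network's prediction and the target network's projection for two augmented views of the same utterance:

```python
import numpy as np

def byol_loss(online_pred: np.ndarray, target_proj: np.ndarray) -> float:
    """BYOL regression loss: MSE between L2-normalised online predictions
    and target projections, equivalent to 2 - 2 * cosine similarity."""
    p = online_pred / np.linalg.norm(online_pred, axis=-1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=-1, keepdims=True)
    return float(np.mean(np.sum((p - z) ** 2, axis=-1)))

# Illustrative batch of 8 embedding pairs from two augmented views
# (e.g. noise and acoustic-condition augmentations) of the same utterances.
rng = np.random.default_rng(0)
view_a = rng.standard_normal((8, 256))
view_b = view_a + 0.1 * rng.standard_normal((8, 256))
print(byol_loss(view_a, view_b))
```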