Multi-talker Speech Separation with Utterance-level Permutation Invariant Training of Deep Recurrent Neural Networks
In this paper we propose the utterance-level Permutation Invariant Training
(uPIT) technique. uPIT is a practically applicable, end-to-end, deep learning
based solution for speaker independent multi-talker speech separation.
Specifically, uPIT extends the recently proposed Permutation Invariant Training
(PIT) technique with an utterance-level cost function, hence eliminating the
need for solving an additional permutation problem during inference, which is
otherwise required by frame-level PIT. We achieve this using Recurrent Neural
Networks (RNNs) that, during training, minimize the utterance-level separation
error, hence forcing separated frames belonging to the same speaker to be
aligned to the same output stream. In practice, this allows RNNs, trained with
uPIT, to separate multi-talker mixed speech without any prior knowledge of
signal duration, number of speakers, speaker identity or gender. We evaluated
uPIT on the WSJ0 and Danish two- and three-talker mixed-speech separation tasks
and found that uPIT outperforms techniques based on Non-negative Matrix
Factorization (NMF) and Computational Auditory Scene Analysis (CASA), and
compares favorably with Deep Clustering (DPCL) and the Deep Attractor Network
(DANet). Furthermore, we found that models trained with uPIT generalize well to
unseen speakers and languages. Finally, we found that a single model, trained
with uPIT, can handle both two-speaker and three-speaker speech mixtures.
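A minimal sketch of an utterance-level PIT loss of the kind described above, assuming the estimates and targets are magnitude-spectrogram tensors of shape (batch, speakers, frames, bins); the function name and shapes are illustrative, not taken from the paper's implementation.

    import itertools
    import torch

    def upit_mse_loss(estimates: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        """Permutation-invariant MSE, with the best assignment chosen per utterance."""
        n_spk = estimates.shape[1]
        losses = []
        for perm in itertools.permutations(range(n_spk)):
            # One MSE per utterance for this output-to-speaker assignment,
            # computed over the whole utterance (all frames and frequency bins).
            perm_est = estimates[:, list(perm)]
            losses.append(((perm_est - targets) ** 2).mean(dim=(1, 2, 3)))
        all_perms = torch.stack(losses, dim=0)      # (n_perms, batch)
        return all_perms.min(dim=0).values.mean()   # scalar training loss

Because the minimum is taken over the whole utterance, frames assigned to the same speaker are forced onto the same output stream, which is what removes the permutation problem at inference time.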
Adversarial Network Bottleneck Features for Noise Robust Speaker Verification
In this paper, we propose a noise robust bottleneck feature representation
which is generated by an adversarial network (AN). The AN includes two cascade
connected networks, an encoding network (EN) and a discriminative network (DN).
Mel-frequency cepstral coefficients (MFCCs) of clean and noisy speech are used
as input to the EN and the output of the EN is used as the noise robust
feature. The EN and DN are trained in turn, namely, when training the DN, noise
types are selected as the training labels and when training the EN, all labels
are set as the same, i.e., the clean speech label, which aims to make the AN
features invariant to noise and thus achieve noise robustness. We evaluate the
performance of the proposed feature on a Gaussian Mixture Model-Universal
Background Model based speaker verification system and compare it to MFCC
features of speech enhanced by short-time spectral amplitude minimum mean
square error (STSA-MMSE) and deep neural network-based speech enhancement
(DNN-SE) methods. Experimental results on the RSR2015 database show that the
proposed AN bottleneck feature (AN-BN) dramatically outperforms the STSA-MMSE
and DNN-SE based MFCCs for different noise types and signal-to-noise ratios.
Furthermore, the AN-BN feature is able to improve the speaker verification
performance under the clean condition.
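A rough sketch of the alternating EN/DN training described above, assuming simple feed-forward networks on MFCC input; the layer sizes, optimizers, label indices and the toy data are placeholders, not the authors' configuration.

    import torch
    import torch.nn as nn

    n_mfcc, n_bottleneck, n_noise_types, clean_label = 39, 64, 5, 0

    encoder = nn.Sequential(nn.Linear(n_mfcc, 256), nn.ReLU(), nn.Linear(256, n_bottleneck))
    discriminator = nn.Sequential(nn.Linear(n_bottleneck, 256), nn.ReLU(), nn.Linear(256, n_noise_types))
    opt_en = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    opt_dn = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    ce = nn.CrossEntropyLoss()

    # Toy stand-in for real batches of (MFCC frames, noise-type labels).
    mfcc_batches = [(torch.randn(32, n_mfcc), torch.randint(0, n_noise_types, (32,)))
                    for _ in range(10)]

    for mfcc, noise_type in mfcc_batches:
        # Step 1: train the DN to recognise the noise type from the bottleneck feature.
        dn_loss = ce(discriminator(encoder(mfcc).detach()), noise_type)
        opt_dn.zero_grad()
        dn_loss.backward()
        opt_dn.step()

        # Step 2: train the EN with every label set to the clean-speech class,
        # pushing the bottleneck feature towards noise invariance.
        clean_targets = torch.full_like(noise_type, clean_label)
        en_loss = ce(discriminator(encoder(mfcc)), clean_targets)
        opt_en.zero_grad()
        en_loss.backward()
        opt_en.step()

The output of the trained encoder is then used as the noise-robust bottleneck feature for the downstream speaker verification system.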
Effects of Lombard Reflex on the Performance of Deep-Learning-Based Audio-Visual Speech Enhancement Systems
Humans tend to change their way of speaking when they are immersed in a noisy
environment, a reflex known as the Lombard effect. Current speech enhancement
systems based on deep learning do not usually take into account this change in
the speaking style, because they are trained with neutral (non-Lombard) speech
utterances recorded under quiet conditions to which noise is artificially
added. In this paper, we investigate the effects that the Lombard reflex has on
the performance of audio-visual speech enhancement systems based on deep
learning. The results show a performance gap of approximately 5 dB between
systems trained on neutral speech and those trained on Lombard speech. This
indicates the benefit of accounting for the mismatch between neutral and
Lombard speech in the design of audio-visual speech enhancement systems.
Permutation Invariant Training of Deep Models for Speaker-Independent Multi-talker Speech Separation
We propose a novel deep learning model, which supports permutation invariant
training (PIT), for speaker independent multi-talker speech separation,
commonly known as the cocktail-party problem. Unlike most prior art, which
treats speech separation as a multi-class regression problem, and the deep
clustering technique, which considers it a segmentation (or clustering)
problem, our model optimizes for the separation regression error, ignoring the
order of mixing sources. This strategy cleverly solves the long-lasting label
permutation problem that has prevented progress on deep learning based
techniques for speech separation. Experiments on the equal-energy mixing setup
of a Danish corpus confirm the effectiveness of PIT. We believe improvements
built upon PIT can eventually solve the cocktail-party problem and enable
real-world adoption of, e.g., automatic meeting transcription and multi-party
human-computer interaction, where overlapping speech is common.
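For contrast with the utterance-level loss sketched earlier, here is an illustrative frame-level PIT criterion under the same assumed tensor layout (batch, speakers, frames, bins): the best output-to-speaker assignment is chosen per frame rather than per utterance, which is what creates the extra permutation problem at inference time that uPIT removes.

    import itertools
    import torch

    def frame_pit_mse_loss(estimates: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        n_spk = estimates.shape[1]
        per_perm = []
        for perm in itertools.permutations(range(n_spk)):
            perm_est = estimates[:, list(perm)]
            # MSE per (utterance, frame) for this assignment.
            per_perm.append(((perm_est - targets) ** 2).mean(dim=(1, 3)))
        stacked = torch.stack(per_perm, dim=0)      # (n_perms, batch, frames)
        return stacked.min(dim=0).values.mean()     # best assignment chosen per frame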
Privacy Protection Performance of De-identified Face Images with and without Background
Li Meng, 'Privacy Protection Performance of De-identified Face Images with and without Background', paper presented at the 39th International Information and Communication Technology (ICT) Convention, Grand Hotel Adriatic Congress Centre and Admiral Hotel, Opatija, Croatia, May 30 - June 3, 2016.
This paper presents an approach to blending a de-identified face region with its original background, for the purpose of completing the process of face de-identification. The re-identification risk of the de-identified FERET face images has been evaluated for the k-Diff-furthest face de-identification method, using several face recognition benchmark methods including PCA, LBP, HOG and LPQ. The experimental results show that the k-Diff-furthest method delivers high privacy protection within the face region, while blending the de-identified face region with its original background may significantly increase the re-identification risk, indicating that de-identification must also be applied to image areas beyond the face region.
Vocal Tract Length Perturbation for Text-Dependent Speaker Verification with Autoregressive Prediction Coding
In this letter, we propose a vocal tract length (VTL) perturbation method for
text-dependent speaker verification (TD-SV), in which a set of TD-SV systems
are trained, one for each VTL factor, and score-level fusion is applied to make
a final decision. Next, we explore the bottleneck (BN) feature extracted by
training deep neural networks with a self-supervised objective, autoregressive
predictive coding (APC), for TD-SV and compare it with the well-studied
speaker-discriminant BN feature. The proposed VTL method is then applied to APC
and speaker-discriminant BN features. In the end, we combine the VTL
perturbation systems trained on MFCC and the two BN features in the score
domain. Experiments are performed on the RedDots challenge 2016 database of
TD-SV using short utterances with Gaussian mixture model-universal background
model and i-vector techniques. Results show the proposed methods significantly
outperform the baselines.
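A hedged sketch of the score-level fusion over VTL-perturbed systems described above: one verification system is trained per warping factor, and their scores for a trial are averaged to make the final decision. The scoring functions, feature inputs and threshold are illustrative placeholders, not the paper's exact setup.

    from typing import Callable, Sequence

    import numpy as np

    def fused_verification_score(
        trial_features: Sequence[np.ndarray],               # one feature matrix per VTL factor
        systems: Sequence[Callable[[np.ndarray], float]],   # matching per-factor scorers
    ) -> float:
        """Average the per-factor verification scores for one trial."""
        scores = [score(feat) for score, feat in zip(systems, trial_features)]
        return float(np.mean(scores))

    # Usage: accept the claimed identity if the fused score clears a tuned threshold.
    # decision = fused_verification_score(feats_per_factor, per_factor_scorers) > threshold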
Improving Label-Deficient Keyword Spotting Using Self-Supervised Pretraining
In recent years, the development of accurate deep keyword spotting (KWS)
models has resulted in KWS technology being embedded in a number of
technologies such as voice assistants. Many of these models rely on large
amounts of labelled data to achieve good performance. As a result, their use is
restricted to applications for which a large labelled speech data set can be
obtained. Self-supervised learning seeks to mitigate the need for large
labelled data sets by leveraging unlabelled data, which is easier to obtain in
large amounts. However, most self-supervised methods have only been
investigated for very large models, whereas KWS models are desired to be small.
In this paper, we investigate the use of self-supervised pretraining for the
smaller KWS models in a label-deficient scenario. We pretrain the Keyword
Transformer model using the self-supervised framework Data2Vec and carry out
experiments on a label-deficient setup of the Google Speech Commands data set.
It is found that the pretrained models greatly outperform the models without
pretraining, showing that Data2Vec pretraining can increase the performance of
KWS models in label-deficient scenarios. The source code is made publicly
available.
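A very simplified sketch of Data2Vec-style self-supervised pretraining for a small keyword-spotting encoder: a student network sees a masked input and regresses the representations an exponential-moving-average (EMA) teacher produces on the unmasked input. The encoder, masking scheme, toy data and hyperparameters below are placeholders, not the Keyword Transformer configuration used in the paper, and the real Data2Vec target averages several teacher layers rather than only the final output.

    import copy
    import torch
    import torch.nn as nn

    encoder = nn.GRU(input_size=40, hidden_size=128, num_layers=2, batch_first=True)
    teacher = copy.deepcopy(encoder)
    for p in teacher.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    ema_decay = 0.999

    def ema_update(student: nn.Module, teacher: nn.Module, decay: float) -> None:
        with torch.no_grad():
            for ps, pt in zip(student.parameters(), teacher.parameters()):
                pt.mul_(decay).add_(ps, alpha=1 - decay)

    for _ in range(100):                                    # toy pretraining loop
        feats = torch.randn(8, 98, 40)                      # unlabeled log-mel batches (toy data)
        mask = (torch.rand(8, 98, 1) < 0.5).float()         # random time-step masking
        student_out, _ = encoder(feats * (1 - mask))        # student sees the masked input
        with torch.no_grad():
            target, _ = teacher(feats)                      # teacher sees the clean input
        loss = ((student_out - target) ** 2 * mask).sum() / mask.sum()  # loss on masked steps only
        opt.zero_grad()
        loss.backward()
        opt.step()
        ema_update(encoder, teacher, ema_decay)             # teacher follows the student via EMA

After pretraining, the encoder would be fine-tuned on the small labelled keyword set, which is the label-deficient scenario the abstract evaluates.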