Investigating RNN-based speech enhancement methods for noise-robust Text-to-Speech
Deep Learning has been applied successfully to speech processing. In this paper we propose an architecture for speech synthesis using multiple speakers. Some hidden layers are shared by all the speakers, while there is a specific output layer for each speaker. Objective and perceptual experiments show that this scheme produces much better results than a single-speaker model. Moreover, we also tackle the problem of speaker interpolation by adding a new output layer (a-layer) on top of the multi-output branches. An identifying code is injected into the layer together with the acoustic features of many speakers. Experiments show that the a-layer can effectively learn to interpolate the acoustic features between speakers.
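The shared-trunk, per-speaker-head design described above can be sketched numerically. The layer sizes, the single shared hidden layer, and the weight-space interpolation below are illustrative assumptions, not the paper's exact architecture (which injects an identifying speaker code into the a-layer rather than mixing weights).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions: 10 linguistic input features, 32 shared
# hidden units, 5 acoustic output features, 3 speakers.
D_IN, D_HID, D_OUT, N_SPK = 10, 32, 5, 3

# Hidden layer shared by all speakers.
W_shared = rng.normal(size=(D_IN, D_HID))

# One output layer per speaker.
W_out = [rng.normal(size=(D_HID, D_OUT)) for _ in range(N_SPK)]

def synthesize(x, speaker_id):
    """Forward pass: shared trunk, then the speaker's own output layer."""
    h = relu(x @ W_shared)
    return h @ W_out[speaker_id]

x = rng.normal(size=(4, D_IN))       # a batch of 4 frames
y0 = synthesize(x, speaker_id=0)
y1 = synthesize(x, speaker_id=1)     # same input, different speaker head

# Speaker interpolation in the spirit of the a-layer: blend the
# speaker-specific output weights (an illustrative shortcut, not the
# paper's code-injection mechanism).
W_mix = 0.5 * W_out[0] + 0.5 * W_out[1]
y_mix = relu(x @ W_shared) @ W_mix
```

Sharing the trunk lets all speakers' data train the same hidden representation, while the small per-speaker heads capture voice-specific acoustics.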
Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments
Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic for automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
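As one concrete instance of the single-channel front-end techniques such a survey covers, a ratio-mask enhancer scales each time-frequency bin of the noisy magnitude spectrogram by an estimated speech-to-mixture ratio. In a deep-learning front-end the mask would be predicted by a network; the sketch below supplies the speech and noise estimates directly to stay self-contained (the function name and toy numbers are assumptions, not from the survey).

```python
import numpy as np

def apply_ratio_mask(noisy_mag, est_speech_mag, est_noise_mag, floor=1e-8):
    """Ratio-mask front-end: attenuate each time-frequency bin of the
    noisy magnitude spectrogram by speech / (speech + noise). The mask
    lies in [0, 1], so enhanced magnitudes never exceed the noisy ones."""
    mask = est_speech_mag / np.maximum(est_speech_mag + est_noise_mag, floor)
    return mask * noisy_mag

# Toy spectrogram: 2 frames x 3 frequency bins.
noisy  = np.array([[4.0, 2.0, 1.0], [3.0, 1.0, 0.5]])
speech = np.array([[3.0, 1.0, 0.0], [3.0, 0.5, 0.5]])
noise  = np.array([[1.0, 1.0, 1.0], [0.0, 0.5, 0.0]])
enhanced = apply_ratio_mask(noisy, speech, noise)
```

The enhanced spectrogram (with the noisy phase) would then be inverted back to a waveform and fed to the recognizer's feature extraction.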
AutoPrep: An Automatic Preprocessing Framework for In-the-Wild Speech Data
Recently, the utilization of extensive open-sourced text data has
significantly advanced the performance of text-based large language models
(LLMs). However, the use of in-the-wild large-scale speech data in the speech
technology community remains constrained. One reason for this limitation is
that a considerable amount of the publicly available speech data is compromised
by background noise, speech overlapping, lack of speech segmentation
information, missing speaker labels, and incomplete transcriptions, which can
largely hinder their usefulness. On the other hand, human annotation of speech
data is both time-consuming and costly. To address this issue, we introduce an
automatic in-the-wild speech data preprocessing framework (AutoPrep) in this
paper, which is designed to enhance speech quality, generate speaker labels,
and produce transcriptions automatically. The proposed AutoPrep framework
comprises six components: speech enhancement, speech segmentation, speaker
clustering, target speech extraction, quality filtering and automatic speech
recognition. Experiments conducted on the open-sourced WenetSpeech and our
self-collected AutoPrepWild corpora demonstrate that the proposed AutoPrep
framework can generate preprocessed data with similar DNSMOS and PDNSMOS scores
compared to several open-sourced TTS datasets. The corresponding TTS system can
achieve up to 0.68 in-domain speaker similarity.
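The six-component pipeline described above can be sketched as a chain of functions. The stage order follows the abstract, but every implementation below is a toy placeholder (a fixed quality bump, dummy labels, a stand-in transcript), not the authors' code.

```python
# Each utterance is a dict with audio, a DNSMOS-like quality score,
# and fields the pipeline fills in. All stage bodies are placeholders.

def speech_enhancement(u):
    return dict(u, quality=u["quality"] + 1.0)  # pretend denoising raises quality

def speech_segmentation(u):
    return u  # placeholder: treat each item as a single segment

def speaker_clustering(u):
    return dict(u, speaker="spk0")  # placeholder cluster label

def target_speech_extraction(u):
    return u  # placeholder: keep the dominant speaker's audio

def quality_filtering(batch, threshold=1.5):
    # Drop items whose quality score stays below the threshold.
    return [u for u in batch if u["quality"] >= threshold]

def asr(u):
    return dict(u, text="<transcript>")  # placeholder transcription

def autoprep(batch):
    batch = [target_speech_extraction(speaker_clustering(
             speech_segmentation(speech_enhancement(u)))) for u in batch]
    return [asr(u) for u in quality_filtering(batch)]

raw = [{"audio": [0.1, 0.2], "quality": 1.0},
       {"audio": [0.0, 0.0], "quality": 0.2}]
out = autoprep(raw)  # the low-quality item is filtered out
```

The value of the design is that each stage is independently replaceable: a real deployment would swap in an enhancement model, a diarization system, and an ASR model behind the same interfaces.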
D4AM: A General Denoising Framework for Downstream Acoustic Models
The performance of acoustic models degrades notably in noisy environments.
Speech enhancement (SE) can be used as a front-end strategy to aid automatic
speech recognition (ASR) systems. However, existing training objectives of SE
methods are not fully effective at integrating speech-text and noisy-clean
paired data for training toward unseen ASR systems. In this study, we propose a
general denoising framework, D4AM, for various downstream acoustic models. Our
framework fine-tunes the SE model with the backward gradient according to a
specific acoustic model and the corresponding classification objective. In
addition, our method aims to consider the regression objective as an auxiliary
loss to make the SE model generalize to other unseen acoustic models. To
jointly train an SE unit with regression and classification objectives, D4AM
uses an adjustment scheme to directly estimate suitable weighting coefficients
rather than undergoing a grid search process with additional training costs.
The adjustment scheme consists of two parts: gradient calibration and
regression objective weighting. The experimental results show that D4AM can
consistently and effectively provide improvements to various unseen acoustic
models and outperforms other combination setups. Specifically, when evaluated
on the Google ASR API with real noisy data completely unseen during SE
training, D4AM achieves a relative WER reduction of 24.65% compared with the
direct feeding of noisy input. To our knowledge, this is the first work that
deploys an effective combination scheme of regression (denoising) and
classification (ASR) objectives to derive a general pre-processor applicable to
various unseen ASR systems. Our code is available at
https://github.com/ChangLee0903/D4AM
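The idea of treating the regression (denoising) objective as an auxiliary term next to the classification (ASR) objective can be sketched as a single parameter update. Scaling the auxiliary gradient by a gradient-norm ratio, as below, is an illustrative stand-in for estimating a weighting coefficient without a grid search; it is not D4AM's actual gradient-calibration and weighting rules.

```python
import numpy as np

def adjusted_step(theta, grad_cls, grad_reg, lr=0.1):
    """One SE-model update combining the classification (ASR) gradient
    with a regression (denoising) gradient as an auxiliary term.
    The norm-ratio weighting here is a simplified illustration of
    directly estimating a coefficient instead of grid-searching it."""
    alpha = np.linalg.norm(grad_cls) / max(np.linalg.norm(grad_reg), 1e-12)
    return theta - lr * (grad_cls + alpha * grad_reg)

theta = np.zeros(3)
g_cls = np.array([3.0, 0.0, 0.0])   # ||g_cls|| = 3
g_reg = np.array([0.0, 6.0, 0.0])   # ||g_reg|| = 6, so alpha = 0.5
theta = adjusted_step(theta, g_cls, g_reg)
```

Keeping the regression term scaled relative to the classification gradient is what lets the denoiser stay useful for acoustic models it never saw during training.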
SEGAN: Speech Enhancement Generative Adversarial Network
Current speech enhancement techniques operate on the spectral domain and/or
exploit some higher-level feature. The majority of them tackle a limited number
of noise conditions and rely on first-order statistics. To circumvent these
issues, deep networks are being increasingly used, thanks to their ability to
learn complex functions from large example sets. In this work, we propose the
use of generative adversarial networks for speech enhancement. In contrast to
current techniques, we operate at the waveform level, training the model
end-to-end, and incorporate 28 speakers and 40 different noise conditions into
the same model, such that model parameters are shared across them. We evaluate
the proposed model using an independent, unseen test set with two speakers and
20 alternative noise conditions. The enhanced samples confirm the viability of
the proposed model, and both objective and subjective evaluations confirm the
effectiveness of it. With that, we open the exploration of generative
architectures for speech enhancement, which may progressively incorporate
further speech-centric design choices to improve their performance.
Comment: 5 pages, 4 figures, accepted at INTERSPEECH 201
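The adversarial objective at the waveform level can be sketched with least-squares GAN losses plus an L1 term pulling the enhanced waveform toward the clean reference, a combination in the spirit of SEGAN's training objective. The discriminator scores, the L1 weight, and the toy signals below are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Least-squares discriminator loss: push scores on real (clean, noisy)
    pairs toward 1 and scores on enhanced outputs toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def g_loss(d_fake, enhanced, clean, l1_weight=100.0):
    """Least-squares generator loss plus an L1 penalty between the
    enhanced and clean waveforms (the weight here is illustrative)."""
    adv = 0.5 * np.mean((d_fake - 1.0) ** 2)
    return adv + l1_weight * np.mean(np.abs(enhanced - clean))

# Toy discriminator scores and waveforms.
d_real = np.array([0.9, 1.1])
d_fake = np.array([0.1, -0.1])
clean = np.array([0.0, 0.5, -0.5])
enhanced = clean.copy()            # a perfect enhancer: L1 term vanishes
ld = d_loss(d_real, d_fake)
lg = g_loss(d_fake, enhanced, clean)
```

The L1 term keeps the generator anchored to the clean signal while the adversarial term encourages realistic waveform detail.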