Complex Recurrent Variational Autoencoder for Speech Enhancement
Commonly used methods in speech enhancement are based on the short-time Fourier
transform (STFT) representation, in particular on the magnitude of the STFT.
This is because phase is largely unstructured and difficult to model, while
magnitude has proven more important for speech enhancement. Nevertheless, phase
has been shown to matter in several studies and cannot be ignored. Complex-valued
neural networks, which operate on complex values natively, provide a natural
solution for complex spectrogram processing. The complex variational autoencoder
(VAE), as an extension of the vanilla VAE, has shown positive results in complex
spectrogram representation. However, existing work on the complex VAE uses only
linear layers and applies the model only to a direct spectral representation.
This paper extends the linear complex VAE to a non-linear one.
Furthermore, to account for the temporal structure of speech signals, a complex
recurrent VAE is proposed. The proposed model is applied to speech enhancement.
To the best of our knowledge, this is the first time a complex generative model
has been applied to speech enhancement. Experiments are based on the TIMIT
dataset and evaluate both speech intelligibility and speech quality. The results
show that, for speech enhancement, the proposed method achieves better speech
intelligibility and comparable speech quality.
Comment: submitted to INTERSPEECH 202
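The abstract does not detail the network architecture, but the core building block of a complex-valued model of this kind is a linear layer with a complex weight matrix realized as two real weight matrices. A minimal PyTorch sketch (layer sizes and names are illustrative assumptions, not taken from the paper) might look like:

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Complex-valued affine map y = W x with W = W_r + i*W_i.

    The complex product is expanded over two real nn.Linear layers:
        Re(y) = W_r Re(x) - W_i Im(x)
        Im(y) = W_r Im(x) + W_i Re(x)
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_r = nn.Linear(in_features, out_features, bias=False)
        self.w_i = nn.Linear(in_features, out_features, bias=False)

    def forward(self, x_re, x_im):
        y_re = self.w_r(x_re) - self.w_i(x_im)
        y_im = self.w_r(x_im) + self.w_i(x_re)
        return y_re, y_im

# Toy usage on a batch of complex STFT frames (batch, freq_bins).
x = torch.randn(8, 257, dtype=torch.cfloat)
layer = ComplexLinear(257, 128)
y_re, y_im = layer(x.real, x.imag)
print(y_re.shape, y_im.shape)  # torch.Size([8, 128]) twice
```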
On Speech Pre-emphasis as a Simple and Inexpensive Method to Boost Speech Enhancement
Pre-emphasis filtering, compensating for the natural energy decay of speech
at higher frequencies, has been considered as a common pre-processing step in a
number of speech processing tasks over the years. In this work, we demonstrate,
for the first time, that pre-emphasis filtering may also be used as a simple
and computationally inexpensive way to boost deep neural network-based
speech enhancement performance. In particular, we look into pre-emphasizing the
estimated and actual clean speech prior to loss calculation so that different
speech frequency components better mirror their perceptual importance during
the training phase. Experimental results on a noisy version of the TIMIT
dataset show that integrating the pre-emphasis-based methodology at hand yields
relative estimated speech quality improvements of up to 4.6% and 3.4% for noise
types seen and unseen, respectively, during the training phase. Similar to the
case of pre-emphasis being considered as a default pre-processing step in
classical automatic speech recognition and speech coding systems, the
pre-emphasis-based methodology analyzed in this article may potentially become
a default add-on for modern speech enhancement systems.
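The paper's exact training recipe is not reproduced here, but the idea of pre-emphasizing both the estimate and the target before the loss can be sketched as follows (the coefficient 0.97 is a conventional default and the L1 loss is a placeholder, not necessarily the authors' choices):

```python
import torch

def pre_emphasis(x, alpha=0.97):
    """y[n] = x[n] - alpha * x[n-1], applied along the last (time) axis;
    the first sample is passed through unchanged."""
    return torch.cat([x[..., :1], x[..., 1:] - alpha * x[..., :-1]], dim=-1)

def pre_emphasized_l1_loss(estimate, target, alpha=0.97):
    """L1 loss on pre-emphasized waveforms, so high-frequency errors
    weigh more heavily during training."""
    return torch.mean(torch.abs(pre_emphasis(estimate, alpha)
                                - pre_emphasis(target, alpha)))
```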
DNN-Assisted Speech Enhancement Approaches Incorporating Phase Information
Speech enhancement is a widely adopted technique that removes interference from corrupted speech to improve speech quality and intelligibility. Speech enhancement methods can be implemented in either the time domain or the time-frequency (T-F) domain. Among the various proposed methods, the time-frequency domain methods, which synthesize the enhanced speech from the estimated magnitude spectrogram and the noisy phase spectrogram, have gained the most popularity over the past few decades. However, these techniques tend to ignore the importance of phase processing. To overcome this problem, this thesis aims to jointly enhance the magnitude and phase spectra by means of recent deep neural networks (DNNs). More specifically, three major contributions are presented in this thesis.
First, we present new schemes based on the basic Kalman filter (KF) to remove background noise from noisy speech in the time domain, where the KF acts as a joint estimator for both the magnitude and phase spectra of speech. A DNN-augmented basic KF is first proposed, where a DNN is applied to estimate key parameters of the KF, namely the linear prediction coefficients (LPCs). By training the DNN on a large database and exploiting its powerful learning ability, the proposed algorithm estimates LPCs from noisy speech more accurately and robustly, leading to improved performance compared with traditional KF-based approaches to speech enhancement. We further present a high-frequency (HF) component restoration algorithm to attenuate the degradation in the HF regions of the Kalman-filtered speech, in which DNN-based bandwidth extension is applied to estimate the magnitude of the HF component from its low-frequency (LF) counterpart. By incorporating the restoration algorithm, the enhanced speech suffers less distortion in the HF component. Moreover, we propose a hybrid speech enhancement system that exploits the DNN for speech reconstruction and Kalman filtering for further denoising. Two separate networks are adopted to estimate the magnitude spectrogram and the LPCs of the clean speech, respectively. The estimated clean magnitude spectrogram is combined with the phase of the noisy speech to reconstruct the estimated clean speech. A KF with the estimated parameters is then utilized to remove the residual noise in the reconstructed speech. The proposed hybrid system takes advantage of both DNN-based reconstruction and traditional Kalman filtering, and works reliably in both matched and mismatched acoustic environments.
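As a reference point for the schemes above, a minimal sketch of the basic AR-model Kalman filter follows; the LPCs are treated as given (this is where the DNN's estimate would be plugged in), and the variances q and r are hypothetical inputs rather than values from the thesis:

```python
import numpy as np

def kalman_denoise(noisy, lpc, q, r):
    """Basic Kalman filter for speech under an AR(p) signal model.

    noisy : 1-D array of observed samples, y[n] = s[n] + v[n]
    lpc   : AR coefficients a_1..a_p (assumed given; estimated by the
            trained DNN in the thesis's scheme)
    q     : driving-noise variance of the AR process (assumed known)
    r     : additive observation-noise variance (assumed known)
    """
    lpc = np.asarray(lpc, dtype=float)
    p = len(lpc)
    # Companion-form transition matrix: the top row holds the LPCs.
    F = np.zeros((p, p))
    F[0, :] = lpc
    F[1:, :-1] = np.eye(p - 1)
    H = np.zeros((1, p)); H[0, 0] = 1.0   # we observe the newest sample
    Q = np.zeros((p, p)); Q[0, 0] = q
    x = np.zeros((p, 1))                  # state: the last p clean samples
    P = np.eye(p)
    out = np.empty(len(noisy))
    for n, y in enumerate(noisy):
        # Predict one step ahead with the AR model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Correct with the noisy observation.
        k = P @ H.T / float(H @ P @ H.T + r)   # Kalman gain, shape (p, 1)
        x = x + k * (y - float(H @ x))
        P = (np.eye(p) - k @ H) @ P
        out[n] = x[0, 0]                  # newest cleaned sample
    return out
```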
Next, we incorporate the DNN-based parameter estimation scheme into two advanced KFs: the subband KF and the colored-noise KF. The DNN-augmented subband KF method decomposes the noisy speech into several subbands and applies Kalman filtering to each subband signal, where the parameters of the KF are estimated by the trained DNN. The final enhanced speech is then obtained by synthesizing the enhanced subband signals. In the DNN-augmented colored-noise KF system, both clean speech and noise are modelled as autoregressive (AR) processes, whose parameters comprise the LPCs and the driving-noise variances. The LPCs are obtained by training a multi-objective DNN, while the driving-noise variances are obtained by solving an optimization problem that minimizes the difference between the modelled and observed AR spectra of the noisy speech. The colored-noise Kalman filter with DNN-estimated parameters is then applied to the noisy speech for denoising. A post-subtraction technique is adopted to further remove the residual noise in the Kalman-filtered speech. Extensive computer simulations show that the two proposed advanced KF systems achieve significant performance gains compared with conventional Kalman-filter-based algorithms as well as recent DNN-based methods under both seen and unseen noise conditions.
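The colored-noise KF augments the state so that both speech and noise follow AR models. A sketch of how the augmented state-space matrices could be assembled (a generic construction, not the thesis's exact formulation; the LPCs and variances would come from the DNN and the spectral-matching optimization described above):

```python
import numpy as np
from scipy.linalg import block_diag

def companion(a):
    """Companion-form transition matrix for an AR process with LPCs a."""
    p = len(a)
    F = np.zeros((p, p))
    F[0, :] = a
    F[1:, :-1] = np.eye(p - 1)
    return F

def colored_noise_kf_matrices(lpc_s, lpc_v, q_s, q_v):
    """Augmented state-space matrices when both speech (s) and noise (v)
    follow AR models, so the filter tracks them jointly."""
    ps, pv = len(lpc_s), len(lpc_v)
    F = block_diag(companion(lpc_s), companion(lpc_v))  # joint dynamics
    H = np.zeros((1, ps + pv))
    H[0, 0] = 1.0    # newest speech sample ...
    H[0, ps] = 1.0   # ... plus newest noise sample: y = s + v
    Q = np.zeros((ps + pv, ps + pv))
    Q[0, 0], Q[ps, ps] = q_s, q_v
    return F, H, Q
```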
Finally, we focus on T-F domain speech enhancement with masking techniques, which aim to retain the speech-dominant components and suppress the noise-dominant parts of the noisy speech. We first derive a new type of mask, namely the constrained ratio mask (CRM), to better control the trade-off between speech distortion and residual noise in the enhanced speech. The CRM is estimated by a trained DNN from the input noisy feature set and is applied to the noisy magnitude spectrogram for denoising. We further extend the CRM to complex spectrogram estimation, where the enhanced magnitude spectrogram is obtained with the CRM, while the estimated phase spectrogram is reconstructed from the noisy phase spectrogram and the phase derivatives. Performance evaluation reveals that our proposed CRM outperforms several traditional masks in terms of objective metrics. Moreover, the enhanced speech resulting from the CRM-based complex spectrogram estimation has better speech quality than that obtained without phase reconstruction.
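The CRM's precise definition is not given in this summary; a generic sketch of masking-based enhancement, with a clipped mask range standing in for the CRM's constraint and the noisy phase reused as in the magnitude-only setting, is:

```python
import numpy as np

def apply_magnitude_mask(noisy_stft, mask, floor=0.0, ceil=1.0):
    """Apply an estimated T-F mask to the noisy magnitude and reuse the
    noisy phase. The clip to [floor, ceil] mimics a constrained mask
    range; the thesis's CRM is derived differently in detail."""
    mask = np.clip(mask, floor, ceil)
    magnitude = np.abs(noisy_stft) * mask
    return magnitude * np.exp(1j * np.angle(noisy_stft))
```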
Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments
Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic for automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and, with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
SNR-Based Teachers-Student Technique for Speech Enhancement
It is very challenging for speech enhancement methods to achieve robust
performance under both high and low signal-to-noise ratio (SNR) conditions
simultaneously. In this paper, we propose a method that integrates an SNR-based
teachers-student technique and a time-domain U-Net to deal with this problem.
Specifically, this method consists of multiple teacher models and a student
model. We first train the teacher models on multiple narrow, non-overlapping
SNR ranges, so that each teacher performs speech enhancement well within its
specific SNR range. Then, we choose among the teacher models to
supervise the training of the student model according to the SNR of the
training data. Eventually, the student model can perform speech enhancement
under both high SNR and low SNR. To evaluate the proposed method, we
constructed a dataset with SNRs ranging from -20 dB to 20 dB based on a
public dataset. We experimentally analyzed the effectiveness of the SNR-based
teachers-student technique and compared the proposed method with several
state-of-the-art methods.
Comment: Published in 2020 IEEE International Conference on Multimedia and Expo (ICME 2020)
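The paper's models and losses are not specified here, but the SNR-based teacher selection can be sketched generically as follows (the model calls and the L1 distillation loss are placeholder assumptions, not the authors' exact setup):

```python
import torch

def pick_teacher(snr_db, teachers, ranges):
    """Select the teacher trained on the narrow SNR range containing snr_db.

    teachers : list of enhancement models, one per range
    ranges   : list of (low_db, high_db) tuples, non-overlapping
    """
    for model, (lo, hi) in zip(teachers, ranges):
        if lo <= snr_db < hi:
            return model
    return teachers[-1]  # fall back to the last range

def distillation_loss(student, teacher, noisy):
    """Student mimics the chosen teacher's enhanced output."""
    with torch.no_grad():
        target = teacher(noisy)  # teacher output is a fixed target
    return torch.mean(torch.abs(student(noisy) - target))
```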