2 research outputs found

    Simultaneous Denoising and Dereverberation Using Deep Embedding Features

    Monaural speech dereverberation is a very challenging task because no spatial cues are available, and it becomes even more challenging when additive noise is present. In this paper, we propose a joint training method for simultaneous speech denoising and dereverberation using deep embedding features, based on deep clustering (DC). DC is a state-of-the-art speech separation method that combines embedding learning with K-means clustering. The proposed method consists of two stages: denoising and dereverberation. In the denoising stage, the DC network is leveraged to extract noise-free deep embedding features from the anechoic speech and residual reverberation signals. These discriminative features represent the inferred spectral masking patterns of the desired signals. In the dereverberation stage, instead of applying the unsupervised K-means clustering algorithm, a supervised neural network estimates the anechoic speech from the deep embedding features. Finally, the two stages are optimized jointly. Experimental results show that the proposed method outperforms the WPE and BLSTM baselines, especially under low-SNR conditions.
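    The two-stage pipeline above lends itself to a compact sketch. The PyTorch code below is a minimal illustration, not the authors' implementation: a BLSTM emits a unit-norm embedding per time-frequency bin (the denoising stage), and a second supervised network maps those embeddings to a spectral mask for the anechoic speech in place of K-means clustering (the dereverberation stage); both are optimized jointly with a single loss. All layer sizes and module names are assumptions, and the deep-clustering affinity loss is omitted for brevity.

```python
# Minimal sketch of the two-stage denoising + dereverberation idea.
# Shapes, layer sizes, and module names are illustrative assumptions.
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    """Denoising stage: BLSTM that emits a D-dim embedding per T-F bin."""
    def __init__(self, n_freq=129, emb_dim=20, hidden=300):
        super().__init__()
        self.blstm = nn.LSTM(n_freq, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_freq * emb_dim)
        self.n_freq, self.emb_dim = n_freq, emb_dim

    def forward(self, log_mag):                      # (B, T, F)
        h, _ = self.blstm(log_mag)                   # (B, T, 2H)
        emb = self.proj(h)                           # (B, T, F*D)
        B, T, _ = emb.shape
        emb = emb.view(B, T, self.n_freq, self.emb_dim)
        return nn.functional.normalize(emb, dim=-1)  # unit-norm embeddings

class MaskNet(nn.Module):
    """Dereverberation stage: a supervised network replaces K-means,
    estimating a mask for the anechoic speech from the embeddings."""
    def __init__(self, n_freq=129, emb_dim=20, hidden=300):
        super().__init__()
        self.blstm = nn.LSTM(n_freq * emb_dim, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.mask = nn.Linear(2 * hidden, n_freq)

    def forward(self, emb):                          # (B, T, F, D)
        B, T, F, D = emb.shape
        h, _ = self.blstm(emb.view(B, T, F * D))
        return torch.sigmoid(self.mask(h))           # (B, T, F)

# Joint training: both stages share one optimizer and one loss,
# so gradients from the dereverberation target shape the embeddings too.
enc, dec = EmbeddingNet(), MaskNet()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

noisy = torch.randn(4, 100, 129).abs().log1p()   # stand-in log-magnitude input
target = torch.rand(4, 100, 129)                 # stand-in anechoic magnitude

mask = dec(enc(noisy))
loss = nn.functional.mse_loss(mask * noisy, target)
opt.zero_grad(); loss.backward(); opt.step()
```

    Replacing K-means with a trainable mask network is what makes the second stage differentiable, which is what enables the end-to-end joint optimization the abstract describes.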

    Exploring the time-domain deep attractor network with two-stream architectures in a reverberant environment

    Despite the success of deep learning in speech signal processing, speaker-independent speech separation in reverberant environments remains challenging. The deep attractor network (DAN) performs speech separation with speaker attractors in the time-frequency domain. The recently proposed convolutional time-domain audio separation network (Conv-TasNet) surpasses ideal masks on anechoic mixtures, but its architecture makes it difficult to separate mixtures with an arbitrary number of speakers. Moreover, both models degrade in reverberant environments. In this study, we propose a time-domain deep attractor network (TD-DAN) with two-stream convolutional networks that efficiently performs both dereverberation and separation with a variable number of speakers. The speaker encoding stream (SES) of the TD-DAN models speaker information and is explored with various waveform encoders. The speech decoding stream (SDS) accepts speaker attractors from the SES and learns to predict the early reflections. Experimental results demonstrate that the TD-DAN achieved scale-invariant source-to-distortion ratio (SI-SDR) gains of 10.40/9.78 dB and 9.15/7.92 dB on the reverberant two- and three-speaker development/evaluation sets, exceeding Conv-TasNet by 1.55/1.33 dB and 0.94/1.21 dB, respectively.
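    For reference, the SI-SDR metric reported above scores an estimate after projecting it onto the reference signal, which makes the score invariant to rescaling of the estimate. Below is a minimal NumPy implementation of the standard definition (Le Roux et al., 2019); it is provided for context and is not the authors' evaluation code.

```python
import numpy as np

def si_sdr(estimate: np.ndarray, reference: np.ndarray,
           eps: float = 1e-8) -> float:
    """Scale-invariant source-to-distortion ratio in dB."""
    # Remove the mean so the measure is also invariant to DC offsets.
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Project the estimate onto the reference: the optimally scaled
    # reference is the "target" component, the remainder is distortion.
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    distortion = estimate - target
    return float(10.0 * np.log10(
        (np.sum(target ** 2) + eps) / (np.sum(distortion ** 2) + eps)))
```

    The gains quoted in the abstract are then differences such as si_sdr(separated, clean) - si_sdr(mixture, clean), averaged over the evaluation set.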