    Noise Reduction with Optimal Variable Span Linear Filters

    Rank-1 Constrained Multichannel Wiener Filter for Speech Recognition in Noisy Environments

    Multichannel linear filters, such as the Multichannel Wiener Filter (MWF) and the Generalized Eigenvalue (GEV) beamformer, are popular signal processing techniques that can improve speech recognition performance. In this paper, we present an experimental study of these linear filters on a specific speech recognition task, the CHiME-4 challenge, which features real recordings in multiple noisy environments. Specifically, the rank-1 MWF is employed for noise reduction, and a new constant residual noise power constraint is derived that enhances recognition performance. To fulfill the underlying rank-1 assumption, the speech covariance matrix is reconstructed from eigenvectors or generalized eigenvectors. The rank-1 constrained MWF is then evaluated against alternative multichannel linear filters within the same framework, which uses a Bidirectional Long Short-Term Memory (BLSTM) network for mask estimation. The proposed filter outperforms the alternatives, yielding a 40% relative Word Error Rate (WER) reduction over the baseline Weighted Delay and Sum (WDAS) beamformer on the real test set and a 15% relative WER reduction over the GEV-BAN method. The results also suggest that speech recognition accuracy correlates more strongly with the variance of the Mel-frequency cepstral coefficient (MFCC) features than with the noise reduction or speech distortion level.
    Comment: for Computer Speech and Language.
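    As a rough illustration of the rank-1 constrained MWF described above, the sketch below builds the filter for a single frequency bin from speech and noise spatial covariance matrices that are assumed to be estimated elsewhere (e.g. from BLSTM mask statistics). The function name, the eigenvector-based reconstruction, and the trade-off parameter mu are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of a rank-1 constrained multichannel Wiener filter (MWF)
# for one frequency bin; phi_s and phi_n are assumed pre-estimated covariances.
import numpy as np
from scipy.linalg import eigh

def rank1_mwf(phi_s, phi_n, ref_mic=0, mu=1.0):
    """Return the (M,) complex filter w; the enhanced signal is w^H x.

    phi_s, phi_n : (M, M) Hermitian speech / noise covariance estimates
    ref_mic      : index of the reference microphone
    mu           : speech-distortion vs. noise-reduction trade-off (assumed)
    """
    # Principal generalized eigenvector of (phi_s, phi_n); eigh normalizes the
    # eigenvectors so that v^H phi_n v = 1. This direction is also the one
    # used by the GEV beamformer.
    eigvals, eigvecs = eigh(phi_s, phi_n)
    v, lam = eigvecs[:, -1], eigvals[-1]

    # Rank-1 reconstruction of the speech covariance matrix from the
    # generalized eigenvector (one of the reconstructions mentioned above).
    u = phi_n @ v
    phi_s_r1 = lam * np.outer(u, u.conj())

    # MWF with the rank-1 speech model:
    #   w = (phi_s_r1 + mu * phi_n)^{-1} phi_s_r1 e_ref
    # which, because phi_s_r1 is rank-1, simplifies via the matrix inversion
    # lemma to  w = phi_n^{-1} phi_s_r1 e_ref / (mu + tr(phi_n^{-1} phi_s_r1)).
    num = np.linalg.solve(phi_n, phi_s_r1[:, ref_mic])
    return num / (mu + np.trace(np.linalg.solve(phi_n, phi_s_r1)).real)
```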

    Sound Zones as an Optimal Filtering Problem

    Signal-Adaptive and Perceptually Optimized Sound Zones with Variable Span Trade-Off Filters

    Creating sound zones has been an active research field since the idea was first proposed. So far, most sound zone control methods rely either on optimizing physical metrics such as acoustic contrast and signal distortion or on a mode decomposition of the desired sound field. With these methods, approximately 15 dB of acoustic contrast between the reproduced sound field in the target zone and its leakage into the other zone(s) has been reported in practical set-ups, which is typically not high enough to satisfy the people inside the zones. In this paper, we propose a sound zone control method that shapes the leakage errors so that they are as inaudible as possible for a given acoustic contrast. The shaping is performed by taking the time-varying input signal characteristics and the human auditory system into account when the loudspeaker control filters are calculated. We show how this shaping can be performed using variable span trade-off filters, and we show theoretically how these filters can be used to trade signal distortion in the target zone for acoustic contrast. The proposed method is evaluated using physical metrics such as acoustic contrast and perceptual metrics such as STOI. The computational complexity and processing time of the proposed method for different system set-ups are also investigated. Lastly, the results of a MUSHRA listening test are reported, showing that the proposed method provides more than 20% perceptual improvement over existing sound zone control methods.
    Comment: Accepted for publication in IEEE/ACM Transactions on Audio, Speech, and Language Processing.
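    To make the variable span trade-off idea concrete, the following sketch assumes the common formulation in which the bright-zone and dark-zone correlation matrices are jointly diagonalized and the control filter is built from the V principal generalized eigenvectors with a trade-off parameter mu. The names R_b, R_d, r_b, V and mu are assumptions for illustration, not the paper's exact notation or implementation.

```python
# Hypothetical sketch of a variable span trade-off (VAST-style) sound zone filter.
import numpy as np
from scipy.linalg import eigh

def variable_span_filter(R_b, R_d, r_b, V, mu):
    """Return control filter coefficients q for one block / frequency bin.

    R_b, R_d : (L, L) bright- and dark-zone correlation matrices (assumed given)
    r_b      : (L,) cross-correlation between the desired bright-zone signal
               and the loudspeaker inputs (assumed given)
    V        : span, i.e. number of generalized eigenvectors used (1 <= V <= L)
    mu       : trade-off between target-zone distortion and leakage energy
    """
    # Joint diagonalization: R_b u = lam * R_d u, with u^H R_d u = 1,
    # which is the normalization the closed-form solution below assumes.
    lams, U = eigh(R_b, R_d)

    # Keep the V eigenvectors with the largest generalized eigenvalues,
    # i.e. the directions with the highest bright-to-dark energy ratio.
    idx = np.argsort(lams)[::-1][:V]

    q = np.zeros(r_b.shape, dtype=complex)
    for lam, u in zip(lams[idx], U[:, idx].T):
        q += (np.vdot(u, r_b) / (lam + mu)) * u
    return q
```

    Roughly speaking, setting V to the full filter length recovers a regularized least-squares (pressure-matching-like) solution, while V = 1 concentrates all effort in the single direction of maximum acoustic contrast; mu then controls how much signal distortion in the target zone is traded for contrast, which is the trade-off the abstract describes.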

    Fast Generation of Sound Zones Using Variable Span Trade-Off Filters in the DFT-Domain

    Model-based speech enhancement for hearing aids
