Block-Online Multi-Channel Speech Enhancement Using DNN-Supported Relative Transfer Function Estimates
This work addresses the problem of block-online processing for multi-channel
speech enhancement. Such processing is vital in scenarios with moving speakers
and/or when very short utterances are processed, e.g., in voice assistant
scenarios. We consider several variants of a system that performs beamforming
supported by DNN-based voice activity detection (VAD) followed by
post-filtering. The speaker is targeted through estimating relative transfer
functions between microphones. Each block of the input signals is processed
independently in order to make the method applicable in highly dynamic
environments. Owing to the short length of the processed block, the statistics
required by the beamformer are estimated less precisely. The influence of this
inaccuracy is studied and compared to the processing regime when recordings are
treated as one block (batch processing). The experimental evaluation of the
proposed method is performed on the large CHiME-4 dataset and on another
dataset featuring a moving target speaker. The experiments are evaluated in
terms of objective and perceptual criteria, such as the signal-to-interference
ratio (SIR) and the perceptual evaluation of speech quality (PESQ),
respectively.
Moreover, word error rate (WER) achieved by a baseline automatic speech
recognition system is evaluated, for which the enhancement method serves as a
front-end solution. The results indicate that the proposed method is robust
with respect to the short length of the processed block. Significant
improvements in terms of the criteria and WER are observed even for a block
length of 250 ms.
Comment: 10 pages, 8 figures, 4 tables. Modified version of the article
accepted for publication in the IET Signal Processing journal. Original
results unchanged; additional experiments presented, with a refined discussion
and conclusion.
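The block-independent processing described in the abstract above can be illustrated with a minimal sketch: the multi-channel signal is cut into short blocks (e.g., 250 ms), and each block is enhanced on its own, with no statistics carried over between blocks. The function names and the trivial channel-averaging "beamformer" below are illustrative assumptions, not the authors' implementation, which estimates relative transfer functions and applies DNN-supported beamforming per block.

```python
import numpy as np

def beamform_block(block):
    """Toy per-block beamformer: average the channels (delay-and-sum with
    zero delays). A real system would estimate relative transfer functions
    for the block and apply, e.g., an MVDR beamformer plus a post-filter."""
    return block.mean(axis=0)

def block_online_enhance(x, fs, block_ms=250):
    """x: (channels, samples) multi-channel signal.
    Processes the signal in independent blocks of `block_ms` milliseconds,
    so the method remains applicable when the target speaker moves."""
    block_len = int(fs * block_ms / 1000)
    out = []
    for start in range(0, x.shape[1], block_len):
        block = x[:, start:start + block_len]
        out.append(beamform_block(block))
    return np.concatenate(out)

# Example: 4 channels, 1 s of audio at 16 kHz -> four 250 ms blocks
fs = 16000
x = np.random.randn(4, fs)
y = block_online_enhance(x, fs, block_ms=250)
```

Because each block is processed independently, the covariance statistics available to the beamformer come from only 250 ms of data, which is exactly the source of the estimation inaccuracy the paper studies against batch processing.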
The 2005 AMI system for the transcription of speech in meetings
In this paper we describe the 2005 AMI system for the transcription
of speech in meetings used for participation in the 2005 NIST
RT evaluations. The system was designed for participation in the
speech-to-text part of the evaluations, in particular for the transcription
of speech recorded with multiple distant microphones and independent headset
microphones. System performance was tested on both conference-room
and lecture-style meetings. Although input sources are processed using
different front-ends, the recognition process is based on a unified system
architecture. The system operates in multiple passes and makes use
of state-of-the-art technologies such as discriminative training, vocal
tract length normalisation, heteroscedastic linear discriminant analysis,
speaker adaptation with maximum likelihood linear regression, and minimum
word error rate decoding. In this paper we describe the system performance
on the official development and test sets for the NIST RT05s
evaluations. The system was jointly developed in less than 10 months
by a multi-site team and was shown to achieve very competitive performance…