The 2005 AMI system for the transcription of speech in meetings
In this paper we describe the 2005 AMI system for the transcription of speech in meetings, used for participation in the 2005 NIST RT evaluations. The system was designed for the speech-to-text part of the evaluations, in particular for transcription of speech recorded with multiple distant microphones and independent headset microphones. System performance was tested on both conference room and lecture style meetings. Although input sources are processed using different front-ends, the recognition process is based on a unified system architecture. The system operates in multiple passes and makes use of state-of-the-art technologies such as discriminative training, vocal tract length normalisation, heteroscedastic linear discriminant analysis, speaker adaptation with maximum likelihood linear regression, and minimum word error rate decoding. We report system performance on the official development and test sets for the NIST RT05s evaluations. The system was jointly developed in less than 10 months by a multi-site team and was shown to achieve very competitive performance.
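For reference, the MLLR speaker adaptation mentioned in this abstract follows a standard formulation in which each Gaussian mean is mapped by a shared affine transform estimated on adaptation data. The sketch below shows the general technique, not the AMI system's exact configuration:

```latex
% Standard MLLR mean transformation: each Gaussian mean \mu is
% adapted by an affine transform shared across a regression class.
\hat{\mu} = A\mu + b = W\xi,
\qquad \xi = \begin{bmatrix} 1 \\ \mu \end{bmatrix},
\qquad W = [\, b \;\; A \,]
% W is estimated by maximising the likelihood of the adaptation
% data, typically with one transform per regression class.
```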
Speaker adaptation and adaptive training for jointly optimised tandem systems
Speaker independent (SI) Tandem systems trained by joint optimisation of bottleneck (BN) deep neural networks (DNNs) and Gaussian mixture models (GMMs) have been found to produce word error rates (WERs) similar to those of hybrid DNN systems. A key advantage of using GMMs is that existing speaker adaptation methods, such as maximum likelihood linear regression (MLLR), can be used to account for diverse speaker variations and improve system robustness. This paper investigates speaker adaptation and speaker adaptive training (SAT) schemes for jointly optimised Tandem systems. The adaptation techniques investigated include constrained MLLR (CMLLR) transforms based on BN features for SAT, as well as MLLR and parameterised sigmoid functions for unsupervised test-time adaptation. Experiments using English multi-genre broadcast (MGB3) data show that CMLLR SAT yields a 4% relative WER reduction over jointly trained Tandem and hybrid SI systems, and further WER reductions are obtained by system combination.
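Unlike MLLR, CMLLR applies a single affine transform in feature space, so the adapted system can reuse the original GMM parameters unchanged. The sketch below shows how such a transform would be applied to bottleneck features; `apply_cmllr` and the toy shapes are hypothetical, and the estimation of A and b (by maximising adaptation-data likelihood, which includes a log|det A| Jacobian term) is omitted:

```python
import numpy as np

def apply_cmllr(features, A, b):
    """Apply a constrained MLLR (feature-space) transform.

    CMLLR adapts by transforming the observations themselves,
    o_hat = A @ o + b, leaving the GMM parameters untouched.
    features: (T, D) array; A: (D, D); b: (D,).
    Names and shapes here are illustrative only.
    """
    return features @ A.T + b

# Hypothetical usage on bottleneck (BN) features for one speaker:
T, D = 300, 39
bn_feats = np.random.randn(T, D)              # stand-in BN features
A = np.eye(D) + 0.01 * np.random.randn(D, D)  # estimated per speaker
b = np.zeros(D)
adapted = apply_cmllr(bn_feats, A, b)
```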
Environmentally robust ASR front-end for deep neural network acoustic models
This paper examines the individual and combined impacts of various front-end approaches on the performance of deep neural network (DNN) based speech recognition systems in distant talking situations, where acoustic environmental distortion degrades the recognition performance. Training of a DNN-based acoustic model consists of generation of state alignments followed by learning the network parameters. This paper first shows that the network parameters are more sensitive to the speech quality than the alignments and thus this stage requires improvement. Then, various front-end robustness approaches to addressing this problem are categorised based on functionality. The degree to which each class of approaches impacts the performance of DNN-based acoustic models is examined experimentally. Based on the results, a front-end processing pipeline is proposed for efficiently combining different classes of approaches. Using this front-end, the combined effects of different classes of approaches are further evaluated in a single distant microphone-based meeting transcription task with both speaker independent (SI) and speaker adaptive training (SAT) set-ups. By combining multiple speech enhancement results, multiple types of features, and feature transformation, the front-end shows relative performance gains of 7.24% and 9.83% in the SI and SAT scenarios, respectively, over competitive DNN-based systems using log mel-filter bank features.
This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.csl.2014.11.00
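The log mel-filter bank features used as the baseline here can be computed with a short, standard pipeline: frame the waveform, window, take the power spectrum, apply a triangular mel filterbank, and take logs. The sketch below is a generic illustration with typical parameter values, not the paper's exact configuration:

```python
import numpy as np

def log_mel_fbank(signal, sr=16000, n_fft=512, n_mels=24,
                  frame_len=400, hop=160):
    """Compute log mel-filter bank features (a common DNN front-end).

    Parameter values are typical 16 kHz defaults (25 ms frames,
    10 ms hop), chosen for illustration only.
    """
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    # Frame the signal and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)

    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2

    # Triangular filters equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    return np.log(power @ fbank.T + 1e-10)  # (n_frames, n_mels)

# Usage on a one-second dummy signal:
feats = log_mel_fbank(np.random.randn(16000))
```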
Multi-basis adaptive neural network for rapid adaptation in speech recognition
Automatic speech recognition: from study to practice
Today, automatic speech recognition (ASR) is widely used for different purposes, such as robotics, multimedia, and medical and industrial applications. Although much research has been carried out in this field over the past decades, there is still considerable room for improvement. In order to start working in this area, a thorough understanding of ASR systems, as well as of their weak points and open problems, is essential. In addition, practical experience reinforces theoretical understanding in a reliable way. With this in mind, in this master's thesis we first review the principal structure of standard HMM-based ASR systems from a technical point of view, covering feature extraction, acoustic modelling, language modelling, and decoding. We then discuss the most significant challenges in ASR systems; these concern the characteristics of internal components as well as external factors that affect ASR performance. Furthermore, we have implemented a Spanish-language recogniser using the HTK toolkit. Finally, based on the study of different sources in the field of ASR, two open research lines are suggested for future work.
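As a toy illustration of the decoding stage reviewed in this thesis, the sketch below implements Viterbi decoding for a single HMM. In an HTK-based recogniser this role is played by HVite, which additionally brings in a pronunciation dictionary and language model; the function and all values here are illustrative stand-ins:

```python
import numpy as np

def viterbi(log_trans, log_obs, log_init):
    """Find the most likely HMM state sequence (Viterbi decoding).

    log_trans: (S, S) log transition probabilities
    log_obs:   (T, S) per-frame log observation likelihoods,
               e.g. from GMM acoustic models
    log_init:  (S,)   log initial state probabilities
    """
    T, S = log_obs.shape
    delta = log_init + log_obs[0]      # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)  # best predecessor per state
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # (prev_state, cur_state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_obs[t]
    # Trace back the best path from the final frame.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```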
- …