An initial investigation of long-term adaptation for meeting transcription
Meeting transcription is a useful but challenging task. The majority of research to date has focused on individual meetings, or on small groups of meetings. In many practical deployments, however, multiple related meetings take place over a long period of time. This paper describes an initial investigation of how such long-term data can be used to improve meeting transcription. A corpus of technical meetings, recorded with a single microphone array, was collected over a two-year period, yielding a total of 179 hours of meeting data. Baseline systems using deep neural network acoustic models, in both Tandem and Hybrid configurations, and neural-network-based language models are described. The impact of supervised and unsupervised adaptation of the acoustic models is then evaluated, as well as the impact of improved language models.

Xie Chen would like to thank Toshiba Research Europe Ltd, Cambridge Research Lab, for funding his work. The authors would like to thank the Toshiba Cambridge Speech Group for allowing the data to be collected, and Chao Zhang and Eric Wang for providing the DNN and CMLLR transform tools.

This is the final published version of the article. It was originally published in INTERSPEECH 2014 (Chen, X. / Gales, Mark J. F. / Knill, Kate M. / Breslin, Catherine / Chen, Langzhou / Chin, K. K. / Wan, Vincent (2014): "An initial investigation of long-term adaptation for meeting transcription", In INTERSPEECH-2014, 954-958)
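The unsupervised adaptation mentioned in the abstract works by reusing the system's own (possibly errorful) transcripts as supervision: decode with the current model, treat the hypotheses as labels, re-estimate the adaptation parameters, and repeat. A minimal toy sketch of that loop, assuming a one-dimensional two-class "acoustic model" with a single channel-offset parameter (the setup and all names are illustrative, not the paper's DNN/CMLLR configuration):

```python
def unsupervised_adapt(frames, class_means, n_passes=3):
    """Toy self-training loop: estimate a channel offset without
    reference transcripts by trusting the model's own output.

    frames      -- observed 1-D "acoustic" features (illustrative)
    class_means -- model means for the two toy classes
    """
    offset = 0.0
    for _ in range(n_passes):
        # "Decode": label each offset-compensated frame by its
        # nearest class mean (stand-in for running the recognizer).
        hyp = [0 if abs(f - offset - class_means[0]) <
                    abs(f - offset - class_means[1]) else 1
               for f in frames]
        # "Adapt": re-estimate the offset so the frames best match
        # the class means implied by the hypothesized labels.
        residuals = [f - class_means[h] for f, h in zip(frames, hyp)]
        offset = sum(residuals) / len(residuals)
    return offset
```

In this toy setting the loop converges because the initial hypotheses are mostly correct; in practice (as the paper investigates) the gain from unsupervised adaptation depends on the error rate of those first-pass transcripts.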