Integrating Grammar-Based Language Models into Domain-Specific ASR Systems

Abstract

Language Models (LMs) are a crucial component in the architecture of Automatic Speech Recognition (ASR) systems. Current trends in this area point to the creation of high-performing and increasingly robust systems through the exploitation of large amounts of data. Although corpus-based models dominate language modelling, they may not be the most suitable approach in some of today's ASR applications. This is especially evident in domains where there is a strong interest in controlling the hypotheses generated by the system and producing only reliable outputs. Providing a deliberately constrained transcription can be achieved more effectively with a formal approach, and thus with grammars, which ultimately help to capture the inherent structures of the target language. For these reasons, we present a tool that efficiently integrates regular grammars as LMs in Kaldi, a widely used toolkit for speech recognition research. To the best of our knowledge, no existing tool performs this task. We therefore make it freely available, along with demo examples and crowdsourced evaluation corpora, so that researchers and developers can use it in their own experiments and applications.
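
As a rough illustration of the general idea (this is not the tool described above), the sketch below writes a toy word-level regular grammar in OpenFst's text format, which could then be compiled into a G.fst and composed into a Kaldi decoding graph. The grammar, the file names (G.txt, words.txt), and the assumption that every word appears in the Kaldi lang directory's word symbol table are illustrative only.

    # Minimal sketch: a hand-written regular grammar as an OpenFst text acceptor,
    # to be used in place of an n-gram G.fst in a Kaldi decoding graph.
    def write_grammar_fst(path: str) -> None:
        # Accepts "turn (on | off) the (light | fan)" as a word-level acceptor.
        arcs = [
            (0, 1, "turn"),
            (1, 2, "on"), (1, 2, "off"),
            (2, 3, "the"),
            (3, 4, "light"), (3, 4, "fan"),
        ]
        with open(path, "w") as f:
            for src, dst, word in arcs:
                # OpenFst text format: src dst ilabel olabel [weight]
                f.write(f"{src} {dst} {word} {word}\n")
            f.write("4\n")  # final state

    write_grammar_fst("G.txt")
    # Then, roughly: fstcompile --isymbols=words.txt --osymbols=words.txt G.txt > G.fst
    # followed by the usual graph construction (e.g. utils/mkgraph.sh in Kaldi).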
