A Framework For Automatic Code Switching Speech Recognition With Multilingual Acoustic And Pronunciation Models Adaptation
Recognition of code-switching speech is a challenging problem for three reasons. First, code-switching is not a simple mixing of two languages; each language has its own phonological, lexical, and grammatical variations. Second, code-switching resources, such as speech and text corpora, are limited and difficult to collect; creating code-switching speech recognition models may therefore require a different strategy from that typically used for monolingual automatic speech recognition (ASR). Third, a language-switched segment in an utterance can be as short as a word or as long as the utterance itself, which makes language identification difficult. In this thesis, we propose a novel approach to automatic recognition of code-switching speech. The proposed method consists of two phases: ASR and rescoring. The framework uses parallel automatic speech recognizers for speech recognition. We also put forward an acoustic model adaptation approach, a hybrid of interpolation and merging, to cross-adapt acoustic models of different languages so that they recognize code-switching speech better. In pronunciation modeling, we propose an approach to model the pronunciation of non-native accented speech for an ASR system. Our approach is tested on two code-switching corpora: Malay–English and Mandarin–English. When the proposed approaches are applied, the word error rate for Malay–English code-switching speech recognition is reduced from 33.2% to 25.2%, and that for Mandarin–English code-switching speech recognition is reduced from 81.2% to 56.3%. These results show that the proposed approaches are promising for handling code-switching speech.
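The interpolation side of the hybrid interpolation-and-merging adaptation can be pictured, at its simplest, as a weighted combination of the two monolingual acoustic models' output distributions. The sketch below is an illustrative assumption, not the thesis's actual recipe: the function name, the shared acoustic unit inventory, and the fixed mixing weight are all hypothetical, and in practice the weight would be tuned on code-switching development data.

```python
import numpy as np

def interpolate_posteriors(p_lang1, p_lang2, weight=0.5):
    """Linearly interpolate per-frame state posteriors from two
    monolingual acoustic models that share a common unit inventory
    (a hypothetical simplification for illustration)."""
    p1 = np.asarray(p_lang1, dtype=float)
    p2 = np.asarray(p_lang2, dtype=float)
    mixed = weight * p1 + (1.0 - weight) * p2
    # Renormalize so the result is a proper distribution.
    return mixed / mixed.sum()
```

With `weight=0.5` the two models contribute equally; skewing the weight toward one model biases recognition toward that language's acoustics.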
Dual Language Models for Code Switched Speech Recognition
In this work, we present a simple and elegant approach to language modeling
for bilingual code-switched text. Since code-switching is a blend of two or
more different languages, a standard bilingual language model can be improved
upon by using structures of the monolingual language models. We propose a novel
technique called dual language models, which involves building two
complementary monolingual language models and combining them using a
probabilistic model for switching between the two. We evaluate the efficacy of
our approach using a conversational Mandarin-English speech corpus. We
demonstrate the robustness of our model through significant improvements in perplexity
measures over the standard bilingual language model without the use of any
external information. Similar consistent improvements are also reflected in
automatic speech recognition error rates. Comment: Accepted at Interspeech 201
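The dual language model idea, two monolingual LMs joined by a probabilistic switching model, can be sketched as follows. This is a toy unigram version under assumed simplifications (a single Bernoulli switch probability, a word-to-language lookup, and smoothing with a small floor probability); the paper's actual models are n-gram LMs and its switching model is estimated from data.

```python
import math

def dual_lm_score(sentence, lm_a, lm_b, lang_of, p_switch=0.2):
    """Log-probability of a code-switched word sequence under two
    monolingual unigram LMs (dicts: word -> prob) combined with a
    Bernoulli model for staying in vs. switching language."""
    logp = 0.0
    prev_lang = None
    for word in sentence:
        lang = lang_of[word]
        lm = lm_a if lang == "a" else lm_b
        p_word = lm.get(word, 1e-6)  # tiny floor for unseen words
        if prev_lang is None:
            p_trans = 0.5            # uninformative prior on first language
        elif lang == prev_lang:
            p_trans = 1.0 - p_switch # stay in the same language
        else:
            p_trans = p_switch       # pay a cost to switch
        logp += math.log(p_trans * p_word)
        prev_lang = lang
    return logp
```

With `p_switch < 0.5`, hypotheses that switch languages gratuitously score lower than monolingual stretches, which is the behavior the dual LM is meant to encode.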
Constrained Output Embeddings for End-to-End Code-Switching Speech Recognition with Only Monolingual Data
The lack of code-switch training data is one of the major concerns in the
development of end-to-end code-switching automatic speech recognition (ASR)
models. In this work, we propose a method to train an improved end-to-end
code-switching ASR using only monolingual data. Our method encourages the
distributions of output token embeddings of monolingual languages to be
similar, and hence, promotes the ASR model to easily code-switch between
languages. Specifically, we propose to use Jensen-Shannon divergence and cosine
distance based constraints. The former enforces similar distributions for the
output embeddings of the monolingual languages, while the latter simply
brings the centroids of the two distributions closer to each other.
Experimental results demonstrate the high effectiveness of the proposed method,
yielding up to 4.5% absolute mixed error rate improvement on Mandarin-English
code-switching ASR task. Comment: 5 pages, 3 figures, accepted to INTERSPEECH 201
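The two quantities the constraints are built on can be sketched in isolation. Note this is only an illustration of Jensen-Shannon divergence and centroid cosine distance themselves; in the paper these enter as training losses on the output token embedding matrices of the end-to-end model, a setup the standalone functions below do not reproduce.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        return np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def centroid_cosine_distance(emb_a, emb_b):
    """Cosine distance between the centroids of two embedding sets
    (rows are embedding vectors), as in the second constraint."""
    ca = emb_a.mean(axis=0)
    cb = emb_b.mean(axis=0)
    cos = ca @ cb / (np.linalg.norm(ca) * np.linalg.norm(cb))
    return 1.0 - cos
```

Minimizing the first term pushes the two languages' embedding distributions together; minimizing the second pulls their centroids close, which is the cheaper of the two constraints.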