
    Multilingual Phoneme Models for Rapid Speech Processing System Development

    Current speech recognition systems tend to be developed only for commercially viable languages. The resources needed for a typical speech recognition system include hundreds of hours of transcribed speech for acoustic models and 10 to 100 million words of text for language models; both of these requirements can be costly in time and money. The goal of this research is to facilitate rapid development of speech systems for new languages by using multilingual phoneme models to alleviate the requirement for large amounts of transcribed speech. The GlobalPhone database, which contains transcribed speech from 15 languages, is used as source data to derive multilingual phoneme models. Various bootstrapping processes are used to develop an Arabic speech recognition system starting from monolingual English models, International Phonetic Association (IPA) based multilingual models, and data-driven multilingual models. The Kullback-Leibler distortion measure is used to derive data-driven phoneme clusters. It was found that multilingual bootstrapping methods initially outperform monolingual English bootstrapping methods on the Arabic evaluation data, and after three iterations of bootstrapping all systems show similar performance levels.
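
    The abstract does not give the clustering procedure in detail; the following minimal sketch illustrates one plausible reading of it, using a symmetric Kullback-Leibler distortion between diagonal-covariance Gaussian phoneme models followed by agglomerative clustering. The phoneme names, feature dimensionality, and parameter values are illustrative assumptions, not taken from the paper.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def kl_diag_gauss(mu1, var1, mu2, var2):
        # KL divergence D(p || q) between two diagonal-covariance Gaussians.
        return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

    def symmetric_kl(p, q):
        # Symmetric KL distortion used here as the distance between two phoneme models.
        return kl_diag_gauss(*p, *q) + kl_diag_gauss(*q, *p)

    # Toy phoneme models: (mean vector, variance vector) per phoneme (hypothetical values).
    rng = np.random.default_rng(0)
    phonemes = ["en_AA", "en_IY", "de_a", "de_i", "ar_a"]
    models = {ph: (rng.normal(size=13), rng.uniform(0.5, 2.0, size=13)) for ph in phonemes}

    # Pairwise symmetric-KL distance matrix over all phoneme models.
    n = len(phonemes)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = symmetric_kl(models[phonemes[i]], models[phonemes[j]])

    # Agglomerative clustering into a small number of multilingual phoneme clusters.
    labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
    for ph, lab in zip(phonemes, labels):
        print(ph, "-> cluster", lab)

    Clusters produced this way could then seed the multilingual acoustic models used to bootstrap the target-language (here Arabic) recognizer.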