560 research outputs found

    The Microsoft 2017 Conversational Speech Recognition System

    We describe the 2017 version of Microsoft's conversational speech recognition system, in which we update our 2016 system with recent developments in neural-network-based acoustic and language modeling to further advance the state of the art on the Switchboard speech recognition task. The system adds a CNN-BLSTM acoustic model to the set of model architectures we combined previously, and includes character-based and dialog-session-aware LSTM language models in rescoring. For system combination we adopt a two-stage approach, whereby subsets of acoustic models are first combined at the senone/frame level, followed by word-level voting via confusion networks. We also added a confusion network rescoring step after system combination. The resulting system yields a 5.1% word error rate on the 2000 Switchboard evaluation set.
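The word-level voting stage can be illustrated with a toy sketch. Everything here (the bin structure, the scores, and the function names) is invented for illustration; the paper's actual combination aligns full lattices into confusion networks rather than operating on pre-aligned bins.

```python
from collections import defaultdict

def combine_systems(system_bins):
    """Sum per-word posteriors across systems, bin by bin.

    Each system contributes a list of confusion bins; a bin maps
    candidate words (or "" for a skip/epsilon arc) to posterior scores.
    """
    merged = []
    for aligned in zip(*system_bins):
        scores = defaultdict(float)
        for b in aligned:
            for word, p in b.items():
                scores[word] += p
        merged.append(dict(scores))
    return merged

def confusion_network_vote(bins):
    """Pick the highest-scoring word in each aligned bin."""
    hyp = []
    for bin_scores in bins:
        best = max(bin_scores, key=bin_scores.get)
        if best:                       # drop epsilon (deletion) winners
            hyp.append(best)
    return hyp
```

For example, two systems that disagree on "see" vs. "sea" are resolved by the summed posteriors in the second bin.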

    Pronunciation modeling for Cantonese speech recognition.

    Kam Patgi. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaf 103). Abstracts in English and Chinese. Contents:
    Chapter 1. Introduction: Automatic Speech Recognition; Pronunciation Modeling in ASR; Objectives of the Thesis; Thesis Outline
    Chapter 2. The Cantonese Dialect: Cantonese, a Typical Chinese Dialect (Cantonese Phonology; Cantonese Phonetics); Pronunciation Variation in Cantonese (Phone Change and Sound Change; Notation for Different Sound Units)
    Chapter 3. Large-Vocabulary Continuous Speech Recognition for Cantonese: Feature Representation of the Speech Signal; Probabilistic Framework of ASR; Hidden Markov Model for Acoustic Modeling; Pronunciation Lexicon; Statistical Language Model; Decoding; The Baseline Cantonese LVCSR System (System Architecture; Speech Databases)
    Chapter 4. Pronunciation Model: Pronunciation Modeling at Different Levels; Phone-Level Pronunciation Model and Its Application (IF Confusion Matrix (CM); Decision Tree Pronunciation Model (DTPM); Refinement of Confusion Matrix)
    Chapter 5. Pronunciation Modeling at Lexical Level: Construction of PVD; PVD Pruning by Word Unigram; Recognition Experiments (Pronunciation Modeling in LVCSR; Pronunciation Modeling in a Domain-Specific Task; PVD Pruning by Word Unigram)
    Chapter 6. Pronunciation Modeling at Acoustic Model Level: Hierarchy of HMM; Sharing of Mixture Components; Adaptation of Mixture Components; Combination of Mixture Component Sharing and Adaptation; Recognition Experiments; Result Analysis (Performance of Sharing Mixture Components; Performance of Mixture Component Adaptation)
    Chapter 7. Pronunciation Modeling at Decoding Level: Search Process in Cantonese LVCSR; Model-Level Search Space Expansion; State-Level Output Probability Modification; Recognition Experiments (Model-Level Search Space Expansion; State-Level Output Probability Modification)
    Chapter 8. Conclusions and Suggestions for Future Work
    Appendices: I. Base Syllable Table; II. Cantonese Initials and Finals; III. IF Confusion Matrix; IV. Phonetic Question Set; V. CDDT and PCDT
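The lexical-level modeling outlined above (a phone confusion matrix used to build a pronunciation variation dictionary, PVD) can be sketched roughly as follows. The confusion probabilities, phone symbols, and pruning threshold below are invented for illustration and are not taken from the thesis.

```python
from itertools import product

# Hypothetical phone confusion probabilities: baseform phone -> likely
# surface realizations. The thesis derives such values from an
# initial/final (IF) confusion matrix for Cantonese; these are made up.
CONFUSIONS = {
    "n": {"n": 0.7, "l": 0.3},   # common Cantonese [n]/[l] merger
    "ng": {"ng": 0.6, "": 0.4},  # syllable-initial [ng] dropping
}

def expand_pronunciations(baseform, min_prob=0.2):
    """Generate weighted surface variants for one lexicon entry."""
    options = []
    for phone in baseform:
        alts = CONFUSIONS.get(phone, {phone: 1.0})
        options.append([(p, pr) for p, pr in alts.items() if pr >= min_prob])
    variants = {}
    for combo in product(*options):
        phones = tuple(p for p, _ in combo if p)   # "" = deleted phone
        prob = 1.0
        for _, pr in combo:
            prob *= pr
        variants[phones] = variants.get(phones, 0.0) + prob
    return variants
```

Pruning by a probability floor (here `min_prob`) mirrors the idea of keeping the dictionary from exploding with unlikely variants.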

    Joint learning of phonetic units and word pronunciations for ASR

    The creation of a pronunciation lexicon remains the most inefficient process in developing an Automatic Speech Recognizer (ASR). In this paper, we propose an unsupervised alternative, requiring no language-specific knowledge, to the conventional manual approach for creating pronunciation dictionaries. We present a hierarchical Bayesian model, which jointly discovers the phonetic inventory and the Letter-to-Sound (L2S) mapping rules in a language using only transcribed data. When tested on a corpus of spontaneous queries, the results demonstrate the superiority of the proposed joint learning scheme over its sequential counterpart, in which the latent phonetic inventory and L2S mappings are learned separately. Furthermore, the recognizers built with the automatically induced lexicon consistently outperform grapheme-based recognizers and even approach the performance of recognition systems trained using conventional supervised procedures.
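A minimal sketch of the L2S idea: once letter-to-phone mapping rules exist (whether induced or hand-written), lexicon entries are generated by applying them to each word's spelling. The rules, phone symbols, and greedy longest-match strategy below are illustrative assumptions, not the paper's hierarchical Bayesian model.

```python
# Toy letter-to-sound (L2S) rules; both the rules and the phone symbols
# are invented for illustration.
L2S_RULES = {"sh": ["SH"], "ch": ["CH"], "a": ["AE"], "e": ["EH"],
             "i": ["IH"], "o": ["AA"], "p": ["P"], "t": ["T"], "s": ["S"]}

def letters_to_phones(word):
    """Greedy longest-match application of L2S rules to a spelling."""
    phones, i = [], 0
    while i < len(word):
        for span in (2, 1):                      # prefer 2-letter rules
            chunk = word[i:i + span]
            if len(chunk) == span and chunk in L2S_RULES:
                phones += L2S_RULES[chunk]
                i += span
                break
        else:
            i += 1                               # unmodeled letter: skip
    return phones
```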

    Automatic speech recognition: from study to practice

    Today, automatic speech recognition (ASR) is widely used for purposes such as robotics, multimedia, and medical and industrial applications. Although much research has been carried out in this field over the past decades, there is still considerable room for improvement. To start working in this area, thorough knowledge of ASR systems, as well as of their weak points and problems, is essential; moreover, practical experience reliably deepens theoretical understanding. With this in mind, this master's thesis first reviews the principal structure of standard HMM-based ASR systems from a technical point of view: feature extraction, acoustic modeling, language modeling, and decoding. The most significant challenges in ASR systems are then discussed, covering both the characteristics of internal components and the external factors that affect system performance. Furthermore, we have implemented a Spanish-language recognizer using the HTK toolkit. Finally, based on the study of various sources in the field of ASR, two open research lines are suggested for future work.
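The decoding step in the pipeline described above amounts to the Bayes decision rule W* = argmax_W P(O|W) P(W). A minimal log-domain sketch follows; all names and toy scores are assumptions, and a real decoder (e.g. HTK's HVite) searches a lattice rather than an enumerated hypothesis list.

```python
def decode(hypotheses, acoustic_loglik, lm_logprob, lm_weight=1.0):
    """Bayes decision rule W* = argmax_W P(O|W) * P(W)^lm_weight,
    computed in the log domain as is standard in HMM decoders.
    `acoustic_loglik` and `lm_logprob` are stand-in callables returning
    log scores for a hypothesis string."""
    def score(w):
        return acoustic_loglik(w) + lm_weight * lm_logprob(w)
    return max(hypotheses, key=score)
```

With toy scores, a hypothesis that the acoustic model slightly prefers can still lose to one the language model strongly prefers, which is exactly the trade-off the decision rule encodes.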

    Articulatory features for conversational speech recognition


    Mining of Textual Data from the Web for Speech Recognition

    The preliminary goals of this project were to become familiar with language modeling for speech recognition and with techniques for acquiring text data from the Web. The text introduces basic speech recognition techniques and describes statistical language modeling in detail, with particular attention to criteria for evaluating the quality of language models and speech recognition systems. It also covers data mining models and techniques, especially information retrieval; specific problems of Web mining are discussed, and the Google search engine is introduced in this context. Part of the project was the design and implementation of a system for retrieving text from the Web, which is described in detail. The main goal of the work, however, was to determine whether data acquired from the Web can benefit speech recognition. The described techniques therefore seek the optimal way to use the retrieved Web data to improve both sample language models and models deployed in real recognition systems.
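Perplexity is the usual intrinsic criterion for judging whether Web-mined text improves a language model, as the evaluation criteria mentioned above suggest. Below is a minimal add-alpha-smoothed bigram sketch; the function names and the smoothing choice are illustrative assumptions, and real systems would use a toolkit such as SRILM.

```python
import math
from collections import Counter

def train_bigram(corpus, alpha=1.0):
    """Add-alpha smoothed bigram model from a list of token lists."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks[:-1])
        bigrams.update(zip(toks, toks[1:]))
    V = len(vocab)
    def logprob(prev, word):
        return math.log((bigrams[(prev, word)] + alpha) /
                        (unigrams[prev] + alpha * V))
    return logprob

def perplexity(logprob, test):
    """Per-token perplexity on held-out data: lower is better, so a
    model updated with Web text helps if this number drops."""
    total, n = 0.0, 0
    for sent in test:
        toks = ["<s>"] + sent + ["</s>"]
        for prev, word in zip(toks, toks[1:]):
            total += logprob(prev, word)
            n += 1
    return math.exp(-total / n)
```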

    Auditory precision hypothesis-L2: Dimension-specific relationships between auditory processing and second language segmental learning

    Growing evidence suggests a broad relationship between individual differences in auditory processing ability and the rate and ultimate attainment of language acquisition throughout the lifespan, including post-pubertal second language (L2) speech learning. However, little is known about how the precision of processing of specific auditory dimensions relates to the acquisition of specific L2 segmental contrasts. In the context of 100 late Japanese-English bilinguals with diverse profiles of classroom and immersion experience, the current study set out to investigate the link between the perception of several auditory dimensions (F3 frequency, F2 frequency, and duration) in non-verbal sounds and English [r]-[l] perception and production proficiency. Whereas participants' biographical factors (the presence or absence of immersion) accounted for a large amount of variance in the success of learning this contrast, the outcomes were also tied to their acuity to the most reliable, new auditory cues (F3 variation) and the less reliable but already-familiar cues (F2 variation). This finding suggests that individuals can vary in how they perceive, utilize, and make the most of information conveyed by specific acoustic dimensions. When perceiving more naturalistic spoken input, where speech contrasts can be distinguished via a combination of numerous cues, some learners can attain a high level of L2 speech proficiency by using nativelike and/or non-nativelike strategies in a complementary fashion.

    Speech Recognition

    Chapters in the first part of the book cover all the essential speech processing techniques for building robust, automatic speech recognition systems: the representation of speech signals and the methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that can use the information from automatic speech recognition for speaker identification and tracking, for prosody modeling in emotion-detection systems, and in other speech processing applications able to operate in real-world environments, such as mobile communication services and smart homes.

    Characterizing phonetic transformations and fine-grained acoustic differences across dialects

    Thesis (Ph.D.)--Harvard-MIT Division of Health Sciences and Technology, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 169-175). This thesis is motivated by the gaps between speech science and technology in analyzing dialects. In speech science, investigating phonetic rules is usually manually laborious and time consuming, limiting the amount of data analyzed. Without sufficient data, the analysis could potentially overlook or over-specify certain phonetic rules. On the other hand, in speech technology such as automatic dialect recognition, phonetic rules are rarely modeled explicitly. While many applications do not require such knowledge to obtain good performance, it is beneficial to specifically model pronunciation patterns in certain applications. For example, users of language learning software can benefit from explicit and intuitive feedback from the computer to alter their pronunciation; in forensic phonetics, it is important that results of automated systems are justifiable on phonetic grounds. In this work, we propose a mathematical framework to analyze dialects in terms of (1) phonetic transformations and (2) acoustic differences. The proposed Phonetic-based Pronunciation Model (PPM) uses a hidden Markov model to characterize when and how often substitutions, insertions, and deletions occur. In particular, clustering methods are compared to better model deletion transformations. In addition, an acoustic counterpart of PPM, the Acoustic-based Pronunciation Model (APM), is proposed to characterize and locate fine-grained acoustic differences such as formant transitions and nasalization across dialects. We used three data sets to empirically compare the proposed models in Arabic and English dialects. Results in automatic dialect recognition demonstrate that the proposed models complement standard baseline systems. Results in pronunciation generation and rule retrieval experiments indicate that the proposed models learn underlying phonetic rules across dialects. Our proposed system postulates pronunciation rules to a phonetician who interprets and refines them to discover new rules or quantify known rules. This can be done on large corpora to develop rules of greater statistical significance than has previously been possible. Potential applications of this work include speaker characterization and recognition, automatic dialect recognition, automatic speech recognition and synthesis, forensic phonetics, language learning or accent training education, and assistive diagnosis tools for speech and voice disorders. By Nancy Fang-Yih Chen. Ph.D.
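The substitution, insertion, and deletion transformations the PPM characterizes can be grounded in a standard dynamic-programming phone alignment. The sketch below is plain edit distance between two phone strings, not the thesis's HMM-based model, and all names and phone symbols are illustrative.

```python
def align_phones(ref, hyp):
    """Minimum number of substitutions, insertions, and deletions
    needed to turn phone string `ref` into `hyp` (Levenshtein DP).
    These are the transformation types a cross-dialect pronunciation
    model must account for."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                    # delete all remaining ref phones
    for j in range(n + 1):
        d[0][j] = j                    # insert all remaining hyp phones
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[m][n]
```

Backtracking through the same table recovers which specific phones were substituted or dropped, which is the raw material a phonetician would inspect when postulating dialect rules.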