
    How speaker tongue and name source language affect the automatic recognition of spoken names

    In this paper the automatic recognition of person names and geographical names uttered by native and non-native speakers is examined in an experimental set-up. The major aim was to improve our understanding of how well, and under which circumstances, previously proposed methods of multilingual pronunciation modeling and multilingual acoustic modeling contribute to better name recognition in a cross-lingual context. To arrive at a meaningful interpretation of the results, we categorized each language according to the amount of exposure a native speaker is expected to have had to it. After interpreting our results, we also tried to answer the question of how much further improvement might be attainable with a more advanced pronunciation modeling technique that we plan to develop.
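
    To make the idea of multilingual pronunciation modeling concrete, the sketch below pools the name pronunciations produced by several language-specific grapheme-to-phoneme (G2P) converters into a single lexicon entry. This is a minimal illustration, not the paper's actual method; the converters and phone strings are hypothetical stand-ins.

```python
# A minimal sketch of multilingual pronunciation modeling: pool the
# pronunciation variants proposed by several language-specific G2P
# converters into one lexicon entry for the name.

def multilingual_pronunciations(name, g2p_models):
    """Collect pronunciation variants of a name from several
    language-specific grapheme-to-phoneme (G2P) converters."""
    variants = set()
    for g2p in g2p_models.values():
        variants.add(tuple(g2p(name)))  # one variant per source language
    return variants

# Hypothetical converters; real systems map spelling to phoneme strings.
g2p_models = {
    "dutch":   lambda name: ["EI", "n", "t", "h", "o:", "v", "@", "n"],
    "english": lambda name: ["aI", "n", "d", "h", "oU", "v", "@", "n"],
}

print(multilingual_pronunciations("Eindhoven", g2p_models))
```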

    Essential Speech and Language Technology for Dutch: Results by the STEVIN-programme

    Computational Linguistics; Germanic Languages; Artificial Intelligence (incl. Robotics); Computing Methodologies

    Incorporating Pronunciation Variation into Different Strategies of Term Transliteration

    Term transliteration addresses the problem of converting terms in one language into their phonetic equivalents in another language via the spoken form. It is especially concerned with proper nouns, such as personal names, place names, and organization names. Pronunciation variation refers to the pronunciation ambiguity frequently encountered in spoken language, which has a serious impact on term transliteration: more than one transliteration variant can be generated from an out-of-vocabulary term owing to different kinds of pronunciation variation, so it is important to take this issue into account when dealing with term transliteration. In this paper, several models that take pronunciation variation into consideration are proposed for term transliteration. They describe transliteration from various viewpoints and utilize relationships trained from extracted transliterated-term pairs. An experiment applying the proposed models to term transliteration was conducted and evaluated, and the results show promise. The proposed models are not only applicable to term transliteration but are also helpful for indexing and retrieval of spoken documents.
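
    As a concrete illustration of how pronunciation variation enters transliteration, the sketch below expands each pronunciation variant of an out-of-vocabulary term into candidate target spellings and scores them with phoneme-to-grapheme probabilities of the kind that could be estimated from extracted transliterated-term pairs. The mappings and probabilities are assumptions for illustration, not the paper's trained models.

```python
# A minimal sketch of pronunciation-aware transliteration. The
# phoneme-to-grapheme table below is a hypothetical stand-in for
# relationships trained from transliterated-term pairs.
from itertools import product

def transliterate(pron_variants, phone2graph):
    """Score every target spelling reachable from any pronunciation variant."""
    candidates = {}
    for pron in pron_variants:
        options = [phone2graph[p] for p in pron]  # grapheme choices per phoneme
        for choice in product(*options):
            spelling = "".join(g for g, _ in choice)
            score = 1.0
            for _, prob in choice:
                score *= prob
            # keep the best score over all variants producing this spelling
            candidates[spelling] = max(candidates.get(spelling, 0.0), score)
    return sorted(candidates.items(), key=lambda kv: -kv[1])

# Two pronunciation variants of the same out-of-vocabulary term.
variants = [("m", "ai", "k"), ("m", "i", "k")]
phone2graph = {
    "m":  [("m", 1.0)],
    "ai": [("ai", 0.6), ("ay", 0.4)],
    "i":  [("ee", 0.7), ("i", 0.3)],
    "k":  [("ke", 0.5), ("k", 0.5)],
}
print(transliterate(variants, phone2graph))
```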

    Strategies for Handling Out-of-Vocabulary Words in Automatic Speech Recognition

    Nowadays, most ASR (automatic speech recognition) systems deployed in industry are closed-vocabulary systems, meaning there is a limited vocabulary of words the system can recognize, and pronunciations for these words are provided to the system. Words outside this vocabulary are called out-of-vocabulary (OOV) words, for which either the pronunciations, or both the spellings and the pronunciations, are unknown to the system. The basic motivations for developing strategies to handle OOV words are: first, in the training phase, missing or wrong pronunciations of words in the training data result in poor acoustic models; second, in the test phase, words outside the vocabulary cannot be recognized at all, and mis-recognition of OOV words may hurt the recognition performance of their in-vocabulary neighbors as well. This dissertation is therefore dedicated to exploring strategies for handling OOV words in closed-vocabulary ASR. First, we investigate dealing with OOV words in ASR training data by introducing an acoustic-data-driven pronunciation learning framework that uses a likelihood-reduction-based criterion for selecting pronunciation candidates from multiple sources, i.e. standard grapheme-to-phoneme (G2P) algorithms and phonetic decoding, in a greedy fashion. This framework effectively expands a small hand-crafted pronunciation lexicon to cover OOV words, and the learned pronunciations are of higher quality than those from G2P alone or from other baseline pruning criteria. Furthermore, applying the proposed framework to generate alternative pronunciations for in-vocabulary (IV) words improves both recognition performance on the relevant words and overall acoustic model performance. Second, we investigate dealing with OOV words in ASR test data, i.e. OOV detection and recovery. We first conduct a comparative study of a hybrid lexical model (HLM) approach for OOV detection against several baseline approaches, concluding that the HLM approach outperforms the others in both OOV detection and first-pass OOV recovery performance. Next, we introduce a grammar-decoding framework for efficient second-pass OOV recovery, showing that with properly designed schemes for estimating OOV unigram probabilities, the framework significantly improves OOV recovery and overall decoding performance compared to first-pass decoding. Finally, we propose an open-vocabulary word-level recurrent neural network language model (RNNLM) re-scoring framework, making it possible to re-score lattices containing recovered OOVs using a single word-level RNNLM that was ignorant of the OOVs when it was trained. Altogether, the OOV recovery pipeline demonstrates the potential of a highly efficient open-vocabulary word-level ASR decoding framework, tightly integrated into a standard WFST decoding pipeline.
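
    The greedy, likelihood-based candidate selection described above can be sketched roughly as follows; `acoustic_loglik` is a hypothetical stand-in for a forced-alignment score, and the stopping rule is illustrative rather than the dissertation's exact criterion.

```python
# A rough sketch of greedy pronunciation selection: from a pool of
# candidates (e.g. G2P output plus phonetic decoding), repeatedly add
# the pronunciation that most increases the acoustic likelihood of the
# word's training utterances, stopping when the gain becomes small.

def greedy_select(candidates, utterances, acoustic_loglik, min_gain=1.0):
    candidates = list(candidates)
    selected = []
    best = float("-inf")  # with no pronunciation, nothing aligns
    while candidates:
        scored = [(sum(acoustic_loglik(u, selected + [c]) for u in utterances), c)
                  for c in candidates]
        trial, choice = max(scored)
        if selected and trial - best < min_gain:
            break  # likelihood gain too small to justify another variant
        selected.append(choice)
        candidates.remove(choice)
        best = trial
    return selected

# Toy stand-in score: an utterance aligns well if some selected
# pronunciation matches it exactly.
def toy_loglik(utt, prons):
    return max(10.0 if p == utt else 2.0 for p in prons)

print(greedy_select(["f@Un", "fOn"], ["fOn", "fOn", "f@Un"], toy_loglik))
```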

    Automatic Scaling of Text for Training Second Language Reading Comprehension

    For children learning their first language, reading is one of the most effective ways to acquire new vocabulary; studies link students who read more with larger and more complex vocabularies. For second language learners, however, there is a substantial barrier to reading: even books written for early first-language readers assume a base vocabulary of nearly 7,000 word families and a nuanced understanding of grammar. This project will look at ways that technology can help second language learners overcome this high barrier to entry, and at the effectiveness of learning through reading for adults acquiring a foreign language. Through the implementation of Dokusha, an automatic graded reader generator for Japanese, this project will explore how advancements in natural language processing can be used to automatically simplify text for extensive reading in Japanese as a foreign language.
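
    One simple way a graded-reader generator might scale text, sketched below, is to keep only sentences whose vocabulary is nearly covered by the learner's known word families. The tokenization and word lists are assumptions for illustration; Dokusha's actual pipeline is not detailed in this abstract, and real Japanese text would need a morphological analyzer rather than whitespace splitting.

```python
# A minimal sketch of vocabulary-based text scaling: a sentence is kept
# when enough of its tokens fall inside the learner's known word families.
# Whitespace tokenization is a placeholder; Japanese needs a real tokenizer.

def coverage(tokens, known_families):
    known = sum(1 for t in tokens if t in known_families)
    return known / max(len(tokens), 1)

def scale_text(sentences, known_families, min_coverage=0.95):
    """Return only the sentences readable at the learner's level."""
    return [s for s in sentences
            if coverage(s.split(), known_families) >= min_coverage]

known = {"the", "cat", "sat", "on", "mat"}
print(scale_text(["the cat sat on the mat",
                  "the cat pondered ontology"], known))
```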

    Articulatory features for conversational speech recognition


    Acoustic Modelling for Under-Resourced Languages

    Automatic speech recognition systems have so far been developed for only a very few of the world's 4,000-7,000 languages. In this thesis we examine methods to rapidly create acoustic models for new, possibly under-resourced languages in a time- and cost-effective manner. To this end we examine the use of multilingual models, the application of articulatory features across languages, and the automatic discovery of word-like units in unwritten languages.

    Multimedia information technology and the annotation of video

    The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital processing power can be expected to cause a small revolution in the area of video archiving. The volume of data leads to two views of the future: on the pessimistic side, the overload of data will outstrip the available annotation capacity; on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we make an attempt to describe the state of the art in the technology, sampling the progress in text, sound, and image processing, as well as in machine learning.

    A Contrastive Study Between RP And GA Segmental Features

    This research is a contrastive study aimed at describing the similarities and differences between RP (Received Pronunciation) and GA (General American) segmental features. It used a descriptive-qualitative method, with data collected from a YouTube video. The study found that the sounds shared by RP and GA in initial, medial, and final positions are [ɪ], [ə], [eɪ], [ɔɪ], [p], [b], [t], [d], [tʃ], [θ], [g], [f], [v], [s], [z], [ʃ], [m], [n], [l]. The shared sounds found in initial and medial positions are [æ], [tʃ], [dʒ], [ð], [h], [w], [j]; in medial and final positions, [aɪ], [k], [ʒ], [ŋ]; in initial position, [r]; and in medial position, [ʊ], [ʌ], [ɛ]. The sounds that differ between RP and GA were found in initial and medial positions, [ɔ], [ɑː]; in medial and final positions, [ɪə], [əʊ]; in initial position, [ʌ], [eə]; and in medial position, [ɒ], [iː], [uː], [ɔː], [ʊə], [t].
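
    The contrastive method above amounts to set operations over the two phoneme inventories: shared segments are the intersection, differing segments the set differences. The short sketch below illustrates this with an abbreviated, illustrative subset of the segments listed in the abstract, not the full RP or GA inventories.

```python
# Contrastive analysis as set operations over (abbreviated) phoneme
# inventories; these sets are illustrative subsets only.

rp = {"ɪ", "ə", "eɪ", "ɔɪ", "ɪə", "əʊ", "ɒ", "p", "b", "t", "d"}
ga = {"ɪ", "ə", "eɪ", "ɔɪ", "oʊ", "ɑ", "p", "b", "t", "d"}

print("shared: ", sorted(rp & ga))   # similarities between the accents
print("RP only:", sorted(rp - ga))   # segments found only in RP
print("GA only:", sorted(ga - rp))   # segments found only in GA
```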