16 research outputs found

    A Comparison of Different Machine Transliteration Models

    Machine transliteration is a method for automatically converting words in one language into phonetically equivalent ones in another language. It plays an important role in natural language applications such as information retrieval and machine translation, especially for handling proper nouns and technical terms. Four machine transliteration models -- the grapheme-based, phoneme-based, hybrid, and correspondence-based models -- have been proposed by several researchers. To date, however, there has been little research on a framework in which multiple transliteration models can operate simultaneously, and no comparison of the four models within the same framework and on the same data. We addressed these problems by (1) modeling the four models within the same framework, (2) comparing them under the same conditions, and (3) developing a way to improve machine transliteration through this comparison. Our comparison showed that the hybrid and correspondence-based models were the most effective, and that the four models can be used in a complementary manner to improve machine transliteration performance.
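Of the four models compared, the grapheme-based one is the simplest: source graphemes map directly to target graphemes without an intermediate phonemic stage. The sketch below illustrates the idea only; the probability table and segmentation are invented, not taken from the paper.

```python
# Hypothetical grapheme-based transliteration sketch: each source
# grapheme segment maps to target graphemes with probability P(t|s).
# The table below is illustrative only, not learned from real data.
GRAPHEME_PROBS = {
    "ph": {"f": 0.9, "p": 0.1},
    "o":  {"o": 0.8, "ou": 0.2},
    "ne": {"n": 0.7, "ne": 0.3},
}

def transliterate(segments):
    """Greedy grapheme-based transliteration: pick the most
    probable target grapheme for each source segment."""
    out = []
    for seg in segments:
        candidates = GRAPHEME_PROBS.get(seg, {seg: 1.0})
        best = max(candidates, key=candidates.get)
        out.append(best)
    return "".join(out)

print(transliterate(["ph", "o", "ne"]))  # fon
```

A phoneme-based model would insert a pronunciation step between the two alphabets, and the hybrid and correspondence-based models combine or align both levels of representation.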

    Incorporating Pronunciation Variation into Different Strategies of Term Transliteration

    Term transliteration addresses the problem of converting terms in one language into their phonetic equivalents in another language via spoken form. It is especially concerned with proper nouns, such as personal names, place names and organization names. Pronunciation variation refers to the pronunciation ambiguity frequently encountered in spoken language, which has a serious impact on term transliteration: an out-of-vocabulary term can give rise to more than one transliteration variant through different kinds of pronunciation variation, so it is important to take this issue into account. This paper proposes several models for term transliteration that take pronunciation variation into consideration; they describe transliteration from various viewpoints and draw on relationships trained from extracted transliterated-term pairs. An experiment applying the proposed models to term transliteration was conducted and evaluated, and the results show promise. The proposed models are not only applicable to term transliteration, but are also helpful for indexing and retrieval in spoken-document retrieval.
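The core observation is that one out-of-vocabulary term can yield several transliteration variants, each with its own likelihood. A minimal sketch of that enumeration, with an invented pronunciation-variation table (the phoneme symbols and probabilities are assumptions, not the paper's model):

```python
import itertools

# Illustrative pronunciation-variation table (hypothetical values):
# each phoneme may surface as several variant realisations.
VARIANTS = {
    "AE": [("a", 0.6), ("e", 0.4)],
    "K":  [("k", 0.9), ("c", 0.1)],
}

def variant_transliterations(phonemes):
    """Enumerate every transliteration variant with its joint
    probability, most probable first."""
    options = [VARIANTS.get(p, [(p, 1.0)]) for p in phonemes]
    results = []
    for combo in itertools.product(*options):
        text = "".join(c for c, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        results.append((text, prob))
    return sorted(results, key=lambda x: -x[1])

for text, prob in variant_transliterations(["K", "AE"]):
    print(text, round(prob, 2))
```

A retrieval system can then index all variants above a probability threshold, which is what makes such models useful for spoken-document retrieval as well.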

    Phoneme-based statistical transliteration of foreign names for OOV problem.

    Gao Wei. Thesis (M.Phil.), Chinese University of Hong Kong, 2004. Includes bibliographical references. Abstracts in English and Chinese. Contents:
    Chapter 1: Introduction (What is Transliteration?; Existing Problems; Objectives; Outline)
    Chapter 2: Background (Source-channel Model; Transliteration for English-Chinese: Rule-based Approach, Similarity-based Framework, Direct Semi-Statistical Approach, Source-channel-based Approach)
    Chapter 3: Transliteration Baseline (Transliteration Using IBM SMT: GIZA++ for Transliteration Modeling, CMU-Cambridge Toolkits for Language Modeling, ReWrite Decoder for Decoding; Limitations of IBM SMT; Experiments Using IBM SMT: Data Preparation, Performance Measurement, Experimental Results)
    Chapter 4: Direct Transliteration Modeling (Soundness of the Direct Model, Direct-1; Alignment of Phoneme Chunks; Transliteration Model Training: EM Training for Symbol Mappings, WFST for Phonetic Transition, Issues for Incorrect Syllables; Language Model Training; Search Algorithm; Experimental Results: C.A. Distribution, Top-n Accuracy, Comparisons with the Baseline, Influence of m Candidates; Discussions)
    Chapter 5: Improving Direct Transliteration (Improved Direct Model, Direct-2: Enlightenment from Source-Channel, Using Contextual Features, Estimation Based on MaxEnt, Features for Transliteration; Direct-2 Model Training; Refining the Model, Direct-2R; Evaluation: Search Algorithm, Direct Transliteration Models vs. Baseline, Direct-2 vs. Direct-2R, Experiments on Direct-2R)
    Chapter 6: Conclusions (Thesis Summary; Cross-Language Applications; Future Work and Directions)
    Appendix A: IPA-ARPABET Symbol Mapping Table; Bibliography
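The source-channel model surveyed in the thesis background scores a target candidate C for a source name E as the product of a channel model P(E|C) and a language-model prior P(C), choosing the C that maximises it. A toy sketch with invented probabilities (the names and numbers are illustrative, not from the thesis):

```python
import math

# Toy source-channel scoring for E = "Baker": pick the Chinese
# candidate C maximising P(E|C) * P(C). All values are hypothetical.
channel = {"贝克": 0.02, "巴克": 0.01}   # channel model P(E|C)
lm      = {"贝克": 0.05, "巴克": 0.03}   # language-model prior P(C)

def best_candidate(channel, lm):
    """Argmax over candidates of log P(E|C) + log P(C)."""
    return max(channel,
               key=lambda c: math.log(channel[c]) + math.log(lm.get(c, 1e-9)))

print(best_candidate(channel, lm))
```

The direct models the thesis develops (Direct-1, Direct-2, Direct-2R) instead estimate P(C|E) directly, with Direct-2 using maximum-entropy estimation over contextual features.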

    Applying dynamic Bayesian networks in transliteration detection and generation

    Peter Nabende's doctoral research develops methods that can improve machine translation programs. He investigated two systems for generating and comparing transliterations: a DBN (Dynamic Bayesian Network) model in which Pair Hidden Markov Models are implemented, and a DBN model based on transduction. Nabende examined the effect of different DBN parameters on the quality of the produced transliterations. He evaluated the DBN models on standard data sets for eleven language pairs: English-Arabic, English-Bengali, English-Chinese, English-German, English-French, English-Hindi, English-Kannada, English-Dutch, English-Russian, English-Tamil and English-Thai. During the research he also tried combining different models, which turned out to yield good results.
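A Pair Hidden Markov Model scores how likely two strings are to be transliterations of each other by summing over all alignments of the pair. The sketch below is a heavily simplified forward-style scorer with made-up match and gap probabilities, not Nabende's actual parameterisation:

```python
# Minimal pair-HMM-style scorer (hypothetical parameters): a dynamic
# program sums probability over all alignments of the two strings,
# mixing match steps with single-sided gap steps.
P_MATCH, P_GAP = 0.9, 0.05   # illustrative transition probabilities
P_SAME, P_DIFF = 0.8, 0.2    # emission: identical vs differing symbols

def forward_score(x, y):
    """Total probability over all alignments of x and y."""
    n, m = len(x), len(y)
    f = [[0.0] * (m + 1) for _ in range(n + 1)]
    f[0][0] = 1.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0 and j > 0:
                emit = P_SAME if x[i - 1] == y[j - 1] else P_DIFF
                f[i][j] += P_MATCH * emit * f[i - 1][j - 1]
            if i > 0:
                f[i][j] += P_GAP * f[i - 1][j]      # gap in y
            if j > 0:
                f[i][j] += P_GAP * f[i][j - 1]      # gap in x
    return f[n][m]

print(forward_score("peter", "pieter") > forward_score("peter", "maria"))
```

For transliteration detection, pairs scoring above a threshold are accepted; a full DBN adds richer hidden state and, in the transduction-based variant, generates the target string rather than merely scoring pairs.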

    Head-Driven Phrase Structure Grammar

    Head-Driven Phrase Structure Grammar (HPSG) is a constraint-based, declarative approach to linguistic knowledge, which analyses all descriptive levels (phonology, morphology, syntax, semantics, pragmatics) with feature-value pairs, structure sharing, and relational constraints. In syntax it assumes that expressions have a single, relatively simple constituent structure. This volume provides a state-of-the-art introduction to the framework. Various chapters discuss basic assumptions and formal foundations, describe the evolution of the framework, and go into the details of the main syntactic phenomena. Further chapters are devoted to non-syntactic levels of description. The book also considers related fields and research areas (gesture, sign languages, computational linguistics) and includes chapters comparing HPSG with other frameworks (Lexical Functional Grammar, Categorial Grammar, Construction Grammar, Dependency Grammar, and Minimalism).
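The feature-value pairs and structure sharing at the heart of HPSG are combined by unification: two feature structures merge if their atomic values do not conflict. A toy unifier over nested dictionaries, with invented features, gives the flavour (real HPSG implementations use typed feature structures with true structure sharing, which plain dicts do not capture):

```python
# Toy unifier for HPSG-style feature structures, modelled as nested
# dicts of feature-value pairs. Feature names below are illustrative.
def unify(a, b):
    """Return the unification of two feature structures, or None
    when they carry conflicting atomic values."""
    if not isinstance(a, dict) or not isinstance(b, dict):
        return a if a == b else None       # atomic values must agree
    out = dict(a)
    for feat, val in b.items():
        if feat in out:
            merged = unify(out[feat], val)
            if merged is None:
                return None                # clash deep inside
            out[feat] = merged
        else:
            out[feat] = val                # feature only in b
    return out

noun = {"HEAD": {"CAT": "noun"}, "AGR": {"NUM": "sg"}}
det  = {"AGR": {"NUM": "sg", "PER": "3"}}
print(unify(noun, det))   # merged structure: agreement features combine
```

Unifying the same noun with a plural determiner (`{"AGR": {"NUM": "pl"}}`) would fail, which is exactly how agreement constraints rule out ungrammatical combinations.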

    The Arabic (Re)dubbing of Wordplay in Disney Animated Films

    Although audiovisual translation (AVT) has received considerable attention in recent years, there is a paucity of empirical research on the dubbing of wordplay in the Arabophone countries. This study sets out to identify, describe and assess the most common translation techniques adopted when dubbing English-language animated films into Arabic. The focus is on the special case of dubbing Disney animated films into Egyptian Arabic (EA) and their subsequent redubbing into Modern Standard Arabic (MSA) during the 1975-2015 period. The ultimate goal is to ascertain the similarities and the differences that set the two versions apart, particularly in the transfer of wordplay. To this end, the study draws on a corpus of instances of wordplay and combines a quantitative phase, which identifies correlations between types of wordplay and particular translation techniques, with a qualitative analysis that probes the results further and determines the factors that shape how wordplay is translated. The analysis reveals that, in both Arabic dubbed versions, translators resort to a variety of techniques to render this type of punning humour: loan, direct translation, explication, paraphrase, substitution and omission. The data show that achieving a humorous effect in the target dialogue is the top priority and the driving factor behind most of the strategies activated when dubbing wordplay into EA. By contrast, noticeably fewer puns cross over from the original films into the MSA dubbed versions, suggesting that the dubbing teams generally give priority to the denotative, informative dimension over the socio-pragmatic one.
    By shedding light on the intricacies of dubbing, it is hoped that this study will contribute to the advancement of knowledge in the translation of wordplay in the Arabophone countries and, more specifically, in the field of dubbing children's programmes.
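The quantitative phase described here amounts to cross-tabulating wordplay types against the translation techniques observed for them. A minimal sketch of that tabulation over invented corpus annotations (the rows are fabricated for illustration and carry no findings from the study):

```python
from collections import Counter

# Hypothetical annotated corpus rows: (wordplay type, technique used
# in a dubbed version). The data is invented for illustration only.
rows = [
    ("homophony", "substitution"),
    ("homophony", "omission"),
    ("polysemy",  "paraphrase"),
    ("polysemy",  "substitution"),
    ("homophony", "substitution"),
]

# Cross-tabulate types against techniques for the quantitative phase.
crosstab = Counter(rows)
for (wtype, technique), n in sorted(crosstab.items()):
    print(f"{wtype:10s} {technique:12s} {n}")
```

The qualitative phase then revisits the cells of such a table, asking why a given pairing (say, homophony and substitution) dominates in one dubbed version but not the other.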