
    Robust large-scale EBMT with marker-based segmentation

    Previous work on marker-based EBMT [Gough & Way, 2003; Way & Gough, 2004] suffered from problems such as data sparseness and disparity between the training and test data. We have developed a large-scale, robust EBMT system. In a comparison with the systems listed in [Somers, 2003], ours is the third-largest EBMT system and certainly the largest English–French EBMT system. Previous work used the online MT system Logomedia to translate source-language material as a means of populating the system's database where bitexts were unavailable. We instead derive our sententially aligned strings from a Sun Translation Memory (TM) and limit the integration of Logomedia to the derivation of our word-level lexicon. We also use Logomedia to provide a baseline comparison for our system and observe that we outperform both Logomedia and previous marker-based EBMT systems in a number of tests.
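
    As a rough illustration of the marker hypothesis underlying this kind of segmentation, the sketch below opens a new chunk at each closed-class "marker" word (determiners, prepositions, conjunctions, pronouns). The marker set and the chunking rule are illustrative assumptions, not the ones used in the cited systems.

        # Toy marker-based chunker: a new chunk opens at each marker word, once the
        # current chunk already holds at least one non-marker (content) word.
        # The marker set below is deliberately tiny and purely illustrative.
        MARKERS = {
            "the", "a", "an",                              # determiners
            "in", "on", "at", "of", "to", "with", "for",   # prepositions
            "and", "or", "but",                            # conjunctions
            "i", "you", "he", "she", "it", "we", "they",   # pronouns
        }

        def marker_chunks(sentence: str) -> list[list[str]]:
            chunks, current = [], []
            for token in sentence.lower().split():
                if token in MARKERS and any(t not in MARKERS for t in current):
                    chunks.append(current)
                    current = []
                current.append(token)
            if current:
                chunks.append(current)
            return chunks

        print(marker_chunks("the dog ran to the house with the red door"))
        # [['the', 'dog', 'ran'], ['to', 'the', 'house'], ['with', 'the', 'red', 'door']]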

    MATREX: the DCU MT system for WMT 2010

    This paper describes the DCU machine translation system in the evaluation campaign of the Joint Fifth Workshop on Statistical Machine Translation and Metrics (WMT 2010) at ACL 2010. We describe the modular design of our multi-engine machine translation (MT) system, with particular focus on the components used in this participation. We participated in the English–Spanish and English–Czech translation tasks, in which we employed our multi-engine architecture. We also participated in the system combination task, which was carried out using an MBR decoder and a confusion network decoder.
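
    A minimal sketch of n-best-list Minimum Bayes Risk (MBR) selection, illustrating the kind of decision rule an MBR decoder applies: the hypothesis with the highest expected gain against its competitors wins. The unigram-F1 gain used here is an assumption for brevity (system combination setups typically use sentence-level BLEU or similar), and the confusion-network decoding step is not shown.

        # Toy MBR selection over an n-best list: pick the hypothesis with the highest
        # expected gain against its competitors, each competitor weighted by its
        # normalised model score.
        import math
        from collections import Counter

        def unigram_gain(hyp: str, ref: str) -> float:
            """Illustrative gain function: unigram F1 between two token sequences."""
            h, r = Counter(hyp.split()), Counter(ref.split())
            overlap = sum((h & r).values())
            if overlap == 0:
                return 0.0
            p, q = overlap / sum(h.values()), overlap / sum(r.values())
            return 2 * p * q / (p + q)

        def mbr_select(nbest: list[tuple[str, float]]) -> str:
            z = sum(math.exp(score) for _, score in nbest)
            posteriors = [math.exp(score) / z for _, score in nbest]
            best_hyp, best_gain = None, -1.0
            for hyp, _ in nbest:
                expected = sum(p * unigram_gain(hyp, other)
                               for (other, _), p in zip(nbest, posteriors))
                if expected > best_gain:
                    best_hyp, best_gain = hyp, expected
            return best_hyp

        nbest = [("the house is red", -0.3),
                 ("the house is read", -0.5),
                 ("a house is red", -0.9)]
        print(mbr_select(nbest))  # "the house is red"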

    MaTrEx: the DCU machine translation system for IWSLT 2007

    In this paper, we give a description of the machine translation system developed at DCU that was used for our second participation in the evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT 2007). In this participation, we focus on some new methods to improve system quality. Specifically, we try our word packing technique on different language pairs, we smooth our translation tables with out-of-domain word translations for the Arabic–English and Chinese–English tasks in order to reduce the high number of out-of-vocabulary items, and finally we deploy a translation-based model for case and punctuation restoration.
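
    A minimal sketch of the idea behind smoothing an in-domain translation table with out-of-domain word translations: source words absent from the in-domain table are covered by down-weighted lexicon entries, so they no longer surface as out-of-vocabulary items. The table format, probabilities, language pair, and back-off weight are illustrative assumptions, not the scheme used in the system.

        # Toy back-off from an in-domain phrase table to an out-of-domain word lexicon:
        # OOV source words receive down-weighted lexicon entries, while existing
        # in-domain entries are left untouched.
        def smooth_with_lexicon(phrase_table: dict[str, list[tuple[str, float]]],
                                oov_lexicon: dict[str, list[tuple[str, float]]],
                                backoff_weight: float = 0.1):
            merged = {src: list(opts) for src, opts in phrase_table.items()}
            for src, options in oov_lexicon.items():
                if src not in merged:
                    merged[src] = [(tgt, p * backoff_weight) for tgt, p in options]
            return merged

        in_domain = {"house": [("maison", 0.8)]}
        out_of_domain = {"lighthouse": [("phare", 0.9)], "house": [("domicile", 0.6)]}
        table = smooth_with_lexicon(in_domain, out_of_domain)
        print(table["lighthouse"])  # [('phare', 0.09...)]  OOV now covered by back-off
        print(table["house"])       # [('maison', 0.8)]     in-domain entry kept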

    Mitigating the problems of SMT using EBMT

    Statistical Machine Translation (SMT) typically has difficulties with less-resourced languages, even with homogeneous data. In this thesis we address the application of Example-Based Machine Translation (EBMT) methods to overcome some of these difficulties. We adopt three alternative approaches to tackle these problems, focusing on two poorly resourced translation tasks (English–Bangla and English–Turkish). First, we adopt a runtime approach to EBMT using proportional analogy; in addition to the translation task, we test this EBMT system on named entity transliteration. Second, we use a compiled approach to EBMT. Finally, we present a novel way of integrating Translation Memory (TM) into an EBMT system. We discuss the development of these three EBMT systems and the experiments we have performed. In addition, we present an approach to improving output quality by strategically combining EBMT and SMT systems; the hybrid system shows significant improvement for different language pairs. Runtime EBMT systems in general have significant time-complexity issues, especially for large example bases. We explore two methods to address this issue and make our system scalable at runtime for a large example base (English–French): first a heuristic-based approach, and second an IR-based indexing technique to speed up the time-consuming matching procedure of the EBMT system. The index-based matching procedure substantially improves runtime speed without affecting translation quality.
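
    A minimal sketch of the IR-style indexing idea mentioned above, as it might apply to example matching: an inverted index retrieves only the examples that share words with the input, and the expensive similarity computation is restricted to that candidate set. The index, the similarity measure, and the example base here are toy assumptions; the thesis uses a full IR engine.

        # Toy inverted index over the example base: candidate examples are those sharing
        # at least one word with the input; only these are scored with the
        # (comparatively expensive) string-similarity measure.
        from collections import defaultdict
        from difflib import SequenceMatcher

        def build_index(examples: list[str]) -> dict[str, set[int]]:
            index = defaultdict(set)
            for i, src in enumerate(examples):
                for word in src.lower().split():
                    index[word].add(i)
            return index

        def best_match(query: str, examples: list[str], index) -> str:
            candidates = set()
            for word in query.lower().split():
                candidates |= index.get(word, set())
            candidates = candidates or range(len(examples))  # fall back to a full scan
            return max((examples[i] for i in candidates),
                       key=lambda ex: SequenceMatcher(None, query, ex).ratio())

        example_base = ["the train leaves at noon",
                        "the bus leaves at midnight",
                        "where is the station"]
        idx = build_index(example_base)
        print(best_match("the train leaves at night", example_base, idx))
        # "the train leaves at noon"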

    Data-driven machine translation for sign languages

    This thesis explores the application of data-driven machine translation (MT) to sign languages (SLs). The provision of an SL MT system can facilitate communication between Deaf and hearing people by translating information into the native and preferred language of the individual. We begin with an introduction to SLs, focussing on Irish Sign Language, the native language of the Deaf in Ireland. We describe their linguistics and mechanics, including similarities to and differences from spoken languages. Given the lack of a formalised written form of these languages, we outline annotation formats and the issue of data collection. We summarise previous approaches to SL MT, highlighting the pros and cons of each. Initial experiments in the novel area of example-based MT for SLs are discussed, and an overview is given of the problems that arise when automatically translating these manual-visual languages. Following this, we detail our data-driven approach, examining the MT system used and the modifications made for the treatment of SLs and their annotation. Through sets of automatically evaluated experiments in both language directions, we consider the merits of data-driven MT for SLs and outline the mainstream evaluation metrics used. To complete the translation into SLs, we discuss the addition and manual evaluation of a signing avatar for real SL output.

    EUSMT: incorporating linguistic information to SMT for a morphologically rich language. Its use in SMT-RBMT-EBMT hybridation

    This thesis is defined in the framework of machine translation for Basque. Having developed a Rule-Based Machine Translation (RBMT) system for Basque in the IXA group (Mayor, 2007), we decided to tackle the Statistical Machine Translation (SMT) approach and to experiment with how it could be adapted to the peculiarities of the Basque language. First, we analyzed the impact of the agglutinative nature of Basque and the best way to deal with it. To address these problems, we split Basque words into the lemma and a set of tags representing the morphological information expressed by the inflection. By dividing each Basque word in this way, we aim to reduce the sparseness produced by the agglutinative nature of Basque and by the small amount of training data. Similarly, we studied the differences in word order between Spanish and Basque, examining different techniques for dealing with them. We confirm the weakness of basic SMT in handling large word-order differences between the source and target languages: distance-based reordering, the technique used by the baseline system, does not have enough information to handle them properly, so every technique tested in this work (based both on statistics and on manually generated rules) outperforms the baseline. Once we had obtained a more accurate SMT system, we made the first attempts to combine different MT systems into a hybrid one that would allow us to get the best of the different paradigms. The hybridization attempts carried out in this PhD dissertation are preliminary, but even so, they help us determine the next steps. Carried out with a Basque Government grant for the training of researchers (BFI05.326).
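
    A minimal sketch of the lemma-plus-tags representation described above, assuming hard-coded toy analyses in place of a real Basque morphological analyser; the tag notation and the example words are illustrative only.

        # Toy lemma+tag segmentation: a real system would call a Basque morphological
        # analyser; the analyses and tag names below are hard-coded illustrations.
        TOY_ANALYSES = {
            # "etxean" ("in the house"): lemma "etxe" + singular inessive case
            "etxean": ("etxe", ["[SG]", "[INE]"]),
            # "etxeetara" ("to the houses"): lemma "etxe" + plural allative case
            "etxeetara": ("etxe", ["[PL]", "[ALL]"]),
        }

        def segment(sentence: str) -> list[str]:
            """Replace each word by its lemma followed by morphology tags, so that
            different inflections of "etxe" share a token in the training data."""
            tokens = []
            for word in sentence.split():
                lemma, tags = TOY_ANALYSES.get(word, (word, []))
                tokens.append(lemma)
                tokens.extend(tags)
            return tokens

        print(segment("etxean dago"))    # ['etxe', '[SG]', '[INE]', 'dago']
        print(segment("etxeetara doa"))  # ['etxe', '[PL]', '[ALL]', 'doa']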

    Ontology-based machine translation for Bengali as a low-resource language

    In this research we propose ontology-based machine translation with the help of WordNet and the UNL Ontology. Example-Based Machine Translation (EBMT) for a low-resource language such as Bengali suffers from low coverage: owing to the lack of a parallel corpus, there is a high probability of encountering unknown words. We have implemented an EBMT system for a low-resource language pair. The EBMT architecture uses chunk-string templates (CSTs) and an unknown-word translation mechanism. A CST consists of a chunk in the source language, a string in the target language, and word-alignment information. CSTs are prepared automatically from an aligned parallel corpus and WordNet using an English chunker: source-language chunks are first produced with the OpenNLP chunker, initial CSTs are generated for each source chunk and aligned against the target sentences using the parallel corpus, combined CSTs are then derived from the word-alignment information, and finally the CSTs are generalised with WordNet to obtain wider coverage. For unknown-word translation, we use the WordNet hypernym tree and an English–Bengali dictionary. The proposed system first tries to find semantically related English words in WordNet for the unknown word; from these, it chooses the semantically closest word whose Bangla translation exists in the English–Bangla dictionary. If no Bangla translation exists, the system uses IPA-based transliteration; for proper nouns, it uses the Akkhor transliteration mechanism. CSTs improved coverage by 57 points and quality by 48.81 points in human evaluation. Currently 64.29% of the test-set translations produced by the system are acceptable, and the combined solution of CSTs and unknown-word handling generates 67.85% acceptable translations from the test set; the unknown-word mechanism alone improved translation quality by 3.56 points in human evaluation. This research also proposes a way to automatically generate an explanation of each concept using the semantic background provided by the UNL Ontology: the input is a single Universal Word (UW), for which the system first builds a SemanticWordMap containing all direct and indirect relations of that UW in the UNL Ontology, then converts the WordMap graph into a UNL expression using conversion rules (which can be specified as "From UWs only" or "From UNL Ontology" according to the user's request), and finally deconverts the UNL expression into a natural-language description with the UNL DeConverter. These explanations are useful for improving the translation quality of unknown words.
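
    A minimal sketch of the unknown-word fallback described above, using NLTK's WordNet interface (the WordNet data must be downloaded first); the toy English-Bangla dictionary and the transliteration placeholder are assumptions standing in for the full dictionary and the IPA/Akkhor transliterators.

        # Toy version of the unknown-word cascade: dictionary lookup, then WordNet
        # synonyms and direct hypernyms, then transliteration. Requires the NLTK
        # WordNet data (nltk.download("wordnet")); the dictionary and transliterator
        # below are placeholders.
        from nltk.corpus import wordnet as wn

        EN_BN_DICT = {"dog": "কুকুর", "animal": "প্রাণী"}   # tiny stand-in dictionary

        def transliterate(word: str) -> str:
            return f"<translit:{word}>"   # placeholder for IPA/Akkhor transliteration

        def translate_unknown(word: str) -> str:
            if word in EN_BN_DICT:
                return EN_BN_DICT[word]
            related = []
            for synset in wn.synsets(word):
                related.extend(synset.lemma_names())      # synonyms first
                for hyper in synset.hypernyms():          # then direct hypernyms
                    related.extend(hyper.lemma_names())
            for candidate in related:
                if candidate in EN_BN_DICT:
                    return EN_BN_DICT[candidate]
            return transliterate(word)

        print(translate_unknown("puppy"))   # resolved via the hypernym "dog"
        print(translate_unknown("quark"))   # nothing related in the toy dictionary -> transliterated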

    A novel dependency-based evaluation metric for machine translation

    Automatic evaluation measures such as BLEU (Papineni et al. (2002)) and NIST (Doddington (2002)) are indispensable in the development of Machine Translation (MT) systems, because they allow MT developers to conduct frequent, fast, and cost-effective evaluations of their evolving translation models. However, most automatic evaluation metrics rely on a comparison of word strings, measuring only the surface similarity of the candidate and reference translations, and will penalize any divergence. In effect, a candidate translation expressing the source meaning accurately and fluently will be given a low score if the lexical and syntactic choices it contains, even though perfectly legitimate, are not present in at least one of the references. Necessarily, this score would differ from the much more favourable human judgment that such a translation would receive. This thesis presents a method that automatically evaluates the quality of translation based on the labelled dependency structure of the sentence, rather than on its surface form. Dependencies abstract away from some of the particulars of the surface string realization and provide a more "normalized" representation of (some) syntactic variants of a given sentence. The translation and reference files are analyzed by a treebank-based, probabilistic Lexical-Functional Grammar (LFG) parser (Cahill et al. (2004)) for English, which produces a set of dependency triples for each input. The translation set is compared to the reference set, and the number of matches is calculated, giving the precision, recall, and f-score for that particular translation. The use of WordNet synonyms and partial matching during the evaluation process allows for adequate treatment of lexical variation, while employing a number of best parses helps neutralize the noise introduced during the parsing stage. The dependency-based method is compared against a number of other popular MT evaluation metrics, including BLEU, NIST, GTM (Turian et al. (2003)), TER (Snover et al. (2006)), and METEOR (Banerjee and Lavie (2005)), in terms of segment- and system-level correlations with human judgments of fluency and adequacy. We also examine whether it shows bias towards statistical MT models. The comparison of the dependency-based method with other evaluation metrics is then extended to languages other than English: French, German, Spanish, and Japanese, where we apply our method to dependencies generated by Microsoft's NLPWin analyzer (Corston-Oliver and Dolan (1999); Heidorn (2000)) as well as, in the case of the Spanish data, those produced by the treebank-based, probabilistic LFG parser of Chrupała and van Genabith (2006a,b).
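
    A minimal sketch of the core comparison described above: candidate and reference translations are reduced to sets of labelled dependency triples and scored by precision, recall, and f-score over matching triples. The toy triples are hand-written here; the thesis obtains them from an LFG parser and additionally uses WordNet synonyms and partial matching, which this sketch omits.

        # Toy dependency-triple scorer: both translations are reduced to sets of
        # (label, head, dependent) triples and compared by precision/recall/f-score.
        def dep_fscore(candidate: set[tuple[str, str, str]],
                       reference: set[tuple[str, str, str]]) -> float:
            matches = len(candidate & reference)
            if matches == 0:
                return 0.0
            precision = matches / len(candidate)
            recall = matches / len(reference)
            return 2 * precision * recall / (precision + recall)

        # Hand-written triples for "the cat sat on the mat" vs "the cat sat upon the mat"
        cand = {("subj", "sat", "cat"), ("det", "cat", "the"), ("obl", "sat", "mat"),
                ("case", "mat", "on"), ("det", "mat", "the")}
        ref = {("subj", "sat", "cat"), ("det", "cat", "the"), ("obl", "sat", "mat"),
               ("case", "mat", "upon"), ("det", "mat", "the")}
        print(dep_fscore(cand, ref))  # 0.8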