25 research outputs found
Building English-to-Serbian machine translation system for IMDb movie reviews
This paper reports the results of the first experiment dealing with the challenges of building a machine translation system for user-generated content involving a complex South Slavic language. We focus on translation of English IMDb user movie reviews into Serbian, in a low-resource scenario. We explore the potential and limits of (i) phrase-based and neural machine translation systems trained on out-of-domain clean parallel data from news articles, and (ii) creating an additional synthetic in-domain parallel corpus by machine-translating the English IMDb corpus into Serbian. Our main findings are that morphology and syntax are better handled by the neural approach than by the phrase-based approach even in this low-resource, mismatched-domain scenario; however, the situation is different for the lexical aspect, especially for person names. This finding also indicates that, in general, machine translation of person names into Slavic languages (especially those which require/allow transcription) should be investigated more systematically.
Fine-grained human evaluation of neural versus phrase-based machine translation
We compare three approaches to statistical machine translation (pure
phrase-based, factored phrase-based and neural) by performing a fine-grained
manual evaluation via error annotation of the systems' outputs. The error types
in our annotation are compliant with the multidimensional quality metrics
(MQM), and the annotation is performed by two annotators. Inter-annotator
agreement is high for such a task, and results show that the best performing
system (neural) reduces the errors produced by the worst system (phrase-based)
by 54%.
Comment: 12 pages, 2 figures, The Prague Bulletin of Mathematical Linguistics
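The inter-annotator agreement reported above is typically quantified with a chance-corrected statistic such as Cohen's kappa. As an illustration only (the error-type labels below are hypothetical, not the paper's data), a minimal pure-Python sketch:

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(ann_a) == len(ann_b)
    n = len(ann_a)
    # observed agreement: fraction of items with identical labels
    observed = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    labels = set(freq_a) | set(freq_b)
    # chance agreement from each annotator's label distribution
    expected = sum(freq_a[l] / n * freq_b[l] / n for l in labels)
    return (observed - expected) / (1 - expected)

# hypothetical MQM-style error-type labels from two annotators
a = ["lexis", "agreement", "lexis", "word-order", "agreement", "lexis"]
b = ["lexis", "agreement", "omission", "word-order", "agreement", "lexis"]
print(round(cohens_kappa(a, b), 3))  # → 0.76
```

Values above roughly 0.6 are conventionally read as substantial agreement, which is what "high for such a task" suggests here.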
Quantitative Fine-Grained Human Evaluation of Machine Translation Systems: a Case Study on English to Croatian
This paper presents a quantitative fine-grained manual evaluation approach to
comparing the performance of different machine translation (MT) systems. We
build upon the well-established Multidimensional Quality Metrics (MQM) error
taxonomy and implement a novel method that assesses whether the differences in
performance for MQM error types between different MT systems are statistically
significant. We conduct a case study for English-to-Croatian, a language
direction that involves translating into a morphologically rich language, for
which we compare three MT systems belonging to different paradigms: pure
phrase-based, factored phrase-based and neural. First, we design an
MQM-compliant error taxonomy tailored to the relevant linguistic phenomena of
Slavic languages, which made the annotation process feasible and accurate.
Errors in MT outputs were then annotated by two annotators following this
taxonomy. Subsequently, we carried out a statistical analysis which showed that
the best-performing system (neural) reduces the errors produced by the worst
system (pure phrase-based) by more than half (54\%). Moreover, we conducted an
additional analysis of agreement errors in which we distinguished between short
(phrase-level) and long distance (sentence-level) errors. We discovered that
phrase-based MT approaches are of limited use for long distance agreement
phenomena, for which neural MT was found to be especially effective.
Comment: 22 pages, 2 figures, 9 tables, 1 equation. This is a
post-peer-review, pre-copyedit version of an article published in Machine
Translation Journal. The final authenticated version will be available online
at the journal page. arXiv admin note: substantial text overlap with
arXiv:1706.0438
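The paper's exact statistical method is not reproduced here; as a rough illustration of how one might test whether a per-error-type difference between two MT systems is significant, a paired bootstrap sketch over hypothetical per-sentence error counts (all numbers below are invented for the demo):

```python
import random

def bootstrap_pvalue(errors_sys1, errors_sys2, n_resamples=10_000, seed=0):
    """Paired bootstrap: resample sentences with replacement and check
    how often the observed sign of the error-count difference flips."""
    rng = random.Random(seed)
    n = len(errors_sys1)
    observed = sum(errors_sys1) - sum(errors_sys2)
    flips = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        diff = sum(errors_sys1[i] - errors_sys2[i] for i in idx)
        if (observed > 0 and diff <= 0) or (observed < 0 and diff >= 0):
            flips += 1
    return flips / n_resamples

# hypothetical per-sentence agreement-error counts for two systems
pbmt = [3, 1, 2, 4, 2, 3, 1, 2, 3, 2]   # phrase-based
nmt  = [1, 0, 1, 2, 1, 1, 0, 1, 1, 1]   # neural
print(bootstrap_pvalue(pbmt, nmt) < 0.05)  # → True (difference is stable)
```

When the sign of the difference almost never flips under resampling, the per-error-type gap between the systems is unlikely to be a sampling artefact.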
Machine translation of user-generated content
The world of social media has undergone huge evolution during the last few years. With the spread of social media and online forums, individual users actively participate in the generation of online content in different languages from all over the world. Sharing of online content has become much easier than before with the advent of popular websites such as Twitter, Facebook, etc. Such content is referred to as ‘User-Generated Content’ (UGC). Some examples of UGC are user reviews, customer feedback, tweets, etc. In general, UGC is informal and noisy in terms of linguistic norms. Such noise does not create significant problems for humans to understand the content, but it can pose challenges for several natural language processing
applications such as parsing, sentiment analysis, machine translation (MT),
etc.
An additional challenge for MT is the sparseness of bilingual (translated) parallel UGC corpora. In this research, we explore the general issues in MT of UGC and set some research goals from our findings. One of our main goals is to exploit comparable corpora in order to extract parallel or semantically similar sentences. To accomplish this task, we design a document alignment system to extract semantically similar bilingual document pairs from the bilingual comparable corpora. We then apply strategies to extract parallel or semantically similar sentences from comparable corpora by transforming the document alignment system into a sentence alignment system. We seek to improve the quality of parallel data extraction for UGC translation and combine the extracted data with the existing human-translated resources.
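The thesis's alignment system itself is not specified in this abstract; as a loose sketch of the general idea (scoring cross-lingual sentence pairs via a pivot translation step and cosine similarity — the names, the toy data, and the threshold below are illustrative assumptions, not the actual method):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def align_sentences(src_sents, tgt_sents, translate, threshold=0.5):
    """Greedy extraction of semantically similar sentence pairs from a
    comparable document pair. `translate` maps a target-language sentence
    into source-language tokens (e.g. via a baseline MT system)."""
    pairs = []
    for s in src_sents:
        vs = Counter(s.lower().split())
        best = max(tgt_sents, key=lambda t: cosine(vs, Counter(translate(t))))
        if cosine(vs, Counter(translate(best))) >= threshold:
            pairs.append((s, best))
    return pairs

# toy demo: identity "translation" over already-English comparable text
src = ["the film was released in 1999", "critics praised the acting"]
tgt = ["critics praised the acting a lot", "unrelated sentence here"]
print(align_sentences(src, tgt, lambda t: t.lower().split()))
```

Real systems would use cross-lingual sentence embeddings rather than bags of words, but the extract-then-filter shape of the pipeline is the same.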
Another objective of this research is to demonstrate the usefulness of MT-based sentiment analysis. However, when using openly available systems such as Google Translate, the translation process may alter the sentiment in the target language. To cope with this phenomenon, we instead build fine-grained sentiment translation models that focus on sentiment preservation in the target language during translation.
Neural machine translation for translating into Croatian and Serbian
In this work, we systematically investigate different set-ups for training neural machine translation (NMT) systems for translation into Croatian and Serbian, two closely related South Slavic languages. We explore English and German as source languages, different sizes and types of training corpora, as well as bilingual and multilingual systems. We also explore translation of English IMDb user movie reviews, a domain/genre where only monolingual data are available. First, our results confirm that multilingual systems with joint target languages perform better. Furthermore, translation performance from English is much better than from German, partly because German is morphologically more complex and partly because the corpus consists mostly of parallel human translations instead of original text and its human translation. Translation from German should be further investigated systematically. For translating user reviews, creating synthetic in-domain parallel data through back- and forward-translation and adding them to a small out-of-domain parallel corpus can yield performance comparable with a system trained on a full out-of-domain corpus. However, it is still not clear what the optimal size of synthetic in-domain data is, especially for forward-translated data where the target language is machine translated. More detailed research, including manual evaluation and analysis, is needed in this direction.
Arabic and English Relative Clauses and Machine Translation Challenges
The study aims at performing an error analysis as well as providing an evaluation of the quality of neural machine translation (NMT), represented by Google Translate, when translating relative clauses. The study uses two test suites composed of sentences that contain relative clauses. The first test suite consists of 108 sentence pairs translated from English to Arabic, whereas the second consists of 72 Arabic sentences translated into English. Error annotation is performed by 6 professional annotators. The study presents a list of the annotated errors, divided into accuracy and fluency errors, based on MQM. Manual evaluation is also performed by the six professionals, along with automatic BLEU evaluation using the Tilde MT platform. The results show that fluency errors are more frequent than accuracy errors. They also show that MT quality when translating from English into Arabic is lower, and the frequency of errors higher, than when translating from Arabic into English. Based on the performed error analysis and both manual and automatic evaluation, it is pointed out that the gap between MT and professional human translation is still large.
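BLEU, used above for automatic evaluation, is the geometric mean of modified n-gram precisions multiplied by a brevity penalty. A minimal unsmoothed sentence-level sketch (real evaluations use established tooling such as the platform mentioned above, not hand-rolled code):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Plain sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty; no smoothing."""
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        # clip each hypothesis n-gram count by its count in the reference
        overlap = sum(min(c, r[g]) for g, c in h.items())
        total = max(sum(h.values()), 1)
        if overlap == 0:
            return 0.0  # unsmoothed: any empty n-gram level zeroes the score
        log_prec += math.log(overlap / total) / max_n
    # brevity penalty punishes hypotheses shorter than the reference
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(log_prec)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # → 1.0
```

Sentence-level BLEU without smoothing is harsh on short or divergent sentences, which is one reason studies like this pair it with manual MQM-style annotation.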
On nature and causes of observed MT errors
This work describes an analysis of the nature and causes of MT errors observed by different evaluators under guidance of different quality criteria: adequacy, comprehension, and an unspecified generic mixture of adequacy and fluency. We report results for three language pairs, two domains, and eleven MT systems. Our findings indicate that, despite the fact that some of the identified phenomena depend on domain and/or language, the following set of phenomena can be considered generally challenging for modern MT systems: rephrasing groups of words, translation of ambiguous source words, translating noun phrases, and mistranslations. Furthermore, we show that the quality criterion also has an impact on error perception. Our findings indicate that comprehension and adequacy can be assessed simultaneously by different evaluators, so that comprehension, as an important quality criterion, can be included more often in human evaluations.