13 research outputs found
The WMT'18 Morpheval test suites for English-Czech, English-German, English-Finnish and Turkish-English
Peer reviewed
Why don't people use character-level machine translation?
We present a literature and empirical survey that critically assesses the
state of the art in character-level modeling for machine translation (MT).
Despite evidence in the literature that character-level systems are comparable
with subword systems, they are virtually never used in competitive setups in
WMT competitions. We empirically show that even with recent modeling
innovations in character-level natural language processing, character-level MT
systems still struggle to match their subword-based counterparts.
Character-level MT systems show neither better domain robustness nor better
morphological generalization, despite often being motivated on those grounds. We do,
however, find greater robustness to source-side noise, and observe that translation
quality does not degrade with increasing beam size at decoding time.
Comment: 16 pages, 4 figures; Findings of ACL 2022, camera-ready
On the Importance of Word Boundaries in Character-level Neural Machine Translation
Neural Machine Translation (NMT) models generally perform translation using a fixed-size lexical vocabulary, which is an important bottleneck for their generalization capability and overall translation quality. The standard approach to overcoming this limitation is to segment words into subword units, typically using external tools with arbitrary heuristics, resulting in vocabulary units that are not optimized for the translation task. Recent studies have shown that the same approach can be extended to perform NMT directly at the level of characters, which can deliver translation accuracy on par with subword-based models, although this requires relatively deeper networks. In this paper, we propose a more computationally efficient solution for character-level NMT that implements a hierarchical decoding architecture, where translations are generated successively at the level of words and characters. We evaluate different methods for open-vocabulary NMT on the machine translation task from English into five languages with distinct morphological typology, and show that the hierarchical decoding model can reach higher translation accuracy than the subword-level NMT model using significantly fewer parameters, while demonstrating better capacity for learning longer-distance contextual and grammatical dependencies than the standard character-level NMT model.
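The open-vocabulary problem described above can be sketched as follows: a fixed word vocabulary fails on unseen words, whereas subword and character segmentations can represent any input. The greedy longest-match segmenter and the subword inventory below are hypothetical, for illustration only; real systems learn the inventory with algorithms such as byte-pair encoding.

```python
def char_segment(word):
    """Character-level segmentation: every word is representable."""
    return list(word)

def subword_segment(word, vocab):
    """Greedy longest-match subword segmentation (a BPE-style sketch).

    Falls back to single characters, so segmentation never fails.
    """
    pieces = []
    i = 0
    while i < len(word):
        # Try the longest vocabulary piece starting at position i;
        # a single character (j == i + 1) always matches as a fallback.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab or j == i + 1:
                pieces.append(word[i:j])
                i = j
                break
    return pieces

# Hypothetical subword inventory, chosen only for this example.
vocab = {"trans", "lat", "ion", "un", "ing"}
print(subword_segment("translation", vocab))  # ['trans', 'lat', 'ion']
print(char_segment("cat"))                    # ['c', 'a', 't']
```

A word absent from the inventory, such as "zebra", still segments (here into single characters), which is the property that lets subword- and character-level models sidestep the fixed-vocabulary bottleneck.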
Findings of the 2018 Conference on Machine Translation (WMT18)
This paper presents the results of the premier shared task organized alongside
the Conference on Machine Translation (WMT) 2018. Participants were asked to
build machine translation systems for any of 7 language pairs in both
directions, to be evaluated on a test set of news stories. The main metric for
this task is human judgment of translation quality. This year, we also opened
up the task to additional test suites to probe specific aspects of translation.
Fine-grained Human Evaluation of Transformer and Recurrent Approaches to Neural Machine Translation for English-to-Chinese
This research presents a fine-grained human evaluation to compare the
Transformer and recurrent approaches to neural machine translation (MT), on the
translation direction English-to-Chinese. To this end, we develop an error
taxonomy compliant with the Multidimensional Quality Metrics (MQM) framework
that is customised to the relevant phenomena of this translation direction. We
then conduct an error annotation using this customised error taxonomy on the
output of state-of-the-art recurrent- and Transformer-based MT systems on a
subset of WMT2019's news test set. The resulting annotation shows that,
compared to the best recurrent system, the best Transformer system results in a
31% reduction in the total number of errors and produces significantly fewer
errors in 10 out of 22 error categories. We also note that two of the systems
evaluated do not produce any errors in a category that was relevant for this
translation direction prior to the advent of NMT systems: Chinese classifiers.
Comment: Accepted at the 22nd Annual Conference of the European Association
for Machine Translation (EAMT 2020)
Linguistic evaluation of German-English Machine Translation using a Test Suite
We present the results of applying a grammatical test suite for
German-English MT to the systems submitted at WMT19, with a
detailed analysis of 107 phenomena organized into 14 categories. The systems
still translate one out of four test items incorrectly on average. Low
performance is indicated for idioms, modals, pseudo-clefts, multi-word
expressions and verb valency. Compared to last year, there has been an
improvement on function words, non-verbal agreement and punctuation. More
detailed conclusions about particular systems and phenomena are also presented.
The WMT'18 Morpheval test suites for English-Czech, English-German, English-Finnish and Turkish-English
| openaire: EC/H2020/780069/EU//MeMAD
Progress in the quality of machine translation output calls for new automatic evaluation procedures and metrics. In this paper, we extend the Morpheval protocol introduced by Burlot and Yvon (2017) for the English-to-Czech and English-to-Latvian translation directions to three additional language pairs, and report its use to analyze the results of WMT 2018's participants for these language pairs. Considering additional, typologically varied source and target languages also enables us to draw some generalizations regarding this morphology-oriented evaluation procedure.
Peer reviewed