20,220 research outputs found
What Level of Quality can Neural Machine Translation Attain on Literary Text?
Given the rise of a new approach to MT, Neural MT (NMT), and its promising
performance on different text types, we assess the translation quality it can
attain on what is perceived to be the greatest challenge for MT: literary text.
Specifically, we target novels, arguably the most popular type of literary
text. We build a literary-adapted NMT system for the English-to-Catalan
translation direction and evaluate it against a system pertaining to the
previous dominant paradigm in MT: statistical phrase-based MT (PBSMT). To this
end, for the first time we train MT systems, both NMT and PBSMT, on large
amounts of literary text (over 100 million words) and evaluate them on a set of
twelve widely known novels spanning from the 1920s to the present day.
According to the BLEU automatic evaluation metric, NMT is significantly better
than PBSMT (p < 0.01) on all the novels considered. Overall, NMT results in an
11% relative improvement (3 points absolute) over PBSMT. A complementary human
evaluation on three of the books shows that between 17% and 34% of the
translations, depending on the book, produced by NMT (versus 8% and 20% with
PBSMT) are perceived by native speakers of the target language to be of
equivalent quality to translations produced by a professional human translator.
Comment: Chapter for the forthcoming book "Translation Quality Assessment:
From Principles to Practice" (Springer)
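As a quick sanity check on the reported numbers: a 3-point absolute BLEU gain that is an 11% relative improvement jointly implies a PBSMT baseline of roughly 27 BLEU. The baseline score itself is not stated in the abstract, so the figures below are derived, not reported:

```python
# The abstract reports a 3-point absolute BLEU gain that is an 11%
# relative improvement; together these imply the (unstated) baseline.
absolute_gain = 3.0
relative_gain = 0.11
implied_pbsmt = absolute_gain / relative_gain  # implied PBSMT baseline
implied_nmt = implied_pbsmt + absolute_gain    # implied NMT score
print(round(implied_pbsmt, 1), round(implied_nmt, 1))  # 27.3 30.3
```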
Mask-Predict: Parallel Decoding of Conditional Masked Language Models
Most machine translation systems generate text autoregressively from left to
right. We, instead, use a masked language modeling objective to train a model
to predict any subset of the target words, conditioned on both the input text
and a partially masked target translation. This approach allows for efficient
iterative decoding, where we first predict all of the target words
non-autoregressively, and then repeatedly mask out and regenerate the subset of
words that the model is least confident about. By applying this strategy for a
constant number of iterations, our model improves state-of-the-art performance
levels for non-autoregressive and parallel decoding translation models by over
4 BLEU on average. It is also able to reach within about 1 BLEU point of a
typical left-to-right transformer model, while decoding significantly faster.
Comment: EMNLP 201
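The iterative decoding loop described above can be sketched as follows. Here `predict_fn` is a stand-in for the conditional masked language model, and the linear mask-decay schedule and toy setup are illustrative assumptions, not the paper's exact implementation:

```python
def mask_predict_decode(predict_fn, seq_len, iterations=4):
    """Toy sketch of Mask-Predict-style iterative parallel decoding.

    predict_fn(tokens, masked_positions) -> (tokens, confidences):
    a stand-in for the conditional masked LM; it fills every masked
    position and returns a per-token confidence list.
    """
    # Iteration 0: predict all target words non-autoregressively.
    tokens, conf = predict_fn([None] * seq_len, set(range(seq_len)))
    for t in range(1, iterations):
        # Linearly decay how many tokens get re-masked each iteration.
        n_mask = int(seq_len * (iterations - t) / iterations)
        if n_mask == 0:
            break
        # Mask out the tokens the model is least confident about...
        masked = set(sorted(range(seq_len), key=lambda i: conf[i])[:n_mask])
        # ...and regenerate them, conditioned on the unmasked rest.
        new_tokens, new_conf = predict_fn(tokens, masked)
        for i in masked:
            tokens[i] = new_tokens[i]
            conf[i] = new_conf[i]
    return tokens
```

A toy `predict_fn` that fills masked slots deterministically is enough to exercise the loop; a real model would score each filled token with its output probability.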
Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback
Machine translation is a natural candidate problem for reinforcement learning
from human feedback: users provide quick, dirty ratings on candidate
translations to guide a system to improve. Yet, current neural machine
translation training focuses on expensive human-generated reference
translations. We describe a reinforcement learning algorithm that improves
neural machine translation systems from simulated human feedback. Our algorithm
combines the advantage actor-critic algorithm (Mnih et al., 2016) with the
attention-based neural encoder-decoder architecture (Luong et al., 2015). This
algorithm (a) is well-designed for problems with a large action space and
delayed rewards, (b) effectively optimizes traditional corpus-level machine
translation metrics, and (c) is robust to skewed, high-variance, granular
feedback modeled after actual human behaviors.
Comment: 11 pages, 5 figures, In Proceedings of Empirical Methods in Natural
Language Processing (EMNLP) 201
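A minimal sketch of the training signal described above, with the advantage actor-critic update reduced to a three-armed bandit. The candidate "translations", their mean ratings, and all hyperparameters are hypothetical stand-ins for the paper's encoder-decoder system and metric-based rewards:

```python
import math
import random

random.seed(0)

# Hypothetical toy bandit: 3 candidate translations with fixed mean ratings.
true_rewards = [0.2, 0.9, 0.5]

theta = [0.0, 0.0, 0.0]       # actor: logits over candidates
baseline = 0.0                # critic: scalar value estimate
lr_actor, lr_critic = 0.1, 0.05

for _ in range(2000):
    # Softmax policy over the candidate translations.
    m = max(theta)
    exps = [math.exp(t - m) for t in theta]
    z = sum(exps)
    probs = [e / z for e in exps]
    a = random.choices(range(3), weights=probs)[0]
    # Simulated human feedback: a noisy rating of the chosen translation.
    r = true_rewards[a] + random.gauss(0.0, 0.2)
    advantage = r - baseline  # advantage = reward minus critic's estimate
    for i in range(3):
        # Policy-gradient step: d/d theta_i of log pi(a) is 1{i == a} - p_i.
        grad = (1.0 if i == a else 0.0) - probs[i]
        theta[i] += lr_actor * advantage * grad
    baseline += lr_critic * advantage  # move critic toward observed rewards

best = max(range(3), key=lambda i: theta[i])  # should favor candidate 1
```

The critic's baseline reduces the variance of the bandit feedback, which is the property the abstract highlights for coping with skewed, high-variance ratings.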