Mask-Predict: Parallel Decoding of Conditional Masked Language Models
Most machine translation systems generate text autoregressively from left to
right. We, instead, use a masked language modeling objective to train a model
to predict any subset of the target words, conditioned on both the input text
and a partially masked target translation. This approach allows for efficient
iterative decoding, where we first predict all of the target words
non-autoregressively, and then repeatedly mask out and regenerate the subset of
words that the model is least confident about. By applying this strategy for a
constant number of iterations, our model improves state-of-the-art performance
levels for non-autoregressive and parallel decoding translation models by over
4 BLEU on average. It is also able to reach within about 1 BLEU point of a
typical left-to-right transformer model, while decoding significantly faster.
Comment: EMNLP 2019
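The decoding procedure this abstract describes is compact enough to sketch. Below is a minimal, illustrative PyTorch version of the mask-predict loop: predict all tokens in parallel, then repeatedly re-mask and regenerate the least-confident ones. The `model.predict_tokens` interface and `MASK_ID` are hypothetical stand-ins, not the authors' API; the linear schedule for how many tokens to re-mask follows the paper's description, but hyperparameters here are placeholders.

```python
import torch

MASK_ID = 0  # hypothetical mask-token id


def mask_predict(model, src_tokens, tgt_len, num_iters=10):
    # Iteration 0: predict every target position non-autoregressively,
    # starting from a fully masked target of the predicted length.
    tgt = torch.full((tgt_len,), MASK_ID, dtype=torch.long)
    probs = model.predict_tokens(src_tokens, tgt)  # (tgt_len, vocab)
    scores, tgt = probs.max(dim=-1)                # per-token confidences

    for t in range(1, num_iters):
        # Linearly decay the number of re-masked tokens across iterations.
        n_mask = int(tgt_len * (num_iters - t) / num_iters)
        if n_mask == 0:
            break
        # Re-mask the n least-confident predictions...
        worst = scores.topk(n_mask, largest=False).indices
        tgt[worst] = MASK_ID
        # ...and regenerate them, conditioned on the unmasked remainder.
        probs = model.predict_tokens(src_tokens, tgt)
        new_scores, new_tokens = probs.max(dim=-1)
        tgt[worst] = new_tokens[worst]
        scores[worst] = new_scores[worst]
    return tgt
```

Because the iteration count is a constant independent of the target length, total decoding cost stays O(num_iters) model passes rather than O(tgt_len), which is the source of the speedup the abstract reports.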
A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond
Non-autoregressive (NAR) generation, first proposed in neural
machine translation (NMT) to speed up inference, has attracted much attention
in both the machine learning and natural language processing communities. While
NAR generation can significantly accelerate inference for machine
translation, the speedup comes at the cost of reduced translation accuracy
compared to its autoregressive (AR) counterpart. In recent years,
many new models and algorithms have been proposed to bridge the
accuracy gap between NAR and AR generation. In this paper, we
conduct a systematic survey with comparisons and discussions of various
non-autoregressive translation (NAT) models from different aspects.
Specifically, we categorize NAT efforts into several groups, including
data manipulation, modeling methods, training criteria, decoding algorithms,
and benefits from pre-trained models. Furthermore, we briefly review other
applications of NAR models beyond machine translation, such as dialogue
generation, text summarization, grammatical error correction, semantic parsing,
speech synthesis, and automatic speech recognition. In addition, we also
discuss potential directions for future exploration, including removing the
dependence on knowledge distillation (KD), dynamic length prediction,
pre-training for NAR, and wider applications. We hope this survey can help
researchers keep up with the latest
progress in NAR generation, inspire the design of advanced NAR models and
algorithms, and enable industry practitioners to choose appropriate solutions
for their applications. The web page of this survey is at
\url{https://github.com/LitterBrother-Xiao/Overview-of-Non-autoregressive-Applications}.
Comment: 25 pages, 11 figures, 4 tables
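For readers new to the area, the speed/accuracy tradeoff this survey addresses follows from the factorization each paradigm uses. The standard formulation below (not specific to this survey) contrasts the two:

```latex
% AR: tokens are generated one at a time, each conditioned on all
% previously generated tokens -- accurate but inherently sequential.
p_{\text{AR}}(y \mid x) = \prod_{t=1}^{T} p(y_t \mid y_{<t}, x)

% NAR: given the source x (and typically a predicted target length T),
% target tokens are modeled as conditionally independent, so all T
% positions can be decoded in one parallel pass -- the source of both
% the speedup and the accuracy gap the survey discusses.
p_{\text{NAR}}(y \mid x) = \prod_{t=1}^{T} p(y_t \mid x, T)
```

Dropping the dependence on y_{<t} is what enables parallel decoding, and most of the techniques the survey categorizes (knowledge distillation, iterative refinement, better training criteria) are ways of compensating for that lost dependency structure.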