Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input
Non-autoregressive translation (NAT) models, which remove the dependence on
previous target tokens from the inputs of the decoder, achieve a significant
inference speedup but at the cost of inferior accuracy compared to
autoregressive translation (AT) models. Previous work shows that the quality of
the decoder inputs is important and strongly affects model accuracy.
In this paper, we propose two methods to enhance the decoder inputs so as to
improve NAT models. The first one directly leverages a phrase table generated
by conventional SMT approaches to translate source tokens to target tokens,
which are then fed into the decoder as inputs. The second one transforms
source-side word embeddings to target-side word embeddings through
sentence-level alignment and word-level adversary learning, and then feeds the
transformed word embeddings into the decoder as inputs. Experimental results
show that our method largely outperforms the NAT baseline~\citep{gu2017non} in BLEU
score on the WMT14 English-German task and the WMT16 English-Romanian task.
Comment: AAAI 2019
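As an illustration of the first method above, here is a minimal sketch (not the authors' implementation) of building NAT decoder inputs from a word-level phrase table; the table, function name, and fallback token are hypothetical simplifications of what an SMT-derived phrase table would provide.

```python
# Minimal sketch: word-level translation of the source sequence, whose output
# tokens are then embedded and fed to the NAT decoder as its inputs.
from typing import Dict, List

def build_decoder_inputs(source_tokens: List[str],
                         phrase_table: Dict[str, str],
                         unk_token: str = "<unk>") -> List[str]:
    """Map each source token to a target token; unseen words fall back to <unk>."""
    return [phrase_table.get(tok, unk_token) for tok in source_tokens]

# Toy word-level table; a real one would be extracted by an SMT toolkit.
toy_table = {"das": "the", "haus": "house", "ist": "is", "klein": "small"}
print(build_decoder_inputs(["das", "haus", "ist", "klein"], toy_table))
# -> ['the', 'house', 'is', 'small']
```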
Non-Autoregressive Machine Translation with Auxiliary Regularization
As a new neural machine translation approach, non-autoregressive translation
(NAT) has recently attracted attention due to its high inference efficiency.
in inference. However, the high efficiency has come at the cost of not
capturing the sequential dependency on the target side of translation, which
causes NAT to suffer from two kinds of translation errors: 1) repeated
translations (due to indistinguishable adjacent decoder hidden states), and 2)
incomplete translations (due to incomplete transfer of source side information
via the decoder hidden states).
In this paper, we propose to address these two problems by improving the
quality of decoder hidden representations via two auxiliary regularization
terms in the training process of an NAT model. First, to make the hidden states
more distinguishable, we regularize the similarity between consecutive hidden
states based on the corresponding target tokens. Second, to force the hidden
states to contain all the information in the source sentence, we leverage the
dual nature of translation tasks (e.g., English to German and German to
English) and minimize a backward reconstruction error to ensure that the hidden
states of the NAT decoder are able to recover the source side sentence.
Extensive experiments conducted on several benchmark datasets show that both
regularization strategies are effective and can alleviate the issues of
repeated translations and incomplete translations in NAT models. The accuracy
of NAT models is therefore improved significantly over state-of-the-art NAT
models, with even better inference efficiency.
Comment: AAAI 2019
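To make the first regularizer concrete, below is a minimal PyTorch sketch of one plausible form of the similarity penalty described above: it discourages consecutive decoder hidden states from being too similar when their gold target tokens differ. The function name, weight, and exact formulation are assumptions rather than the paper's precise loss.

```python
import torch
import torch.nn.functional as F

def similarity_regularizer(hidden: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """
    hidden:  (batch, seq_len, dim) NAT decoder hidden states
    targets: (batch, seq_len) gold target token ids
    Penalizes cosine similarity of adjacent hidden states whose gold tokens
    differ, pushing the decoder to keep neighbouring states distinguishable.
    """
    cos = F.cosine_similarity(hidden[:, :-1, :], hidden[:, 1:, :], dim=-1)  # (batch, seq_len-1)
    differ = (targets[:, :-1] != targets[:, 1:]).float()
    return (differ * cos.clamp(min=0.0)).mean()

# Usage: add the term to the NAT training loss with a small weight (hypothetical value).
h = torch.randn(2, 6, 16)
y = torch.randint(0, 100, (2, 6))
extra_loss = 0.1 * similarity_regularizer(h, y)
```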
Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation
Non-autoregressive translation (NAT) models remove the dependence on previous
target tokens and generate all target tokens in parallel, resulting in
significant inference speedup but at the cost of inferior translation accuracy
compared to autoregressive translation (AT) models. Considering that AT models
have higher accuracy and are easier to train than NAT models, and both of them
share the same model configurations, a natural idea to improve the accuracy of
NAT models is to transfer a well-trained AT model to an NAT model through
fine-tuning. However, since AT and NAT models differ greatly in training
strategy, straightforward fine-tuning does not work well. In this work, we
introduce curriculum learning into fine-tuning for NAT. Specifically, we design
a curriculum in the fine-tuning process to progressively switch the training
from autoregressive generation to non-autoregressive generation. Experiments on
four benchmark translation datasets show that the proposed method achieves good
improvements over previous NAT baselines in terms of translation accuracy, and
greatly speeds up the inference process over AT baselines.
Comment: AAAI 2020
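Below is a minimal sketch of one plausible way to realize such a curriculum (the pacing function, placeholder token, and function names are assumptions, not the paper's exact recipe): the probability of replacing a gold previous-token decoder input with an uninformative placeholder grows over fine-tuning, so training moves smoothly from autoregressive-style teacher forcing to fully non-autoregressive inputs.

```python
import random
from typing import List

def curriculum_ratio(step: int, total_steps: int) -> float:
    """Linear pacing: 0.0 keeps all gold previous tokens (AR-like),
    1.0 masks them all (fully NAR)."""
    return min(1.0, step / max(1, total_steps))

def make_decoder_inputs(prev_tokens: List[str], step: int, total_steps: int,
                        placeholder: str = "<mask>") -> List[str]:
    """Randomly replace gold previous tokens with a placeholder at a rate
    that grows over fine-tuning."""
    p = curriculum_ratio(step, total_steps)
    return [placeholder if random.random() < p else tok for tok in prev_tokens]

# Early fine-tuning (step 100/1000): mostly gold context, like teacher forcing.
print(make_decoder_inputs(["the", "house", "is", "small"], step=100, total_steps=1000))
# Late fine-tuning (step 1000/1000): all placeholders, matching NAT inference.
print(make_decoder_inputs(["the", "house", "is", "small"], step=1000, total_steps=1000))
```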
A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond
Non-autoregressive (NAR) generation, which was first proposed in neural
machine translation (NMT) to speed up inference, has attracted much attention
in both machine learning and natural language processing communities. While NAR
generation can significantly accelerate inference for machine
translation, the speedup comes at the cost of sacrificed translation accuracy
compared to its counterpart, auto-regressive (AR) generation. In recent years,
many new models and algorithms have been designed/proposed to bridge the
accuracy gap between NAR generation and AR generation. In this paper, we
conduct a systematic survey with comparisons and discussions of various
non-autoregressive translation (NAT) models from different aspects.
Specifically, we categorize the efforts of NAT into several groups, including
data manipulation, modeling methods, training criteria, decoding algorithms,
and the benefit from pre-trained models. Furthermore, we briefly review other
applications of NAR models beyond machine translation, such as dialogue
generation, text summarization, grammatical error correction, semantic parsing,
speech synthesis, and automatic speech recognition. In addition, we also
discuss potential directions for future exploration, including relaxing the
dependency on knowledge distillation (KD), dynamic length prediction,
pre-training for NAR, and wider applications. We hope this survey can help researchers capture the latest
progress in NAR generation, inspire the design of advanced NAR models and
algorithms, and enable industry practitioners to choose appropriate solutions
for their applications. The web page of this survey is at
\url{https://github.com/LitterBrother-Xiao/Overview-of-Non-autoregressive-Applications}.
Comment: 25 pages, 11 figures, 4 tables
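The contrast that motivates this line of work can be illustrated with a small sketch (a toy, hypothetical model interface, not any real system's API): autoregressive decoding requires one model call per output token, while non-autoregressive decoding predicts all positions in a single pass.

```python
import torch

class ToyModel:
    """Stand-in model returning random logits of the requested shape."""
    def __init__(self, vocab_size: int = 16):
        self.vocab_size = vocab_size

    def ar_step(self, src, prefix):
        # One forward pass conditioned on the generated prefix.
        return torch.randn(1, len(prefix), self.vocab_size)

    def nar_pass(self, src, tgt_len):
        # One forward pass over all target positions at once.
        return torch.randn(1, tgt_len, self.vocab_size)

def ar_decode(model, src, max_len: int = 8, bos: int = 1, eos: int = 2):
    ys = [bos]
    for _ in range(max_len):                      # one model call per token
        nxt = int(model.ar_step(src, ys)[0, -1].argmax())
        ys.append(nxt)
        if nxt == eos:
            break
    return ys[1:]

def nar_decode(model, src, tgt_len: int = 8):
    return model.nar_pass(src, tgt_len)[0].argmax(dim=-1).tolist()  # single call

model, src = ToyModel(), torch.randn(1, 5, 8)
print(ar_decode(model, src))   # sequential: latency grows with output length
print(nar_decode(model, src))  # parallel: latency is roughly constant
```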