Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation
Non-autoregressive translation (NAT) models remove the dependence on previous
target tokens and generate all target tokens in parallel, resulting in
significant inference speedup but at the cost of inferior translation accuracy
compared to autoregressive translation (AT) models. Considering that AT models
have higher accuracy and are easier to train than NAT models, and both of them
share the same model configurations, a natural idea to improve the accuracy of
NAT models is to transfer a well-trained AT model to an NAT model through
fine-tuning. However, since AT and NAT models differ greatly in training
strategy, straightforward fine-tuning does not work well. In this work, we
introduce curriculum learning into fine-tuning for NAT. Specifically, we design
a curriculum in the fine-tuning process to progressively switch the training
from autoregressive generation to non-autoregressive generation. Experiments on
four benchmark translation datasets show that the proposed method achieves
clear improvements in translation accuracy over previous NAT baselines, and
greatly speeds up the inference process over AT baselines.
Comment: AAAI 2020
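The abstract does not spell out the curriculum mechanics; as a rough illustration, here is a minimal sketch of a progressive AR-to-NAR switch for a mask-based NAT decoder, assuming a hypothetical linear schedule (the paper's actual schedule and masking scheme may differ):

    import torch

    def curriculum_decoder_inputs(gold_tokens: torch.Tensor, mask_id: int,
                                  step: int, total_steps: int) -> torch.Tensor:
        """Mix AR-style decoder inputs (gold previous tokens) with NAR-style
        inputs (mask tokens). At step 0 every position sees the gold token,
        as in autoregressive teacher forcing; by `total_steps` every position
        is a mask token, as in non-autoregressive training."""
        ratio = min(step / total_steps, 1.0)  # fraction of positions switched to NAR
        switch = torch.rand(gold_tokens.shape) < ratio
        return torch.where(switch, torch.full_like(gold_tokens, mask_id), gold_tokens)

    # Example: a batch of 2 target sequences of length 5, mask token id 4.
    gold = torch.randint(5, 100, (2, 5))
    mixed = curriculum_decoder_inputs(gold, mask_id=4, step=30_000, total_steps=100_000)

Feeding these mixed inputs to the shared Transformer decoder exposes the well-trained AT model to NAT-style inputs gradually rather than all at once.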
A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond
Non-autoregressive (NAR) generation, which was first proposed in neural
machine translation (NMT) to speed up inference, has attracted much attention
in both machine learning and natural language processing communities. While NAR
generation can significantly accelerate inference speed for machine
translation, the speedup comes at the cost of sacrificed translation accuracy
compared to its counterpart, auto-regressive (AR) generation. In recent years,
many new models and algorithms have been designed/proposed to bridge the
accuracy gap between NAR generation and AR generation. In this paper, we
conduct a systematic survey with comparisons and discussions of various
non-autoregressive translation (NAT) models from different aspects.
Specifically, we categorize the efforts of NAT into several groups, including
data manipulation, modeling methods, training criterion, decoding algorithms,
and the benefit from pre-trained models. Furthermore, we briefly review other
applications of NAR models beyond machine translation, such as dialogue
generation, text summarization, grammar error correction, semantic parsing,
speech synthesis, and automatic speech recognition. In addition, we also
discuss potential directions for future exploration, including removing the
dependence on knowledge distillation (KD), dynamic length prediction, pre-training for NAR, and wider
applications, etc. We hope this survey can help researchers capture the latest
progress in NAR generation, inspire the design of advanced NAR models and
algorithms, and enable industry practitioners to choose appropriate solutions
for their applications. The web page of this survey is at
\url{https://github.com/LitterBrother-Xiao/Overview-of-Non-autoregressive-Applications}.
Comment: 25 pages, 11 figures, 4 tables
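The accuracy gap the survey analyzes stems from the difference between the two factorizations of the translation probability, written here in standard notation (a reconstruction, not a quotation from the survey):

    % Autoregressive (AR): each target token conditions on all previously
    % generated tokens, so decoding is inherently sequential.
    p_{\text{AR}}(y \mid x) = \prod_{t=1}^{T} p(y_t \mid y_{<t}, x)

    % Non-autoregressive (NAR): given the source x and a predicted target
    % length T, tokens are conditionally independent and decode in parallel.
    p_{\text{NAR}}(y \mid x) = p(T \mid x) \prod_{t=1}^{T} p(y_t \mid x)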
CTC-based Non-autoregressive Speech Translation
Combining end-to-end speech translation (ST) and non-autoregressive (NAR)
generation is promising in language and speech processing, owing to their
advantages of reduced error propagation and low latency. In this paper, we investigate the
potential of connectionist temporal classification (CTC) for non-autoregressive
speech translation (NAST). In particular, we develop a model consisting of two
encoders that are guided by CTC to predict the source and target texts,
respectively. Introducing CTC into NAST on both language sides has obvious
challenges: 1) the conditionally independent generation somewhat breaks the
interdependency among tokens, and 2) the monotonic alignment assumption in
standard CTC does not hold in translation tasks. In response, we develop a
prediction-aware encoding approach and a cross-layer attention approach to
address these issues. We also use curriculum learning to improve convergence of
training. Experiments on the MuST-C ST benchmarks show that our NAST model
achieves an average BLEU score of 29.5 with a 5.67x speed-up, which
is comparable to its autoregressive counterpart and even outperforms the
previous best result by 0.9 BLEU points.
Comment: ACL 2023 Main Conference
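The paper's two-encoder architecture, prediction-aware encoding, and cross-layer attention are not reproduced here, but the CTC supervision they build on is standard; a minimal PyTorch sketch, assuming frame-level encoder states and target-side token ids:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    T, N, C = 50, 2, 1000        # encoder length, batch size, vocab size (blank at id 0)
    encoder_states = torch.randn(T, N, 256)
    proj = nn.Linear(256, C)     # per-position distribution over the target vocabulary

    log_probs = F.log_softmax(proj(encoder_states), dim=-1)   # (T, N, C)
    targets = torch.randint(1, C, (N, 12))                    # reference token ids
    input_lengths = torch.full((N,), T, dtype=torch.long)
    target_lengths = torch.full((N,), 12, dtype=torch.long)

    # CTC marginalizes over all monotonic alignments between the T encoder
    # positions and the target tokens; this monotonicity is exactly the
    # assumption that breaks down for reordering-heavy translation,
    # motivating the paper's cross-layer attention fix.
    ctc = nn.CTCLoss(blank=0, zero_infinity=True)
    loss = ctc(log_probs, targets, input_lengths, target_lengths)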
Selective Knowledge Distillation for Non-Autoregressive Neural Machine Translation
Benefiting from sequence-level knowledge distillation, the
Non-Autoregressive Transformer (NAT) achieves great success in neural machine
translation tasks. However, existing knowledge distillation has side effects,
such as propagating errors from the teacher to NAT students, which may limit
further improvements of NAT models and are rarely discussed in existing
research. In this paper, we introduce selective knowledge distillation, which
uses an NAT evaluator to select NAT-friendly targets that are of high
quality and easy to learn. In addition, we introduce a simple yet effective
progressive distillation method to boost NAT performance. Experimental results on
multiple WMT language directions and several representative NAT models show
that our approach can realize a flexible trade-off between the quality and
complexity of training data for NAT models, achieving strong performances.
Further analysis shows that distilling only 5% of the raw translations can help
an NAT outperform its counterpart trained on raw data by about 2.4 BLEU.
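The abstract does not state the selection rule; a plausible sketch, assuming a hypothetical per-sentence evaluator score (for example, an NAT model's likelihood of the distilled target) and a keep-top-fraction policy:

    def select_nat_friendly(pairs, evaluator_score, keep_fraction=0.05):
        """Keep the distilled targets the NAT evaluator rates highest and
        fall back to the raw references for the rest. `pairs` is a list of
        (source, raw_target, distilled_target) triples; `evaluator_score`
        is a hypothetical callable, not the paper's exact criterion."""
        ranked = sorted(pairs, key=lambda p: evaluator_score(p[0], p[2]), reverse=True)
        cutoff = int(len(ranked) * keep_fraction)
        selected = [(src, distilled) for src, _, distilled in ranked[:cutoff]]
        fallback = [(src, raw) for src, raw, _ in ranked[cutoff:]]
        return selected + fallback

With keep_fraction=0.05 this mirrors the 5% finding above: a small, carefully chosen distilled subset is mixed with otherwise raw data.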
BIOptimus: Pre-training an Optimal Biomedical Language Model with Curriculum Learning for Named Entity Recognition
Using language models (LMs) pre-trained in a self-supervised setting on large
corpora and then fine-tuning for a downstream task has helped to deal with the
problem of limited labeled data for supervised learning tasks such as Named
Entity Recognition (NER). Recent research in biomedical language processing has
offered a number of biomedical LMs pre-trained using different methods and
techniques that advance results on many BioNLP tasks, including NER. However,
there is still no comprehensive comparison showing which pre-training
approaches work best in the biomedical domain. This paper aims to
investigate different pre-training methods, such as pre-training the biomedical
LM from scratch and pre-training it in a continued fashion. We compare existing
methods with our proposed pre-training method of initializing weights for new
tokens by distilling existing weights from the BERT model inside the context
where the tokens were found. The method helps to speed up the pre-training
stage and improve performance on NER. In addition, we compare how the masking
rate, corruption strategy, and masking strategy impact the performance of the
biomedical LM. Finally, using the insights from our experiments, we introduce a
new biomedical LM (BIOptimus), which is pre-trained using Curriculum Learning
(CL) and contextualized weight distillation. Our model sets a new state
of the art on several biomedical Named Entity Recognition (NER) tasks. We
release our code and all pre-trained models.
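A minimal sketch of the contextualized initialization idea, assuming the Hugging Face transformers API; the helper below is illustrative, and the paper's exact distillation procedure may differ:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def contextual_init(token_text: str, contexts: list[str]) -> torch.Tensor:
        """Average BERT's hidden states over the token's subword span in each
        context sentence; the mean vector initializes the new token's embedding."""
        sub_ids = tokenizer(token_text, add_special_tokens=False)["input_ids"]
        vecs = []
        for ctx in contexts:
            enc = tokenizer(ctx, return_tensors="pt")
            with torch.no_grad():
                hidden = model(**enc).last_hidden_state[0]   # (seq_len, dim)
            ids = enc["input_ids"][0].tolist()
            for i in range(len(ids) - len(sub_ids) + 1):     # locate the subword span
                if ids[i:i + len(sub_ids)] == sub_ids:
                    vecs.append(hidden[i:i + len(sub_ids)].mean(dim=0))
                    break
        if not vecs:
            raise ValueError("token not found in any context")
        return torch.stack(vecs).mean(dim=0)

    emb = contextual_init("thrombocytopenia",
                          ["The patient developed thrombocytopenia after chemotherapy."])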
Domain adaptation for neural machine translation
The development of deep learning techniques has allowed Neural Machine Translation (NMT) models to become extremely powerful, given sufficient training data and training time. However, such translation models struggle when translating text of a specific domain. A domain may consist of text on a well-defined topic, or text of unknown provenance with an identifiable vocabulary distribution, or language with some other stylometric feature. While NMT models can achieve good translation performance on domain-specific data via simple tuning on a representative training corpus, such data-centric approaches have negative side-effects. These include over-fitting, brittleness, and 'catastrophic forgetting' of previous training examples.
In this thesis we instead explore more robust approaches to domain adaptation for NMT. We consider the case where a system is adapted to a specified domain of interest, but may also need to accommodate new language, or domain-mismatched sentences. We explore techniques relating to data selection and curriculum, model parameter adaptation procedure, and inference procedure. We show that iterative fine-tuning can achieve strong performance over multiple related domains, and that Elastic Weight Consolidation can be used to mitigate catastrophic forgetting in NMT domain adaptation across multiple sequential domains. We develop a robust variant of Minimum Risk Training which allows more beneficial use of small, highly domain-specific tuning sets than simple cross-entropy fine-tuning, and can mitigate exposure bias resulting from domain over-fitting. We extend Bayesian Interpolation inference schemes to Neural Machine Translation, allowing adaptive weighting of NMT ensembles to translate text from an unknown domain.
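As one concrete illustration of these techniques, here is a minimal sketch of the Elastic Weight Consolidation penalty, assuming a diagonal Fisher estimate computed on the previously learned domain (the thesis's exact formulation may differ in detail):

    import torch

    def ewc_penalty(model, old_params, fisher, lam=0.1):
        """Quadratic EWC regularizer (lam / 2) * sum_i F_i (theta_i - theta*_i)^2:
        parameters important to the previous domain (large F_i) are anchored
        near their old values theta*_i, mitigating catastrophic forgetting.
        `old_params` and `fisher` map parameter names to snapshotted tensors."""
        penalty = torch.zeros(())
        for name, p in model.named_parameters():
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
        return 0.5 * lam * penalty

    # During in-domain fine-tuning:
    #   loss = cross_entropy_loss + ewc_penalty(model, old_params, fisher)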
Finally, we demonstrate the benefit of multi-domain adaptation approaches for other lines of NMT research. We show that NMT systems using multiple forms of data representation can benefit from multi-domain inference approaches. We also demonstrate a series of domain adaptation approaches to mitigating the effects of gender bias in machine translation.