Learning to Parse and Translate Improves Neural Machine Translation
There has been relatively little attention to incorporating linguistic priors into neural machine translation, and much of the previous work was further constrained to linguistic priors on the source side. In this paper, we propose a hybrid model, called NMT+RNNG, that learns to parse and translate by combining a recurrent neural network grammar with attention-based neural machine translation. Our approach encourages the neural machine translation model to incorporate linguistic priors during training and lets it translate on its own afterward. Extensive experiments with four language pairs show the effectiveness of the proposed NMT+RNNG.
Comment: Accepted as a short paper at the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017).
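The joint parse-and-translate objective sketched in this abstract can be pictured as a single decoder whose recurrent state feeds two output heads, one predicting target words and one predicting parser actions. The following PyTorch fragment is a minimal sketch of that idea; the layer sizes, the use of an LSTM cell, and the loss weight `alpha` are illustrative assumptions, not the authors' exact NMT+RNNG architecture.

```python
import torch
import torch.nn as nn

class JointParseTranslateDecoder(nn.Module):
    """Sketch: one shared recurrent state, two prediction heads
    (target words for translation, actions for parsing)."""

    def __init__(self, vocab_size, num_actions, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTMCell(emb_dim, hid_dim)
        self.word_head = nn.Linear(hid_dim, vocab_size)     # translation head
        self.action_head = nn.Linear(hid_dim, num_actions)  # parser-action head

    def forward(self, prev_word, state):
        h, c = self.rnn(self.embed(prev_word), state)
        return self.word_head(h), self.action_head(h), (h, c)

def joint_loss(word_logits, word_gold, act_logits, act_gold, alpha=1.0):
    # Training sums the two cross-entropy terms; at test time only the
    # word head is used, so the model translates on its own afterward.
    ce = nn.functional.cross_entropy
    return ce(word_logits, word_gold) + alpha * ce(act_logits, act_gold)
```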
Domain Adaptation for Neural Networks by Parameter Augmentation
We propose a simple domain adaptation method for neural networks in a supervised setting. Supervised domain adaptation improves generalization performance on the target domain by using the source domain dataset, assuming that both datasets are labeled. Recently, recurrent neural networks have been shown to be successful on a variety of NLP tasks such as caption generation; however, existing domain adaptation techniques are limited to (1) tuning the model parameters on the target dataset after training on the source dataset, or (2) designing the network with dual outputs, one for the source domain and the other for the target domain. Reformulating the idea of the domain adaptation technique proposed by Daumé (2007), we propose a simple domain adaptation method that can be applied to neural networks trained with a cross-entropy loss. On captioning datasets, we show performance improvements over other domain adaptation methods.
Comment: 9 pages. To appear in the first ACL Workshop on Representation Learning for NLP.
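Daumé (2007) originally augmented each feature vector into general, source-specific, and target-specific copies; for a neural network trained with cross-entropy loss, the same idea can be expressed as a shared weight matrix plus a per-domain weight matrix in the output layer. The sketch below illustrates that reading under assumed names (`DomainAugmentedClassifier`, `general`, `domain_specific`) and is not the paper's exact parameter-augmentation formulation.

```python
import torch
import torch.nn as nn

class DomainAugmentedClassifier(nn.Module):
    """Sketch: logits = shared (general) term + domain-specific term,
    mirroring the [general; source; target] feature-copy trick."""

    def __init__(self, hid_dim, num_classes, domains=("source", "target")):
        super().__init__()
        self.general = nn.Linear(hid_dim, num_classes)
        self.domain_specific = nn.ModuleDict(
            {d: nn.Linear(hid_dim, num_classes, bias=False) for d in domains}
        )

    def forward(self, h, domain):
        return self.general(h) + self.domain_specific[domain](h)

# Usage: train with cross-entropy on both domains, routing each batch
# through its own domain-specific parameters.
model = DomainAugmentedClassifier(hid_dim=512, num_classes=10000)
h = torch.randn(8, 512)                  # e.g. RNN states from a captioner
labels = torch.randint(0, 10000, (8,))
loss = nn.functional.cross_entropy(model(h, "target"), labels)
```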