Learning to Parse and Translate Improves Neural Machine Translation
There has been relatively little attention paid to incorporating linguistic
priors into neural machine translation, and much of the previous work was
further constrained to considering priors on the source side only. In this
paper, we propose a hybrid model, called NMT+RNNG, that learns to parse and
translate by integrating a recurrent neural network grammar (RNNG) into
attention-based neural machine translation. Our approach encourages the neural
machine translation model to incorporate a linguistic prior during training
and lets it translate on its own afterward. Extensive experiments with four
language pairs show the effectiveness of the proposed NMT+RNNG.

Comment: Accepted as a short paper at the 55th Annual Meeting of the
Association for Computational Linguistics (ACL 2017)
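The abstract describes the NMT+RNNG architecture only at a high level. The
sketch below is a hypothetical, simplified PyTorch illustration of the
multi-task idea (a shared decoder trained with a translation loss plus an
auxiliary parser-action loss); it omits the actual RNNG stack mechanism, and
all module names, dimensions, and the equal loss weighting are assumptions,
not the authors' implementation.

import torch
import torch.nn as nn

class JointParseTranslate(nn.Module):
    """Toy encoder-decoder with a word head and a parser-action head."""
    def __init__(self, src_vocab, tgt_vocab, n_actions, d=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d)
        self.encoder = nn.GRU(d, d, batch_first=True)
        self.tgt_emb = nn.Embedding(tgt_vocab, d)
        self.decoder = nn.GRU(d, d, batch_first=True)
        self.word_head = nn.Linear(d, tgt_vocab)    # translation predictions
        self.action_head = nn.Linear(d, n_actions)  # parser-action predictions

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.src_emb(src))          # encode source
        out, _ = self.decoder(self.tgt_emb(tgt_in), h)  # decode target
        return self.word_head(out), self.action_head(out)

model = JointParseTranslate(src_vocab=1000, tgt_vocab=1000, n_actions=3)
ce = nn.CrossEntropyLoss()
src = torch.randint(0, 1000, (2, 7))      # toy source batch
tgt_in = torch.randint(0, 1000, (2, 5))   # shifted target inputs
tgt_out = torch.randint(0, 1000, (2, 5))  # target words to predict
acts_gold = torch.randint(0, 3, (2, 5))   # toy gold parser actions

words, acts = model(src, tgt_in)
# Joint objective: translation loss plus parsing loss (equal weights assumed).
# At test time only the word head would be used, matching the abstract's point
# that the model translates on its own after training with the syntactic signal.
loss = ce(words.reshape(-1, 1000), tgt_out.reshape(-1)) + \
       ce(acts.reshape(-1, 3), acts_gold.reshape(-1))
loss.backward()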
Does Multimodality Help Human and Machine for Translation and Image Captioning?
This paper presents the systems developed by LIUM and CVC for the WMT16
Multimodal Machine Translation challenge. We explored various comparative
methods, namely phrase-based systems and attentional recurrent neural network
models trained using monomodal or multimodal data. We also performed a human
evaluation in order to estimate the usefulness of multimodal data for human
translation and image description generation. Our systems obtained the best
results for both tasks according to the automatic evaluation metrics BLEU and
METEOR.

Comment: 7 pages, 2 figures, v4: small clarification in section 4 title and
content
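The abstract does not detail how image features enter the attentional models.
As one hypothetical illustration (not the actual LIUM/CVC systems), the
PyTorch sketch below lets a decoder query attend jointly over source-text
states and projected image-region features; the dimensions and the
concatenation scheme are assumptions for the sake of a runnable example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalAttention(nn.Module):
    """Attends over source-text states and image-region features together."""
    def __init__(self, d_img=512, d=256):
        super().__init__()
        self.img_proj = nn.Linear(d_img, d)  # map image regions into text space
        self.query = nn.Linear(d, d)         # project the decoder state

    def forward(self, dec_state, text_states, img_feats):
        # dec_state: (B, d); text_states: (B, T, d); img_feats: (B, R, d_img)
        ctx = torch.cat([text_states, self.img_proj(img_feats)], dim=1)
        scores = torch.bmm(ctx, self.query(dec_state).unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=1)  # one distribution over T+R slots
        return torch.bmm(weights.unsqueeze(1), ctx).squeeze(1)  # (B, d) context

attn = MultimodalAttention()
context = attn(torch.randn(2, 256),        # decoder state
               torch.randn(2, 10, 256),    # 10 source-text states
               torch.randn(2, 196, 512))   # 196 image-region features
print(context.shape)  # torch.Size([2, 256])

Sharing a single softmax over both modalities is only one design choice; the
monomodal baselines in the abstract would simply drop the image branch.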