Learning Semantic Representations for the Phrase Translation Model
This paper presents a novel semantic-based phrase translation model. A pair
of source and target phrases is projected into continuous-valued vector
representations in a low-dimensional latent semantic space, where the
translation score is computed as the distance between the pair in this new
space. The projection is performed by a multi-layer neural network whose
weights are learned on parallel training data. The learning aims to
directly optimize the quality of end-to-end machine translation results.
Experimental evaluation has been performed on two Europarl translation tasks,
English-French and German-English. The results show that the new semantic-based
phrase translation model significantly improves the performance of a
state-of-the-art phrase-based statistical machine translation system, leading
to a gain of 0.7-1.0 BLEU points.
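The scoring idea in the abstract above can be sketched as follows: project each phrase vector through a small neural network into a shared latent space and score the pair by (negative) distance there. All names, dimensions, and weights below are hypothetical illustrations, not the paper's actual architecture; in the paper the weights are learned end-to-end on parallel data.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_project(x, W1, b1, W2, b2):
    # Two-layer projection into the shared latent semantic space
    # (random weights here; the paper learns them on parallel data).
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

# Hypothetical dimensions: 50-d phrase input vectors -> 20-d latent space.
d_in, d_hid, d_out = 50, 30, 20
W1s, W2s = rng.standard_normal((d_in, d_hid)) * 0.1, rng.standard_normal((d_hid, d_out)) * 0.1
W1t, W2t = rng.standard_normal((d_in, d_hid)) * 0.1, rng.standard_normal((d_hid, d_out)) * 0.1
b1, b2 = np.zeros(d_hid), np.zeros(d_out)

def translation_score(src_vec, tgt_vec):
    # Score a phrase pair by negative Euclidean distance in the latent space:
    # closer projections mean a higher (less negative) translation score.
    ys = mlp_project(src_vec, W1s, b1, W2s, b2)
    yt = mlp_project(tgt_vec, W1t, b1, W2t, b2)
    return -np.linalg.norm(ys - yt)

src = rng.standard_normal(d_in)
tgt = rng.standard_normal(d_in)
score = translation_score(src, tgt)
```

With learned weights, this score would be plugged into the phrase-based decoder as an additional feature, which is how the end-to-end BLEU objective reaches the projection network.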
Are BLEU and Meaning Representation in Opposition?
One possible way of obtaining continuous-space sentence representations
is by training neural machine translation (NMT) systems. The recent attention
mechanism, however, removes the single point in the neural network from which the
source sentence representation can be extracted. We propose several variations
of the attentive NMT architecture bringing this meeting point back. Empirical
evaluation suggests that the better the translation quality, the worse the
learned sentence representations serve in a wide range of classification and
similarity tasks. Comment: ACL 2018; 10 pages + 2-page supplementary
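To make the "meeting point" concrete: an attentive encoder emits one state per token, so a fixed-size sentence vector must be obtained by collapsing them somewhere. The sketch below uses mean pooling over encoder states as one simple variant; it is an illustrative assumption, not the specific architectures the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(1)

def sentence_representation(encoder_states):
    # Collapse per-token encoder states (shape: T x d) into one fixed-size
    # vector by mean pooling. This is one simple way to restore a single
    # extraction point that attention otherwise removes.
    return encoder_states.mean(axis=0)

# Hypothetical encoder output: a 7-token sentence with 512-d hidden states.
states = rng.standard_normal((7, 512))
vec = sentence_representation(states)
```

Such a vector can then be fed to downstream classification or similarity probes, which is how the paper evaluates how well the learned representations transfer.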