Sequence to Sequence Mixture Model for Diverse Machine Translation
Sequence to sequence (SEQ2SEQ) models often lack diversity in their generated
translations. This can be attributed to the limitation of SEQ2SEQ models in
capturing lexical and syntactic variations in a parallel corpus that arise from
different styles, genres, topics, or the inherent ambiguity of the translation process. In
this paper, we develop a novel sequence to sequence mixture (S2SMIX) model that
improves both translation diversity and quality by adopting a committee of
specialized translation models rather than a single translation model. Each
mixture component selects its own training dataset via optimization of the
marginal log-likelihood, which leads to a soft clustering of the parallel
corpus. Experiments on four language pairs demonstrate the superiority of our
mixture model compared to a SEQ2SEQ baseline with standard or diversity-boosted
beam search. Our mixture model adds a negligible number of extra parameters and incurs
no additional computation cost during decoding.
Comment: 11 pages, 5 figures, accepted to CoNLL 2018
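As a rough illustration of the soft clustering described in the abstract, the sketch below computes the marginal log-likelihood over mixture components and the resulting per-pair component responsibilities. The per-component log-probabilities are random stand-ins for the scores a specialized SEQ2SEQ decoder would assign; this is not the paper's actual model or training code.

```python
import numpy as np

# Hypothetical toy setup: 3 mixture components scoring 5 sentence pairs.
# In the paper each component is a full SEQ2SEQ decoder; here random
# values stand in for log p(y | x, z).
rng = np.random.default_rng(0)
num_pairs, num_components = 5, 3
log_p_y_given_xz = rng.normal(loc=-20.0, scale=3.0, size=(num_pairs, num_components))
log_prior = np.log(np.full(num_components, 1.0 / num_components))  # uniform p(z)

# Marginal log-likelihood per pair: log sum_z p(z) p(y | x, z),
# computed stably via log-sum-exp.
joint = log_p_y_given_xz + log_prior              # log p(y, z | x)
marginal = np.logaddexp.reduce(joint, axis=1)     # log p(y | x)

# Soft clustering: responsibility p(z | x, y) of each component for each pair.
responsibilities = np.exp(joint - marginal[:, None])

print("marginal log-likelihood per pair:", marginal.round(3))
print("component responsibilities:\n", responsibilities.round(3))
```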
Corpus Augmentation by Sentence Segmentation for Low-Resource Neural Machine Translation
Neural Machine Translation (NMT) has been shown to achieve impressive
results. The quality of NMT output depends strongly on the size and quality
of the parallel training corpora. Nevertheless, for many language pairs, no
rich-resource parallel corpora exist. In this paper, we propose a corpus
augmentation method that segments long sentences in a corpus and uses
back-translation to generate pseudo-parallel sentence pairs. Experimental
results on Japanese-Chinese and Chinese-Japanese translation with the
Japanese-Chinese scientific paper excerpt corpus (ASPEC-JC) show that the
method improves translation performance.
Comment: 4 pages. The version before the Applied Sciences publication.
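A minimal sketch of the augmentation pipeline described above: long target-side sentences are split into segments, and each segment is back-translated to form a pseudo-parallel pair. The punctuation-based splitting rule and the `back_translate` stand-in are illustrative assumptions, not the paper's implementation.

```python
import re
from typing import Callable, List, Tuple

def segment_sentence(sentence: str, max_len: int = 20) -> List[str]:
    # Keep short sentences whole; split long ones at clause punctuation.
    # (Assumed splitting rule for illustration only.)
    if len(sentence.split()) <= max_len:
        return [sentence]
    parts = re.split(r"(?<=[,;:])\s+", sentence)
    return [p for p in parts if p]

def augment_corpus(
    target_sentences: List[str],
    back_translate: Callable[[str], str],
) -> List[Tuple[str, str]]:
    # Pair each target segment with its back-translation into the source
    # language, yielding pseudo-parallel sentence pairs.
    pseudo_pairs = []
    for tgt in target_sentences:
        for seg in segment_sentence(tgt):
            pseudo_src = back_translate(seg)  # placeholder for an NMT system
            pseudo_pairs.append((pseudo_src, seg))
    return pseudo_pairs

if __name__ == "__main__":
    # Stand-in for a target-to-source model trained on the original corpus.
    fake_back_translate = lambda s: f"<bt:{s}>"
    corpus = [
        "Short sentence.",
        "A much longer sentence, with several clauses; it will be segmented, "
        "and each segment back-translated to form pseudo-parallel pairs.",
    ]
    for src, tgt in augment_corpus(corpus, fake_back_translate):
        print(src, "|||", tgt)
```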