From Bilingual to Multilingual Neural Machine Translation by Incremental Training

Abstract

Current multilingual neural machine translation approaches rely on task-specific models, so adding one more language requires retraining the whole system. In this work, we propose a new training schedule that allows the system to scale to more languages without modifying previously trained components. It is based on joint training and language-independent encoder/decoder modules, which also allow for zero-shot translation. This work in progress shows results close to the state of the art on the WMT task.

Comment: Accepted paper at ACL 2019 Student Research Workshop. arXiv admin note: substantial text overlap with arXiv:1905.0683
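The modular idea behind the abstract can be sketched as follows. This is a hypothetical illustration (all names are invented, not the authors' code): each language gets its own encoder and decoder mapping to and from a shared intermediate representation; when a new language is added, only its modules are trainable while previous ones stay frozen, and zero-shot translation comes from composing any encoder with any decoder.

```python
# Illustrative sketch of incremental, modular multilingual NMT.
# The "modules" here are plain functions standing in for neural networks;
# in a real system they would be trained encoder/decoder networks.

class ModularNMT:
    def __init__(self):
        self.encoders = {}   # language -> encoder into the shared space
        self.decoders = {}   # language -> decoder out of the shared space
        self.frozen = set()  # languages whose modules no longer get updates

    def add_language(self, lang, encoder, decoder):
        """Register modules for a new language; all previously added
        languages are frozen, so earlier components are left unmodified."""
        self.frozen.update(self.encoders)
        self.encoders[lang] = encoder
        self.decoders[lang] = decoder

    def trainable_languages(self):
        return [l for l in self.encoders if l not in self.frozen]

    def translate(self, src, tgt, sentence):
        """Compose any encoder with any decoder. Directions never seen
        jointly in training (zero-shot) work because both sides share
        one intermediate representation."""
        shared = self.encoders[src](sentence)
        return self.decoders[tgt](shared)


system = ModularNMT()
system.add_language("en", lambda s: ("repr", s), lambda r: f"en:{r[1]}")
system.add_language("de", lambda s: ("repr", s), lambda r: f"de:{r[1]}")
system.add_language("fr", lambda s: ("repr", s), lambda r: f"fr:{r[1]}")

# Only the newest language's modules would receive gradient updates:
print(system.trainable_languages())           # -> ['fr']
# de->fr was never trained as a pair, yet it is composable (zero-shot):
print(system.translate("de", "fr", "hallo"))  # -> 'fr:hallo'
```

The design choice this illustrates is that freezing earlier modules avoids catastrophic forgetting and the cost of full retraining when a language is added.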