From bilingual to multilingual neural machine translation by incremental training

Abstract

Multilingual Neural Machine Translation approaches are based on the use of task-specific models, so adding one more language requires retraining the whole system. In this work, we propose a new training schedule, based on joint training and language-independent encoder/decoder modules, that allows the system to scale to more languages without modifying the previously trained components, and that enables zero-shot translation. This work in progress achieves results close to the state of the art on the WMT task.

This work is supported in part by a Google Faculty Research Award. It is also supported in part by the Spanish Ministerio de Economía y Competitividad, the European Regional Development Fund and the Agencia Estatal de Investigación, through the postdoctoral senior grant Ramón y Cajal, contract TEC2015-69266-P (MINECO/FEDER, EU) and contract PCIN-2017-079 (AEI/MINECO).

Peer reviewed. Postprint (published version).
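The modular scheme the abstract describes, with per-language encoders and decoders meeting in a shared representation so that new languages can be added without retraining existing components, could be sketched as follows. All class and method names here are illustrative assumptions, not the paper's actual implementation:

```python
# Illustrative sketch only -- not the paper's implementation.
# Each language has its own encoder and decoder sharing a common
# intermediate representation, so any encoder can be paired with any
# decoder, including pairs never trained jointly (zero-shot).

class Encoder:
    """Toy encoder: maps source tokens into a shared representation space."""
    def __init__(self, lang):
        self.lang = lang

    def encode(self, tokens):
        # Placeholder for a neural encoder; here we just wrap tokens
        # in a language-independent representation.
        return [("repr", t) for t in tokens]

class Decoder:
    """Toy decoder: maps the shared representation into target tokens."""
    def __init__(self, lang):
        self.lang = lang

    def decode(self, representation):
        return [f"{self.lang}:{t}" for _, t in representation]

class MultilingualNMT:
    """Holds per-language modules; a new language is added incrementally
    without modifying (retraining) the previously added modules."""
    def __init__(self):
        self.encoders, self.decoders = {}, {}

    def add_language(self, lang):
        # In the incremental setting, only these new modules would be
        # trained, with all earlier modules kept frozen.
        self.encoders[lang] = Encoder(lang)
        self.decoders[lang] = Decoder(lang)

    def translate(self, src, tgt, tokens):
        # Any encoder/decoder pairing works, enabling zero-shot directions.
        return self.decoders[tgt].decode(self.encoders[src].encode(tokens))

system = MultilingualNMT()
system.add_language("en")
system.add_language("de")
system.add_language("fr")  # added later; en/de modules stay untouched
print(system.translate("fr", "de", ["bonjour"]))  # a direction never trained as a pair
```

The key design point is that the shared intermediate representation decouples source and target modules, which is what allows both incremental growth and zero-shot pairings.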
