Direct Neural Machine Translation with Task-level Mixture of Experts models
Direct neural machine translation (direct NMT) is a type of NMT system that
translates text between two non-English languages. Direct NMT systems often
face limitations due to the scarcity of parallel data between non-English
language pairs. Several approaches have been proposed to address this
limitation, such as multilingual NMT and pivot NMT (translation between two
languages via English). Task-level Mixture-of-Experts models (Task-level MoE),
an inference-efficient variant of Transformer-based models, have shown
promising NMT performance for a large number of language pairs. In Task-level
MoE, different language groups can use different routing strategies to optimize
cross-lingual learning and inference speed. In this work, we examine Task-level
MoE's applicability in direct NMT and propose a series of high-performing
training and evaluation configurations, through which Task-level MoE-based
direct NMT systems outperform bilingual and pivot-based models for a large
number of low- and high-resource direct pairs and translation directions. Our
Task-level MoE model with 16 experts outperforms both bilingual and pivot NMT
models for 7 language pairs, while pivot-based models still perform better in
9 pairs and directions.
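
To make the task-level routing idea concrete, here is a minimal sketch in Python/NumPy. The names (`task_to_expert`, `moe_ffn`) and dimensions are illustrative assumptions, not the paper's actual implementation: the point is only that a task such as a language pair selects its expert once, ahead of time, so inference runs a single dense feed-forward block rather than per-token gating over all experts.

```python
import numpy as np

# Hypothetical sketch of task-level routing; shapes and names are
# assumptions for illustration, not the paper's implementation.

rng = np.random.default_rng(0)
D_MODEL, D_FF, N_EXPERTS = 16, 32, 4

# Each expert is an independent feed-forward block (W_in, W_out).
experts = [
    (rng.standard_normal((D_MODEL, D_FF)) * 0.02,
     rng.standard_normal((D_FF, D_MODEL)) * 0.02)
    for _ in range(N_EXPERTS)
]

# Task-level routing table: every task (here, a direct language pair)
# is mapped to one expert ahead of time, so no per-token gating runs
# at inference and only one expert's weights are touched per request.
task_to_expert = {"de-fr": 0, "de-zh": 1, "hi-fr": 2, "ru-zh": 3}

def relu(x):
    return np.maximum(x, 0.0)

def moe_ffn(x, task):
    """Apply only the expert assigned to `task`; the interface is
    otherwise identical to a dense Transformer FFN layer."""
    w_in, w_out = experts[task_to_expert[task]]
    return relu(x @ w_in) @ w_out

# One batch of token representations for a German->French request:
tokens = rng.standard_normal((8, D_MODEL))
out = moe_ffn(tokens, "de-fr")
print(out.shape)  # (8, 16): same output shape as a dense FFN
```

Because the routing decision is made per task rather than per token, the routing granularity (per language pair, per language group, or per direction) becomes a configuration choice, which is the kind of design space the training and evaluation configurations above explore.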