Zero-shot cross-lingual transfer is the setting in which a multilingual model is
trained to perform a task in one language and then applied to another language.
Although zero-shot cross-lingual transfer has achieved success on various
classification tasks, its performance on natural language generation tasks falls
short in quality, and the model sometimes produces output in the wrong language.
In our study, we show that the fine-tuning process learns language-invariant
representations, which are beneficial for classification tasks but harmful for
generation tasks. Motivated by this, we propose a simple method that regularizes
the model against learning language-invariant representations, along with a
method for selecting model checkpoints without a development set in the target
language, both of which improve generation quality. Experiments on three semantically
diverse generation tasks show that our method reduces the accidental
translation problem by 68% and improves the ROUGE-L score by 1.5 points on average.

Comment: Findings of ACL 202