Large-scale pretrained language models~(LLMs), such as ChatGPT and GPT-4, have
shown strong abilities in multilingual translation without being explicitly
trained on parallel corpora. It remains unclear how LLMs acquire the
ability to carry out translation instructions across different languages. In this
paper, we present a detailed analysis by finetuning a multilingual pretrained
language model, XGLM-7B, to perform multilingual translation following given
instructions. First, we show that multilingual LLMs have stronger
translation abilities than previously demonstrated. For a given language
pair, performance depends on both the language family and the amount of
data used in the pretraining phase. Second, we find that LLMs' ability to
carry out translation instructions relies on understanding the translation
instructions and on the alignment among different languages. With proper
enhancement, LLMs can perform the translation task well even for
language pairs unseen during the instruction tuning phase.