We formulate and test a technique to use Emergent Communication (EC) with a
pretrained multilingual model to improve on modern Unsupervised NMT systems,
especially for low-resource languages. It has been argued that the paradigm
currently dominant in NLP, pretraining on text-only corpora, will not yield
robust natural language understanding systems, highlighting the need for
grounded, goal-oriented, and interactive language learning. In our
approach, we embed a modern multilingual model (mBART, Liu et al. 2020) into
an EC image-reference game, in which the model is incentivized to use
multilingual generations to accomplish a vision-grounded task, with the
hypothesis that this will align multiple languages to a shared task space. We
present two variants of EC Fine-Tuning (Steinert-Threlkeld et al. 2022), one
of which outperforms a backtranslation-based baseline in 6/8 translation
settings and proves especially beneficial for the very low-resource languages
of Nepali and Sinhala.
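
Concretely, one round of the image-reference game can be sketched as below. This is a minimal, hypothetical illustration, not the paper's implementation: the names image_encoder, sender, receiver, and reference_game_step are assumptions, the linear layers stand in for the mBART-based sender and receiver, and the natural-language message is collapsed to a single embedding for brevity.

    import torch
    import torch.nn.functional as F

    # Stand-ins for the pretrained components; in the actual method both
    # agents are initialized from mBART and communicate in token sequences.
    EMB = 64
    image_encoder = torch.nn.Linear(128, EMB)  # image features -> shared space
    sender = torch.nn.Linear(EMB, EMB)         # target embedding -> "message"
    receiver = torch.nn.Linear(EMB, EMB)       # "message" -> matching space

    def reference_game_step(images, target_idx):
        """One round: the sender describes images[target_idx]; the receiver
        must pick it out of all candidates. The task loss rewards messages
        that identify the target, grounding generation in the vision task."""
        feats = image_encoder(images)            # (n_candidates, EMB)
        message = sender(feats[target_idx])      # message about the target
        scores = receiver(message) @ feats.T     # similarity to each candidate
        return F.cross_entropy(scores.unsqueeze(0),
                               torch.tensor([target_idx]))

    images = torch.randn(4, 128)  # 1 target + 3 distractors (random features)
    loss = reference_game_step(images, target_idx=0)
    loss.backward()               # gradients fine-tune sender and receiver

Because both agents share the pretrained multilingual model, playing this game in several languages pushes their generations toward the shared, vision-grounded task space that the hypothesis above describes.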