Expressing universal semantics common to all languages is helpful in
understanding the meanings of complex and culture-specific sentences. The
research theme underlying this goal is to learn universal representations across
languages using massive parallel corpora. However, due to the sparsity and
scarcity of parallel data, learning authentic ``universals'' for any two
languages remains a significant challenge. In
this paper, we propose EMMA-X: an EM-like Multilingual pre-training Algorithm,
to learn (X)Cross-lingual universals with the aid of abundant multilingual
non-parallel data. EMMA-X unifies the cross-lingual representation learning
task and an extra semantic relation prediction task within an EM framework.
Both the semantic relation classifier and the cross-lingual sentence encoder
approximate the semantic relation between two sentences and supervise each other
until convergence. To evaluate EMMA-X, we conduct experiments on XRETE, a newly
introduced benchmark containing 12 widely studied cross-lingual tasks that
fully depend on sentence-level representations. Results reveal that EMMA-X
achieves state-of-the-art performance. A further geometric analysis of the
learned representation space, measured against three requirements, demonstrates
the superiority of EMMA-X over advanced models.

Comment: Accepted by NeurIPS 2023
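
For intuition, the mutual supervision between the semantic relation classifier and the cross-lingual sentence encoder can be sketched as an EM-like training loop. The sketch below is a minimal PyTorch illustration, not the authors' implementation: the number of relation classes, the similarity-bin rule used to turn encoder similarities into relation probabilities, and all names (RelationClassifier, em_step, bin_centres) are assumptions made only for this example.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RelationClassifier(nn.Module):
        # Hypothetical classifier head: predicts a distribution over K
        # semantic-relation classes from the concatenated pair embeddings.
        def __init__(self, dim, num_relations=4):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                     nn.Linear(dim, num_relations))

        def forward(self, h_x, h_y):
            return self.mlp(torch.cat([h_x, h_y], dim=-1))  # relation logits

    def encoder_relation_logits(h_x, h_y, bin_centres, temperature=0.1):
        # Assumed rule (for illustration only): map the encoder's cosine
        # similarity to relation logits by proximity to fixed similarity
        # "bin centres", one per relation class.
        sim = F.cosine_similarity(h_x, h_y, dim=-1)                  # (batch,)
        dist = (sim.unsqueeze(-1) - bin_centres.unsqueeze(0)).abs()  # (batch, K)
        return -dist / temperature

    def em_step(encoder, classifier, batch_x, batch_y,
                opt_enc, opt_cls, bin_centres):
        # One EM-like round: each module produces soft relation labels
        # (E-like step), then the other is updated to match them (M-like step).
        h_x, h_y = encoder(batch_x), encoder(batch_y)
        with torch.no_grad():
            target_from_cls = F.softmax(classifier(h_x, h_y), dim=-1)
            target_from_enc = F.softmax(
                encoder_relation_logits(h_x, h_y, bin_centres), dim=-1)

        # Update the encoder toward the classifier's posterior.
        enc_logp = F.log_softmax(
            encoder_relation_logits(h_x, h_y, bin_centres), dim=-1)
        loss_enc = F.kl_div(enc_logp, target_from_cls, reduction="batchmean")
        opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()

        # Update the classifier toward the encoder's posterior
        # (embeddings detached so only the classifier receives gradients).
        h_x, h_y = encoder(batch_x), encoder(batch_y)
        cls_logp = F.log_softmax(classifier(h_x.detach(), h_y.detach()), dim=-1)
        loss_cls = F.kl_div(cls_logp, target_from_enc, reduction="batchmean")
        opt_cls.zero_grad(); loss_cls.backward(); opt_cls.step()
        return loss_enc.item(), loss_cls.item()

In each round, one module's frozen prediction serves as the soft target for the other, mirroring the abstract's description of the two components supervising each other until convergence.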