Bilingual distributed word representations from document-aligned comparable data
We propose a new model for learning bilingual word representations from non-parallel document-aligned data. Following recent advances in word representation learning, our model learns dense real-valued word vectors, that is, bilingual word embeddings (BWEs). Unlike prior work on inducing BWEs, which relied heavily on parallel sentence-aligned corpora and/or readily available translation resources such as dictionaries, this article shows that BWEs may be learned solely on the basis of document-aligned comparable data, without any additional lexical resources or syntactic information. We compare our approach with previous state-of-the-art models for learning bilingual word representations from comparable data that rely on the framework of multilingual probabilistic topic modeling (MuPTM), as well as with distributional local context-counting models. We demonstrate the utility of the induced BWEs in two semantic tasks: (1) bilingual lexicon extraction and (2) suggesting word translations in context for polysemous words. Our simple yet effective BWE-based models significantly outperform the MuPTM-based and context-counting representation models from comparable data, as well as prior BWE-based models, and achieve the best reported results on both tasks for all three tested language pairs.

This work was done while Ivan Vulić was a postdoctoral researcher at the Department of Computer Science, KU Leuven, supported by the PDM Kort fellowship (PDMK/14/117). The work was also supported by the SCATE project (IWT-SBO 130041) and the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (648909).
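The bilingual lexicon extraction task mentioned above can be sketched as cosine nearest-neighbor retrieval in the shared embedding space. This is a minimal illustration of the task, not the paper's actual model; the function name and toy vectors are hypothetical, and it assumes source and target embeddings are rows of numpy arrays that already live in one shared bilingual space (as induced BWEs do):

```python
import numpy as np

def extract_lexicon(src_emb, tgt_emb, src_words, tgt_words):
    """Map each source word to its cosine nearest neighbor in the target space.

    src_emb, tgt_emb: (n_src, d) and (n_tgt, d) arrays of embeddings
    assumed to share one bilingual space.
    """
    # Normalize rows so dot products equal cosine similarities.
    src_n = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt_n = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src_n @ tgt_n.T          # pairwise cosine similarities
    best = sims.argmax(axis=1)      # nearest target index per source word
    return {src_words[i]: tgt_words[j] for i, j in enumerate(best)}
```

On well-aligned BWEs, the extracted pairs approximate a translation dictionary; evaluation typically scores this top-1 retrieval against a gold lexicon.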
Analyzing the Limitations of Cross-lingual Word Embedding Mappings
Recent research in cross-lingual word embeddings has almost exclusively
focused on offline methods, which independently train word embeddings in
different languages and map them to a shared space through linear
transformations. While several authors have questioned the underlying
isomorphism assumption, which states that word embeddings in different
languages have approximately the same structure, it is not clear whether this
is an inherent limitation of mapping approaches or a more general issue when
learning cross-lingual embeddings. So as to answer this question, we experiment
with parallel corpora, which allows us to compare offline mapping to an
extension of skip-gram that jointly learns both embedding spaces. We observe
that, under these ideal conditions, joint learning yields more isomorphic
embeddings, is less sensitive to hubness, and obtains stronger results in
bilingual lexicon induction. We thus conclude that current mapping methods do
have strong limitations, calling for further research to jointly learn
cross-lingual embeddings with a weaker cross-lingual signal.
Comment: ACL 201
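The linear transformation used by the offline methods described above is commonly solved as an orthogonal Procrustes problem over a seed dictionary of paired embeddings. The sketch below shows that generic technique, not the exact setup of this paper; the function name is hypothetical:

```python
import numpy as np

def learn_orthogonal_map(X, Y):
    """Solve min_W ||X W - Y||_F subject to W orthogonal (Procrustes).

    X, Y: (n, d) arrays of source/target embeddings paired via a
    seed dictionary; returns the (d, d) orthogonal mapping W = U V^T,
    where X^T Y = U S V^T is a singular value decomposition.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt
```

After learning W, translations are retrieved by nearest-neighbor search of the mapped vectors X @ W in the target space; the hubness problem the abstract refers to shows up at this retrieval step, when a few target vectors become the nearest neighbors of disproportionately many mapped source vectors.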