Reusing Weights in Subword-aware Neural Language Models
We propose several ways of reusing subword embeddings and other weights in
subword-aware neural language models. The proposed techniques do not benefit a
competitive character-aware model, but some of them improve the performance of
syllable- and morpheme-aware models while showing significant reductions in
model sizes. We discover a simple hands-on principle: in a multi-layer input
embedding model, layers should be tied consecutively bottom-up if reused at
output. Our best morpheme-aware model with properly reused weights beats the
competitive word-level model by a large margin across multiple languages and
has 20%-87% fewer parameters.
Comment: accepted to NAACL 2018
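As an illustration of the kind of weight reuse described above, here is a minimal PyTorch sketch (not the authors' code; the class name, dimensions, and the composition function are assumptions). The subword embedding matrix and the composition layer directly above it are tied consecutively, bottom-up, between the input and the output side of the language model:

    import torch
    import torch.nn as nn

    class TiedSubwordLM(nn.Module):
        """Toy subword-aware LM that reuses its input layers at the output."""
        def __init__(self, n_subwords, subword_dim, hidden_dim, subwords_per_word):
            super().__init__()
            # Input side: subword embeddings composed into a word representation.
            self.subword_emb = nn.Embedding(n_subwords, subword_dim)               # lowest layer
            self.compose = nn.Linear(subwords_per_word * subword_dim, hidden_dim)  # layer above it
            self.rnn = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)

        def tied_output_embeddings(self, vocab_subword_ids):
            # Reuse the *same* subword embeddings and composition layer,
            # tied consecutively bottom-up, to build output word embeddings.
            v, k = vocab_subword_ids.shape
            subs = self.subword_emb(vocab_subword_ids).view(v, -1)
            return self.compose(subs)                                              # (V, hidden_dim)

        def forward(self, input_subword_ids, vocab_subword_ids):
            # input_subword_ids: (batch, seq_len, subwords_per_word)
            b, t, k = input_subword_ids.shape
            words = self.compose(self.subword_emb(input_subword_ids).view(b, t, -1))
            h, _ = self.rnn(words)                                                 # (batch, seq, hidden)
            return h @ self.tied_output_embeddings(vocab_subword_ids).t()          # logits over vocab

In this toy model only two input layers exist, so tying both is trivially "consecutive bottom-up"; with a deeper composition stack the principle would correspond to sharing the embedding matrix first and then each successive layer above it.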
Learning Contextualised Cross-lingual Word Embeddings and Alignments for Extremely Low-Resource Languages Using Parallel Corpora
We propose a new approach for learning contextualised cross-lingual word
embeddings based on a small parallel corpus (e.g. a few hundred sentence
pairs). Our method obtains word embeddings via an LSTM encoder-decoder model
that simultaneously translates and reconstructs an input sentence. Through
sharing model parameters among different languages, our model jointly trains
the word embeddings in a common cross-lingual space. We also propose to combine
word and subword embeddings to make use of orthographic similarities across
different languages. We base our experiments on real-world data from endangered
languages, namely Yongning Na, Shipibo-Konibo, and Griko. Our experiments on
bilingual lexicon induction and word alignment tasks show that our model
outperforms existing methods by a large margin for most language pairs. These
results demonstrate that, contrary to common belief, an encoder-decoder
translation model is beneficial for learning cross-lingual representations even
in extremely low-resource conditions. Furthermore, our model also works well in
high-resource conditions, achieving state-of-the-art performance on a
German-English word-alignment task.
Comment: 16 pages, accepted at the 1st Workshop on Multilingual Representation Learning
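A minimal PyTorch sketch of this setup (not the paper's implementation; the class name, dimensions, and the choice of addition for combining word and subword vectors are assumptions): one LSTM encoder-decoder with parameters shared across languages, which reconstructs when the target equals the source and translates otherwise, with word and subword embeddings combined at the input:

    import torch
    import torch.nn as nn

    class SharedSeq2Seq(nn.Module):
        """Toy encoder-decoder that reconstructs and translates with one shared parameter set."""
        def __init__(self, vocab_size, subword_vocab_size, emb_dim, hidden_dim):
            super().__init__()
            # One joint vocabulary and one embedding table for all languages,
            # so the learned representations live in a common cross-lingual space.
            self.word_emb = nn.Embedding(vocab_size, emb_dim)
            self.subword_emb = nn.Embedding(subword_vocab_size, emb_dim)
            self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def embed(self, word_ids, subword_ids):
            # word_ids: (batch, seq); subword_ids: (batch, seq, n_subwords_per_token)
            # Combining word and subword vectors (here by addition) lets the model
            # exploit orthographic similarity between related languages.
            return self.word_emb(word_ids) + self.subword_emb(subword_ids).mean(dim=2)

        def forward(self, src_words, src_subs, tgt_words, tgt_subs):
            # The same encoder/decoder pair serves both objectives: reconstruction
            # when the target is the source sentence, translation otherwise.
            _, state = self.encoder(self.embed(src_words, src_subs))
            dec_out, _ = self.decoder(self.embed(tgt_words, tgt_subs), state)
            return self.out(dec_out)   # per-token logits for the target sequence

Because every language passes through the same parameters and embedding tables, the hidden states produced by the encoder can serve as contextualised cross-lingual word representations, which is the property the abstract's lexicon-induction and alignment experiments rely on.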