219 research outputs found

    Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)


    Meta-Embedding as Auxiliary Task Regularization.

    Word embeddings have been shown to benefit from ensembling several word embedding sources, often carried out using straightforward mathematical operations over the set of word vectors. More recently, self-supervised learning has been used to find a lower-dimensional representation, similar in size to the individual word embeddings within the ensemble. However, these methods do not use the available manually labeled datasets that are often used solely for the purpose of evaluation. We propose to reconstruct an ensemble of word embeddings as an auxiliary task that regularises a main task, while both tasks share the learned meta-embedding layer. We carry out intrinsic evaluation (6 word similarity datasets and 3 analogy datasets) and extrinsic evaluation (4 downstream tasks). For intrinsic task evaluation, supervision comes from various labeled word similarity datasets. Our experimental results show that performance improves on all word similarity datasets when compared to self-supervised learning methods, with a mean increase of 11.33 in Spearman correlation. Specifically, the proposed method shows the best performance on 4 out of 6 word similarity datasets when using a cosine reconstruction loss and Brier's word similarity loss. Moreover, improvements are also made when performing word meta-embedding reconstruction in sequence tagging and sentence meta-embedding for sentence classification.
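
    The setup described in this abstract can be sketched as a shared meta-embedding layer feeding both a main task head and an auxiliary head that reconstructs the concatenated source embeddings with a cosine loss. The following is a minimal illustrative sketch in PyTorch; the module names, the classification main task, and the loss weighting are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a shared meta-embedding layer with a main task head
# and an auxiliary reconstruction head over the source embedding ensemble.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaEmbeddingAuxReg(nn.Module):  # hypothetical name
    def __init__(self, source_dims, meta_dim, num_classes):
        super().__init__()
        total_dim = sum(source_dims)                        # concatenated ensemble size
        self.encoder = nn.Linear(total_dim, meta_dim)       # shared meta-embedding layer
        self.decoder = nn.Linear(meta_dim, total_dim)       # auxiliary reconstruction head
        self.classifier = nn.Linear(meta_dim, num_classes)  # main task head

    def forward(self, ensemble_vectors):
        meta = torch.tanh(self.encoder(ensemble_vectors))   # meta-embedding
        return self.classifier(meta), self.decoder(meta), meta

def combined_loss(logits, labels, recon, ensemble_vectors, aux_weight=0.1):
    # Main task loss regularised by a cosine reconstruction loss over the ensemble.
    task_loss = F.cross_entropy(logits, labels)
    cosine_recon = 1.0 - F.cosine_similarity(recon, ensemble_vectors, dim=-1).mean()
    return task_loss + aux_weight * cosine_recon
```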

    Together We Make Sense -- Learning Meta-Sense Embeddings from Pretrained Static Sense Embeddings

    Sense embedding learning methods learn multiple vectors for a given ambiguous word, corresponding to its different word senses. For this purpose, different methods have been proposed in prior work on sense embedding learning that use different sense inventories, sense-tagged corpora and learning methods. However, not all existing sense embeddings cover all senses of ambiguous words equally well due to the discrepancies in their training resources. To address this problem, we propose the first-ever meta-sense embedding method -- Neighbour Preserving Meta-Sense Embeddings, which learns meta-sense embeddings by combining multiple independently trained source sense embeddings such that the sense neighbourhoods computed from the source embeddings are preserved in the meta-embedding space. Our proposed method can combine source sense embeddings that cover different sets of word senses. Experimental results on Word Sense Disambiguation (WSD) and Word-in-Context (WiC) tasks show that the proposed meta-sense embedding method consistently outperforms several competitive baselines.

    Comment: Accepted to Findings of ACL 2023
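
    The neighbourhood-preservation idea can be sketched as follows: compute each sense's nearest neighbours in every source sense-embedding space, then learn a projection into a shared meta space that keeps those neighbours close. The code below is an assumed, simplified formulation in PyTorch/NumPy; the class and function names, the projected-mean combination, and the assumption that rows are aligned by sense id are illustrative, not the published method.

```python
# Illustrative sketch only: neighbour-preserving combination of several
# independently trained sense embedding matrices (rows aligned by sense id).
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def source_neighbours(source_matrix, k=5):
    """Indices of the k nearest cosine neighbours of every sense in one source space."""
    normed = source_matrix / np.linalg.norm(source_matrix, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)               # exclude the sense itself
    return np.argsort(-sims, axis=1)[:, :k]

class MetaSenseModel(nn.Module):                  # hypothetical name
    """One linear projection per source; the meta-sense embedding is their mean."""
    def __init__(self, source_dims, meta_dim):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, meta_dim) for d in source_dims)

    def forward(self, source_matrices):
        # source_matrices: list of (num_senses, dim_i) tensors
        return torch.stack([p(m) for p, m in zip(self.proj, source_matrices)]).mean(dim=0)

def neighbour_preserving_loss(meta, neighbour_ids_per_source):
    """Pull each meta-sense embedding towards its source-space nearest neighbours."""
    meta = F.normalize(meta, dim=-1)
    loss = 0.0
    for ids in neighbour_ids_per_source:          # one (num_senses, k) index array per source
        nbrs = meta[torch.as_tensor(ids)]         # (num_senses, k, meta_dim)
        sims = torch.einsum('nd,nkd->nk', meta, nbrs)
        loss = loss + (1.0 - sims).mean()
    return loss / len(neighbour_ids_per_source)
```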

    Autoencoding Improves Pre-trained Word Embeddings.
