Ontology-Aware Token Embeddings for Prepositional Phrase Attachment
Type-level word embeddings use the same set of parameters to represent all
instances of a word regardless of its context, ignoring the inherent lexical
ambiguity in language. Instead, we embed semantic concepts (or synsets) as
defined in WordNet and represent a word token in a particular context by
estimating a distribution over relevant semantic concepts. We use the new,
context-sensitive embeddings in a model for predicting prepositional phrase (PP)
attachments and jointly learn the concept embeddings and model parameters. We
show that using context-sensitive embeddings improves the accuracy of the PP
attachment model by 5.4% absolute points, which amounts to a 34.4% relative
reduction in errors.
Comment: ACL 201
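The abstract above describes representing a token as a distribution over WordNet synsets conditioned on context. A minimal sketch of that idea (all vectors, synset names, and the scoring function are illustrative assumptions, not the paper's actual parameterization):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical embeddings for two WordNet synsets of "bank";
# in the paper these would be learned jointly with the PP model.
synsets = {
    "bank.n.01": rng.normal(size=dim),  # financial institution
    "bank.n.02": rng.normal(size=dim),  # sloping land beside water
}

def token_embedding(context_vec, synsets):
    """Score each synset against the context, softmax-normalize the
    scores into a distribution, and return the weighted sum of
    synset embeddings as the context-sensitive token embedding."""
    names = list(synsets)
    scores = np.array([context_vec @ synsets[n] for n in names])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return sum(w * synsets[n] for w, n in zip(weights, names))

context = rng.normal(size=dim)
vec = token_embedding(context, synsets)
print(vec.shape)  # (8,)
```

The same word thus gets different vectors in different contexts, since the softmax weights over its senses shift with the context vector.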
A Mixture Model for Learning Multi-Sense Word Embeddings
Word embeddings are now a standard technique for inducing meaning
representations for words. To obtain good representations, it is important to
take into account the different senses of a word. In this paper, we propose a
mixture model for learning multi-sense word embeddings. Our model generalizes
previous work by allowing different weights to be induced for the different
senses of a word. The experimental results show that our model outperforms
previous models on standard evaluation tasks.
Comment: *SEM 201
MUSE: Modularizing Unsupervised Sense Embeddings
This paper proposes to address the word sense ambiguity issue in an
unsupervised manner, where word sense representations are learned alongside a
word sense selection mechanism given contexts. Prior work focused on designing a
single model to deliver both mechanisms, and thus suffered from either
coarse-grained representation learning or inefficient sense selection. The
proposed modular approach, MUSE, implements flexible modules to optimize
distinct mechanisms, achieving the first purely sense-level representation
learning system with linear-time sense selection. We leverage reinforcement
learning to enable joint training on the proposed modules, and introduce
various exploration techniques on sense selection for better robustness. The
experiments on benchmark data show that the proposed approach achieves the
state-of-the-art performance on synonym selection as well as on contextual word
similarities in terms of MaxSimC.
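MaxSimC, the metric cited above, scores contextual word similarity using only the single most probable sense of each word in its context. A small sketch (the sense vectors and probabilities are made-up inputs for illustration):

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def max_sim_c(senses1, probs1, senses2, probs2):
    """MaxSimC: pick the most probable sense of each word given its
    context, then return the cosine between those two sense vectors."""
    s1 = senses1[int(np.argmax(probs1))]
    s2 = senses2[int(np.argmax(probs2))]
    return cos(s1, s2)

rng = np.random.default_rng(1)
senses_a = rng.normal(size=(3, 8))  # 3 hypothetical senses of word A
senses_b = rng.normal(size=(2, 8))  # 2 hypothetical senses of word B
sim = max_sim_c(senses_a, [0.1, 0.7, 0.2], senses_b, [0.6, 0.4])
print(round(sim, 3))
```

This contrasts with AvgSimC-style metrics, which average cosine similarity over all sense pairs weighted by their context probabilities.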
Embedding Words and Senses Together via Joint Knowledge-Enhanced Training
Word embeddings are widely used in Natural Language Processing, mainly due to their success in capturing semantic information from massive corpora. However, their creation process does not allow the different meanings of a word to be automatically separated, as it conflates them into a single vector. We address this issue by proposing a new model which learns word and sense embeddings jointly. Our model exploits large corpora and knowledge from semantic networks in order to produce a unified vector space of word and sense embeddings. We evaluate the main features of our approach both qualitatively and quantitatively in a variety of tasks, highlighting the advantages of the proposed method in comparison to state-of-the-art word- and sense-based models.
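A unified vector space like the one this abstract describes lets words and their senses be compared directly, e.g. by nearest-neighbor queries. A toy sketch (entry names and vectors are invented for the demo; one sense is deliberately placed near "money"):

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 8

# A tiny hypothetical joint space holding both word and sense vectors.
space = {name: rng.normal(size=dim) for name in
         ["bank", "bank_1:finance", "bank_2:river", "money", "river"]}
# Place the finance sense of "bank" close to "money" for illustration.
space["bank_1:finance"] = space["money"] + 0.05 * rng.normal(size=dim)

def nearest(query, space, k=2):
    """Return the k entries most cosine-similar to `query`, excluding itself."""
    q = space[query]
    def cos(v):
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    ranked = sorted((n for n in space if n != query),
                    key=lambda n: cos(space[n]), reverse=True)
    return ranked[:k]

print(nearest("money", space))  # the finance sense of "bank" ranks first
```

Because words and senses live in one space, the neighbors of a word vector can include individual senses, which is useful for qualitative inspection of what a joint model has learned.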
From Word to Sense Embeddings: A Survey on Vector Representations of Meaning
Over the past years, distributed semantic representations have proved to be
effective and flexible keepers of prior knowledge to be integrated into
downstream applications. This survey focuses on the representation of meaning.
We start from the theoretical background behind word vector space models and
highlight one of their major limitations: the meaning conflation deficiency,
which arises from representing a word with all its possible meanings as a
single vector. Then, we explain how this deficiency can be addressed through a
transition from the word level to the more fine-grained level of word senses
(in their broader acceptation) as a method for modelling unambiguous lexical
meaning. We present a comprehensive overview of the wide range of techniques in
the two main branches of sense representation, i.e., unsupervised and
knowledge-based. Finally, this survey covers the main evaluation procedures and
applications for this type of representation, and provides an analysis of four
of its important aspects: interpretability, sense granularity, adaptability to
different domains and compositionality.
Comment: 46 pages, 8 figures. Published in Journal of Artificial Intelligence Research