19 research outputs found

    Embedding Words and Senses Together via Joint Knowledge-Enhanced Training

    Get PDF
    Word embeddings are widely used in Natural Language Processing, mainly due to their success in capturing semantic information from massive corpora. However, their creation process does not allow the different meanings of a word to be automatically separated, as it conflates them into a single vector. We address this issue by proposing a new model which learns word and sense embeddings jointly. Our model exploits large corpora and knowledge from semantic networks in order to produce a unified vector space of word and sense embeddings. We evaluate the main features of our approach both qualitatively and quantitatively in a variety of tasks, highlighting the advantages of the proposed method in comparison to state-of-the-art word- and sense-based models.
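    As a concrete (if much simplified) illustration of the joint-training idea, the sketch below trains a standard skip-gram model on a toy corpus in which sense-tagged tokens appear alongside their surface forms, so that word and sense vectors end up in one shared space. This is not the paper's actual model; the corpus, the "%" sense-tag convention, and all hyperparameters are invented for illustration.

```python
# Minimal sketch: a unified word/sense vector space via skip-gram training
# on a partially sense-annotated corpus. Illustrative only, not the paper's
# actual model. Requires gensim.
from gensim.models import Word2Vec

# Hypothetical toy corpus: sense-tagged tokens (e.g. "bank%finance") stand
# next to their surface forms, so words and senses share contexts.
corpus = [
    ["i", "deposited", "cash", "at", "the", "bank", "bank%finance"],
    ["the", "bank", "bank%finance", "raised", "interest", "rates"],
    ["we", "sat", "on", "the", "river", "bank", "bank%river"],
]

# A single embedding matrix holds both word and sense vectors.
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=200)

# Words and senses live in the same space and can be compared directly.
print(model.wv.similarity("bank", "bank%finance"))
print(model.wv.similarity("bank%finance", "bank%river"))
```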

    Natural language understanding: instructions for (Present and Future) use

    Get PDF
    In this paper I look at Natural Language Understanding, an area of Natural Language Processing aimed at making sense of text, through the lens of a visionary future: what do we expect a machine should be able to understand, and what are the key dimensions that require the attention of researchers to make this dream come true?

    One Homonym per Translation

    Full text link
    The study of homonymy is vital to resolving fundamental problems in lexical semantics. In this paper, we propose four hypotheses that characterize the unique behavior of homonyms in the context of translations, discourses, collocations, and sense clusters. We present a new annotated homonym resource that allows us to test our hypotheses on existing WSD resources. The results of the experiments provide strong empirical evidence for the hypotheses. This study represents a step towards a computational method for distinguishing between homonymy and polysemy, and constructing a definitive inventory of coarse-grained senses.
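    To make the flavor of such hypotheses concrete, the sketch below runs a toy "one homonym per translation"-style check: homonymous senses of a word are expected to map to distinct translations, while polysemous senses may share one. The sense inventory and translations are invented; this is not the paper's annotated resource or method.

```python
# Illustrative check of a "one homonym per translation"-style hypothesis:
# homonymous senses of a word are expected to receive distinct translations,
# while polysemous senses may share one. Toy, invented data throughout.
from collections import defaultdict

# (word, sense) -> French translation, hypothetical annotations
translations = {
    ("bank", "bank.finance"): "banque",
    ("bank", "bank.river"): "rive",
    ("paper", "paper.material"): "papier",
    ("paper", "paper.article"): "papier",  # polysemy: shared translation
}

by_word = defaultdict(dict)
for (word, sense), tr in translations.items():
    by_word[word][sense] = tr

for word, senses in by_word.items():
    distinct = len(set(senses.values())) == len(senses)
    label = "homonym-like" if distinct else "polysemy-like"
    print(f"{word}: {label} (translations: {sorted(set(senses.values()))})")
```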

    From Word to Sense Embeddings: A Survey on Vector Representations of Meaning

    Get PDF
    Over the past years, distributed semantic representations have proved to be effective and flexible keepers of prior knowledge to be integrated into downstream applications. This survey focuses on the representation of meaning. We start from the theoretical background behind word vector space models and highlight one of their major limitations: the meaning conflation deficiency, which arises from representing a word with all its possible meanings as a single vector. Then, we explain how this deficiency can be addressed through a transition from the word level to the more fine-grained level of word senses (in its broader acceptation) as a method for modelling unambiguous lexical meaning. We present a comprehensive overview of the wide range of techniques in the two main branches of sense representation, i.e., unsupervised and knowledge-based. Finally, this survey covers the main evaluation procedures and applications for this type of representation, and provides an analysis of four of its important aspects: interpretability, sense granularity, adaptability to different domains, and compositionality. Published in the Journal of Artificial Intelligence Research.
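    The meaning conflation deficiency can be seen with a toy calculation: if a word vector behaves like a frequency-weighted average of its sense vectors, an ambiguous word is pulled between unrelated regions of the space and stays moderately close to both. The vectors and weights below are invented for illustration.

```python
# Toy illustration of the meaning conflation deficiency: the single vector
# of an ambiguous word behaves like a mixture of its sense vectors and is
# moderately close to *both* unrelated senses. All vectors are invented.
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
finance_sense = rng.normal(size=50)
river_sense = rng.normal(size=50)  # unrelated direction

# "bank" conflates both meanings, weighted by an assumed sense frequency.
bank = 0.7 * finance_sense + 0.3 * river_sense

print(cos(bank, finance_sense))         # high
print(cos(bank, river_sense))           # non-negligible: conflated vector
print(cos(finance_sense, river_sense))  # near zero for random directions
```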

    Régularisation spatiale de représentations distribuées de mots (Spatial Regularization of Distributed Word Representations)

    Get PDF
    Driven by the intensive use of mobile phones, the joint exploitation of the textual and spatial data contained in spatio-textual objects (e.g. tweets) has become the cornerstone of many applications, such as the search for points of interest. From a scientific standpoint, these tasks rely critically on the representation of spatial objects and on the definition of matching functions between those objects. In this paper, we address the problem of representing such objects. More specifically, encouraged by the success of distributed representations based on neural approaches, we propose to regularize distributed word representations (i.e., word embeddings), which can be combined to build object representations, using their spatial distributions. The underlying goal is to reveal possible local semantic relations between words as well as the multiple senses of a single word. Experiments based on an information retrieval task, which consists of returning the physical location that a geo-text is about, show that integrating our spatial regularization method for distributed word representations into a basic matching model yields significant improvements over the baseline models.
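    A minimal sketch of the general idea (not the authors' exact objective; the vocabulary, spatial similarities, and hyperparameters are all invented) is to retrofit pretrained word embeddings with a penalty that pulls together words whose geographic distributions over geotagged posts are similar, while keeping each vector close to its pretrained value.

```python
# Sketch of spatial regularization of word embeddings: retrofit pretrained
# vectors so that words with similar spatial distributions move closer,
# while each vector stays anchored to its pretrained value. Illustrative
# only; similarities and hyperparameters are invented.
import numpy as np

rng = np.random.default_rng(1)
vocab = ["beach", "surf", "museum"]
E = {w: rng.normal(size=20) for w in vocab}    # pretrained embeddings
orig = {w: v.copy() for w, v in E.items()}

# Hypothetical spatial similarity, e.g. overlap of the words' geographic
# distributions estimated from geotagged tweets (values invented here).
spatial_sim = {("beach", "surf"): 0.9, ("beach", "museum"): 0.1,
               ("surf", "museum"): 0.1}

lam, lr = 0.5, 0.1
for _ in range(100):
    for (u, w), s in spatial_sim.items():
        # Pull spatially similar words together, weighted by similarity...
        diff = E[u] - E[w]
        E[u] -= lr * lam * s * diff
        E[w] += lr * lam * s * diff
    for w in vocab:
        # ...while staying close to the original pretrained vector.
        E[w] -= lr * (E[w] - orig[w])

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(E["beach"], E["surf"]), cos(E["beach"], E["museum"]))
```

    Treating the spatial signal as a retrofitting penalty, rather than retraining embeddings from scratch, keeps the general-purpose semantics of the pretrained space while letting location-specific relations emerge; this mirrors the trade-off the abstract describes between revealing local semantic relations and preserving a reusable word representation.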