12 research outputs found

    Semantic vector representations of senses, concepts and entities and their applications in natural language processing

    Representation learning lies at the core of Artificial Intelligence (AI) and Natural Language Processing (NLP). Most recent research has focused on developing representations at the word level. In particular, the representation of words in a vector space has been viewed as one of the most important successes of lexical semantics and NLP in recent years. The generalization power and flexibility of these representations have enabled their integration into a wide variety of text-based applications, where they have proved extremely beneficial. However, these representations are hampered by an important limitation: they are unable to model the different meanings of the same word. In order to deal with this issue, in this thesis we analyze and develop flexible semantic representations of meanings, i.e. senses, concepts and entities. This finer distinction enables us to model semantic information at a deeper level, which in turn is essential for dealing with ambiguity. In addition, we view these (vector) representations as a bridge between lexical resources and textual data, encoding knowledge from both sources. We argue that these sense-level representations, much as word embeddings did at the word level, constitute a first step towards seamlessly integrating explicit knowledge into NLP applications. Their use aims not only at resolving the inherent lexical ambiguity of language, but also at bringing background knowledge into NLP applications. Multilinguality is another key feature of these representations, as we explore the construction of language-independent and multilingual techniques that can be applied to arbitrary languages, and also across languages. We propose simple unsupervised and supervised frameworks which make use of these vector representations for word sense disambiguation, a key application in natural language understanding, and for other downstream applications such as text categorization and sentiment analysis. Given the nature of the vectors, we also investigate their effectiveness for improving and enriching knowledge bases, by reducing the sense granularity of their sense inventories and extending them with domain labels, hypernyms and collocations.
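    As a rough, illustrative sketch of how sense vectors of this kind can support a simple unsupervised word sense disambiguation step (not the thesis's actual framework), the snippet below compares the centroid of a context's word embeddings against the embeddings of each candidate sense and returns the closest one. All embeddings, sense identifiers and the inventory are hypothetical placeholders.

```python
# Minimal sketch: unsupervised WSD by cosine similarity between a context centroid
# and candidate sense embeddings. All resources here are hypothetical stand-ins.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors (0 if either has zero norm)."""
    norm = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / norm) if norm else 0.0

def disambiguate(target, context_words, word_vectors, sense_vectors, inventory):
    """Return the sense of `target` whose embedding is closest to the context centroid.

    word_vectors:  dict word  -> np.ndarray  (word embeddings)
    sense_vectors: dict sense -> np.ndarray  (sense/concept/entity embeddings)
    inventory:     dict word  -> list of candidate sense ids
    """
    # Represent the context as the centroid of the embeddings of its words.
    vecs = [word_vectors[w] for w in context_words if w in word_vectors]
    if not vecs:
        return None
    context_vec = np.mean(vecs, axis=0)
    # Pick the candidate sense with the highest cosine similarity to the context.
    scored = [(cosine(context_vec, sense_vectors[s]), s)
              for s in inventory.get(target, []) if s in sense_vectors]
    return max(scored)[1] if scored else None
```

    A supervised variant would instead feed the same vectors into a classifier trained on sense-annotated examples.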

    Harnessing sense-level information for semantically augmented knowledge extraction

    Nowadays, building accurate computational models for the semantics of language lies at the very core of Natural Language Processing and Artificial Intelligence. A first and foremost step in this respect consists in moving from word-based to sense-based approaches, in which operating explicitly at the level of word senses enables a model to produce more accurate and unambiguous results. At the same time, word senses create a bridge towards structured lexico-semantic resources, where the vast amount of available machine-readable information can help overcome the shortage of annotated data in many languages and domains of knowledge. This latter phenomenon, known as the knowledge acquisition bottleneck, is a crucial problem that hampers the development of large-scale, data-driven approaches for many Natural Language Processing tasks, especially when lexical semantics is directly involved. One of these tasks is Information Extraction, where an effective model has to cope with data sparsity, as well as with the lexical ambiguity that can arise at the level of both arguments and relational phrases. Even in more recent Information Extraction approaches where semantics is implicitly modeled, these issues have not yet been addressed in their entirety. On the other hand, obtaining explicit sense-level information is a very demanding task in its own right, which can rarely be performed with high accuracy on a large scale. With this in mind, in this thesis we will tackle a two-fold objective: our first focus will be on studying fully automatic approaches to obtain high-quality sense-level information from textual corpora; then, we will investigate in depth where and how such sense-level information has the potential to enhance the extraction of knowledge from open text. In the first part of this work, we will explore three different disambiguation scenarios (semi-structured text, parallel text, and definitional text) and devise automatic disambiguation strategies that are not only capable of scaling to different corpus sizes and different languages, but that actually take advantage of a multilingual and/or heterogeneous setting to improve and refine their performance. As a result, we will obtain three sense-annotated resources that, when tested experimentally with a baseline system in a series of downstream semantic tasks (i.e. Word Sense Disambiguation, Entity Linking, Semantic Similarity), show very competitive performance on standard benchmarks against both manual and semi-automatic competitors. In the second part we will instead focus on Information Extraction, with an emphasis on Open Information Extraction (OIE), where issues like sparsity and lexical ambiguity are especially critical, and study how best to exploit sense-level information within the extraction process. We will start by showing that enforcing a deeper semantic analysis in a definitional setting enables a full-fledged extraction pipeline to compete with state-of-the-art approaches based on much larger (but noisier) data. We will then demonstrate how working at the sense level at the end of an extraction pipeline is also beneficial: indeed, by leveraging sense-based techniques, very heterogeneous OIE-derived data can be aligned semantically and unified with respect to a common sense inventory. Finally, we will briefly shift the focus to the more constrained setting of hypernym discovery, and study a sense-aware supervised framework for the task that is robust and effective, even when trained on heterogeneous OIE-derived hypernymic knowledge.
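    The following sketch only illustrates the general idea of unifying heterogeneous OIE output against a common sense inventory; it is not the pipeline from the thesis. The toy linker resolves argument strings by plain lookup (a real system would disambiguate in context), and all sense identifiers below are made up.

```python
# Illustrative sketch: collapse OIE triples whose arguments resolve to the same senses.
from collections import defaultdict

def link_to_sense(mention, sense_inventory):
    """Toy linker: normalise the mention and look it up in the inventory.
    A real system would disambiguate against the surrounding context."""
    return sense_inventory.get(mention.lower().strip())

def unify_triples(oie_triples, sense_inventory):
    """Group OIE triples (arg1, relation, arg2) by their sense-level signature."""
    unified = defaultdict(list)
    for arg1, rel, arg2 in oie_triples:
        s1 = link_to_sense(arg1, sense_inventory)
        s2 = link_to_sense(arg2, sense_inventory)
        if s1 and s2:                      # keep only fully linked triples
            unified[(s1, rel.lower(), s2)].append((arg1, rel, arg2))
    return dict(unified)

# Example: surface variants of the same entities collapse onto one sense-level triple.
inventory = {"barack obama": "s:Barack_Obama", "obama": "s:Barack_Obama",
             "the usa": "s:United_States", "united states": "s:United_States"}
triples = [("Barack Obama", "was president of", "the USA"),
           ("Obama", "was President of", "United States")]
print(unify_triples(triples, inventory))
```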

    Mineração e uso de padrões linguísticos para desambiguação de palavras e análise do discurso

    Tese (doutorado) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2020. Information extraction from texts on the web has the potential to boost a number of applications, but many of them require automatically capturing the exact semantics of relevant textual elements. Twitter, for example, generates hundreds of millions of short texts (tweets) daily, many of which contain rich information about users, facts, products, services, desires, opinions, etc. The semantic annotation of relevant words in tweets is a major challenge, because social media impose additional difficulties (e.g., little context information, poor conformity to grammatical rules) on automatic methods, leading to disambiguation results with low precision and coverage. Moreover, language is a polysemic symbolic system without ready-made semantics for some constructions, which is especially evident in colloquial language and particularly in social media. Current annotation solutions often fail to find the correct sense of words in constructions involving implicit semantics, which is sometimes introduced intentionally, for example for humour, irony, wordplay or puns. This work proposes an approach to mine lexical-semantic patterns that capture the semantics of text for use in language processing tasks. These patterns, called MSC+ patterns, are defined as sequences of Morpho-semantic Components (MSC). An unsupervised algorithm was developed to mine such patterns, which support the identification of a new kind of semantic feature in documents, as well as methods for disambiguating the meaning of words. Experimental results on the Word Sense Disambiguation (WSD) task over social media text show that instances of some MSC+ patterns appear in many tweets, sometimes using different words to convey the same sense. Exploiting MSC+ patterns enables effective word sense disambiguation mechanisms, leading to improvements over the state of the art in terms of precision, coverage and F-measure. The MSC+ patterns were also explored in Discourse Analysis (DA) experiments on works by the Brazilian writer Machado de Assis. These experiments revealed morpho-semantic patterns that characterise different types of literary work and can assist in classifying their discourse, such as the preponderance of specific verbs in the short stories, feminine nouns in the novels, and adjectives in the poems.
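    To make the pattern-mining idea more concrete, the sketch below counts recurring tag sequences over tweets whose tokens already carry some morpho-semantic label. It is only an illustration under that assumption: the actual definition of Morpho-semantic Components and the thesis's mining algorithm are more involved, and the tags used here are invented.

```python
# Illustrative sketch: mine contiguous tag n-grams that recur across enough tweets.
from collections import Counter

def mine_patterns(tagged_tweets, min_len=2, max_len=4, min_support=2):
    """tagged_tweets: list of tweets, each a list of (token, tag) pairs.
    Returns tag sequences that occur in at least `min_support` distinct tweets."""
    support = Counter()
    for tweet in tagged_tweets:
        tags = [tag for _, tag in tweet]
        seen = set()
        for n in range(min_len, max_len + 1):
            for i in range(len(tags) - n + 1):
                seen.add(tuple(tags[i:i + n]))
        support.update(seen)               # count each pattern at most once per tweet
    return {pattern: count for pattern, count in support.items() if count >= min_support}

# Two tweets sharing the same tag sequence but different words.
tweets = [
    [("i", "PRON"), ("love", "VERB+emotion"), ("this", "DET"), ("phone", "NOUN+device")],
    [("we", "PRON"), ("hate", "VERB+emotion"), ("that", "DET"), ("app", "NOUN+device")],
]
print(mine_patterns(tweets))
```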

    Computational approaches to semantic change (Volume 6)

    Semantic change — how the meanings of words change over time — has preoccupied scholars since well before modern linguistics emerged in the late 19th and early 20th century, ushering in a new methodological turn in the study of language change. Compared to changes in sound and grammar, semantic change remains the least understood. Since the emergence of modern linguistics, the study of semantic change has progressed steadily, accumulating a vast store of knowledge over more than a century and encompassing many languages and language families. Historical linguists also realized early on the potential of computers as research tools, with papers at the very first international conferences in computational linguistics in the 1960s. Such computational studies nevertheless tended to be small-scale, method-oriented, and qualitative. However, recent years have witnessed a sea-change in this regard. Big-data empirical quantitative investigations are now coming to the forefront, enabled by enormous advances in storage capability and processing power. Diachronic corpora have grown beyond imagination, defying exploration by traditional manual qualitative methods, and language technology has become increasingly data-driven and semantics-oriented. These developments present a golden opportunity for the empirical study of semantic change over both long and short time spans.
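    As one hedged illustration of the kind of quantitative, embedding-based analysis this line of work enables (a common technique in the field, not one tied to this volume), the sketch below aligns two embedding spaces trained on different time slices with an orthogonal Procrustes rotation and scores each shared word by how far its vector has moved.

```python
# Illustrative sketch: measure diachronic semantic change as cosine distance between
# a word's vectors from two time periods, after aligning the two embedding spaces.
import numpy as np

def procrustes_align(X_old, X_new):
    """Orthogonal rotation R minimising ||X_old @ R - X_new||_F (rows = shared vocabulary)."""
    u, _, vt = np.linalg.svd(X_old.T @ X_new)
    return u @ vt

def change_scores(X_old, X_new, vocab):
    """Per-word cosine distance between the aligned old-space vector and the new-space vector."""
    R = procrustes_align(X_old, X_new)
    aligned = X_old @ R
    sims = np.sum(aligned * X_new, axis=1) / (
        np.linalg.norm(aligned, axis=1) * np.linalg.norm(X_new, axis=1))
    return dict(zip(vocab, 1.0 - sims))
```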

    Contributions to information extraction for spanish written biomedical text

    Healthcare practice and clinical research produce vast amounts of digitised, unstructured data in multiple languages that are currently underexploited, despite their potential applications in improving healthcare experiences, supporting trainee education, or enabling biomedical research, for example. To automatically transform those contents into relevant, structured information, advanced Natural Language Processing (NLP) mechanisms are required. In NLP, this task is known as Information Extraction. Our work takes place within this growing field of clinical NLP for the Spanish language, as we tackle three distinct problems. First, we compare several supervised machine learning approaches to the problem of sensitive data detection and classification. Specifically, we study the different approaches and their transferability across two corpora, one synthetic and the other authentic. Second, we present and evaluate UMLSmapper, a knowledge-intensive system for biomedical term identification based on the UMLS Metathesaurus. This system recognises and codifies terms without relying on annotated data or external Named Entity Recognition tools. Although technically naive, it performs on par with more evolved systems, and does not exhibit a considerable deviation from other approaches that rely on oracle terms. Finally, we present and exploit a new corpus of real health records manually annotated with negation and uncertainty information: NUBes. This corpus is the basis for two sets of experiments, one on cue and scope detection, and the other on assertion classification. Throughout the thesis, we apply and compare techniques of varying levels of sophistication and novelty, which reflects the rapid advancement of the field.
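    The snippet below is a deliberately simplified sketch of knowledge-based term identification in the spirit of systems like UMLSmapper: a longest-match scan of the text against a terminology that maps strings to concept codes. It is not UMLSmapper's actual algorithm, and the lexicon entries and codes shown are illustrative only.

```python
# Illustrative sketch: longest-match dictionary lookup of biomedical terms.
def identify_terms(tokens, lexicon, max_span=5):
    """tokens: list of lowercased tokens; lexicon: dict mapping term string -> concept code.
    Returns (start, end, term, code) for non-overlapping longest matches, left to right."""
    annotations, i = [], 0
    while i < len(tokens):
        match = None
        for n in range(min(max_span, len(tokens) - i), 0, -1):   # prefer longer spans
            candidate = " ".join(tokens[i:i + n])
            if candidate in lexicon:
                match = (i, i + n, candidate, lexicon[candidate])
                break
        if match:
            annotations.append(match)
            i = match[1]
        else:
            i += 1
    return annotations

# Toy lexicon; the concept codes are illustrative placeholders, not verified UMLS CUIs.
lexicon = {"infarto agudo de miocardio": "C_ACUTE_MI", "miocardio": "C_MYOCARDIUM"}
print(identify_terms("paciente con infarto agudo de miocardio".split(), lexicon))
```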

    Tune your brown clustering, please

    Brown clustering, an unsupervised hierarchical clustering technique based on n-gram mutual information, has proven useful in many NLP applications. However, most uses of Brown clustering employ the same default configuration, and the appropriateness of this configuration has gone largely unexplored. Accordingly, we present information for practitioners on the behaviour of Brown clustering, in the form of a theoretical model of Brown clustering utility, in order to assist hyper-parameter tuning. This model is then evaluated empirically on two sequence labelling tasks over two text types. We explore the interaction between the input corpus size, the chosen number of classes, and the quality of the resulting clusters, which has implications for any approach using Brown clustering. In every scenario that we examine, our results reveal that the values most commonly used for the clustering are sub-optimal.
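    For readers unfamiliar with how Brown clusters typically enter a sequence-labelling pipeline, the sketch below derives the usual bit-string prefix features from a clusters file. The file layout mirrors the commonly used "bit-string, word, count" output, but the entries, prefix lengths and feature names here are arbitrary choices for illustration; the number of classes and corpus size studied in the paper are tuned upstream, when the clustering itself is run.

```python
# Illustrative sketch: turn Brown cluster bit-strings into prefix features for a tagger.
def load_clusters(lines):
    """Parse lines of 'bitstring<TAB>word<TAB>count' into a word -> bit-string dict."""
    clusters = {}
    for line in lines:
        bits, word, _count = line.rstrip("\n").split("\t")
        clusters[word] = bits
    return clusters

def brown_features(word, clusters, prefix_lengths=(4, 6, 10, 20)):
    """Prefix features of the word's Brown path; the prefix lengths are one tunable choice."""
    bits = clusters.get(word)
    if bits is None:
        return ["brown=UNK"]
    return [f"brown_{n}={bits[:n]}" for n in prefix_lengths if len(bits) >= n] or [f"brown={bits}"]

toy = ["0010\tlondon\t42", "0011\tparis\t37", "110\trun\t85"]   # made-up cluster entries
clusters = load_clusters(toy)
print(brown_features("london", clusters))
```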

    TALN at SemEval-2016 Task 14: semantic taxonomy enrichment via sense-based embeddings

    This paper describes the participation of the TALN team in SemEval-2016 Task 14: Semantic Taxonomy Enrichment. The purpose of the task is to find the best point of attachment in WordNet for a set of Out of Vocabulary (OOV) terms. These may come, to name a few sources, from domain-specific glossaries, slang, or the typical jargon of Internet forums and chatrooms. Our contribution takes as input an OOV term, its part of speech and its associated definition, and generates a set of WordNet synset candidates derived from modelling the term's definition as a sense embedding representation. We leverage a BabelNet-based vector space representation, which allows us to map the algorithm's prediction to WordNet. Our approach is designed to be generic and to fit any domain, without exploiting, for instance, HTML markup in source web pages. Our system performs above the median of all submitted systems, and rivals in performance a powerful baseline based on extracting the first word of the definition with the same part of speech as the OOV term. This work was partially funded by Dr. Inventor (FP7-ICT-2013.8.1611383), and by the Spanish Ministry of Economy and Competitiveness under the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502).
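    A minimal sketch of the definition-driven attachment idea follows: embed the OOV term's definition (here naively, as the centroid of its word vectors, whereas the paper relies on BabelNet-based sense embeddings) and rank candidate WordNet synsets by cosine similarity against pre-computed synset vectors. All vectors and synset identifiers are placeholders.

```python
# Illustrative sketch: rank WordNet attachment points for an OOV term by comparing
# its definition embedding against pre-computed synset vectors.
import numpy as np

def embed_definition(definition, word_vectors):
    """Centroid of the word vectors of the definition's tokens (None if none are known)."""
    vecs = [word_vectors[w] for w in definition.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else None

def rank_attachment_points(definition, word_vectors, synset_vectors, top_k=5):
    """Return the top_k synset ids whose vectors are most similar to the definition embedding."""
    d = embed_definition(definition, word_vectors)
    if d is None:
        return []
    scores = []
    for synset_id, v in synset_vectors.items():
        sim = float(d @ v / (np.linalg.norm(d) * np.linalg.norm(v)))
        scores.append((sim, synset_id))
    return [synset_id for _, synset_id in sorted(scores, reverse=True)[:top_k]]
```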

    European Language Grid

    This open access book provides an in-depth description of the EU project European Language Grid (ELG). Its motivation lies in the fact that Europe is a multilingual society with 24 official European Union Member State languages and dozens of additional languages including regional and minority languages. The only meaningful way to enable multilingualism and to benefit from this rich linguistic heritage is through Language Technologies (LT) including Natural Language Processing (NLP), Natural Language Understanding (NLU), Speech Technologies and language-centric Artificial Intelligence (AI) applications. The European Language Grid provides a single umbrella platform for the European LT community, including research and industry, effectively functioning as a virtual home, marketplace, showroom, and deployment centre for all services, tools, resources, products and organisations active in the field. Today the ELG cloud platform already offers access to more than 13,000 language processing tools and language resources. It enables all stakeholders to deposit, upload and deploy their technologies and datasets. The platform also supports the long-term objective of establishing digital language equality in Europe by 2030 – to create a situation in which all European languages enjoy equal technological support. This is the very first book dedicated to Language Technology and NLP platforms. Cloud technology has only recently matured enough to make the development of a platform like ELG feasible on a larger scale. The book comprehensively describes the results of the ELG project. Following an introduction, the content is divided into four main parts: (I) ELG Cloud Platform; (II) ELG Inventory of Technologies and Resources; (III) ELG Community and Initiative; and (IV) ELG Open Calls and Pilot Projects