
    Generalising semantic category disambiguation with large lexical resources for fun and profit


    Proceedings of the Conference on Natural Language Processing 2010

    This book contains state-of-the-art contributions to the 10th conference on Natural Language Processing, KONVENS 2010 (Konferenz zur Verarbeitung natürlicher Sprache), with a focus on semantic processing. KONVENS generally aims to offer a broad perspective on current research and developments within the interdisciplinary field of natural language processing. The central theme draws specific attention to linguistic aspects of meaning, covering deep as well as shallow approaches to semantic processing. The contributions address both knowledge-based and data-driven methods for modelling and acquiring semantic information, and discuss the role of semantic information in applications of language technology. The articles demonstrate the importance of semantic processing and present novel and creative approaches to natural language processing in general. Some contributions focus on developing and improving NLP systems for tasks such as Named Entity Recognition or Word Sense Disambiguation, on semantic knowledge acquisition and exploitation with respect to collaboratively built resources, or on harvesting semantic information in virtual games. Others are set within the context of real-world applications, such as Authoring Aids, Text Summarisation and Information Retrieval. The collection highlights the importance of semantic processing for different areas and applications in Natural Language Processing, and provides the reader with an overview of current research in this field.

    Knowledge-driven entity recognition and disambiguation in biomedical text

    Entity recognition and disambiguation (ERD) for the biomedical domain are notoriously difficult problems due to the variety of entities and their often long names, which appear in many variations. Existing works focus heavily on the molecular level in two ways. First, they target scientific literature as the input text genre. Second, they target single, highly specialized entity types such as chemicals, genes, and proteins. However, a wealth of biomedical information is also buried in the vast universe of Web content. In order to fully utilize all the information available, there is a need to tap into Web content as an additional input. Moreover, there is a need to cater for other entity types such as symptoms and risk factors, since Web content focuses on consumer health. The goal of this thesis is to investigate ERD methods that are applicable to all entity types in scientific literature as well as Web content. In addition, we focus on under-explored aspects of the biomedical ERD problems -- scalability, long noun phrases, and out-of-knowledge base (OOKB) entities. This thesis makes four main contributions, all of which leverage knowledge in UMLS (Unified Medical Language System), the largest and most authoritative knowledge base (KB) of the biomedical domain. The first contribution is a fast dictionary lookup method for entity recognition that maximizes throughput while balancing the loss of precision and recall. The second contribution is a semantic type classification method targeting common words in long noun phrases. We develop a custom set of semantic types to capture word usages; besides biomedical usage, these types also cope with non-biomedical usage and the case of generic, non-informative usage. The third contribution is a fast heuristics method for entity disambiguation in MEDLINE abstracts, again maximizing throughput but this time maintaining accuracy. The fourth contribution is a corpus-driven entity disambiguation method that addresses OOKB entities. The method first captures the entities expressed in a corpus as latent representations that comprise in-KB and OOKB entities alike before performing entity disambiguation.
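    The dictionary-lookup contribution is described only at a high level above. As a rough illustration of what dictionary-based entity recognition against a UMLS-style name-to-concept mapping can look like, the following is a minimal Python sketch. It is not the thesis's actual method: the tiny term dictionary and concept identifiers below are hypothetical stand-ins, and a real high-throughput system would replace this greedy longest-match scan with more efficient data structures.

    # Minimal sketch of dictionary-based entity recognition via greedy
    # longest-match lookup. The term dictionary is a hypothetical toy
    # stand-in for a real UMLS-derived name-to-CUI mapping.
    TERM_DICT = {
        "type 2 diabetes": "C0011860",
        "diabetes": "C0011849",
        "insulin resistance": "C0021655",
        "metformin": "C0025598",
    }
    MAX_LEN = max(len(t.split()) for t in TERM_DICT)  # longest entry, in tokens

    def recognize(text):
        """Greedy longest-match dictionary lookup over a tokenised text."""
        tokens = [t.strip(".,;:") for t in text.lower().split()]
        spans = []
        i = 0
        while i < len(tokens):
            match = None
            # Try the longest candidate n-gram first, then shrink.
            for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
                candidate = " ".join(tokens[i:i + n])
                if candidate in TERM_DICT:
                    match = (i, i + n, candidate, TERM_DICT[candidate])
                    break
            if match:
                spans.append(match)
                i = match[1]  # continue scanning after the matched span
            else:
                i += 1
        return spans

    if __name__ == "__main__":
        text = "Metformin is a first-line drug for type 2 diabetes and insulin resistance."
        for start, end, term, cui in recognize(text):
            print(f"tokens {start}-{end}: '{term}' -> {cui}")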

    Domain-sensitive topic management in a modular conversational agent framework

    Flexible non-task-oriented conversational agents require content for generating responses and mechanisms for choosing appropriate topics to drive interactions with users. Structured knowledge resources such as ontologies are a useful mechanism to represent conversational topics. In order to develop the topic-management mechanism, we addressed a number of research issues related to the development of the required infrastructure. First, we address the issue of heavy human involvement in the construction of knowledge resources by proposing a four-stage automatic process for building domain-specific ontologies. These ontologies are composed of a set of subtaxonomies obtained from WordNet, an electronic dictionary that arranges concepts in a hierarchical structure. The roots of these subtaxonomies are obtained from Wikipedia's article links, or wikilinks, under the hypothesis that wikilinks convey a sense of relatedness from the article consulted to their destinations. With the knowledge structures defined, we explore the possibility of using semantic relatedness over these domain-specific ontologies as a means to propose conversational topics in a coherent manner. For this, we examine different automatic measures of semantic relatedness to determine which correlates best with human judgements obtained from an automatically constructed dataset. We then examine the question of whether domain information influences the human perception of semantic relatedness in a way that automatic measures do not replicate. This study requires us to design and implement a process to build datasets with pairs of concepts, like those used in the literature to evaluate automatic measures of semantic relatedness, but with domain information associated. This study shows, to statistical significance, that existing measures of semantic relatedness do not take domain into consideration, and that including domain as a factor in this calculation can enhance the agreement of automatic measures with human assessments. Finally, this artificially constructed measure is integrated into the Toy's dialogue manager in order to help in the real-time selection of conversational topics. This supplements our result that the use of semantic relatedness seems to produce more coherent and interesting topic transitions than existing mechanisms.
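    As a rough illustration of the kind of relatedness-driven topic selection described above, the following minimal Python sketch scores candidate topics against the current topic using WordNet path similarity via NLTK. It is not the thesis's actual measure, domain-specific ontology, or dialogue manager; it assumes NLTK with the WordNet data installed (nltk.download('wordnet')) and simply prefers the candidate whose best synset pair scores highest.

    # Minimal sketch: pick the next conversational topic as the candidate
    # most related to the current topic under WordNet path similarity.
    from nltk.corpus import wordnet as wn

    def relatedness(word_a, word_b):
        """Best path-similarity score over all synset pairs of the two words."""
        best = 0.0
        for syn_a in wn.synsets(word_a):
            for syn_b in wn.synsets(word_b):
                score = syn_a.path_similarity(syn_b)
                if score is not None and score > best:
                    best = score
        return best

    def next_topic(current_topic, candidates):
        """Return the candidate topic most semantically related to the current one."""
        return max(candidates, key=lambda c: relatedness(current_topic, c))

    if __name__ == "__main__":
        print(next_topic("guitar", ["piano", "football", "cooking"]))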

    Web information search and sharing:

    Degree system: new; report number: Kō 2735; type of degree: Doctor (Human Sciences); date conferred: 2009/3/15; Waseda University degree record number: Shin 493

    Neural models of language use: Studies of language comprehension and production in context

    Artificial neural network models of language are mostly known and appreciated today for providing a backbone for formidable AI technologies. This thesis takes a different perspective. Through a series of studies on language comprehension and production, it investigates whether artificial neural networks—beyond being useful in countless AI applications—can serve as accurate computational simulations of human language use, and thus as a new core methodology for the language sciences.

    Translating Zola's L’Assommoir: a stylistic approach

    In the following thesis, we will be applying a number of linguistic, stylistic and critical techniques with a view to elucidating the phenomenon of literary translation. Our corpus will be drawn almost exclusively from Emile Zola's late nineteenth-century French classic, L'Assommoir, and seven English-language translations thereof: the focus of the thesis throughout being upon the analysis of concrete examples. Our aim is thus to arrive at a substantial body of analytical knowledge through the exploration of translation in practice rather than through a series of secondary commentaries upon other works of translation theory. Reference will, of course, be made to such works when appropriate. One of the principal premises of the thesis is that linguistic techniques can indeed be applied to a corpus of literary text without sacrificing traditional critical judgement or the possibility of rational evaluation. Accordingly, we will be concerned to formulate reasoned and explicit parameters of assessment throughout the course of our analysis. In particular, we will be seeking to illuminate the various facets of what we call 'literary texture' and how these might be rendered in translation. In certain cases, one rendering may be preferred to another, although no attempt will be made to rank the respective translations by order of merit in overall terms. Occasionally, we will also be hazarding our own versions when those drawn from the corpus prove to be unsatisfactory. Similarly, a 'proposed translation' is offered at the conclusion of every major passage studied. These translations are, of course, to be considered as open and heuristic explorations rather than prescriptive or definitive corrections. Our thesis will be divided into seven main chapters, each of which is designed to illustrate the phenomenon of literary translation from a slightly different angle. In the first of these, we map out the basic methodological template of the thesis. In the second, we examine various aspects of the decision-making process involved in 'choosing the right word'. This lengthy second chapter is then followed by an analysis of the postulate that translations tend to be more periphrastic and explicit than originals. We then move on to the thorny terrain of prose rhythm, examining how the particular beat and pressure of the original text might be made to resonate within the echo chamber of another language. In the fifth and sixth chapters, we consider the difficulties involved in transcribing the specificity of colloquial language and slang into both written and translated form. Our study concludes with an exploration of Zola's écriture artiste, paying particular attention to the way in which the translators render the various figurative torques and twists characterising this highly aesthetic style of description. It is to be hoped that our thesis will be of interest both to students of translation in general and to Zola scholars in particular.

    Let’s lie together: Co-presence effects on children’s deceptive skills
