
    ExBERT: An External Knowledge Enhanced BERT for Natural Language Inference

    Neural language representation models such as BERT, pretrained on large-scale unstructured corpora, lack explicit grounding in real-world commonsense knowledge and are often unable to recall the facts required for reasoning and inference. Natural Language Inference (NLI) is a challenging reasoning task that relies on common human understanding of language and real-world commonsense knowledge. We introduce a new model for NLI called External Knowledge Enhanced BERT (ExBERT), which enriches the contextual representation with real-world commonsense knowledge from external knowledge sources and enhances BERT's language understanding and reasoning capabilities. ExBERT takes full advantage of the contextual word representations obtained from BERT, employing them both to retrieve relevant external knowledge from knowledge graphs and to encode the retrieved knowledge. The model adaptively incorporates the external knowledge context required for reasoning over its inputs. Extensive experiments on the challenging SciTail and SNLI benchmarks demonstrate the effectiveness of ExBERT: compared to the previous state of the art, we obtain an accuracy of 95.9% on SciTail and 91.5% on SNLI.
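    To make the fusion idea concrete, here is a minimal PyTorch sketch of how contextual token vectors might adaptively incorporate encoded knowledge-graph facts through attention and a learned gate. The module name, dimensions, and gating scheme are illustrative assumptions, not ExBERT's published architecture.

```python
import torch
import torch.nn as nn

class KnowledgeFusion(nn.Module):
    """Sketch of gated fusion between BERT token vectors and encoded
    knowledge-graph facts. Hypothetical; not ExBERT's actual code."""

    def __init__(self, hidden: int = 768):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.gate = nn.Linear(2 * hidden, hidden)

    def forward(self, token_reprs: torch.Tensor, kg_reprs: torch.Tensor) -> torch.Tensor:
        # token_reprs: (batch, seq_len, hidden) contextual vectors from BERT
        # kg_reprs:    (batch, n_facts, hidden) encoded knowledge triples
        ctx, _ = self.attn(token_reprs, kg_reprs, kg_reprs)  # knowledge context per token
        g = torch.sigmoid(self.gate(torch.cat([token_reprs, ctx], dim=-1)))
        return token_reprs + g * ctx  # gated residual: adaptive incorporation

# Toy run with random stand-ins for the real encodings.
fusion = KnowledgeFusion()
tokens = torch.randn(2, 16, 768)  # would come from BERT(premise, hypothesis)
facts = torch.randn(2, 5, 768)    # would come from encoding retrieved KG triples
print(fusion(tokens, facts).shape)  # torch.Size([2, 16, 768])
```

    The sigmoid gate lets each token decide per dimension how much external context to mix in, which is one plausible reading of "adaptively incorporates".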

    Beyond information extraction: The role of ontology in military report processing

    Information extraction tools such as SMES transform natural language into a formal representation, e.g. into feature structures. In doing so, these tools exploit linguistic knowledge about the syntactic and morphological regularities of the language used; semantic and pragmatic knowledge, however, they apply only partially at best. Automatic processing of military reports has to result both in a visualization of a report's content on a map and in an update of the underlying database, so that the common operational picture can be kept current. Normally, however, the information delivered by information extraction is not explicit enough for visualization processes and database insertions, because the reports themselves are elliptical, ambiguous, and vague. To overcome this obstacle, the situational context, and thus semantic and pragmatic aspects, have to be taken into account. In this paper, we present a system that uses an ontological module to integrate semantic and pragmatic knowledge. The result of this completion contains all the specifications needed to visualize the report's content on a map and to update the database.
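    As a hedged sketch of how an ontological module can complete an elliptical feature structure, consider the toy fragment below. The ontology entries, field names, and the simple inheritance rule are assumptions for illustration, not the system described in the paper.

```python
# Toy ontology: classes with is_a links and inheritable properties.
ONTOLOGY = {
    "TankCompany": {"is_a": "ArmouredUnit", "default_strength": 14, "map_symbol": "armour"},
    "ArmouredUnit": {"is_a": "MilitaryUnit", "domain": "land"},
}

def complete(feature_structure: dict) -> dict:
    """Enrich an elliptical IE result with inherited ontological knowledge,
    so it carries every field a map display or database insert needs."""
    completed = dict(feature_structure)
    cls = completed.get("type")
    while cls in ONTOLOGY:  # walk up the is_a hierarchy, inheriting defaults
        for key, value in ONTOLOGY[cls].items():
            if key != "is_a":
                completed.setdefault(key.replace("default_", ""), value)
        cls = ONTOLOGY[cls].get("is_a")
    return completed

# "2 enemy tank companies sighted at NB 4512" -> partial feature structure
report = {"type": "TankCompany", "count": 2, "location": "NB 4512"}
print(complete(report))
# -> adds map_symbol, strength, and domain: explicit enough to visualize and store
```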

    SPECIAL ISSUE: New Insights into Meaning Construction and Knowledge Representation

    The ten papers selected for publication in this special issue, entitled "New Insights into Meaning Construction and Knowledge Representation", present the outcomes of recent investigations conducted in both Spanish and international contexts, supported by research projects on various aspects of meaning and knowledge representation. In particular, the findings presented in this volume combine insights from theoretical and computational linguistics, in the context of natural language understanding, with parallel studies conducted within cognitive linguistics, with special reference to the role of metaphor and other cognitive operations in meaning construction. Many of the contributions presented here exemplify the integration of, and collaboration between, linguistics and fields as diverse as Natural Language Processing (NLP), semantic memory loss disorders, aeronautical engineering, and computer science, revealing the need to link contemporary linguistics to other arenas that may have a direct and significant impact on society.

    A Knowledge-based approach to understanding natural language

    Understanding a natural language requires knowledge about that language as a system of representation. Further, when the task is one of understanding an extended discourse, world knowledge is also required. This thesis explores some of the issues involved in representing both kinds of knowledge, and also makes an effort to arrive at some understanding of the relationship between the two. Part of this exploration is an examination of some natural language understanding systems that have attempted to deal with extended discourse, both in the form of stories and in the form of dialogues; the systems examined are heavily dependent on world knowledge. Another part is an effort to build a dialogue system based on speech acts and practical arguments, as they are described in "Recognizing Promises, Advice, Threats, and Warnings", a Master's thesis presented to the Rochester Institute of Technology, School of Computer Science and Technology, in 1986 by Kevin Donaghy. This dialogue system includes a deterministic syntactic parser, a semantic representation based on the idea of case frames, and a context interpreter that recognizes and represents groups of sentences as practical arguments. The Prolog implementation employs a frame package developed and described in "A Frame Virtual Machine in C-Prolog", a Master's thesis presented to the Rochester Institute of Technology, School of Computer Science and Technology, in 1987 by LeMora Hiss. While this automated dialogue system is necessarily limited in the domain it recognizes, the opportunity it affords to build a mechanism and a system of representation raises a range of issues from the syntactic, through the semantic, to the contextual and the pragmatic. Here, the focus of inquiry came to settle on the semantic representation, where the relationship between knowledge about language and knowledge about the world seems to be naturally resident.
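    A hedged sketch of what a case-frame representation for such a dialogue system might look like, transposed to Python for illustration (the original implementation is in Prolog). The slot names and speech-act labels are assumptions, not Donaghy's or Hiss's actual frame structures.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CaseFrame:
    """One semantic frame per clause; slots follow a case-grammar style.
    Illustrative only; not the thesis's Prolog frame package."""
    predicate: str                    # main verb of the clause
    agent: Optional[str] = None       # who performs the act
    beneficiary: Optional[str] = None # who is affected
    theme: Optional[str] = None       # what the act is about
    speech_act: Optional[str] = None  # promise, advice, threat, warning

# "I promise to return the book" -> one frame for the clause
frame = CaseFrame(predicate="return", agent="speaker",
                  beneficiary="hearer", theme="the book",
                  speech_act="promise")

def is_commitment(f: CaseFrame) -> bool:
    """A context interpreter could chain such frames into practical
    arguments; here we only classify the speech act."""
    return f.speech_act in {"promise", "threat"}

print(is_commitment(frame))  # True
```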

    Enhancing natural language understanding using meaning representation and deep learning

    Natural Language Understanding (NLU) is one of the most complex tasks in artificial intelligence. Machine learning was introduced to address the complex and dynamic nature of natural language, and deep learning gained popularity within the NLU community because it can learn features directly from data and cope with that dynamic nature. Deep learning has also been shown to learn hidden features automatically and to outperform most other machine learning approaches for NLU. Deep learning models require natural language inputs to be converted to vectors (word embeddings). Word2Vec and GloVe are word embeddings designed to capture context-based statistics and to provide analogical and lexical relations between words. A purely context-based statistical approach, however, does not capture the prior knowledge required to understand the language carried by words. Moreover, although a deep learning model receives word embeddings as input, language understanding requires Reasoning, Attention and Memory (RAM), and all three are key factors in understanding language. Current deep learning models focus on reasoning, attention, or memory alone; to properly understand a language, all three factors of RAM should be considered. In addition, language normally comes in long sequences, and these long sequences create dependencies that must be captured to understand the language; yet current deep learning models developed to hold longer sequences either forget or suffer from vanishing or exploding gradients. This thesis focuses on these three areas. It introduces a word embedding technique that integrates context-based statistical and semantic relationships with extracts from a knowledge base to hold an enhanced meaning representation. It also introduces a Long Short-Term Reinforced Memory (LSTRM) network, which addresses RAM and is validated by testing on question-answering data sets that require it. Finally, a Long Term Memory (LTM) network is introduced to address language modelling, since good language modelling requires learning from long sequences. The thesis thus demonstrates that integrating semantic knowledge and a knowledge base generates enhanced meaning representations, and that deep learning models capable of achieving RAM and long-term dependencies improve the capability of NLU.
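    As a hedged illustration of the first contribution, the sketch below folds knowledge-base relations into distributional vectors, in the spirit of retrofitting. The toy vocabulary, random vectors, and KB edges are assumptions for illustration, not the thesis's actual embedding technique.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["dog", "puppy", "canine", "car"]
E = {w: rng.normal(size=8) for w in vocab}           # stand-in for Word2Vec/GloVe
KB = {"dog": ["puppy", "canine"], "puppy": ["dog"]}  # e.g. WordNet synonym edges

def enrich(word: str, alpha: float = 0.5) -> np.ndarray:
    """Pull a word's vector toward its KB neighbours, so semantic relations
    the corpus statistics miss are folded into the embedding."""
    neighbours = KB.get(word, [])
    if not neighbours:
        return E[word]
    kb_mean = np.mean([E[n] for n in neighbours], axis=0)
    return alpha * E[word] + (1 - alpha) * kb_mean

before = np.dot(E["dog"], E["puppy"])
after = np.dot(enrich("dog"), enrich("puppy"))
print(f"similarity before: {before:.3f}, after: {after:.3f}")
```

    Blending each vector with the mean of its knowledge-base neighbours is one simple way to inject prior semantic knowledge that context-based statistics alone do not capture.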