
    A Model of Anaphoric Ambiguities using Sheaf Theoretic Quantum-like Contextuality and BERT

    Ambiguities of natural language do not preclude us from using it, and context helps in getting ideas across. They nonetheless pose a key challenge to the development of competent machines that understand natural language and use it as humans do. Contextuality is an unparalleled phenomenon in quantum mechanics, where different mathematical formalisms have been put forward to understand and reason about it. In this paper, we construct a schema for anaphoric ambiguities that exhibits quantum-like contextuality. We use a recently developed criterion of sheaf-theoretic contextuality that is applicable to signalling models. We then take advantage of the neural word embedding engine BERT to instantiate the schema with natural language examples and extract probability distributions for the instances. As a result, plenty of sheaf-contextual examples were discovered in the natural language corpora BERT utilises. Our hope is that these examples will pave the way for future research and for finding ways to extend applications of quantum computing to natural language processing.
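    To make the probability-extraction step concrete, here is a minimal sketch of how a masked language model such as BERT can yield a probability distribution over candidate referents of an ambiguous anaphor; the Winograd-style sentence, the candidate list, and the renormalisation over two referents are illustrative assumptions of this sketch, not the paper's actual pipeline.

        import torch
        from transformers import BertTokenizer, BertForMaskedLM

        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        model = BertForMaskedLM.from_pretrained("bert-base-uncased")
        model.eval()

        # Hypothetical Winograd-style instance; the ambiguous referent is masked.
        sentence = "The trophy did not fit in the suitcase because the [MASK] was too big."
        candidates = ["trophy", "suitcase"]  # assumed single tokens in BERT's vocabulary

        inputs = tokenizer(sentence, return_tensors="pt")
        mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

        with torch.no_grad():
            logits = model(**inputs).logits

        # Full-vocabulary distribution at the masked position...
        probs = logits[0, mask_pos].softmax(dim=-1)

        # ...restricted and renormalised to the two candidate referents.
        raw = {c: probs[0, tokenizer.convert_tokens_to_ids(c)].item() for c in candidates}
        total = sum(raw.values())
        print({c: p / total for c, p in raw.items()})

    Restricting the full vocabulary distribution to the candidate referents and renormalising gives the kind of per-instance distribution that a sheaf-theoretic contextuality analysis would then take as input.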

    Using Textual Emotion Extraction in Context-Aware Computing

    In 2016, the number of global smartphone users will surpass 2 billion. The typical owner uses about 27 apps monthly. On average, users of SwiftKey, an alternative Android software keyboard, type approximately 1800 characters a day. Still, all of the user-generated data of these apps is, for the most part, unused by the owners themselves. To change this, we conducted research in Context-Aware Computing, Natural Language Processing and Affective Computing. The goal was to create an environment for recording this otherwise unused contextual data without losing its historical context, and to create an algorithm that is able to extract emotions from text. We therefore introduce Emotext, a textual emotion extraction algorithm that uses ConceptNet5's real-world knowledge for word interpretation, as well as Cofra, a framework for recording contextual data with time-based versioning.
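    As a rough illustration of knowledge-based word interpretation, the following hypothetical sketch scores words against a handful of basic emotions using ConceptNet's public relatedness endpoint; the emotion list, per-word scoring, and averaging are assumptions of this sketch, not the Emotext algorithm itself.

        import requests

        EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise"]

        def emotion_scores(word):
            # Query ConceptNet's relatedness endpoint once per basic emotion.
            scores = {}
            for emotion in EMOTIONS:
                resp = requests.get(
                    "http://api.conceptnet.io/relatedness",
                    params={"node1": f"/c/en/{word}", "node2": f"/c/en/{emotion}"},
                    timeout=10,
                )
                scores[emotion] = resp.json().get("value", 0.0)
            return scores

        def text_emotions(text):
            # Naive tokenisation; average the per-word scores over the text.
            words = [w.strip(".,!?").lower() for w in text.split()]
            totals = {e: 0.0 for e in EMOTIONS}
            for w in words:
                for e, s in emotion_scores(w).items():
                    totals[e] += s
            return {e: s / max(len(words), 1) for e, s in totals.items()}

        print(text_emotions("I am delighted with this wonderful gift"))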

    a state of the art

    The aim of this paper is to review the most important research initiatives concerning context in computer science. Context aspects are a key issue for many research communities, such as artificial intelligence, real-time systems, or mobile computing, because context relates information processing and communication to aspects of the situations in which such processing occurs. The overview addresses the ways context is defined and understood in various computer science fields and tries to estimate the role of context in the novel scenario of the Semantic Web, by studying the particularities of this setting compared to those of Artificial Intelligence or Natural Language Processing, and the consequences of these particularities for resolving the key questions concerning contextual aspects.

    SkillBot: Towards Data Augmentation using Transformer language model and linguistic evaluation

    Creating accurate, closed-domain, and machine learning-based chatbots that perform language understanding (intent prediction/detection) and language generation (response generation) requires significant datasets derived from specific knowledge domains. The common challenge in developing a closed-domain chatbot application is the lack of a comprehensive dataset. Such scarcity can be mitigated by augmenting the dataset with state-of-the-art technologies from the field of Natural Language Processing known as 'Transformer Models'. Our applied computing project experimented with a 'Generative Pre-trained Transformer' model, a unidirectional transformer decoder model, for augmenting an original dataset that was limited in size and manually authored. This model uses unidirectional contextual representation, i.e., text input is processed from left to right while computing embeddings corresponding to the input sentences. The primary goal of the project was to leverage the potential of a pre-trained transformer-based language model in augmenting an existing but limited dataset. Additionally, the idea behind using the model for text generation and appending the generated embedding to the supplied input embedding was to preserve the intent of the augmented utterances and to find different forms of expression for the same intent that potential users might employ in the future. Our experiment showed improved language understanding and generation performance for the chatbot model trained on the augmented dataset, indicating that a pre-trained language model can be beneficial for the effective working of natural language-based applications such as a chatbot.
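    A minimal sketch of this kind of generative augmentation, using the publicly available GPT-2 as a stand-in for the project's pre-trained transformer decoder; the intent label, seed utterances, prompt format, and sampling settings are illustrative assumptions.

        from transformers import pipeline, set_seed

        generator = pipeline("text-generation", model="gpt2")
        set_seed(42)

        # Hypothetical seed utterances, grouped by intent label.
        seed_utterances = {
            "book_flight": [
                "I want to book a flight to Paris",
                "Can you reserve a plane ticket for me",
            ],
        }

        augmented = {}
        for intent, utterances in seed_utterances.items():
            augmented[intent] = []
            for utterance in utterances:
                outputs = generator(
                    utterance,
                    max_length=30,
                    num_return_sequences=3,
                    do_sample=True,
                    pad_token_id=50256,  # GPT-2's EOS token, reused for padding
                )
                # Each sampled continuation is kept under the original intent
                # label, so augmented utterances inherit the seed's intent.
                augmented[intent].extend(o["generated_text"] for o in outputs)

        for intent, utterances in augmented.items():
            print(intent, len(utterances), "augmented utterances")

    In practice, one would filter the generated candidates (for example, with an intent classifier or human review) before adding them to the training set, since free-form sampling can drift away from the seed intent.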