8 research outputs found

    Generating Effective Sentence Representations: Deep Learning and Reinforcement Learning Approaches

    Natural language processing (NLP) is one of the most important technologies of the information age, and understanding complex language utterances is a crucial part of artificial intelligence. Many natural language applications are powered by machine learning models performing a large variety of underlying tasks. Recently, deep learning approaches have obtained very high performance across many NLP tasks. To achieve this level of performance, it is crucial for computers to have an appropriate representation of sentences. The tasks addressed in this thesis are best approached using shallow semantic representations: vectors embedded in a semantic space. We present a variety of novel deep learning approaches for generating effective sentence representations in this space. These semantic representations can be either general or task-specific; we focus on learning task-specific sentence representations, where the tasks often overlap substantially. We design a set of general-purpose and task-specific sentence encoders that combine word-level semantic knowledge with word- and sentence-level syntactic information. For the former, we perform an intelligent amalgamation of word vectors using modern deep learning modules. For the latter, we use word-level knowledge, such as part-of-speech, spelling, and suffix features, together with sentence-level information drawn from natural language parse trees, which provide the hierarchical structure of a sentence along with the grammatical relations between its words. Further expertise is added with reinforcement learning, which guides a machine learning model through a reward-penalty game. Rather than striving for performance alone, we aim to design models that are transparent and explainable, providing an intuitive explanation of each model's design and of how it makes its decisions. Our extensive experiments show that these models achieve competitive performance compared with currently available state-of-the-art generalised and task-specific sentence encoders. All but one of the tasks deal with English-language texts; the multilingual semantic similarity task required creating a multilingual corpus, for which we provide a novel semi-supervised approach to generate artificial negative samples when only positive samples are available.
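    As a hedged illustration of the kind of encoder this abstract describes, the sketch below combines word embeddings (semantic knowledge) with part-of-speech embeddings (word-level syntactic knowledge) and pools the result into one sentence vector. The BiLSTM, the mean pooling, and all vocabulary sizes and dimensions are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of a sentence encoder mixing semantic and syntactic features.
# All sizes and the pooling strategy are assumptions for illustration.
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, vocab_size=10_000, pos_size=50,
                 word_dim=300, pos_dim=32, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)  # semantic knowledge
        self.pos_emb = nn.Embedding(pos_size, pos_dim)      # syntactic knowledge
        self.rnn = nn.LSTM(word_dim + pos_dim, hidden,
                           bidirectional=True, batch_first=True)

    def forward(self, word_ids, pos_ids):
        # Concatenate semantic and syntactic features per token.
        x = torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids)], dim=-1)
        out, _ = self.rnn(x)    # contextualise tokens with a BiLSTM
        return out.mean(dim=1)  # mean-pool tokens into one sentence vector

# Example: one sentence of 5 tokens -> one 512-dim sentence representation.
enc = SentenceEncoder()
sent_vec = enc(torch.randint(0, 10_000, (1, 5)), torch.randint(0, 50, (1, 5)))
print(sent_vec.shape)  # torch.Size([1, 512])
```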

    Natural Language Processing: Emerging Neural Approaches and Applications

    This Special Issue highlights the most recent research being carried out in the NLP field and discusses related open issues, with a particular focus both on emerging approaches for language learning, understanding, production, and grounding, whether interactive or autonomous, from data in cognitive and neural systems, and on their potential or actual applications in different domains.

    Context-Aware Message-Level Rumour Detection with Weak Supervision

    Social media has become a primary source of all sorts of information, beyond a mere communication medium, and its intrinsic nature allows a continuous, massive flow of misinformation to have a severe impact worldwide. In particular, rumours emerge unexpectedly and spread quickly, making it challenging to track down their origins and stop their propagation. One of the most promising solutions is to identify rumour-mongering messages as early as possible, commonly referred to as "Early Rumour Detection (ERD)". This dissertation researches ERD on social media by exploiting weak supervision and contextual information. Weak supervision is a branch of machine learning in which noisy and less precise sources (e.g. data patterns) are leveraged to augment limited high-quality labelled data (Ratner et al., 2017); this is intended to reduce the cost and increase the efficiency of hand-labelling large-scale data. The thesis aims to study whether identifying rumours before they go viral is possible and to develop an architecture for ERD at the individual post level. To this end, it first explores major bottlenecks of current ERD and uncovers a research gap between system design and real-world application, which has received little attention from the ERD research community. One bottleneck is limited labelled data, for which weakly supervised methods to augment limited labelled training data are introduced. The other bottleneck is the enormous amount of noisy data; a framework unifying burst detection based on temporal signals with burst summarisation is investigated to identify potential rumours (i.e. input to rumour detection models) by filtering out uninformative messages. Finally, a novel method that jointly learns rumour sources and their contexts (i.e. conversational threads) for ERD is proposed, along with an extensive evaluation setting for ERD systems.
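    As a hedged sketch of the weak supervision idea cited above (Ratner et al., 2017), the snippet below uses noisy labelling functions that vote on unlabelled posts and aggregates their votes into training labels for a rumour detector. The specific cue phrases, post fields, and majority-vote aggregation are illustrative assumptions, not the thesis's actual labelling sources.

```python
# Minimal sketch of weak supervision via labelling functions (LFs).
# LFs may abstain; non-abstaining votes are combined by simple majority.
RUMOUR, NOT_RUMOUR, ABSTAIN = 1, 0, -1

def lf_unverified_cue(post):
    # Hedging phrases often accompany unverified claims (illustrative pattern).
    cues = ("unconfirmed", "breaking", "is it true", "allegedly")
    return RUMOUR if any(c in post["text"].lower() for c in cues) else ABSTAIN

def lf_trusted_source(post):
    # A verified, widely followed account is weak evidence against a rumour.
    return NOT_RUMOUR if post["verified"] and post["followers"] > 100_000 else ABSTAIN

def weak_label(post, lfs=(lf_unverified_cue, lf_trusted_source)):
    votes = [v for v in (lf(post) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN  # no LF fired; leave the post unlabelled
    return RUMOUR if votes.count(RUMOUR) >= votes.count(NOT_RUMOUR) else NOT_RUMOUR

post = {"text": "BREAKING: unconfirmed reports of ...", "verified": False, "followers": 42}
print(weak_label(post))  # 1 -> weakly labelled as a rumour
```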

    False textual information detection, a deep learning approach

    Many approaches exist for fact checking in fake news identification, which is the focus of this thesis. Current approaches still perform poorly at scale because of a lack of authority, insufficient evidence, or, in certain cases, reliance on a single piece of evidence. To address the lack of evidence and the inability of models to generalise across domains, we propose a style-aware model for detecting false information that improves on existing performance. When we evaluated its generalisation ability on news articles and Twitter corpora, our model proved effective at detecting false information. We then propose to improve fact checking performance by incorporating warrants; we developed a highly efficient prediction model and demonstrated that incorporating warrants is beneficial for fact checking. Because external warrant data are scarce, we develop a novel model for generating warrants that aid in determining the credibility of a claim. The results indicate that combining a pre-trained language model with a multi-agent model generates high-quality, diverse warrants that improve task performance. To counter biased opinions and support rational judgments, we propose a model that can generate multiple perspectives on a claim. Experiments confirm that our Perspectives Generation model produces diverse perspectives with higher quality and diversity than any baseline model. Additionally, we propose to improve the model's detection capability by generating an explainable alternative factual claim that assists the reader in identifying the subtle issues that result in factual errors; our examination demonstrates that this does indeed increase the veracity of the claim. Finally, whereas current research has treated stance detection and fact checking separately, we propose a unified model that integrates both tasks. Classification results demonstrate that our proposed model outperforms state-of-the-art methods.
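    One common way to unify stance detection and fact checking, sketched below under assumptions (the abstract does not specify the architecture), is a shared encoder with two task heads trained jointly on a summed loss. The bag-of-words encoder, label sets, and dimensions here are purely illustrative.

```python
# Minimal sketch of a unified stance-detection + veracity model:
# one shared encoder, two classification heads, joint loss.
import torch
import torch.nn as nn

class UnifiedFactChecker(nn.Module):
    def __init__(self, vocab=20_000, dim=128, n_stances=4, n_verdicts=3):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)           # shared text encoder
        self.stance_head = nn.Linear(dim, n_stances)     # e.g. agree/disagree/discuss/unrelated
        self.veracity_head = nn.Linear(dim, n_verdicts)  # e.g. true/false/unverified

    def forward(self, token_ids):
        h = self.emb(token_ids)  # one pooled vector per input
        return self.stance_head(h), self.veracity_head(h)

model = UnifiedFactChecker()
stance_logits, veracity_logits = model(torch.randint(0, 20_000, (8, 30)))
loss = nn.functional.cross_entropy(stance_logits, torch.randint(0, 4, (8,))) \
     + nn.functional.cross_entropy(veracity_logits, torch.randint(0, 3, (8,)))
loss.backward()  # a single backward pass trains both tasks jointly
```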

    Automated Deduction – CADE 28

    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organised into the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.

    Red ink: Native Americans picking up the pen in the colonial period

    This dissertation looks at the ways that Native Americans appropriated alphabetic literacy for their own purposes in the colonial period. Studies of Native writing tend to begin with the Mohegan preacher Samson Occom, whose A Sermon Preached by Samson Occom (1772) is the first known publication by a Native author on the North American continent. This work, however, locates Occom near the end of a series of earlier Native contacts with the written word, the fragments of which are scattered throughout the archive of the colonizer. While scholars have become largely familiar with the representational modes in American literature that force the Native figure into patterns of either assimilation or extinction, I complicate this paradigm by exploring the interventions of seventeenth- and eighteenth-century Natives whose writings reflect active attempts at community building within traditional Native frameworks. I argue that once Native writings are removed from their colonized contexts and recentered in Native space, we begin to see how such notes, letters, fragments, written testimonies, and, eventually, publications were composed in the service of survivance and continuance rather than as capitulations to the dominant culture. Too often the Native acquisition of literacy has been equated with being fitted into a cultural straitjacket, as though once the rhetorics of print discourse have been adopted, one can speak only through the colonizer's voice. Not until recently have some critics, particularly Native American scholars, come to question the interpretive utility of such convictions and begun to think instead upon the contiguous line of Native tradition that runs from the era prior to colonization into the present day. I draw from the archival resources of both American and Native American literature in an attempt to review the phenomenon of colonization as a series of negotiations and survival strategies that can be more fully comprehended through a focused recognition of indigenous rhetorical and intellectual traditions. Rather than regarding the moment (or moments) of cultural contact as one in which European culture violently and tragically dismantles Native culture, I suggest how an understanding of this period must be complicated by a deeper recognition of the communitarian responses Native Americans were forging to European presence on indigenous soil.

    The HTM cortical model and its application to linguistic knowledge

    The problem addressed in this research is finding a neurocomputational model for representing and understanding lexical knowledge, using the HTM cortical algorithm, which models the mechanism by which information is processed in the human neocortex. Automatic natural language understanding requires machines to have a deep knowledge of natural language, which is currently far from being achieved. In general, computational models for Natural Language Processing (NLP), for analysis and understanding as well as for generation, use algorithms grounded in mathematical and linguistic models that attempt to emulate the way language has traditionally been processed, for example by recovering the implicit hierarchical structure of sentences or the inflectional endings of words. These models are useful because they serve to build concrete applications such as data extraction, text classification, or opinion analysis. However, despite their usefulness, machines do not really understand what they are doing with any of these models. The question addressed in this work is therefore whether it is actually possible to computationally model the human neocortical processes that govern the handling of semantic information in the lexicon. This research question constitutes the first level towards understanding natural language processing at higher linguistic levels…
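    A central ingredient of HTM is the sparse distributed representation (SDR): concepts are encoded as high-dimensional sparse binary vectors, and semantic relatedness corresponds to the overlap of active bits. The sketch below illustrates only that general idea with toy vectors; the dimensions, sparsity, and "dog"/"cat" construction are illustrative assumptions, not the thesis's actual encodings.

```python
# Minimal sketch of SDR overlap as a semantic-similarity signal (toy values).
import numpy as np

n, active = 2048, 40  # HTM-style dimensionality with ~2% active bits
rng = np.random.default_rng(0)

def random_sdr():
    sdr = np.zeros(n, dtype=bool)
    sdr[rng.choice(n, size=active, replace=False)] = True
    return sdr

dog = random_sdr()
# Build "cat" to share half of "dog"'s active bits, mimicking relatedness.
cat = dog.copy()
off = rng.choice(np.flatnonzero(cat), size=active // 2, replace=False)
cat[off] = False
cat[rng.choice(np.flatnonzero(~cat), size=active // 2, replace=False)] = True

def overlap(a, b):
    return int(np.count_nonzero(a & b))  # shared active bits = similarity

print(overlap(dog, cat))           # ~20: related concepts overlap strongly
print(overlap(dog, random_sdr()))  # near 0: unrelated concepts barely overlap
```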