148 research outputs found

    Neural End-to-End Learning for Computational Argumentation Mining

    We investigate neural techniques for end-to-end computational argumentation mining (AM). We frame AM both as a token-based dependency parsing problem and as a token-based sequence tagging problem, including a multi-task learning setup. Contrary to models that operate on the argument component level, we find that framing AM as dependency parsing leads to subpar performance. In contrast, less complex (local) tagging models based on BiLSTMs perform robustly across classification scenarios and are able to capture the long-range dependencies inherent to the AM problem. Moreover, we find that jointly learning 'natural' subtasks in a multi-task learning setup improves performance. Comment: To be published at ACL 2017.
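
    The tagging framing described above can be illustrated with a minimal BiLSTM token tagger. This is only a sketch of the general idea, not the authors' model: the BIO label set, vocabulary size, and hyperparameters below are illustrative assumptions.

```python
# Minimal BiLSTM token tagger, illustrating the "AM as sequence tagging" framing.
# Label set, sizes, and hyperparameters are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

LABELS = ["O", "B-Claim", "I-Claim", "B-Premise", "I-Premise"]  # assumed BIO scheme

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, num_labels=len(LABELS)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        x = self.embed(token_ids)              # (batch, seq_len, emb_dim)
        h, _ = self.lstm(x)                    # (batch, seq_len, 2*hidden_dim)
        return self.out(h)                     # per-token label logits

# Usage: one training step with cross-entropy over token labels (dummy data).
model = BiLSTMTagger(vocab_size=10000)
tokens = torch.randint(1, 10000, (2, 20))      # dummy batch of 2 sentences
gold = torch.randint(0, len(LABELS), (2, 20))  # dummy gold BIO labels
loss = nn.CrossEntropyLoss()(model(tokens).view(-1, len(LABELS)), gold.view(-1))
loss.backward()
```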

    Aspect-Controlled Neural Argument Generation

    We rely on arguments in our daily lives to deliver our opinions and base them on evidence, making them more convincing in turn. However, finding and formulating arguments can be challenging. In this work, we train a language model for argument generation that can be controlled on a fine-grained level to generate sentence-level arguments for a given topic, stance, and aspect. We define argument aspect detection as a necessary method to allow this fine-grained control and crowdsource a dataset with 5,032 arguments annotated with aspects. Our evaluation shows that our generation model is able to generate high-quality, aspect-specific arguments. Moreover, these arguments can be used to improve the performance of stance detection models via data augmentation and to generate counter-arguments. We publish all datasets and code to fine-tune the language model.
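
    The fine-grained control described above can be sketched as conditioning a language model on a prefixed control sequence for topic, stance, and aspect. The prompt format, base model, and decoding settings below are assumptions for illustration; the paper's own control scheme and fine-tuning data are not reproduced here.

```python
# Sketch of controlled generation via a prefixed control sequence (topic, stance, aspect).
# Prompt format and base model are illustrative assumptions; in practice the model
# would be fine-tuned on aspect-annotated arguments so it learns to follow the prefix.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def generate_argument(topic, stance, aspect, max_new_tokens=40):
    prompt = f"[TOPIC] {topic} [STANCE] {stance} [ASPECT] {aspect} [ARGUMENT]"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Return only the newly generated continuation, not the control prefix.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

print(generate_argument("nuclear energy", "con", "waste disposal"))
```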

    Cross-lingual Argumentation Mining: Machine Translation (and a bit of Projection) is All You Need!

    Argumentation mining (AM) requires the identification of complex discourse structures and has lately been applied with success monolingually. In this work, we show that the existing resources are, however, not adequate for assessing cross-lingual AM, due to their heterogeneity or lack of complexity. We therefore create suitable parallel corpora by (human and machine) translating a popular AM dataset consisting of persuasive student essays into German, French, Spanish, and Chinese. We then compare (i) annotation projection and (ii) bilingual word embedding-based direct transfer strategies for cross-lingual AM, finding that the former performs considerably better and almost eliminates the loss from cross-lingual transfer. Moreover, we find that annotation projection works equally well when using either costly human or cheap machine translations. Our code and data are available at \url{http://github.com/UKPLab/coling2018-xling_argument_mining}. Comment: Accepted at COLING 2018.
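
    A minimal sketch of the annotation projection strategy mentioned above: token-level BIO labels are transferred from a source sentence to its (machine) translation via word alignments. The alignment input and the simple one-to-one projection rule are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch of annotation projection: transfer token-level BIO labels from a source
# sentence to its translation using word alignments (e.g. from an off-the-shelf aligner).
def project_labels(src_labels, alignments, tgt_len):
    """src_labels: BIO label per source token.
    alignments: list of (src_idx, tgt_idx) word-alignment pairs.
    tgt_len: number of target tokens."""
    projected = ["O"] * tgt_len
    for src_idx, tgt_idx in alignments:
        label = src_labels[src_idx]
        if label != "O":
            projected[tgt_idx] = label
    # Repair BIO consistency: an I-X not preceded by a label of type X becomes B-X.
    for i, label in enumerate(projected):
        if label.startswith("I-") and (i == 0 or projected[i - 1][2:] != label[2:]):
            projected[i] = "B-" + label[2:]
    return projected

# Usage with a toy alignment that reorders the words.
src = ["B-Claim", "I-Claim", "O"]
alignment = [(0, 1), (1, 2), (2, 0)]
print(project_labels(src, alignment, tgt_len=3))  # ['O', 'B-Claim', 'I-Claim']
```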

    How to Probe Sentence Embeddings in Low-Resource Languages: On Structural Design Choices for Probing Task Evaluation

    Sentence encoders map sentences to real-valued vectors for use in downstream applications. To peek into these representations - e.g., to increase interpretability of their results - probing tasks have been designed which query them for linguistic knowledge. However, designing probing tasks for lesser-resourced languages is tricky, because these often lack large-scale annotated data or (high-quality) dependency parsers as a prerequisite of probing task design in English. To investigate how to probe sentence embeddings in such cases, we examine the sensitivity of probing task results to structural design choices, conducting the first such large-scale study. We show that design choices like the size of the annotated probing dataset and the type of classifier used for evaluation do (sometimes substantially) influence probing outcomes. We then probe embeddings in a multilingual setup with design choices that lie in a 'stable region', as identified for English, and find that results on English do not transfer to other languages. Fairer and more comprehensive sentence-level probing evaluation should thus be carried out on multiple languages in the future.
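
    A minimal sketch of a single probing task under assumed inputs: a small classifier is trained on frozen sentence embeddings to predict a linguistic property. The probing dataset size and the classifier type are exactly the kind of design choices the study varies; the random arrays below are stand-ins, not real data.

```python
# Sketch of a probing task: train a small classifier on frozen sentence embeddings
# to predict a linguistic property. All data below is a random stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(2000, 768))      # stand-in for frozen sentence embeddings
labels = rng.integers(0, 2, size=2000)         # stand-in for a binary linguistic property

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, labels, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # the probing classifier
print("probing accuracy:", accuracy_score(y_te, probe.predict(X_te)))
```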

    Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks

    There are two approaches for pairwise sentence scoring: cross-encoders, which perform full attention over the input pair, and bi-encoders, which map each input independently to a dense vector space. While cross-encoders often achieve higher performance, they are too slow for many practical use cases. Bi-encoders, on the other hand, require substantial training data and fine-tuning on the target task to achieve competitive performance. We present a simple yet efficient data augmentation strategy called Augmented SBERT, where we use the cross-encoder to label a larger set of input pairs to augment the training data for the bi-encoder. We show that, in this process, selecting the sentence pairs is non-trivial and crucial for the success of the method. We evaluate our approach on multiple tasks (in-domain) as well as on a domain adaptation task. Augmented SBERT achieves an improvement of up to 6 points for in-domain and of up to 37 points for domain adaptation tasks compared to the original bi-encoder performance. Comment: Accepted at NAACL 2021.
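
    The augmentation strategy can be sketched with the sentence-transformers library: a cross-encoder silver-labels additional sentence pairs, which then extend the bi-encoder's training data. The model names and the naive pair selection below are assumptions; as the abstract notes, how pairs are selected is crucial in practice.

```python
# Sketch of the Augmented SBERT idea: cross-encoder labels extra pairs ("silver" data),
# which are added to the bi-encoder's training set. Model names are assumed examples.
from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, SentenceTransformer, InputExample, losses

cross_encoder = CrossEncoder("cross-encoder/stsb-roberta-base")   # assumed model name
bi_encoder = SentenceTransformer("distilroberta-base")            # assumed model name

gold_pairs = [("A man is eating food.", "A man eats something.")]  # tiny toy gold pair
unlabeled_pairs = [("A woman plays guitar.", "Someone is playing an instrument.")]

# 1) Silver-label the unlabeled pairs with the cross-encoder.
silver_scores = cross_encoder.predict(unlabeled_pairs)

# 2) Combine gold and silver examples into one training set.
train_examples = [
    InputExample(texts=list(p), label=float(s))
    for p, s in zip(unlabeled_pairs, silver_scores)
] + [InputExample(texts=list(p), label=1.0) for p in gold_pairs]

# 3) Fine-tune the bi-encoder on the augmented data.
train_loader = DataLoader(train_examples, shuffle=True, batch_size=8)
bi_encoder.fit(
    train_objectives=[(train_loader, losses.CosineSimilarityLoss(bi_encoder))],
    epochs=1,
)
```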

    Cognitive Workload Induced by Information Systems: Introducing an Objective Way of Measuring based on Pupillary Diameter Responses

    We present a novel method to derive users' cognitive workload intensity from their pupillary diameter responses using eye-tracking technology. Contrary to several prior instruments that rely on a static measurement, our new method uses a dynamic measurement and is applicable to all kinds of experimental settings with varying degrees of difficulty. The method uses a hybrid data analysis approach, making it suitable both for analyzing basic information systems that support the fulfillment of less difficult tasks and for evaluating more complex information systems containing several dynamic web elements, interaction functions, and advertising banners that support tasks of all degrees of difficulty. We successfully evaluated the method in two experiments with different settings. The results of these experiments, based on pupillary diameter responses, show significant differences between tasks of low, medium, and high demand levels and outline the suitability of our new method to accurately estimate IS users' cognitive workload in different scenarios.
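
    As a rough illustration of the underlying idea (not the paper's hybrid analysis pipeline), pupil diameter samples can be baseline-corrected per participant and compared across task demand levels. All data, column names, and baseline values below are made up for the sketch.

```python
# Illustrative sketch: baseline-correct pupil diameter per participant and compare
# mean dilation across assumed task demand levels. Data and columns are made up.
import pandas as pd

samples = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "task_level": ["low", "low", "high", "high", "low", "low", "high", "high"],
    "pupil_mm": [3.1, 3.2, 3.8, 3.9, 2.9, 3.0, 3.5, 3.6],
})
baseline = pd.Series({1: 3.0, 2: 2.8})  # assumed per-participant resting baseline (mm)

samples["dilation"] = samples["pupil_mm"] - samples["participant"].map(baseline)
print(samples.groupby("task_level")["dilation"].mean())  # higher demand -> larger dilation
```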

    APAE de Areia/PB: A Historical Study of Its Importance to the Community

    This study sought to understand the importance of a special school (APAE de Areia/PB) with respect to the specialized educational services it offers to the community, and to identify the challenges and advances faced by the institution amid the culture of inclusion currently in force in Brazil. Documentary research and interviews with the team of administrators, teachers, specialists, volunteers, and parents were used. The institution arose from the need to serve people with special educational needs, given the lack of support for these students in the municipality's regular schools. The services provided by the institution and its teachers have been relevant to the students' integral development, above all by guaranteeing regular and specialized educational support in physiotherapy, speech therapy, and Libras (Brazilian Sign Language) courses, and by complementing regular education in some cases.

    Multi-Task Learning for Argumentation Mining in Low-Resource Settings

    We investigate whether and where multi-task learning (MTL) can improve performance on NLP problems related to argumentation mining (AM), in particular argument component identification. Our results show that MTL performs particularly well (and better than single-task learning) when little training data is available for the main task, a common scenario in AM. Our findings challenge previous assumptions that conceptualizations across AM datasets are divergent and that MTL is difficult for semantic or higher-level tasks. Comment: Accepted at NAACL 2018.
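
    A minimal sketch of hard parameter sharing, a common MTL setup for such tagging problems: a shared BiLSTM encoder feeds one output head per task. This is not claimed to match the authors' exact architecture; the sizes and the auxiliary task are illustrative assumptions.

```python
# Sketch of hard parameter sharing for MTL: shared BiLSTM encoder, one head per task.
# Sizes and the auxiliary task are illustrative assumptions.
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    def __init__(self, vocab_size, num_labels_main, num_labels_aux,
                 emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.head_main = nn.Linear(2 * hidden_dim, num_labels_main)  # AM component tagging
        self.head_aux = nn.Linear(2 * hidden_dim, num_labels_aux)    # auxiliary task head

    def forward(self, token_ids, task):
        h, _ = self.encoder(self.embed(token_ids))
        return self.head_main(h) if task == "main" else self.head_aux(h)

# Usage: alternate batches between tasks so the encoder parameters are shared.
model = SharedEncoderMTL(vocab_size=10000, num_labels_main=5, num_labels_aux=3)
batch = torch.randint(1, 10000, (2, 15))
main_logits = model(batch, task="main")   # shape (2, 15, 5)
aux_logits = model(batch, task="aux")     # shape (2, 15, 3)
```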