
    Digital Transformation of Science: AI-Assisted Collaborative Reading and Evaluation

    Peer review is a common instrument of quality control in the academic world. New scientific knowledge is only accepted and, in many cases, only published once it has passed this barrier. However, in its current state, peer review has shortcomings: it is time-intensive and often unreliable. To help make peer reviewing faster and more reliable, and to guide young researchers through this process, we are developing CARE (Collaborative Augmented Reading Environment, formerly PEER), an Artificial Intelligence (AI) assisted software tool that supports researchers in the annotation phase of reading and evaluating scientific publications. To provide the maximum benefit to researchers from any field, we design CARE to adapt to (i) the user, (ii) the domain of research, and (iii) the document at hand. This report first introduces the CARE software tool in greater detail. It then presents the setup and results of two studies performed at the Center for Advanced Internet Studies (CAIS): a survey and a user study. These aimed to elucidate the requirements of the CAIS community members for CARE and to test the usability of the software. The report summarizes the main findings, showing that the tool is useful to the study participants, and provides an outlook on the future of CARE.

    Using natural language processing to support peer‐feedback in the age of artificial intelligence: a cross‐disciplinary framework and a research agenda

    Artificial intelligence is advancing rapidly. The new-generation large language models, such as ChatGPT and GPT-4, have the potential to transform educational approaches, such as peer-feedback. To investigate peer-feedback at the intersection of natural language processing (NLP) and educational research, this paper suggests a cross-disciplinary framework that aims to facilitate the development of NLP-based adaptive measures for supporting peer-feedback processes in digital learning environments. To conceptualize this process, we introduce a peer-feedback process model, which describes learners' activities and textual products. Further, we introduce a terminological and procedural scheme that facilitates systematically deriving measures to foster the peer-feedback process, and we discuss how NLP may enhance the adaptivity of such learning support. Building on prior research on education and NLP, we apply this scheme to all learner activities of the peer-feedback process model to exemplify a range of NLP-based adaptive support measures. We also discuss the current challenges and suggest directions for future cross-disciplinary research on the effectiveness and other dimensions of NLP-based adaptive support for peer-feedback. Building on our suggested framework, future research and collaborations at the intersection of education and NLP can innovate peer-feedback in digital learning environments.

    Transformer-based Argument Mining for Healthcare Applications

    Argument(ation) Mining (AM) typically aims at identifying argumentative components in text and predicting the relations among them. Evidence-based decision making in the healthcare domain aims to support clinicians in their deliberation process to establish the best course of action for the case under evaluation. Although the reasoning stage of such frameworks has received considerable attention, little effort has been devoted to the mining stage. We extended an existing dataset by annotating 500 abstracts of Randomized Controlled Trials (RCTs) from the MEDLINE database, leading to a dataset of 4198 argument components and 2601 argument relations on different diseases (i.e., neoplasm, glaucoma, hepatitis, diabetes, hypertension). We propose a complete argument mining pipeline for RCTs, classifying argument components as evidence and claims, and predicting the relation, i.e., attack or support, holding between those argument components. We experiment with deep bidirectional transformers in combination with different neural architectures (i.e., LSTM, GRU and CRF) and obtain a macro F1-score of .87 for component detection and .68 for relation prediction, outperforming current state-of-the-art end-to-end AM systems.
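    The two-stage pipeline this abstract describes (component detection, then relation prediction over component pairs) can be sketched in miniature. The paper uses deep bidirectional transformers; the stand-in below substitutes a TF-IDF + logistic regression classifier for each stage, and all training sentences are invented for illustration only:

    ```python
    # Minimal two-stage argument mining sketch (not the paper's system):
    # stage 1 labels text spans as claim vs. evidence; stage 2 labels
    # ordered component pairs as support vs. attack.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Stage 1: argument component classification (toy training data).
    components = [
        ("The treatment significantly reduced blood pressure.", "claim"),
        ("Drug A is an effective therapy for hypertension.", "claim"),
        ("Mean systolic pressure fell by 12 mmHg in the trial arm.", "evidence"),
        ("Adverse events occurred in 3% of the 500 enrolled patients.", "evidence"),
    ]
    texts, labels = zip(*components)
    component_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    component_clf.fit(texts, labels)

    # Stage 2: relation prediction over component pairs, encoded here by
    # concatenating the two texts with a separator token.
    pairs = [
        ("Pressure fell by 12 mmHg. [SEP] The treatment is effective.", "support"),
        ("Adverse events were frequent. [SEP] The treatment is safe.", "attack"),
        ("Symptoms improved in most patients. [SEP] The drug works.", "support"),
        ("Mortality did not differ between arms. [SEP] The drug extends survival.", "attack"),
    ]
    pair_texts, pair_labels = zip(*pairs)
    relation_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    relation_clf.fit(pair_texts, pair_labels)

    # Run both stages on unseen text.
    print(component_clf.predict(["Heart rate dropped by 10 bpm in the trial arm."]))
    print(relation_clf.predict(["Mortality did not differ. [SEP] The therapy helps."]))
    ```

    In the paper's setting, each stage would instead fine-tune a transformer encoder (optionally topped with an LSTM, GRU, or CRF layer), but the pipeline structure, components first, then relations between them, is the same.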

    Argumentation models and their use in corpus annotation: practice, prospects, and challenges

    The study of argumentation is transversal to several research domains, from philosophy to linguistics, from the law to computer science and artificial intelligence. In discourse analysis, several distinct models have been proposed to harness argumentation, each with a different focus or aim. To analyze the use of argumentation in natural language, several corpus annotation efforts have been carried out, with a more or less explicit grounding in one of such theoretical argumentation models. In fact, given the recent growing interest in argument mining applications, argument-annotated corpora are crucial to train machine learning models in a supervised way. However, the proliferation of such corpora has led to a wide disparity in the granularity of the argument annotations employed. In this paper, we review the most relevant theoretical argumentation models, after which we survey argument annotation projects closely following those theoretical models. We also highlight the main simplifications that are often introduced in practice. Furthermore, we glimpse other annotation efforts that are not so theoretically grounded but instead follow a shallower approach. It turns out that most argument annotation projects make their own assumptions and simplifications, both in terms of the textual genre they focus on and in terms of adapting the adopted theoretical argumentation model for their own agenda. Issues of compatibility among argument-annotated corpora are discussed by looking at the problem from a syntactical, semantic, and practical perspective. Finally, we discuss current and prospective applications of models that take advantage of argument-annotated corpora.