764 research outputs found

    Learning from Auxiliary Sources in Argumentative Revision Classification

    Full text link
    We develop models to classify desirable reasoning revisions in argumentative writing. We explore two approaches -- multi-task learning and transfer learning -- to take advantage of auxiliary sources of revision data for similar tasks. Results of intrinsic and extrinsic evaluations show that both approaches can indeed improve classifier performance over baselines. While multi-task learning shows that training on different sources of data at the same time may improve performance, transfer learning better represents the relationship between the data.
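
    The multi-task direction described above can be pictured as a shared encoder with one classification head per source of revision data. The following is a minimal, hypothetical PyTorch sketch; the feature size, label counts, and alternating-batch schedule are illustrative assumptions, not the paper's actual architecture or training regime.

```python
# Minimal multi-task sketch: a shared encoder plus two task-specific heads,
# assuming bag-of-words features for a target and an auxiliary revision task.
import torch
import torch.nn as nn

class SharedRevisionClassifier(nn.Module):
    def __init__(self, vocab_size=5000, hidden=256, n_target=3, n_aux=4):
        super().__init__()
        # Shared encoder learns a representation common to both revision tasks.
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        # Task-specific heads keep the two label spaces separate.
        self.target_head = nn.Linear(hidden, n_target)
        self.aux_head = nn.Linear(hidden, n_aux)

    def forward(self, x, task):
        h = self.encoder(x)
        return self.target_head(h) if task == "target" else self.aux_head(h)

model = SharedRevisionClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Alternate batches from the target and auxiliary corpora so that the shared
# encoder is updated by both sources of revision data.
for step in range(100):
    task = "target" if step % 2 == 0 else "aux"
    x = torch.rand(16, 5000)                              # stand-in feature batch
    y = torch.randint(0, 3 if task == "target" else 4, (16,))
    loss = loss_fn(model(x, task), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```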

    Identifying Editor Roles in Argumentative Writing from Student Revision Histories

    Full text link
    We present a method for identifying editor roles from students' revision behaviors during argumentative writing. We first develop a method for applying a topic modeling algorithm to identify a set of editor roles from a vocabulary capturing three aspects of student revision behaviors: operation, purpose, and position. We validate the identified roles by showing that modeling the editor roles that students take when revising a paper not only accounts for the variance in revision purposes in our data, but also relates to writing improvement.
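
    The role-discovery idea can be sketched with a standard topic model: each student's revision history becomes a "document" of behavior tokens, and the induced topics are read as editor roles. In the sketch below, the token format "<operation>_<purpose>_<position>" and the role count are assumptions for illustration, not the paper's actual vocabulary or settings.

```python
# Minimal sketch of editor-role discovery via LDA over revision-behavior tokens.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# One "document" per student: the bag of revision behaviors they performed.
histories = [
    "add_evidence_middle add_evidence_end modify_claim_intro",
    "delete_fluff_middle modify_claim_intro modify_claim_intro",
    "add_rebuttal_end add_evidence_middle delete_fluff_end",
]

X = CountVectorizer().fit_transform(histories)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each topic is interpreted as an "editor role"; each student gets a
# distribution over roles that can then be related to writing improvement.
roles_per_student = lda.transform(X)
print(roles_per_student)
```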

    Predicting the Quality of Revisions in Argumentative Writing

    Full text link
    The ability to revise in response to feedback is critical to students' writing success. In the case of argument writing in particular, identifying whether an argument revision (AR) is successful or not is a complex problem because AR quality depends on the overall content of an argument. For example, adding the same evidence sentence could strengthen or weaken existing claims in different argument contexts (ACs). To address this issue, we developed Chain-of-Thought prompts to facilitate ChatGPT-generated ACs for AR quality predictions. Experiments on two corpora, our annotated elementary essays and an existing college essay benchmark, demonstrate the superiority of the proposed ACs over baselines. Comment: In the 18th BEA Workshop, held in conjunction with the Association for Computational Linguistics (ACL), July 2023.
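
    In the spirit of the approach described above, a Chain-of-Thought prompt first asks the model to summarize the argument context (AC) and only then to judge the revision against it. The template below is a hypothetical sketch: the wording, steps, and labels are illustrative, not the paper's actual prompts.

```python
# Minimal sketch of an AC-aware Chain-of-Thought prompt for AR quality prediction.
COT_TEMPLATE = """You are grading an argument revision in a student essay.

Original draft:
{original}

Revised draft:
{revised}

Step 1: Summarize the claims and evidence of the original draft (the argument context).
Step 2: Describe what the revision changes.
Step 3: Decide whether the revision strengthens the argument in this context.
Answer with "successful" or "unsuccessful" and a one-sentence justification."""

prompt = COT_TEMPLATE.format(
    original="School uniforms should be required. They reduce distraction.",
    revised="School uniforms should be required. They reduce distraction. "
            "A district survey found fewer dress-code incidents after adoption.",
)
print(prompt)  # send to the LLM of your choice to obtain the AC-aware prediction
```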

    Text revision in Scientific Writing Assistance: An Overview

    Full text link
    Writing a scientific article is a challenging task as it is a highly codified genre. Good writing skills are essential to properly convey ideas and results of research work. Since the majority of scientific articles are currently written in English, this exercise is all the more difficult for non-native English speakers, as they additionally have to face language issues. This article aims to provide an overview of text revision in writing assistance in the scientific domain. We will examine the specificities of scientific writing, including the format and conventions commonly used in research articles. Additionally, this overview will explore the various types of writing assistance tools available for text revision. Despite the evolution of the technology behind these tools over the years, from rule-based approaches to deep neural ones, challenges still exist (tools' accessibility, limited consideration of the context, inexplicit use of discursive information, etc.). Comment: Published at the 13th International Workshop on Bibliometric-enhanced Information Retrieval; 12 pages.

    Argumentation Element Annotation Modeling using XLNet

    Full text link
    This study demonstrates the effectiveness of XLNet, a transformer-based language model, for annotating argumentative elements in persuasive essays. XLNet's architecture incorporates a recurrent mechanism that allows it to model long-term dependencies in lengthy texts. Fine-tuned XLNet models were applied to three datasets annotated with different schemes: a proprietary dataset using the Annotations for Revisions and Reflections on Writing (ARROW) scheme, the PERSUADE corpus, and the Argument Annotated Essays (AAE) dataset. The XLNet models achieved strong performance across all datasets, even surpassing human agreement levels in some cases. This shows that XLNet capably handles diverse annotation schemes and lengthy essays. Comparisons between the model outputs on different datasets also revealed insights into the relationships between the annotation tags. Overall, XLNet's strong performance on modeling argumentative structures across diverse datasets highlights its suitability for providing automated feedback on essay organization. Comment: 28 pages.
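
    The fine-tuning setup can be sketched with the Hugging Face transformers library, assuming the annotation task is cast as classifying each essay segment into an argumentative element label. The label set, checkpoint, and single training step below are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch of fine-tuning XLNet to label essay segments with element types.
import torch
from transformers import AutoTokenizer, XLNetForSequenceClassification

labels = ["claim", "evidence", "counterclaim", "non-argumentative"]
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained(
    "xlnet-base-cased", num_labels=len(labels)
)

segment = "Schools should adopt later start times because sleep improves attention."
inputs = tokenizer(segment, return_tensors="pt", truncation=True)
target = torch.tensor([labels.index("claim")])

# One training step; in practice this loops over the full annotated corpus.
outputs = model(**inputs, labels=target)
outputs.loss.backward()
print(labels[outputs.logits.argmax(dim=-1).item()])
```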

    Mind the Gap: Automated Corpus Creation for Enthymeme Detection and Reconstruction in Learner Arguments

    Full text link
    Writing strong arguments can be challenging for learners. It requires selecting and arranging multiple argumentative discourse units (ADUs) in a logical and coherent way, as well as deciding which ADUs to leave implicit, so-called enthymemes. However, when important ADUs are missing, readers might not be able to follow the reasoning or understand the argument's main point. This paper introduces two new tasks for learner arguments: to identify gaps in arguments (enthymeme detection) and to fill such gaps (enthymeme reconstruction). Approaches to both tasks may help learners improve their argument quality. We study how corpora for these tasks can be created automatically by deleting, from an argumentative text, ADUs that are central to the argument and its quality, while maintaining the text's naturalness. Based on the ICLEv3 corpus of argumentative learner essays, we create 40,089 argument instances for enthymeme detection and reconstruction. Through manual studies, we provide evidence that the proposed corpus creation process leads to the desired quality reduction, and results in arguments that are similarly natural to those written by learners. Finally, first baseline approaches to enthymeme detection and reconstruction demonstrate the corpus' usefulness. Comment: Accepted to Findings of EMNLP 2023.
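
    The corpus-creation recipe can be sketched as: pick a central ADU, delete it to create a detection instance, and keep the removed unit as the reconstruction target. In the sketch below, centrality is approximated by simple lexical overlap, which is a stand-in assumption rather than the paper's quality-based selection.

```python
# Minimal sketch of building enthymeme detection/reconstruction instances.
def lexical_centrality(adu, others):
    tokens = set(adu.lower().split())
    rest = set(" ".join(others).lower().split())
    return len(tokens & rest) / max(len(tokens), 1)

def make_instance(adus):
    scores = [
        lexical_centrality(adu, adus[:i] + adus[i + 1:]) for i, adu in enumerate(adus)
    ]
    gap = max(range(len(adus)), key=scores.__getitem__)   # most central ADU
    gapped = adus[:gap] + adus[gap + 1:]
    return {"gapped_argument": gapped, "gap_index": gap, "target": adus[gap]}

argument = [
    "Homework should be limited in primary school.",
    "Young children need unstructured play to develop social skills.",
    "Excessive homework crowds out that play time.",
    "Therefore schools should cap daily homework.",
]
print(make_instance(argument))
```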

    Towards Building an Intelligent Revision Assistant for Argumentative Writings

    Get PDF
    Current intelligent writing assistance tools (e.g., Grammarly, Turnitin) typically work by locating problems in essays for users (grammar, spelling, argument, etc.) and providing possible solutions. These tools focus on providing feedback on a single draft, while ignoring feedback on an author’s changes between drafts (revision). This thesis argues that it is also important to provide feedback on authors’ revisions, as such information can not only improve the quality of the writing but also improve the rewriting skill of the authors. Thus, it is desirable to build an intelligent assistant that focuses on providing feedback on revisions. This thesis presents work from two perspectives towards building such an assistant: 1) a study of revision’s impact on writing, which includes the development of a sentence-level revision schema, the annotation of corpora based on the schema, and data analysis on the created corpora; a prototype revision assistant was built to provide revision feedback based on the schema, and a user study was conducted to investigate whether the assistant could influence users’ rewriting behaviors; 2) the development of algorithms for automatic revision identification, which includes the automatic extraction of the revised content and the automatic classification of revision types; we first investigated the two problems separately in a pipeline manner and then explored a joint approach that solves the two problems at the same time.
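
    The extraction half of the identification problem amounts to aligning sentences between two drafts so that added, deleted, and modified content can be handed to a revision classifier. The sketch below uses Python's difflib as a simple stand-in for the alignment methods studied in the thesis; the draft texts are illustrative.

```python
# Minimal sketch of revision extraction: align two drafts sentence by sentence.
import difflib

def extract_revisions(draft1, draft2):
    matcher = difflib.SequenceMatcher(a=draft1, b=draft2)
    revisions = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            continue
        revisions.append({"op": op, "old": draft1[i1:i2], "new": draft2[j1:j2]})
    return revisions

draft1 = ["Uniforms are good.", "They are cheap."]
draft2 = ["Uniforms are good.", "They are affordable for most families.",
          "A survey of parents supports this."]
for rev in extract_revisions(draft1, draft2):
    print(rev)  # each extracted revision would then be classified by its purpose
```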