4,145 research outputs found

    Write Free or Die: Vol. 02, No. 01

    Get PDF
    Robotics and Writing, Page 1; Upcoming Events, Page 1; Writing Committee Members, Page 2; Dangling Modifier, Page 3; Ask Sarah, Page 4; Les Perelman at UNH, Page 5; Faculty Retreat, Page 6; Faculty Resources, Page 6; Past Perfect, Page 7; Grammar Box, Page

    Prompt- and Trait Relation-aware Cross-prompt Essay Trait Scoring

    Full text link
    Automated essay scoring (AES) aims to score essays written for a given prompt, which defines the writing topic. Most existing AES systems assume that the essays to be graded share the prompt used in training and assign only a holistic score. However, such settings conflict with real educational situations: pre-graded essays for a particular prompt are often lacking, and detailed trait scores for sub-rubrics are required. Thus, predicting various trait scores of unseen-prompt essays (called cross-prompt essay trait scoring) remains a challenge for AES. In this paper, we propose a robust model: a prompt- and trait relation-aware cross-prompt essay trait scorer. We encode a prompt-aware essay representation through essay-prompt attention and by utilizing a topic-coherence feature extracted by a topic-modeling mechanism without access to labeled data; therefore, our model considers the prompt adherence of an essay even in a cross-prompt setting. To facilitate multi-trait scoring, we design a trait-similarity loss that encapsulates the correlations among traits. Experiments prove the efficacy of our model, showing state-of-the-art results for all prompts and traits. Significant improvements on low-resource prompts and weaker traits further indicate our model's strength. Comment: Accepted at ACL 2023 (Findings, long paper).
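
    The trait-similarity loss is described above only at a high level. The sketch below (Python/PyTorch) shows one plausible shape such a regularizer could take: it penalizes disagreement between predicted scores of trait pairs in proportion to how positively those traits correlate in the training labels. The tensor shapes, the correlation matrix, and the weighting scheme are illustrative assumptions, not the paper's actual formulation.

    import torch

    def trait_similarity_loss(pred_scores, trait_corr):
        # pred_scores: (batch, n_traits) predicted trait scores, scaled to [0, 1].
        # trait_corr:  (n_traits, n_traits) trait-score correlations estimated
        #              from the training labels (an assumption of this sketch).
        diff = (pred_scores.unsqueeze(2) - pred_scores.unsqueeze(1)).abs()  # (B, T, T)
        weight = trait_corr.clamp(min=0.0).unsqueeze(0)                     # (1, T, T)
        return (weight * diff).mean()

    # Illustrative usage: in training, this term would be added to the main
    # scoring loss with a small weight, e.g. loss = mse + 0.1 * similarity term.
    pred = torch.rand(8, 4)          # 8 essays, 4 traits
    corr = torch.full((4, 4), 0.5)   # placeholder correlation matrix
    print(trait_similarity_loss(pred, corr))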

    An Exploratory Application of Rhetorical Structure Theory to Detect Coherence Errors in L2 English Writing: Possible Implications for Automated Writing Evaluation Software

    Get PDF
    This paper presents an initial attempt to examine whether Rhetorical Structure Theory (RST) (Mann & Thompson, 1988) can be fruitfully applied to the detection of the coherence errors made by Taiwanese low-intermediate learners of English. This investigation is considered warranted for three reasons. First, other methods for bottom-up coherence analysis have proved ineffective (e.g., Watson Todd et al., 2007). Second, this research provides a preliminary categorization of the coherence errors made by first language (L1) Chinese learners of English. Third, second language discourse errors in general have received little attention in applied linguistic research. The data are 45 written samples from the LTTC English Learner Corpus, a Taiwanese learner corpus of English currently under construction. The rationale of this study is that diagrams which violate some of the rules of RST diagram formation will point to coherence errors. No reliability test has been conducted, since this work is at an initial stage; the study is therefore exploratory and its results are preliminary. Results are discussed in terms of the practicality of using this method to detect coherence errors, their possible implications for claims about a typically inductive content order in the writing of L1 Chinese learners of English, and their potential implications for Automated Writing Evaluation (AWE) software, since discourse organization is one of the essay characteristics assessed by such software. In particular, the extent to which the kinds of errors detected through the RST analysis match those located by Criterion (Burstein, Chodorow, & Leacock, 2004), a well-known AWE program from Educational Testing Service (ETS), is discussed.

    Automated Essay Evaluation Using Natural Language Processing and Machine Learning

    Get PDF
    The goal of automated essay evaluation is to assign grades to essays and provide feedback using computers. Automated evaluation is increasingly being used in classrooms and online exams. The aim of this project is to develop machine learning models for automated essay scoring and to evaluate their performance. In this research, a publicly available essay dataset was used to train and test the adopted techniques. Natural language processing techniques were used to extract features from the essays in the dataset. Three different existing machine learning algorithms were applied to the chosen dataset, with the data divided into training and testing sets. The inter-rater reliability and performance of these models were compared with each other and with human graders. Among the three machine learning models, the random forest performed best in terms of agreement with human scorers, as it achieved the lowest mean absolute error on the test dataset.
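
    As a rough illustration of the pipeline this abstract describes, the hedged sketch below uses scikit-learn: a few hand-crafted surface features stand in for the NLP features, a random forest regressor is trained on a split of the data, and agreement with human scores is summarized by mean absolute error. The feature set, example essays, and scores are invented for illustration and are not the project's actual setup.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    def extract_features(essays):
        # Toy surface features: word count, average word length, sentence count.
        feats = []
        for text in essays:
            words = text.split()
            sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
            avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
            feats.append([len(words), avg_word_len, len(sentences)])
        return np.array(feats)

    # Tiny invented data; a real study would use a public essay corpus with human scores.
    essays = [
        "A short essay. It has two sentences.",
        "This essay is somewhat longer and uses a few more words in each sentence.",
        "Another brief text. Written quickly. Three sentences here.",
        "A final example essay with a single, fairly long, descriptive sentence.",
    ]
    scores = np.array([2.0, 3.0, 2.0, 3.0])  # illustrative human-assigned scores

    X_train, X_test, y_train, y_test = train_test_split(
        extract_features(essays), scores, test_size=0.25, random_state=0)

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))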

    Beyond the design of automated writing evaluation: Pedagogical practices and perceived learning effectiveness in EFL writing classes

    Get PDF
    Automated writing evaluation (AWE) software is designed to provide instant computer-generated scores for a submitted essay along with diagnostic feedback. Most studies on AWE have focused on psychometric evaluations of its validity; however, studies on how effectively AWE is used in writing classes as a pedagogical tool are limited. This study employs a naturalistic classroom-based approach to explore the interaction between how an AWE program, MY Access!, was implemented in three different ways in three EFL college writing classes in Taiwan and how students perceived its effectiveness in improving writing. The findings show that, although the implementation of AWE was not in general perceived very positively by the three classes, it was perceived comparatively more favorably when the program was used to facilitate students’ early drafting and revising process, followed by human feedback from both the teacher and peers during the later process. This study also reveals that the autonomous use of AWE as a surrogate writing coach with minimal human facilitation caused frustration for students and limited their learning of writing. In addition, teachers’ attitudes toward AWE use and their technology-use skills, as well as students’ learner characteristics and goals for learning to write, may also play vital roles in determining the effectiveness of AWE. Given the limitations inherent in the design of AWE technology, language teachers need to be more critically aware that the implementation of AWE requires well thought-out pedagogical designs and thorough consideration of its relevance to the objectives of learning to write.

    Analysis of Discourse Structure and Logical Structure in Argumentative Texts

    Get PDF
    Tohoku University doctoral thesis (Information Sciences).

    Formative assessment feedback to enhance the writing performance of Iranian IELTS candidates: Blending teacher and automated writing evaluation

    Get PDF
    With the increasing integration of technology into writing assessment, technology-generated feedback has moved further toward replacing human corrective feedback and rating. Yet further investigation is needed into its potential use either as a supplement to or a replacement for human feedback. This study investigates the effect of blending teacher and automated writing evaluation, as formative assessment feedback, on enhancing the writing performance of Iranian IELTS candidates. In this explanatory mixed-methods research, three groups of Iranian intermediate learners (N=31) completed six IELTS writing tasks during six consecutive weeks and received automated, teacher, and blended (automated + teacher) feedback, respectively, on different components of writing (task response, coherence and cohesion, lexical resource, grammatical range and accuracy). A structured written interview was also conducted to explore learners’ perceptions (attitude, clarity, preference) of the mode of feedback they received. Findings revealed that students who received teacher-only and blended feedback performed better in writing. The blended feedback group also outperformed the others in task response, the teacher feedback group in coherence and cohesion, and the automated feedback group in lexical resource. The analysis of the interviews revealed that the majority of learners confirmed the clarity of all feedback modes, and learners’ attitudes toward the feedback modes were positive, although they strongly preferred the blended mode. The findings suggest new ways to facilitate the learning and assessment of writing and support the view that teachers can provide comprehensive, accurate, and continuous feedback as a means of formative assessment.