An exploratory study into automated précis grading
Automated writing evaluation is a popular research field, but the main focus has been on evaluating argumentative essays. In this paper, we consider a different genre, namely précis texts. A précis is a written text that provides a coherent summary of the main points of a spoken or written text. We present a corpus of English précis texts, each of which received a grade assigned by a highly experienced English language teacher and was subsequently annotated following an exhaustive error typology. With this corpus we trained a machine learning model which relies on a number of linguistic, automatic summarization, and AWE features. Our results reveal that this model is able to predict the grade of précis texts with only a moderate error margin.
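The abstract does not specify the model or features, so as a rough illustration only: a feature-based grader of the kind described could be sketched as a regression from text features to the teacher-assigned grade. The single "content overlap" feature and all grade values below are invented for the example, not taken from the paper.

```python
# Hypothetical sketch: predict a precis grade from one invented feature
# (overlap with the source text) via closed-form simple linear regression.
# The paper's actual features and model are not specified in the abstract.

def fit_simple_regression(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Invented training data: overlap ratio -> teacher-assigned grade (0-10 scale)
overlap = [0.2, 0.4, 0.5, 0.7, 0.9]
grade = [3.0, 5.0, 6.0, 8.0, 9.5]

slope, intercept = fit_simple_regression(overlap, grade)
predicted = [slope * x + intercept for x in overlap]
```

A real system would combine many such features (linguistic, summarization, AWE) in a multivariate model, but the fitting principle is the same.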
Exploring Automated Essay Scoring for Nonnative English Speakers
Automated Essay Scoring (AES) has been quite popular and is widely used. However, the lack of an appropriate methodology for rating nonnative English speakers' essays has meant a lopsided advancement in this field. In this paper, we report initial results of our experiments with nonnative AES that learns from manual evaluation of nonnative essays. For this purpose, we conducted an exercise in which essays written by nonnative English speakers in a test environment were rated both manually and by the automated system designed for the experiment. In the process, we experimented with a few features to learn about nuances linked to nonnative evaluation. The proposed methodology of automated essay evaluation has yielded a correlation coefficient of 0.750 with the manual evaluation.
Comment: Accepted for publication at EUROPHRAS 201
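The reported figure of 0.750 is presumably a Pearson correlation between automated and manual scores; the abstract does not say which coefficient was used. As a sketch of how such agreement is measured, with invented score pairs:

```python
# Sketch: Pearson correlation between manual and automated essay scores.
# The score values are invented for illustration; the paper reports r = 0.750
# on its own data, which is not reproduced here.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented manual vs. automated scores for five essays
manual = [4.0, 6.5, 5.0, 8.0, 7.0]
automated = [4.5, 6.0, 5.5, 7.5, 7.5]
r = pearson_r(manual, automated)
```

Values near 1.0 indicate that the automated system ranks essays much as the human rater does; a coefficient of 0.750 indicates substantial but imperfect agreement.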
Using student experience as a model for designing an automatic feedback system for short essays
The SAFeSEA project (Supportive Automated Feedback for Short Essay Answers) aims to develop an automated feedback system to support university students as they write summative essays. Empirical studies carried out in the initial phase of the system's development illuminated students' approaches to and understandings of the essay-writing process. Findings from these studies suggested that, regardless of their experience of higher education, students consider essay-writing as: 1) a sequential set of activities, 2) a process that is enhanced through particular sources of support, and 3) a skill that requires the development of personal strategies. Further data collected from tutors offered insight into the feedback and reflection stages of essay-writing. These perspectives offered a fundamental model of essay-writing and feedback to inform the ongoing, iterative development of this automated feedback system and, indeed, for any institution developing tools to support students' writing.