Automated essay grading systems applied to a first year university subject: how can we do it better?

Abstract

Automated marking of assignments consisting of written text would doubtless be of advantage to teachers and education administrators alike. When large numbers of assignments are submitted at once, teachers find themselves bogged down in the attempt to provide consistent evaluations and high-quality feedback to students within as short a timeframe as is reasonable, usually a matter of days rather than weeks. Educational administrators are also concerned with quality and timely feedback, but in addition must manage the cost of doing this work. Clearly, an automated system would be a highly desirable addition to the educational tool-kit, particularly if it can provide a less costly and more effective outcome. In this paper we present a description and evaluation of four automated essay grading systems. We then report on our trial of one of these systems, undertaken at Curtin University of Technology in the first half of 2001. The purpose of the trial was to assess whether automated essay grading was feasible, economically viable and as accurate as manually grading the essays. Within the Curtin Business School we have not previously used automated grading systems, but the benefit could be enormous given the very large numbers of students in some first year subjects. In evaluating the results of our trial, we identify a research and development direction which we believe will result in improvements over existing systems.