An exploratory study into automated précis grading
Automated writing evaluation (AWE) is a popular research field, but its main focus has been on evaluating argumentative essays. In this paper, we consider a different genre, namely précis texts. A précis is a written text that provides a coherent summary of the main points of a spoken or written text. We present a corpus of English précis texts, each of which received a grade assigned by a highly experienced English language teacher and was subsequently annotated following an exhaustive error typology. With this corpus we trained a machine learning model that relies on a number of linguistic, automatic summarization, and AWE features. Our results reveal that this model is able to predict the grade of précis texts with only a moderate error margin.
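To make the modelling approach concrete, the sketch below shows one way such a feature-based grade predictor can be put together; the three toy features, the Ridge regressor, and the cross-validated mean absolute error are illustrative assumptions rather than the authors' actual feature set or evaluation protocol.

```python
# Minimal sketch of a feature-based precis grade predictor (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error

def extract_features(precis_text: str, source_text: str) -> np.ndarray:
    """Toy stand-ins for linguistic / summarization / AWE features."""
    precis_tokens = precis_text.split()
    source_tokens = source_text.split()
    length_ratio = len(precis_tokens) / max(len(source_tokens), 1)            # compression
    type_token_ratio = len(set(precis_tokens)) / max(len(precis_tokens), 1)   # lexical diversity
    coverage = len(set(precis_tokens) & set(source_tokens)) / max(len(set(source_tokens)), 1)  # content overlap
    return np.array([length_ratio, type_token_ratio, coverage])

def evaluate(precis_texts, source_texts, grades) -> float:
    """Cross-validated mean absolute error of a regularised linear grader."""
    X = np.vstack([extract_features(p, s) for p, s in zip(precis_texts, source_texts)])
    y = np.asarray(grades, dtype=float)
    preds = cross_val_predict(Ridge(alpha=1.0), X, y, cv=5)
    return mean_absolute_error(y, preds)   # the "error margin" reported as MAE
```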
Rubric-Specific Approach to Automated Essay Scoring with Augmentation Training
Neural approaches to the automatic evaluation of subjective responses have shown superior performance and efficiency compared to traditional rule-based and feature-engineering-oriented solutions. However, it remains unclear whether the suggested neural solutions are sufficient replacements for human raters, as we find that recent works do not properly account for rubric items that are essential for automated essay scoring during model training and validation. In this paper, we propose a series of data augmentation operations that train and test an automated scoring model to learn features and functions overlooked by previous works, while still achieving state-of-the-art performance on the Automated Student Assessment Prize dataset.
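A rough illustration of rubric-oriented augmentation is sketched below: perturbed copies of each essay are added to the training data with lowered scores, so the scoring model is exposed to rubric-relevant degradations it would otherwise overlook. The specific operations and score adjustments are assumptions for illustration, not the operations proposed in the paper.

```python
# Minimal sketch of rubric-oriented data augmentation for essay scoring (illustrative only).
import random

def shuffle_sentences(essay: str) -> str:
    """Degrade organisation/coherence while keeping the grammar intact."""
    sentences = [s.strip() for s in essay.split('.') if s.strip()]
    random.shuffle(sentences)
    return '. '.join(sentences) + '.'

def truncate_essay(essay: str, keep_ratio: float = 0.5) -> str:
    """Degrade development/elaboration by dropping the tail of the essay."""
    sentences = [s.strip() for s in essay.split('.') if s.strip()]
    keep = max(1, int(len(sentences) * keep_ratio))
    return '. '.join(sentences[:keep]) + '.'

def augment(dataset, score_penalty: int = 1):
    """Return the original (essay, score) pairs plus perturbed copies with lowered scores."""
    augmented = []
    for essay, score in dataset:
        augmented.append((essay, score))
        augmented.append((shuffle_sentences(essay), max(score - score_penalty, 0)))
        augmented.append((truncate_essay(essay), max(score - score_penalty, 0)))
    return augmented
```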
A Statistical Approach to Automatic Essay Scoring
Taking into consideration the escalating need to assess writing ability and the potential of Automatic Essay Scoring (AES) to support writing instruction and evaluation, the aim of the present study is to explore the relationship between stylometric indices, widely used in AES systems, and the degree of sophistication of learner essays, as captured by the scores provided by expert human raters. The data analyzed were obtained from a recently organized public AES competition and comprise persuasive essays written in the context of public schools in the United States. The stylometric information taken into consideration focuses mainly on measures of cohesion, as well as lexical diversity and syntactic sophistication. Results indicate a clear relationship between quantifiable features of learners' written responses and the impression they made on expert raters. This observation reinforces the importance of pursuing further experimentation into AES, which would yield significant educational and social benefits.
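The kind of analysis described here can be sketched as follows: compute a few stylometric indices per essay and correlate them with the human scores. The two indices and the use of Pearson correlation below are illustrative assumptions; the study's actual index set is considerably broader.

```python
# Minimal sketch of relating simple stylometric indices to human scores (illustrative only).
from scipy.stats import pearsonr

def lexical_diversity(essay: str) -> float:
    """Type-token ratio as a crude lexical diversity index."""
    tokens = essay.lower().split()
    return len(set(tokens)) / max(len(tokens), 1)

def adjacent_sentence_overlap(essay: str) -> float:
    """A crude cohesion proxy: mean word overlap between neighbouring sentences."""
    sents = [set(s.lower().split()) for s in essay.split('.') if s.strip()]
    if len(sents) < 2:
        return 0.0
    overlaps = [len(a & b) / max(len(a | b), 1) for a, b in zip(sents, sents[1:])]
    return sum(overlaps) / len(overlaps)

def correlate_with_scores(essays, scores):
    """Pearson correlation (r, p-value) of each index with the human scores."""
    return {
        'lexical_diversity': pearsonr([lexical_diversity(e) for e in essays], scores),
        'cohesion_overlap': pearsonr([adjacent_sentence_overlap(e) for e in essays], scores),
    }
```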
Exploring relationships between automated and human evaluations of L2 texts
Despite the current potential to use computers to automatically generate a large range of text-based indices, many issues remain unresolved about how to apply these data in established language teaching and assessment contexts. One way to resolve these issues is to explore the degree to which automatically generated indices, which are reflective of key measures of text quality, align with parallel measures derived from locally relevant, human evaluations of texts. This study describes the automated evaluation of 104 English as a second language texts using the computational tool Coh-Metrix, which was used to generate indices reflecting text cohesion, lexical characteristics, and syntactic complexity. The same texts were then independently evaluated by two experienced human assessors using an analytic scoring rubric. The interrelationships between the computer- and human-generated evaluations of the texts are presented in this paper, with a particular focus on the automatically generated indices that were most strongly linked to the human-generated measures. A synthesis of these findings is then used to discuss the role that such automated evaluation may have in the teaching and assessment of second language writing.
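A minimal sketch of such an alignment analysis is given below, assuming the Coh-Metrix indices have already been exported to a CSV file (one row per text) and that the two raters' analytic scores are available in a second file; all column names are hypothetical and not taken from the study.

```python
# Minimal sketch of correlating exported automated indices with averaged human scores (illustrative only).
import pandas as pd

def index_score_correlations(indices_csv: str, ratings_csv: str) -> pd.Series:
    indices = pd.read_csv(indices_csv)    # hypothetical columns: text_id plus one column per index
    ratings = pd.read_csv(ratings_csv)    # hypothetical columns: text_id, rater1, rater2
    ratings['human_score'] = ratings[['rater1', 'rater2']].mean(axis=1)
    merged = indices.merge(ratings[['text_id', 'human_score']], on='text_id')
    index_cols = [c for c in indices.columns if c != 'text_id']
    # Spearman correlation of every automated index with the averaged human score
    return merged[index_cols].corrwith(merged['human_score'], method='spearman').sort_values()
```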
Neural approaches to discourse coherence: modeling, evaluation and application
Discourse coherence is an important aspect of text quality that refers to the way different textual units relate to each other. In this thesis, I investigate neural approaches to modeling discourse coherence. I present a multi-task neural network where the main task is to predict a document-level coherence score and the secondary task is to learn word-level syntactic features. Additionally, I examine the effect of using contextualised word representations in single-task and multi-task setups. I evaluate my models on a synthetic dataset where incoherent documents are created by shuffling the sentence order in coherent original documents. The results show the efficacy of my multi-task learning approach, particularly when enhanced with contextualised embeddings, achieving new state-of-the-art results in ranking the coherent documents higher than the incoherent ones (96.9%). Furthermore, I apply my approach to the realistic domain of people's everyday writing, such as emails and online posts, and further demonstrate its ability to capture various degrees of coherence. In order to further investigate the linguistic properties captured by coherence models, I create two datasets that exhibit syntactic and semantic alterations. Evaluating different models on these datasets reveals their ability to capture syntactic perturbations but their inadequacy to detect semantic changes. I find that semantic alterations are instead captured by models that first build sentence representations from averaged word embeddings, then apply a set of linear transformations over input sentence pairs. Finally, I present an application for coherence models in the pedagogical domain. I first demonstrate that state-of-the-art neural approaches to automated essay scoring (AES) are not robust to adversarially created, grammatical, but incoherent sequences of sentences. Accordingly, I propose a framework for integrating and jointly training a coherence model with a state-of-the-art neural AES system in order to enhance its ability to detect such adversarial input. I show that this joint framework maintains performance comparable to the state-of-the-art AES system in predicting a holistic essay score while significantly outperforming it in adversarial detection.
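The synthetic evaluation set-up described here can be sketched as follows: an incoherent counterpart of each document is created by shuffling its sentence order, and a coherence model is judged by how often it ranks the original above the shuffled copy. The scoring function is left as a placeholder and is not the thesis's multi-task neural model.

```python
# Minimal sketch of the shuffle-based coherence ranking evaluation (illustrative only).
import random

def make_shuffled_copy(document: str) -> str:
    """Create an 'incoherent' counterpart by permuting the sentence order."""
    sentences = [s.strip() for s in document.split('.') if s.strip()]
    shuffled = sentences[:]
    random.shuffle(shuffled)
    return '. '.join(shuffled) + '.'

def ranking_accuracy(documents, coherence_score) -> float:
    """Fraction of pairs where the original document outscores its shuffled copy."""
    wins = 0
    for doc in documents:
        if coherence_score(doc) > coherence_score(make_shuffled_copy(doc)):
            wins += 1
    return wins / len(documents)
```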
Technology and Testing
From early answer sheets filled in with number 2 pencils, to tests administered by mainframe computers, to assessments wholly constructed by computers, it is clear that technology is changing the field of educational and psychological measurement. The numerous and rapid advances have an immediate impact on test creators, assessment professionals, and those who implement and analyze assessments. This comprehensive new volume brings together leading experts on the issues posed by technological applications in testing, with chapters on game-based assessment, testing with simulations, video assessment, computerized test development, large-scale test delivery, model choice, validity, and error issues. Including an overview of existing literature and ground-breaking research, each chapter considers the technological, practical, and ethical considerations of this rapidly changing area. Ideal for researchers and professionals in testing and assessment, Technology and Testing provides a critical and in-depth look at one of the most pressing topics in educational testing today.