
    Exploring Automated Essay Scoring Models for Multiple Corpora and Topical Component Extraction from Student Essays

    Since human essay grading is widely recognized as labor-intensive, automatic scoring methods have drawn increasing attention. Automated scoring reduces reliance on human effort and subjectivity, and has commercial benefits for standardized aptitude tests. Automated essay scoring can be defined as a method for grading student essays that achieves high agreement with human graders, where human grades exist, while requiring no human effort during the scoring process. This research focuses on improving existing Automated Essay Scoring (AES) models with different technologies. We present three scoring models for grading two corpora: the Response to Text Assessment (RTA) and the Automated Student Assessment Prize (ASAP). First, a traditional machine learning model that extracts features based on semantic similarity measurements is employed for grading the RTA task. Second, a neural network model with a co-attention mechanism is used for grading source-based writing tasks. Third, we propose a hybrid model that integrates the neural network model with hand-crafted features. Experiments show that the feature-based model outperforms its baseline, while a stand-alone neural network model significantly outperforms the feature-based model. The hybrid model outperforms its baselines, especially in a cross-prompt experimental setting. In addition, we present two investigations that use the intermediate output of the neural network model to extract keywords and key phrases from student essays and the source article. Experiments show that the keywords and key phrases extracted by our models support the feature-based AES model, and that human effort can be reduced by using automated essay quality signals during training.
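
    The abstract does not specify the feature set, so the following is only a rough, hypothetical sketch of the feature-based direction it describes: a TF-IDF cosine-similarity feature against the source article (a crude stand-in for the semantic similarity measurements mentioned), plus a length feature, fed to a ridge regressor, with agreement reported as quadratic weighted kappa, the metric commonly used in AES work. The source text, essays, scores, and features are invented for illustration and are not the authors' actual model.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.metrics import cohen_kappa_score
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical source article and scored essays; the real RTA data is not shown here.
    SOURCE = "The village built a well, and clean water improved health and school attendance."
    essays = [
        "The well gave clean water so children were healthier and went to school.",
        "Water is important. People drink water.",
        "Clean water from the well improved health, and attendance at school rose.",
        "The story is about a village.",
        "Building the well brought clean water, better health, and more schooling.",
    ]
    scores = np.array([4, 2, 4, 1, 3])  # invented 1-4 rubric scores

    def extract_features(texts, source, vectorizer):
        # Semantic-similarity stand-in: TF-IDF cosine similarity to the
        # source article, plus essay length in words.
        docs = vectorizer.transform(texts)
        src = vectorizer.transform([source])
        similarity = cosine_similarity(docs, src).ravel()
        length = np.array([len(t.split()) for t in texts], dtype=float)
        return np.column_stack([similarity, length])

    vectorizer = TfidfVectorizer().fit(essays + [SOURCE])
    X = extract_features(essays, SOURCE, vectorizer)
    model = Ridge().fit(X, scores)

    # Round regression outputs onto the rubric scale and report quadratic
    # weighted kappa, the standard agreement metric in AES evaluations.
    predicted = np.clip(np.rint(model.predict(X)), 1, 4).astype(int)
    print("QWK:", cohen_kappa_score(scores, predicted, weights="quadratic"))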

    A robust methodology for automated essay grading

    None of the available automated essay grading systems can grade essays according to the National Assessment Program – Literacy and Numeracy (NAPLAN) analytic scoring rubric used in Australia. This thesis is an effort to address that limitation. Its objective is to develop a robust methodology for automatically grading essays against the NAPLAN rubric, using heuristics and rules based on the English language together with neural network modelling.
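
    The abstract names only the ingredients (language rules plus a neural network), so the snippet below is a loose sketch under invented assumptions: a few rule-style features in the spirit of an analytic rubric (length, average sentence length, a crude vocabulary proxy) feeding a small scikit-learn neural network. The features, band scores, and essays are hypothetical and do not reflect the actual NAPLAN criteria.

    import re
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def rule_features(text):
        # Crude rule-based features standing in for analytic rubric
        # criteria (all invented for this sketch, not the NAPLAN rules).
        words = text.split()
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        avg_sentence_len = len(words) / max(len(sentences), 1)
        long_word_ratio = sum(len(w) > 7 for w in words) / max(len(words), 1)
        return [len(words), avg_sentence_len, long_word_ratio]

    essays = [
        "Dogs are good. They run.",
        "Although the afternoon was humid, the determined students rehearsed their persuasive arguments carefully.",
        "I like school because learning new things every day is interesting and useful.",
    ]
    bands = np.array([1.0, 5.0, 3.0])  # invented analytic-band scores

    X = np.array([rule_features(e) for e in essays])
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X, bands)
    print(net.predict(X).round(1))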

    Automated essay scoring in applied games: Reducing the teacher bandwidth problem in online training

    This paper presents a methodology for applying automated essay scoring in educational settings. The methodology was tested and validated on a dataset of 173 reports (in Dutch) that students created in an applied game on environmental policy. Natural Language Processing technologies from the ReaderBench framework were used to generate an extensive set of textual complexity indices for each report. Different machine learning algorithms were then used to predict the scores. By combining binary classification (pass or fail) with a probabilistic model for precision, a trade-off can be made between the validity of automated score prediction (precision) and the reduction of the teacher workload required for manual assessment. On this sample, substantial workload reduction could be achieved while preserving high precision: requiring a precision of 95% or higher already reduced the teacher's workload to 74% of the original, and lowering the required precision to 80% produced a workload reduction of 50%.
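
    The precision/workload trade-off can be illustrated with a generic confidence-thresholding scheme: auto-score the reports the classifier is confident about and route the rest to the teacher. The sketch below uses synthetic data and scikit-learn; the thresholds and data are invented, and this shows the general idea rather than the paper's exact procedure.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)  # synthetic pass/fail labels

    clf = LogisticRegression().fit(X[:200], y[:200])
    proba = clf.predict_proba(X[200:])
    truth = y[200:]

    def precision_vs_workload(proba, truth, threshold):
        # Auto-accept predictions whose top class probability clears the
        # threshold; everything else still goes to the teacher.
        confident = proba.max(axis=1) >= threshold
        predictions = proba.argmax(axis=1)
        precision = (predictions[confident] == truth[confident]).mean()
        manual_share = 1.0 - confident.mean()  # remaining teacher workload
        return precision, manual_share

    # Raising the confidence threshold buys precision at the cost of
    # leaving more reports for manual grading.
    for t in (0.50, 0.80, 0.95):
        p, w = precision_vs_workload(proba, truth, t)
        print(f"threshold={t:.2f}  precision={p:.2f}  manual workload={w:.2f}")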