
    Automated assessment of non-native learner essays: Investigating the role of linguistic features

    Automatic essay scoring (AES) refers to the process of scoring free-text responses to given prompts, with human grader scores treated as the gold standard. Writing such essays is an essential component of many language and aptitude exams, so AES has become an active and established area of research, and many proprietary systems are used in real-life applications today. However, not much is known about which specific linguistic features are useful for prediction and how consistent these are across datasets. This article addresses that gap by exploring the role of various linguistic features in automatic essay scoring using two publicly available datasets of non-native English essays written in test-taking scenarios. The linguistic properties are modeled by encoding lexical, syntactic, discourse, and error-type features of learner language in the feature set. Predictive models are then developed using these features on both datasets, and the most predictive features are compared. While the results show that this feature set yields good predictive models with both datasets, the question "what are the most predictive features?" has a different answer for each dataset.

    Comment: Article accepted for publication in the International Journal of Artificial Intelligence in Education (IJAIED). To appear in early 2017 (journal url: http://www.springer.com/computer/ai/journal/40593)
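
    The modelling pipeline described in this abstract (encode linguistic properties as features, then fit a predictive model against human grader scores) can be sketched briefly. The following is a minimal Python illustration, assuming scikit-learn is available; the features shown (length, type-token ratio, mean word and sentence length) are generic lexical and syntactic stand-ins, not the article's actual feature set.

        # Minimal feature-based AES sketch; feature choices are illustrative.
        import re
        from sklearn.linear_model import Ridge

        def extract_features(essay):
            tokens = re.findall(r"[A-Za-z']+", essay.lower())
            sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
            n = max(len(tokens), 1)
            return [
                len(tokens),                          # essay length
                len(set(tokens)) / n,                 # type-token ratio (lexical diversity)
                sum(len(t) for t in tokens) / n,      # mean word length
                n / max(len(sentences), 1),           # mean sentence length (syntactic proxy)
            ]

        # Toy training data: (essay text, human grader score).
        train = [
            ("The cat sat. It was nice.", 2.0),
            ("Automated scoring estimates essay quality from cues such as "
             "lexical diversity and sentence complexity.", 4.5),
        ]
        model = Ridge().fit([extract_features(e) for e, _ in train],
                            [score for _, score in train])
        print(model.predict([extract_features("A short new essay to score.")]))

    Comparing which learned weights matter most on two different datasets mirrors the article's question about how consistent the predictive features are.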

    Technology-supported assessment


    Concepts in Computer Aided Essay Assessment: Improving Consistency by Monitoring the Assessors

    This paper focuses on a traditional educational skill, namely the assessment of student work. Whereas ICT has left a considerable mark on, for instance, the administrative support of educational activities and on the use of legal sources, other parts of legal training have seen almost no alteration in the past few decades. One such area is the grading of essay or open-question student assignments. The CODAS Text Grader tool, described here, can be used to alleviate the task of marking this type of student work. Teachers still play an essential role in this process; what changes is the ‘level’ at which the assessment of the student work takes place: from individual to survey, from marking to ranking. The CODAS software can also be used for a related task: it can assess the assessors. It contains functions to assess, and if necessary correct, the marks for comparable essays awarded by one specific teacher or even by a team of several teachers. Bringing transparency to the process of grading can only benefit the education system.

    This paper was originally published in the proceedings of the 2005 BILETA conference (Belfast, 6-9 April 2005).
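
    The "assess the assessors" idea can be illustrated with a small sketch. The correction rule below (shift each marker's scores by their mean deviation from the pooled mean across all markers) is an assumed, simplified stand-in; the abstract does not specify CODAS's actual algorithm.

        # Toy sketch of detecting and correcting systematic severity or leniency
        # per marker. The shift-by-mean-deviation rule is an illustrative
        # assumption, not CODAS's documented method.
        from statistics import mean

        # marker -> marks awarded on comparable essays (invented data)
        marks = {
            "teacher_a": [62, 65, 58, 70],
            "teacher_b": [48, 51, 45, 55],  # systematically harsher
        }

        pooled = mean(m for ms in marks.values() for m in ms)

        for teacher, ms in marks.items():
            bias = mean(ms) - pooled        # positive = lenient, negative = harsh
            corrected = [round(m - bias, 1) for m in ms]
            print(f"{teacher}: bias {bias:+.1f}, corrected {corrected}")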

    Continual Evolution: The Experience Over Three Semesters of a Librarian Embedded in an Online Evidence-Based Medicine Course for Physician Assistant Students

    This column examines the experience, over three years, of a librarian embedded in an online Epidemiology and Evidence-based Medicine course, a requirement for students pursuing a Master of Science in Physician Assistant Studies at Pace University. In year one, student learning outcomes were determined, a video lecture was created, and student learning was assessed via a five-point test. For years two and three, the course instructor asked the librarian to take responsibility for two weeks of the course instruction and a total of 15 out of 100 possible points for the course. This gave the librarian the flexibility to measure additional outcomes and gather more in-depth assessment data. The librarian then used the assessment data to target areas for improvement in the lessons and Blackboard tests. Revisions made by the librarian positively affected student achievement of learning outcomes, as measured by the assessment conducted the subsequent semester. Plans for further changes are also discussed.

    Using Ontology-based Information Extraction for Subject-based Auto-grading

    Grading students’ essays in subject-based examinations is quite challenging, particularly when dealing with a large number of students. Hence, several automatic essay-grading systems have been designed to alleviate the demands of manual subject grading. However, relatively few of the existing systems can give students informative feedback grounded in elaborate domain knowledge, even though domain knowledge is a major factor in subject-based automatic grading. In this work, we discuss the vision of a subject-based automatic essay scoring system that leverages semi-automatic creation of a subject ontology, uses an ontology-based information extraction approach to enable automatic essay scoring, and gives informative feedback to students.
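
    To make the vision concrete, here is a toy sketch of ontology-driven grading with feedback. The hand-written concept-to-terms mapping stands in for a semi-automatically built subject ontology, and simple term matching stands in for ontology-based information extraction; both are illustrative assumptions.

        # Toy sketch: score an answer by concept coverage and generate feedback
        # naming the concepts it misses.
        import re

        TOY_ONTOLOGY = {
            "photosynthesis": {"chlorophyll", "light", "glucose"},
            "respiration": {"oxygen", "mitochondria", "energy"},
        }

        def grade(answer):
            words = set(re.findall(r"[a-z]+", answer.lower()))
            covered, feedback = 0, []
            for concept, terms in TOY_ONTOLOGY.items():
                if terms & words:
                    covered += 1
                else:
                    feedback.append(f"'{concept}' is not addressed; consider "
                                    f"discussing {', '.join(sorted(terms))}.")
            return covered / len(TOY_ONTOLOGY), feedback

        score, notes = grade("Plants use light and chlorophyll to make glucose.")
        print(f"score: {score:.0%}")        # -> score: 50%
        print("\n".join(notes))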

    Assessing Student Learning Through Keyword Density Analysis of Online Class Messages
