540 research outputs found

    Exploring Automated Essay Scoring for Nonnative English Speakers

    Full text link
Automated Essay Scoring (AES) has become quite popular and is widely used. However, the lack of an appropriate methodology for rating nonnative English speakers' essays has led to lopsided advancement in this field. In this paper, we report initial results of our experiments with nonnative AES that learns from manual evaluation of nonnative essays. For this purpose, we conducted an exercise in which essays written by nonnative English speakers in a test environment were rated both manually and by the automated system designed for the experiment. In the process, we experimented with a few features to learn about nuances linked to nonnative evaluation. The proposed methodology of automated essay evaluation yielded a correlation coefficient of 0.750 with the manual evaluation. Comment: Accepted for publication at EUROPHRAS 201
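    The 0.750 agreement figure above is a Pearson correlation between automated and manual scores. A minimal sketch of how such a coefficient is computed, using invented score lists (the actual essay scores are not published in the abstract):

    ```python
    import math

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length score lists."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical scores: one human rater vs. the automated system
    manual = [3, 5, 4, 2, 5, 3]
    auto   = [3, 4, 4, 2, 5, 2]
    r = pearson(manual, auto)  # a value near 1 indicates close agreement
    ```

    A value of r close to 1 means the automated rater ranks essays much as the human does; 0.750 on the authors' data indicates substantial but imperfect agreement.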

    Development of an Automated Scoring Model Using SentenceTransformers for Discussion Forums in Online Learning Environments

    Get PDF
    Due to the limitations of public datasets, research on automatic essay scoring in Indonesian has been constrained, resulting in suboptimal accuracy. In general, the main goal of an essay scoring system is to reduce assessment time, as grading is usually done manually with human judgment. This study uses a discussion forum in online learning to generate an assessment between the responses and the lecturer's rubric in the automated essay scoring. A pre-trained SentenceTransformers model was proposed to construct vector embeddings that capture the semantic similarity between the responses and the lecturer's rubric. The effectiveness of monolingual and multilingual models was compared. This research aims to determine the models' effectiveness and the appropriate model for the Automated Essay Scoring (AES) used in paired-sentence Natural Language Processing tasks. The distiluse-base-multilingual-cased-v1 model, evaluated with the Pearson correlation method, obtained the highest performance. Specifically, it achieved a correlation value of 0.63 and a mean absolute error (MAE) of 0.70. This indicates that the overall prediction quality improved compared with earlier regression-based research
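    The approach above scores a response by comparing its embedding with the rubric's embedding, typically via cosine similarity. A minimal stdlib sketch with invented stand-in vectors (a real SentenceTransformers model such as distiluse-base-multilingual-cased-v1 produces 512-dimensional embeddings, so the 4-dimensional vectors here are purely illustrative):

    ```python
    import math

    def cosine_similarity(u, v):
        """Cosine similarity between two embedding vectors, in [-1, 1]."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    # Stand-in embeddings for a rubric item and a student response
    rubric_vec   = [0.20, 0.70, 0.10, 0.40]
    response_vec = [0.25, 0.60, 0.05, 0.50]
    similarity = cosine_similarity(rubric_vec, response_vec)
    ```

    With the actual library, something like `model.encode([rubric_text, response_text])` would produce the two vectors; the resulting similarities can then be compared against manual grades using Pearson correlation, as in the study.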

    Interactive on-line formative evaluation of student assignments

    Get PDF
    Automated Essay Grading (AEG) technology has been maturing over the final decades of the last century to the point where it is now poised to permit a transition in "assessment-thinking". The administrative convenience of using objective testing to attempt to assess deep learning, learning at the conceptual level, has now been obviated by efficient and effective automated means to assess student learning. Further, new-generation AEG systems such as MarkIT deliver an unprecedented interactive formative assessment feedback capability, which is set to transform individualized learning and instruction as implemented in existing Learning Management Systems (LMS)

    A robust methodology for automated essay grading

    Get PDF
    None of the available automated essay grading systems can be used to grade essays according to the National Assessment Program – Literacy and Numeracy (NAPLAN) analytic scoring rubric used in Australia. This thesis is a humble effort to address this limitation. The objective of this thesis is to develop a robust methodology for automatically grading essays based on the NAPLAN rubric, using heuristics and rules based on the English language together with neural network modelling

    Automated scholarly paper review: Technologies and challenges

    Full text link
    Peer review is a widely accepted mechanism for research evaluation, playing a pivotal role in scholarly publishing. However, criticisms have long been leveled at this mechanism, mostly because of its inefficiency and subjectivity. Recent years have seen the application of artificial intelligence (AI) in assisting the peer review process. Nonetheless, with the involvement of humans, such limitations remain inevitable. In this review paper, we propose the concept and pipeline of automated scholarly paper review (ASPR) and review the relevant literature and technologies for achieving a full-scale computerized review process. On the basis of the review and discussion, we conclude that there is already corresponding research and implementation at each stage of ASPR. We further look into the challenges facing ASPR with existing technologies. The major difficulties lie in imperfect document parsing and representation, inadequate data, defective human-computer interaction and flawed deep logical reasoning. Moreover, we discuss possible moral and ethical issues and point out future directions for ASPR. In the foreseeable future, ASPR and peer review will coexist in a reinforcing manner before ASPR is able to fully undertake the reviewing workload from humans

    Are Automatically Identified Reading Strategies Reliable Predictors of Comprehension?

    Get PDF
    In order to build coherent textual representations, readers use cognitive procedures and processes referred to as reading strategies; these specific procedures can be elicited through self-explanations in order to improve understanding. In addition, when faced with comprehension difficulties, learners can invoke regulation processes, also part of reading strategies, to facilitate the understanding of a text. Starting from these observations, several automated techniques have been developed to support learners in terms of efficiency and focus on the actual comprehension of the learning material. Our aim is to go one step further and determine how automatically identified reading strategies employed by pupils aged 8 to 11 years can be related to their overall level of understanding. Multiple classifiers based on Support Vector Machines are built using the strategies' identification heuristics in order to create an integrated model capable of predicting the learner's comprehension level
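    A minimal sketch of the classification setup described above, assuming scikit-learn; the feature vectors (per-pupil counts of automatically identified reading strategies) and comprehension labels are invented for illustration, as the paper's actual heuristics and data are not given in the abstract:

    ```python
    from sklearn.svm import SVC

    # Hypothetical features per pupil: counts of identified strategies,
    # e.g. [paraphrasing, bridging, elaboration]
    X = [
        [5, 1, 0],  # mostly paraphrasing
        [4, 2, 1],
        [1, 4, 3],  # bridging and elaboration dominate
        [0, 5, 4],
    ]
    # Hypothetical comprehension labels: 0 = low, 1 = high
    y = [0, 0, 1, 1]

    clf = SVC(kernel="linear")  # one of several SVM classifiers
    clf.fit(X, y)
    pred = clf.predict([[1, 5, 3]])[0]  # classify an unseen pupil profile
    ```

    In the integrated model described by the authors, several such SVM classifiers are combined; this sketch shows only a single binary classifier on toy data.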