2,107 research outputs found

    Towards the use of Semi-structured Annotators for Automated Essay Grading

    The amount of time teachers spend grading essays has increased over the past decade, prompting the development of systems that can lighten this workload. Many systems have so far used linear regression or semi-supervised methods toward this objective. This paper discusses some of the main Automated Essay Grading systems, highlighting some of their strengths and weaknesses, and provides a brief overview of Text Mining and metadata annotation techniques that could facilitate grading essays through an automated system.
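
    To make the linear-regression approach mentioned above concrete, here is a minimal sketch of an essay scorer trained on shallow surface features. The feature set and the scikit-learn pipeline are illustrative assumptions, not the method of any particular system surveyed in the paper.

```python
# Minimal sketch of a linear-regression essay scorer (assumes scikit-learn).
# The features below are hypothetical stand-ins for the hand-crafted
# surface features early AES systems typically used.
from sklearn.linear_model import LinearRegression

def shallow_features(essay: str) -> list[float]:
    """Simple surface features: length, vocabulary size, word length, sentences."""
    words = essay.split()
    n_words = len(words)
    return [
        n_words,                                       # essay length
        len(set(w.lower() for w in words)),            # vocabulary size
        sum(len(w) for w in words) / max(n_words, 1),  # mean word length
        essay.count("."),                              # rough sentence count
    ]

# Toy training data (hypothetical essays with teacher-assigned scores).
essays = ["Short answer.", "A longer, more developed essay with varied vocabulary."]
scores = [1.0, 4.0]

model = LinearRegression().fit([shallow_features(e) for e in essays], scores)
print(model.predict([shallow_features("Another unseen essay to grade.")]))
```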

    The Effect of Text Summarization in Essay Scoring (Case Study: Teach on E-Learning)

    The development of automated essay scoring (AES) with neural network (NN) approaches has largely eliminated feature engineering. However, feature engineering is still needed, and data labeled with rubric scores, which complement AES holistic scores, remain rare; unlabeled data are far more common. Even so, unsupervised AES research has seen little progress, since publicly available labeled data are more commonly used. Based on the case study adopted in this research, automatic text summarization (ATS) was used as a feature-engineering model for AES, and a readability index served as the definition of rubric values for unlabeled data. This research focuses on developing AES by applying ATS results to SOM and HDBSCAN. The data used are 403 TEACH ON E-learning essay documents, represented as a combination of word vectors and a readability index. Based on the tests and measurements carried out, AES with the ATS implementation showed no real potential to improve the assessment of TEACH ON essays as measured by silhouette score: the model produced its best silhouette score, 0.727286113, with the original (unsummarized) essay data.
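
    A minimal sketch of the clustering-and-silhouette evaluation described above, assuming scikit-learn >= 1.3 (for HDBSCAN) and the textstat package for a readability index; the TF-IDF representation and the toy essays are assumptions standing in for the study's word vectors and TEACH ON data.

```python
# Sketch: cluster essays on word vectors plus a readability index, then
# evaluate the clustering with the silhouette score, as in the study above.
# Assumes scikit-learn >= 1.3 (sklearn.cluster.HDBSCAN) and textstat;
# TF-IDF stands in for whatever word-vector representation was actually used.
import numpy as np
import textstat
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import HDBSCAN
from sklearn.metrics import silhouette_score

essays = [
    "E-learning platforms let students study at their own pace.",
    "Online courses reduce travel time and widen access to teachers.",
    "My cat sleeps on the keyboard whenever I try to write.",
    "Pets often interrupt remote work and online study sessions.",
]

# Word-vector part: dense TF-IDF matrix, so we can append an extra column.
tfidf = TfidfVectorizer().fit_transform(essays).toarray()

# Readability index as an extra feature column (stand-in rubric signal).
readability = np.array([[textstat.flesch_reading_ease(e)] for e in essays])

X = np.hstack([tfidf, readability / 100.0])  # crude scaling of the index

labels = HDBSCAN(min_cluster_size=2).fit_predict(X)
if 1 < len(set(labels)) < len(essays):  # silhouette needs 2..n-1 clusters
    print("silhouette:", silhouette_score(X, labels))
```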

    Analysis of Discourse Structure and Logical Structure in Argumentative Writing

    Tohoku University doctoral thesis (Information Sciences)

    On the Effects of Combining Latent Semantic Analysis with Other Natural Language Processing Techniques for the Assessment of Open-Ended Questions

    This article presents the combination of Latent Semantic Analysis (LSA) with other natural language processing techniques (stemming, removal of closed-class words, and word sense disambiguation) to improve the automatic assessment of students' free-text answers. The combinational schema was tested in the experimental framework provided by the free-text Computer Assisted Assessment (CAA) system Atenea (Alfonseca & Pérez, 2004). This system can pose an open-ended question to the student, chosen randomly or according to the student's profile, and then assign a score to the answer. The results show that for all datasets in which the NLP techniques were combined with LSA, the Pearson correlation between the scores given by Atenea and the scores given by the teachers for the same set of questions improves. We believe this is due to the complementarity between LSA, which works at a shallow semantic level, and the rest of the NLP techniques used in Atenea, which focus more on the lexical and syntactic levels.
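
    A minimal sketch of the LSA-based scoring idea described above, assuming scikit-learn and SciPy. Scoring each answer by cosine similarity to a reference answer in the latent space, and correlating against teacher scores with Pearson's r, is a common simplification rather than Atenea's exact pipeline.

```python
# Sketch: score free-text answers by LSA similarity to a reference answer,
# then check the Pearson correlation against teacher-assigned scores.
# Assumes scikit-learn and SciPy; Atenea's actual pipeline also applies
# stemming, closed-class-word removal, and word sense disambiguation.
from scipy.stats import pearsonr
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Photosynthesis converts light energy into chemical energy in plants."
answers = [
    "Plants turn light into chemical energy through photosynthesis.",
    "Photosynthesis is how plants make energy from sunlight.",
    "The mitochondria is the powerhouse of the cell.",
]
teacher_scores = [5.0, 4.0, 1.0]

docs = [reference] + answers
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)

# LSA: project the TF-IDF vectors onto a low-rank latent semantic space.
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# System score = cosine similarity between each answer and the reference.
system_scores = cosine_similarity(lsa[1:], lsa[:1]).ravel()

r, _ = pearsonr(system_scores, teacher_scores)
print("Pearson correlation with teacher scores:", round(r, 3))
```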

    Automatic Essay Scoring Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses

    Deep-learning-based Automatic Essay Scoring (AES) systems are being actively used in various high-stakes applications in education and testing. However, little research has gone into understanding and interpreting the black-box nature of deep-learning-based scoring algorithms. While previous studies indicate that scoring models can be easily fooled, in this paper we explore the reason behind their surprising adversarial brittleness. We utilize recent advances in interpretability to find the extent to which features such as coherence, content, vocabulary, and relevance are important for automated scoring mechanisms. We use this to investigate the oversensitivity (i.e., a large change in output score with a small change in input essay content) and overstability (i.e., little change in output scores with large changes in input essay content) of AES. Our results indicate that autoscoring models, despite being trained as “end-to-end” models with rich contextual embeddings such as BERT, behave like bag-of-words models. A few words determine the essay score without requiring any context, making the model largely overstable. This is in stark contrast to recent probing studies on pre-trained representation learning models, which show that they encode rich linguistic features such as parts of speech and morphology. Further, we find that the models have learned dataset biases, making them oversensitive: the presence of a few words with high co-occurrence with a certain score class makes the model associate the essay sample with that score. This causes score changes in ∼95% of samples with the addition of only a few words. To deal with these issues, we propose detection-based protection models that can detect oversensitivity and overstability-causing samples with high accuracy. We find that our proposed models are able to detect unusual attribution patterns and flag adversarial samples successfully.
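
    The oversensitivity finding above can be probed with a simple perturbation loop: append a few high-co-occurrence "trigger" words to each essay and measure the resulting score shift. In the sketch below, score_essay is a hypothetical stand-in for any trained AES model (e.g. a fine-tuned BERT regressor), and the trigger words and threshold are illustrative assumptions.

```python
# Sketch of an oversensitivity probe: append a few "trigger" words to each
# essay and measure the change in predicted score. `score_essay` is a
# hypothetical stand-in for a trained AES model; the triggers and the
# threshold below are assumptions, not values from the paper.
from typing import Callable

def oversensitivity_rate(
    score_essay: Callable[[str], float],
    essays: list[str],
    triggers: str = "excellent coherent thesis",
    threshold: float = 0.5,
) -> float:
    """Fraction of essays whose predicted score shifts by more than
    `threshold` when only a few words are appended: a large output change
    for a small input change, the paper's notion of oversensitivity."""
    flipped = 0
    for essay in essays:
        delta = abs(score_essay(essay + " " + triggers) - score_essay(essay))
        if delta > threshold:
            flipped += 1
    return flipped / len(essays)

# Toy stand-in model: rewards essays containing the trigger vocabulary.
def toy_model(essay: str) -> float:
    return 1.0 + sum(w in essay for w in ("excellent", "coherent", "thesis"))

print(oversensitivity_rate(toy_model, ["An average essay.", "Another one."]))
```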

    (AI)N’T NOBODY HELPING ME? – DESIGN AND EVALUATION OF A MACHINE-LEARNING-BASED SEMI-AUTOMATIC ESSAY SCORING SYSTEM

    Education is increasingly delivered digitally these days, whether due to the COVID-19 pandemic or the growing popularity of MOOCs. The increasing number of participants poses a challenge for institutions to balance didactic and financial demands, especially for exams. The overall goal of this design science research project is to design and evaluate a semi-automatic, machine-learning-based scoring system for essays. We focus on the design of a functional software artifact, including the required design principles and an exemplary implementation of an algorithm; its optimization, however, is not part of this project and is left for future research. Our results show that such a system is suitable both for scoring essay assignments and for documenting those scores. In addition to the software artifact, we document our results using the work of Gregor et al. (2020). This provides a first step towards a design theory for semi-automatic, machine-learning-based scoring systems for essays.