4 research outputs found

    Using Natural Language Processing to Increase Modularity and Interpretability of Automated Essay Evaluation and Student Feedback

    For English teachers and students dissatisfied with the one-size-fits-all approach of current Automated Essay Scoring (AES) systems, this research uses Natural Language Processing (NLP) techniques with a focus on configurability and interpretability. Unlike traditional AES models, which are designed to produce an overall score based on pre-trained criteria, this tool allows teachers to tailor feedback to specific focus areas. The tool implements a user interface that serves as a customizable rubric. Students' essays are entered into the tool, by either the student or the teacher, via the application's user interface. Based on the rubric settings, the tool evaluates the essay and provides instant feedback. In addition to rubric-based feedback, the tool implements a Multi-Armed Bandit recommender engine that suggests educational resources aligned with the rubric, thereby reducing the amount of time teachers spend grading essay drafts and re-teaching. The tool developed and deployed as part of this research reduces the burden on teachers and provides instant, customizable feedback to students. Our minimum estimate of time savings for students and teachers is 117 hours per semester. The effectiveness of the feedback criteria in predicting whether an essay was proficient or needed improvement was measured using recall: 0.96 for the persuasive-essay model and 0.86 for the source-dependent essay model.
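    The abstract does not specify how its Multi-Armed Bandit recommender is implemented; a minimal epsilon-greedy sketch, in which each arm is an educational resource and the reward signal reflects whether a student's next draft improved, might look like the following (the class name, arm names, and reward signal are all assumptions, not details from the paper):

```python
import random


class EpsilonGreedyBandit:
    """Epsilon-greedy multi-armed bandit: each arm is an educational resource."""

    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}    # times each resource was suggested
        self.values = {a: 0.0 for a in self.arms}  # running mean reward per resource

    def select(self):
        # Explore a random resource with probability epsilon; otherwise
        # exploit the resource with the best observed mean reward.
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean update for the chosen arm.
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n
```

    With epsilon set to 0 the bandit always exploits, so after observing one positive reward for a resource it keeps recommending that resource until the estimates change.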

    Identificación del nivel de complejidad de texto para el entrenamiento de chatbots basado en Machine Learning: una revisión de literatura

    The level of textual complexity can be a problem for some people when using chatbots, because these programs may give answers whose complexity level is not one the user understands. Such chatbots should therefore be trained on a dataset with the desired textual complexity, to avoid confusing users. To that end, a systematic review is defined using the Google Scholar, ACM Digital Library, and IEEE Xplore databases, from which the necessary information is obtained using keywords defined by the PICOC method. This yields a total of thirty-eight documents that evidence the existence of different metrics for analyzing textual complexity, as well as chatbot training experiments and the corresponding results of their interactions with users. In addition, analysis of thesis documents associated with the research topic reinforces the idea that textual complexity can be analyzed through a set of metrics. Finally, based on the literature review and the thesis documents, the resulting conclusions are presented.
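    The review surveys many textual-complexity metrics without singling one out; as one widely used example, a rough Flesch Reading Ease score can be computed from sentence, word, and syllable counts (the syllable heuristic below counting vowel groups is an approximation, not the metric the review necessarily uses):

```python
import re


def flesch_reading_ease(text):
    """Approximate Flesch Reading Ease: higher scores mean simpler text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    # Crude syllable estimate: each run of consecutive vowels counts as one
    # syllable, with a minimum of one syllable per word.
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
```

    A dataset curator could filter training utterances whose score falls below a chosen threshold, keeping only text simple enough for the target audience.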

    Towards Context-Aware Automated Writing Evaluation Systems

    No full text
    Writing is a crucial skill in our society, regularly exercised by students across all disciplines. Automated essay scoring and automated writing evaluation systems can support professors in the evaluation of written texts and, in turn, help students improve their writing. However, most of those systems fail to consider the context of the writing, such as the targeted audience and the genre. In this paper, we present our vision for a new generation of AES systems that evaluate written products while considering their specific context. In education, such tools could support students not only in adapting their written product to its particular context, but also in identifying points for improvement and situational settings where their writing is less proficient.