5 research outputs found

    Comparative Study of Different Techniques for Automatic Evaluation of English Text Essays

    Automated essay evaluation continues to attract considerable interest because of its educational and commercial importance, as well as the research challenges it poses for natural language processing. Compared with a human evaluator, who needs more time and whose judgments can vary with mood, automated evaluation offers consistent scoring, lower human-resource costs, and immediate results and timely feedback. This paper focuses on the automated evaluation of English text essays, comparing various algorithms and techniques applied to datasets of different sizes and essays of different lengths, and assessing the performance of the algorithms with several metrics. The results reveal that the performance of each technique is affected by the size of the dataset and the length of the essays. As a future research direction, the paper proposes building a standard dataset containing different types of question-answer pairs so that the performance of different techniques can be compared fairly.
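
    The abstract does not name the metrics used, so the sketch below is only an assumption-laden illustration of such a comparison: it scores two hypothetical techniques against human ratings with quadratic weighted kappa (QWK), a metric commonly used in automated essay scoring. All score values are invented placeholders, not data from the paper.

    # Minimal sketch, assuming QWK as one of the comparison metrics.
    from sklearn.metrics import cohen_kappa_score

    human  = [2, 3, 4, 1, 3, 4, 2, 0]   # reference scores from a human rater
    tech_a = [2, 3, 3, 1, 3, 4, 2, 1]   # scores from hypothetical technique A
    tech_b = [3, 2, 4, 1, 2, 4, 1, 0]   # scores from hypothetical technique B

    for name, pred in [("technique A", tech_a), ("technique B", tech_b)]:
        qwk = cohen_kappa_score(human, pred, weights="quadratic")
        print(f"{name}: QWK = {qwk:.3f}")

    QWK suits ordinal essay scores because it penalizes large disagreements more heavily than near misses.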

    A Trustworthy Automated Short-Answer Scoring System Using a New Dataset and Hybrid Transfer Learning Method

    To measure the quality of student learning, teachers must conduct evaluations. One of the most efficient modes of evaluation is the short answer question. However, manual evaluation by teachers can be inconsistent because of excessive numbers of students, time demands, fatigue, and similar factors. Consequently, teachers require a trustworthy system capable of autonomously and accurately evaluating student answers. Using hybrid transfer learning and a student answer dataset, we aim to create a reliable automated short answer scoring system called Hybrid Transfer Learning for Automated Short Answer Scoring (HTL-ASAS). HTL-ASAS combines multiple tokenizers from a pretrained model with bidirectional encoder representations from transformers (BERT). Based on our evaluation of the trained model, HTL-ASAS achieves higher evaluation accuracy than models used in previous studies. For datasets containing responses to questions from introductory information technology courses, the accuracy of HTL-ASAS reaches 99.6%. With an accuracy close to one hundred percent, the developed model can undoubtedly serve as the foundation for a trustworthy ASAS system.
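
    The hybrid multi-tokenizer method itself is not given as code in the abstract; the snippet below is only a minimal sketch of the BERT-based scoring family HTL-ASAS builds on, pairing a reference answer with a student answer via Hugging Face Transformers. The model name, label scheme, and answer texts are assumptions, not the paper's setup.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Plain BERT pair classification (a stand-in, not the HTL-ASAS hybrid).
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)  # correct / incorrect

    reference = "An operating system manages hardware and software resources."
    student = "It is software that controls the computer's hardware."

    # BERT-style models accept sentence pairs; after fine-tuning on labeled
    # answers, the classification head scores whether the pair matches.
    inputs = tokenizer(reference, student, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)
    print(probs)  # head is untrained here, so these values are not meaningful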

    All-in-One E-Book Development in Proposing Automatic Critical Thinking Skill Assessments

    Many e-books have been developed for learning specific physics concepts, with comprehensive features: beyond primary components such as animations, videos, and illustrations, many are also equipped with virtual experiments. However, these e-books often fail to integrate the assessment process, which is an important part of the learning experience. To address this, an all-in-one e-book called Aneboo has been developed. Aneboo includes interactive physics illustrations, virtual laboratories, worksheets, videos, and critical thinking assessments, all built into a single media platform for learning the concept of static fluids in junior high school. This study also examines Aneboo's ability to assess critical thinking skills automatically. The development of Aneboo follows the Hannafin & Peck model, covering needs assessment, design and development, implementation, and identification of the similarity between manual and automatic scoring. Aneboo achieved validation scores ranging from 95% to 97%, and it shows the potential to assess critical thinking skills automatically through the similarity check feature embedded in the media.
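
    The similarity check embedded in Aneboo is not described in detail in the abstract; as a minimal sketch of the general idea, assuming a TF-IDF cosine-similarity comparison between a student response and a model answer (both invented below), it might look like this:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Invented static-fluids answers, for illustration only.
    model_answer = "Pressure in a static fluid increases with depth."
    student_answer = "The deeper you go in a fluid at rest, the higher the pressure."

    # Vectorize both texts and compare them; the score could be mapped to a
    # rubric band to approximate a manual critical-thinking score.
    tfidf = TfidfVectorizer().fit_transform([model_answer, student_answer])
    score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    print(f"similarity = {score:.2f}")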

    Machine learning model for automated assessment of short subjective answers

    Natural Language Processing (NLP) has recently gained significant attention, and semantic similarity techniques are widely used in diverse applications such as information retrieval, question-answering systems, and sentiment analysis. One promising area where NLP is being applied is personalized learning, where assessments and adaptive tests are used to capture students' cognitive abilities. In this context, open-ended questions are commonly used in assessments because of their simplicity, but their effectiveness depends on the type of answer expected. Effective assessment requires understanding the underlying meaning of short text answers, which is challenging because of their short length, lack of clarity, and loose structure. Researchers have proposed various approaches, including distributed semantics and vector space models, but assessing short answers with these methods remains difficult. Machine learning methods, such as transformer models with multi-head attention, have emerged as advanced techniques for understanding and assessing the underlying meaning of answers. This paper proposes a transformer-based learning model that uses multi-head attention to identify and assess students' short answers and thereby overcome these issues. Our approach improves assessment performance and outperforms current state-of-the-art techniques. We believe our model has the potential to revolutionize personalized learning and significantly improve student outcomes.
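
    The abstract does not specify the model's architecture, so the sketch below only demonstrates the multi-head attention building block it names, using PyTorch's built-in module; the dimensions and input are invented.

    import torch
    import torch.nn as nn

    # Illustrative sizes, not the paper's configuration.
    embed_dim, num_heads, seq_len = 64, 8, 12
    attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    # One embedded short answer (random stand-in for real token embeddings).
    x = torch.randn(1, seq_len, embed_dim)

    # Self-attention: each token attends to every other token, letting the
    # model weigh which words carry the answer's meaning.
    out, weights = attn(x, x, x)
    print(out.shape, weights.shape)  # (1, 12, 64) and (1, 12, 12)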

    Learning Analytics Through Machine Learning and Natural Language Processing

    The increase in computing power and the ability to log students' data through computer-assisted learning systems have led to growing interest in developing and applying computer science techniques to the analysis of learning data. To understand and investigate how learning-generated data can be used to improve student success, data mining techniques have been applied to several educational tasks. This dissertation investigates three important tasks in educational data mining: learners' behavior analysis, essay structure analysis and feedback provision, and learner dropout prediction. The first project applied latent semantic analysis and machine learning approaches to investigate how MOOC learners' longitudinal trajectories of meaningful forum participation facilitated learner performance. The findings have implications for refining course facilitation methods and forum design, improving learners' performance, and assessing learners' academic performance in MOOCs. The second project analyzes the organizational structures used in previous ACT test essays and provides an argumentative-structure feedback tool driven by deep learning language models, to better support current automatic essay scoring systems and classroom settings. The third project applied MOOC learners' forum participation states to predict dropout using hidden Markov models and other machine learning techniques; its results show that forum behavior can be used to predict dropout and evaluate learners' status. Overall, the results of this dissertation expand current research and shed light on how computer science techniques can further improve students' learning experience.
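
    As a hedged sketch of the third project's approach, the snippet below fits a hidden Markov model to forum-participation sequences with hmmlearn; the state definitions, sequences, and sizes are invented for illustration and are not the dissertation's data.

    import numpy as np
    from hmmlearn import hmm

    # Invented weekly activity codes per learner: 0 = inactive, 1 = reads,
    # 2 = posts. Each inner list is one learner's weekly sequence.
    seqs = [[2, 2, 1, 1, 0, 0], [2, 1, 2, 2, 1, 2], [1, 0, 0, 0, 0, 0]]
    X = np.concatenate([np.array(s).reshape(-1, 1) for s in seqs])
    lengths = [len(s) for s in seqs]

    # Two hidden states, loosely "engaged" vs. "disengaging".
    model = hmm.CategoricalHMM(n_components=2, n_iter=50, random_state=0)
    model.fit(X, lengths)

    # The decoded state path for a new sequence can feed a dropout classifier.
    print(model.predict(np.array([[2], [1], [0], [0]])))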