5 research outputs found

    Discussion Tracker: Supporting Teacher Learning about Students' Collaborative Argumentation in High School Classrooms

    Teaching collaborative argumentation is an advanced skill that many K-12 teachers struggle to develop. To address this, we have developed Discussion Tracker, a classroom discussion analytics system based on novel algorithms for classifying argument moves, specificity, and collaboration. Results from a classroom deployment indicate that teachers found the analytics useful, and that the underlying classifiers perform with moderate to substantial agreement with humans.
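    As a purely illustrative sketch (not the Discussion Tracker implementation), the snippet below shows the kind of per-student aggregation a classroom discussion analytics system might perform once each student turn has been labeled for argument move, specificity, and collaboration; the label names and data are invented for illustration.

# Hypothetical sketch: aggregating labeled discussion turns into per-student analytics.
from collections import Counter, defaultdict

# Each turn has already been labeled (by classifiers or humans); labels are invented examples.
turns = [
    {"student": "S1", "move": "claim",    "specificity": "high", "collaboration": "extends"},
    {"student": "S2", "move": "evidence", "specificity": "med",  "collaboration": "agrees"},
    {"student": "S1", "move": "warrant",  "specificity": "high", "collaboration": "challenges"},
]

per_student = defaultdict(lambda: {"moves": Counter(), "high_specificity": 0, "turns": 0})
for t in turns:
    stats = per_student[t["student"]]
    stats["turns"] += 1
    stats["moves"][t["move"]] += 1
    stats["high_specificity"] += t["specificity"] == "high"

for student, stats in sorted(per_student.items()):
    share = stats["high_specificity"] / stats["turns"]
    print(f"{student}: {stats['turns']} turns, {dict(stats['moves'])}, {share:.0%} highly specific")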

    Learning Analytics Through Machine Learning and Natural Language Processing

    The increase in computing power and the ability to log students’ data with the help of computer-assisted learning systems have led to increased interest in developing and applying computer science techniques for analyzing learning data. To understand and investigate how learning-generated data can be used to improve student success, data mining techniques have been applied to several educational tasks. This dissertation investigates three important tasks in various domains of educational data mining: learners’ behavior analysis, essay structure analysis and feedback provision, and learner dropout prediction. The first project applied latent semantic analysis and machine learning approaches to investigate how MOOC learners’ longitudinal trajectories of meaningful forum participation facilitated learner performance. The findings have implications for refining course facilitation methods and forum design, helping improve learners’ performance, and assessing learners’ academic performance in MOOCs. The second project analyzes the organizational structures used in previous ACT test essays and provides an argumentative-structure feedback tool driven by deep learning language models to better support current automatic essay scoring systems and classroom settings. The third project applied MOOC learners’ forum participation states to predict dropout with the help of hidden Markov models and other machine learning techniques. The results of this project show that forum behavior can be used to predict dropout and evaluate learners’ status. Overall, the results of this dissertation expand current research and shed light on how computer science techniques could further improve students’ learning experience.
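    As a rough illustration of the first project's general approach (not the author's actual pipeline), the sketch below applies latent semantic analysis, here realized as TF-IDF followed by truncated SVD, to toy MOOC forum posts and feeds the resulting low-dimensional representations to a logistic-regression classifier of learner performance; all posts, dimensions, and labels are invented for illustration.

# Hypothetical sketch: latent semantic analysis of MOOC forum posts
# followed by a simple performance classifier (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy forum posts and pass/fail labels -- stand-ins for real MOOC data.
posts = [
    "I compared my solution with the reading and revised my argument",
    "when is the deadline",
    "here is evidence from the lecture that supports my claim",
    "thanks",
]
passed = [1, 0, 1, 0]

# TF-IDF followed by truncated SVD is a standard realization of latent semantic analysis.
lsa_classifier = make_pipeline(
    TfidfVectorizer(),
    TruncatedSVD(n_components=2),   # tiny latent space for the toy corpus
    LogisticRegression(),
)
lsa_classifier.fit(posts, passed)
print(lsa_classifier.predict(["my claim is supported by the text"]))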

    Analysis of Collaborative Argumentation in Text-based Classroom Discussions

    Collaborative argumentation can be defined as the process of building evidence-based, reasoned knowledge through dialogue, and it is the foundation for text-based, student-centered classroom discussions. Previous studies for analyzing classroom discussions, however, have not focused on the actual content of student talk. In this thesis, we develop a framework for analyzing student talk in multi-party, text-based classroom discussions to understand how students interact and collaboratively build arguments. The proposed framework simultaneously considers multiple features, namely argumentation, specificity, and collaboration. We additionally propose computational models to investigate three aspects: 1) automatically predicting specificity; 2) automatically predicting argument components and investigating the importance of speaker-dependent context; and 3) using multi-task learning to jointly predict all aspects of student talk and improve reliability.
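    As a loose sketch of the multi-task idea described above (not the thesis's actual model), the PyTorch snippet below shares one sentence encoder across three classification heads for argumentation, specificity, and collaboration, and sums their losses during training; the architecture, label counts, and dimensions are assumptions made up for illustration.

# Hypothetical multi-task classifier for discussion turns (PyTorch).
# One shared encoder, three task-specific heads; the losses are summed.
import torch
import torch.nn as nn

class MultiTaskTalkClassifier(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, hidden=128,
                 n_arg=4, n_spec=3, n_collab=4):   # label counts are invented
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.arg_head = nn.Linear(hidden, n_arg)
        self.spec_head = nn.Linear(hidden, n_spec)
        self.collab_head = nn.Linear(hidden, n_collab)

    def forward(self, token_ids):
        _, (h, _) = self.encoder(self.embed(token_ids))
        shared = h[-1]   # last hidden state as the shared turn representation
        return self.arg_head(shared), self.spec_head(shared), self.collab_head(shared)

model = MultiTaskTalkClassifier()
tokens = torch.randint(0, 5000, (8, 20))    # a toy batch of 8 turns, 20 tokens each
arg_y, spec_y, collab_y = (torch.randint(0, n, (8,)) for n in (4, 3, 4))
logits = model(tokens)
loss = sum(nn.functional.cross_entropy(l, y)
           for l, y in zip(logits, (arg_y, spec_y, collab_y)))
loss.backward()   # jointly trains the shared encoder on all three tasks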

    Exploring Automated Essay Scoring Models for Multiple Corpora and Topical Component Extraction from Student Essays

    Since it is widely acknowledged that human essay grading is labor-intensive, automated scoring methods have drawn increasing attention. They reduce reliance on human effort and subjectivity over time and have commercial benefits for standardized aptitude tests. Automated essay scoring can be defined as a method for grading student essays that aims for high agreement with human graders, where human scores exist, and requires no human effort during the scoring process. This research mainly focuses on improving existing Automated Essay Scoring (AES) models with different technologies. We present three different scoring models for grading two corpora: the Response to Text Assessment (RTA) and the Automated Student Assessment Prize (ASAP). First, a traditional machine learning model that extracts features based on semantic similarity measurement is employed for grading the RTA task. Second, a neural network model with a co-attention mechanism is used for grading source-based writing tasks. Third, we propose a hybrid model integrating the neural network model with hand-crafted features. Experiments show that the feature-based model outperforms its baseline, but a stand-alone neural network model significantly outperforms the feature-based model. Additionally, the hybrid model integrating the neural network model and hand-crafted features outperforms its baselines, especially in a cross-prompt experimental setting. We also present two investigations of using the intermediate output of the neural network model for keyword and key-phrase extraction from student essays and the source article. Experiments show that keywords and key phrases extracted by our models support the feature-based AES model, and that human effort can be reduced by using automated essay quality signals during the training process.
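    As a minimal sketch of the feature-based side of this work (not the authors' feature set or model), the snippet below scores toy essays from two hypothetical hand-crafted features and reports quadratic weighted kappa, the agreement metric commonly used to compare automated scores against human graders on ASAP-style tasks; the features, essays, and scores are invented for illustration.

# Hypothetical sketch: a tiny feature-based AES baseline evaluated with
# quadratic weighted kappa (agreement between predicted and human scores).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import cohen_kappa_score

def handcrafted_features(essay, source):
    """Toy stand-ins for the kind of hand-crafted features a hybrid model might add."""
    essay_words, source_words = essay.lower().split(), set(source.lower().split())
    overlap = sum(w in source_words for w in essay_words)   # crude evidence-use signal
    return [len(essay_words), overlap]

source = "the author argues that access to electricity changed village life"
essays = [
    "the author argues electricity changed village life and gives clear evidence",
    "electricity is nice",
    "the essay says village life changed because of access to electricity",
]
human_scores = [4, 1, 3]

X = np.array([handcrafted_features(e, source) for e in essays])
model = LinearRegression().fit(X, human_scores)
predicted = np.clip(np.rint(model.predict(X)), 1, 4).astype(int)

# Quadratic weighted kappa between machine and human scores.
print(cohen_kappa_score(human_scores, predicted, weights="quadratic"))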