1,993 research outputs found

    Prevention of Off-Task Gaming Behavior in Intelligent Tutoring Systems

    Get PDF
    A major issue in Intelligent Tutoring Systems is off-task student behavior, especially performance-based gaming, in which students systematically exploit tutor behavior to advance through a curriculum quickly and easily, with as little active thought directed at the educational content as possible. The goal of this research was to develop a passive visual indicator that deters and prevents off-task gaming behavior without active intervention, via graphical feedback to students and teachers. Traditional active intervention approaches were also constructed for comparison purposes. Our passive graphical intervention has been well received by teachers, and results suggest that this technique is effective at reducing off-task gaming behavior.

    Measuring Student Engagement in an Intelligent Tutoring System

    Get PDF
    Detection and prevention of off-task student behavior in an Intelligent Tutoring System (ITS) has gained a significant amount of attention in recent years. Previous work in these areas has shown some success and improvement. However, the research has largely ignored the expert on student behavior in the classroom: the teacher. Our research re-evaluates off-task behavior detection and prevention by developing metrics for student engagement in an ITS using teacher observations of student behavior in the classroom. We present an exploratory analysis of these metrics and the data gathered from the teachers. For off-task prevention, we developed a visual reporting tool that displays a representation of a student's activity in an ITS as the student progresses, giving the instructor a valuable immediate report.

    Visual Feedback for Gaming Prevention in Intelligent Tutoring Systems

    Get PDF
    A major issue in Intelligent Tutoring Systems is off-task student behavior, especially performance-based gaming, in which students systematically exploit tutor behavior to advance through a curriculum quickly and easily, with as little active thought directed at the educational content as possible. The goal of this research was to explore the phenomenon of off-task gaming behavior within the ASSISTments system and to develop a passive visual indicator that deters and prevents off-task gaming behavior without active intervention, via graphical feedback to students and teachers. Traditional active intervention approaches were also constructed for comparison purposes, and machine-learned gaming-detection models were developed as a potential invocation and evaluation mechanism. Passive graphical interventions have been well received by teachers, and results suggest that they are effective at reducing off-task gaming behavior.
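
    The abstract does not specify the machine-learned gaming-detection model, so the sketch below is only illustrative: a minimal feature-based classifier over per-problem interaction signals (fast responses, heavy hint use, repeated wrong answers), where the feature names, toy values, and the logistic-regression choice are assumptions rather than the authors' actual model.

```python
# Hypothetical sketch of a feature-based gaming detector; feature names,
# toy values, and the classifier choice are assumptions, not the study's model.
from dataclasses import dataclass
from sklearn.linear_model import LogisticRegression

@dataclass
class Attempt:
    response_time_s: float   # seconds spent before submitting an answer
    hints_requested: int     # hints opened on this problem step
    wrong_answers: int       # incorrect submissions before the correct one

def to_features(a: Attempt) -> list[float]:
    # Very fast responses combined with heavy hint use and many wrong guesses
    # are the classic signature of "gaming the system".
    return [a.response_time_s, float(a.hints_requested), float(a.wrong_answers)]

# Toy labelled data: 1 = gaming, 0 = on-task (purely illustrative).
attempts = [Attempt(2.0, 3, 4), Attempt(45.0, 0, 1), Attempt(1.5, 4, 5), Attempt(60.0, 1, 0)]
labels = [1, 0, 1, 0]

detector = LogisticRegression().fit([to_features(a) for a in attempts], labels)
print(detector.predict([to_features(Attempt(3.0, 2, 3))]))  # e.g. [1] -> drive the passive indicator
```

    A detector of this kind could serve as the "invocation mechanism" the abstract mentions, triggering the passive graphical feedback only when gaming is flagged.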

    Developing a Cognitive Rule-Based Tutor for the ASSISTment System

    Get PDF
    The ASSISTment system is a web-based tutor that is currently being used for eighth- and tenth-grade mathematics in both Massachusetts and Pennsylvania. The system represents its tutors as state-based pseudo-tutors that mimic a more complex cognitive tutor based on a set of production rules. It has been shown that building pseudo-tutors significantly decreases the time spent authoring content. This is an advantage for authoring systems such as the ASSISTment builder, though it sacrifices expressive power and flexibility. A cognitive tutor models a student's behavior with general logical rules. Through model-tracing of a cognitive tutor's rule space, a system can find the reasons behind a student action and give better tutoring. In addition, these cognitive rules are general and can be used for many different tutors. The goal of this thesis is to provide the architecture for using cognitive rule-based tutors in the ASSISTment system. A final requirement is that running these computationally intensive model-tracing tutors does not slow down students using the pseudo-tutors, which represent the majority of ASSISTment usage. This can be achieved with remote computation, realized with SOAP web services. The system was further extended to allow the creation and implementation of user-level experiments within the system. These experiments allow the testing of pedagogical choices. We implemented a hint dissuasion experiment to test this experimental framework and present those results.
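
    The thesis abstract describes model-tracing over production rules, with the heavy computation moved to a remote server behind SOAP web services. The fragment below is a hedged sketch of the model-tracing step only, with an invented rule format and toy algebra rules that are not taken from the ASSISTment implementation.

```python
# Minimal model-tracing sketch; the rule representation and the example rules
# are illustrative assumptions, not the ASSISTment system's actual rule base.
from typing import Callable, Optional

# A production rule: a name, a condition on the problem state, and the action it produces.
Rule = tuple[str, Callable[[dict], bool], Callable[[dict], str]]

RULES: list[Rule] = [
    ("combine-like-terms",
     lambda s: s["lhs"] == "2x + 3x",
     lambda s: "5x"),
    ("divide-both-sides",
     lambda s: s["lhs"] == "5x" and s["rhs"] == "10",
     lambda s: "x = 2"),
]

def model_trace(state: dict, student_action: str) -> Optional[str]:
    """Return the name of the rule that explains the student's step, if any."""
    for name, condition, action in RULES:
        if condition(state) and action(state) == student_action:
            return name
    return None  # no rule fires: the step is off-path, so tutor on the error

print(model_trace({"lhs": "2x + 3x", "rhs": "10"}, "5x"))  # -> "combine-like-terms"
```

    In the architecture the abstract describes, a call like model_trace would execute remotely and be reached over a SOAP web service, so that the computationally intensive tracing never slows down students working with the lightweight pseudo-tutors.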

    Educational Software for Off-Task Behavior

    Get PDF
    Off-task behavior is a problem currently facing intelligent tutoring systems as well as traditional classrooms. There are a number of reasons why students go off-task and, likewise, a number of ways for them to do so. The goals of this project were to (1) develop a potential off-task intervention method, and (2) implement an off-task detector in an existing intelligent tutoring program that was already capable of detecting and responding to students who were gaming the system.

    When Does Disengagement Correlate with Performance in Spoken Dialog Computer Tutoring?

    Get PDF
    In this paper we investigate how student disengagement relates to two performance metrics in a spoken dialog computer tutoring corpus, both when disengagement is measured through manual annotation by a trained human judge and when it is measured through automatic annotation by the system based on a machine learning model. First, we investigate whether manually labeled overall disengagement and six different disengagement types are predictive of learning and user satisfaction in the corpus. Our results show that although students’ percentage of overall disengaged turns negatively correlates both with the amount they learn and with their user satisfaction, the individual types of disengagement correlate differently: some negatively correlate with learning and user satisfaction, while others don’t correlate with either metric at all. Moreover, these relationships change somewhat depending on student prerequisite knowledge level. Furthermore, using multiple disengagement types to predict learning improves predictive power. Overall, these manual label-based results suggest that although adapting to disengagement should improve both student learning and user satisfaction in computer tutoring, maximizing performance requires the system to detect and respond differently based on disengagement type. Next, we present an approach to automatically detecting and responding to user disengagement types based on their differing correlations with correctness. Investigation of our machine learning model of user disengagement shows that its automatic labels negatively correlate with both performance metrics in the same way as the manual labels. The similarity of the correlations across the manual and automatic labels suggests that the automatic labels are a reasonable substitute for the manual labels. Moreover, the significant negative correlations themselves suggest that redesigning ITSPOKE to automatically detect and respond to disengagement has the potential to remediate disengagement and thereby improve performance, even in the presence of noise introduced by the automatic detection process.
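
    As a concrete illustration of the correlation analysis described above, a minimal sketch might relate each student's percentage of disengaged turns to a learning-gain score and a satisfaction score. The arrays below are invented toy values, not the ITSPOKE corpus data.

```python
# Toy sketch of correlating per-student disengagement with two performance
# metrics; all values are illustrative, not data from the study.
from scipy.stats import pearsonr

pct_disengaged_turns = [0.05, 0.10, 0.22, 0.30, 0.41, 0.55]  # fraction of turns labeled disengaged
learning_gain        = [0.42, 0.38, 0.30, 0.25, 0.18, 0.10]  # posttest minus pretest
user_satisfaction    = [4.5, 4.2, 3.9, 3.1, 2.8, 2.4]        # survey rating

for name, metric in [("learning gain", learning_gain), ("user satisfaction", user_satisfaction)]:
    r, p = pearsonr(pct_disengaged_turns, metric)
    print(f"disengagement vs {name}: r = {r:.2f}, p = {p:.3f}")
```

    Repeating the same computation per disengagement type, rather than on the overall percentage, is what allows the study to show that only some types carry the negative relationship.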

    Log file analysis for disengagement detection in e-Learning environments

    Get PDF

    Trying to Reduce Gaming Behavior by Students in Intelligent Tutoring Systems

    Get PDF
    Student gaming behavior in intelligent tutoring systems (ITS) has been correlated with lower learning rates. The goal of this work is to identify such behavior, produce interventions to discourage it, and by doing so improve the learning rate of students who would normally display gaming behavior. Detectors have been built to identify gaming behavior. Interventions have been designed to discourage the behavior, and their evaluation is discussed.

    Characterizing Productive Perseverance Using Sensor-Free Detectors of Student Knowledge, Behavior, and Affect

    Get PDF
    Failure is a necessary step in the process of learning. For this reason, a great deal of research has been dedicated to the study of student perseverance in the presence of failure, leading to several commonly cited theories and frameworks that characterize productive and unproductive representations of the construct of persistence. While researchers agree that it is important for students to persist when struggling to learn new material, there can be both positive and negative aspects of persistence. What is it, then, that separates productive from unproductive persistence? The purpose of this work is to address this question through the development, extension, and study of data-driven models of student affect, behavior, and knowledge. The increased adoption of computer-based learning platforms in real classrooms has led to unique opportunities to study student learning both at fine levels of granularity and longitudinally at scale. Prior work has leveraged machine learning methods, existing learning theory, and previous education research to explore various aspects of student learning, including the development of sensor-free detectors that utilize only the student interaction data collected through such learning platforms. Building on this considerable body of prior research, this work employs state-of-the-art machine learning methods in conjunction with the large-scale, granular data collected by computer-based learning platforms, in alignment with three goals. First, this work focuses on the development of student models that study learning through advancements in student modeling and deep learning methodologies. Second, this dissertation explores the development of tools that incorporate such models to support teachers in taking action in real classrooms to promote productive approaches to learning. Finally, this work aims to complete the loop by utilizing these detector models to better understand the underlying constructs being measured through their application and their connection to productive perseverance and commonly observed learning outcomes.
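
    The dissertation abstract does not name its knowledge model, so the sketch below shows one commonly used sensor-free approach, a Bayesian Knowledge Tracing update driven only by logged correctness; the parameter values are illustrative assumptions, not the dissertation's fitted parameters.

```python
# Bayesian Knowledge Tracing update: a standard sensor-free knowledge estimate
# computed from interaction logs alone. Parameter values here are illustrative.
def bkt_update(p_know: float, correct: bool,
               p_slip: float = 0.1, p_guess: float = 0.2, p_learn: float = 0.15) -> float:
    """Posterior probability that the student knows the skill after one response."""
    if correct:
        posterior = p_know * (1 - p_slip) / (p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = p_know * p_slip / (p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Allow for the chance that the student learned the skill on this opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior probability of mastery
for outcome in [False, True, True]:
    p = bkt_update(p, outcome)
    print(f"P(know) = {p:.2f}")
```

    Detector outputs of this kind, combined with behavior and affect labels, are the raw material the dissertation uses to characterize productive versus unproductive perseverance.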

    Understanding and Supporting Vocabulary Learners via Machine Learning on Behavioral and Linguistic Data

    Full text link
    This dissertation presents various machine learning applications for predicting different cognitive states of students while they are using a vocabulary tutoring system, DSCoVAR. We conduct four studies, each of which includes a comprehensive analysis of behavioral and linguistic data and provides data-driven evidence for designing personalized features for the system. The first study presents how behavioral and linguistic interactions from the vocabulary tutoring system can be used to predict students' off-task states. The study identifies which predictive features from interaction signals are more important and examines different types of off-task behaviors. The second study investigates how to automatically evaluate students' partial word knowledge from open-ended responses to definition questions. We present a technique that augments modern word-embedding techniques with a classic semantic differential scaling method from cognitive psychology. We then use this interpretable semantic scale method for predicting students' short- and long-term learning. The third and fourth studies show how to develop a model that can generate more efficient training curricula for both human and machine vocabulary learners. The third study illustrates a deep-learning model to score sentences for a contextual vocabulary learning curriculum. We use pre-trained language models, such as ELMo or BERT, and an additional attention layer to capture how the context words are more or less important with respect to the meaning of the target word. The fourth study examines how the contextual informativeness model, originally designed to develop curricula for human vocabulary learning, can also be used for developing curricula for various word embedding models. We find that sentences predicted to be low in informativeness for human learners are also less helpful for machine learning algorithms. Having a rich understanding of user behaviors, responses, and learning stimuli is imperative to developing an intelligent online system. Our studies demonstrate interpretable methods with cross-disciplinary approaches to understand various cognitive states of students during learning. The analysis results provide data-driven evidence for designing personalized features that can maximize learning outcomes. The datasets we collected from the studies will be shared publicly to promote future studies related to online tutoring systems, and these findings can also be applied to represent different user states observed in other online systems. In the future, we believe our findings can help to implement a more personalized vocabulary learning system, to develop a system that uses non-English texts or different types of inputs, and to investigate how the machine learning outputs interact with students.
    PhD, Information, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162999/1/sjnam_1.pd
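
    For the second study's approach of augmenting word embeddings with semantic differential scaling, a hedged sketch is shown below: it projects words from a student response onto an embedding-derived semantic-differential axis. The tiny embedding table, axis poles, and example words are invented for illustration and are not DSCoVAR's data or model.

```python
# Illustrative semantic-differential scoring over stand-in word embeddings;
# real systems would load pretrained vectors (e.g. GloVe or BERT outputs).
import numpy as np

emb = {  # toy 3-dimensional embeddings, invented for this sketch
    "good":     np.array([0.9, 0.1, 0.0]),
    "bad":      np.array([-0.8, 0.2, 0.1]),
    "pleasant": np.array([0.7, 0.3, 0.1]),
    "awful":    np.array([-0.7, 0.1, 0.2]),
}

def scale_score(word: str, positive: str = "good", negative: str = "bad") -> float:
    """Project a word onto the positive-negative semantic-differential axis."""
    axis = emb[positive] - emb[negative]
    return float(np.dot(emb[word], axis) / (np.linalg.norm(axis) * np.linalg.norm(emb[word]) + 1e-9))

# A response containing "pleasant" lands near the positive pole, which could be
# read as partial knowledge of a positively valenced target word.
print(f"pleasant: {scale_score('pleasant'):.2f}, awful: {scale_score('awful'):.2f}")
```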