
    Leveraging Affect Transfer Learning for Behavior Prediction in an Intelligent Tutoring System

    In the context of building an intelligent tutoring system (ITS), which improves student learning outcomes by intervention, we set out to improve the prediction of student problem outcomes. In essence, we want to predict the outcome of a student answering a problem in an ITS from a video feed by analyzing their face and gestures. For this, we present a novel transfer-learning facial affect representation and a user-personalized training scheme that unlocks the potential of this representation. We model the temporal structure of video sequences of students solving math problems using a recurrent neural network architecture. Additionally, we extend the largest dataset of student interactions with an intelligent online math tutor by a factor of two. Our final model, coined ATL-BP (Affect Transfer Learning for Behavior Prediction), achieves a 45% increase in mean F-score over the state of the art on this new dataset in the general case, and a 50% increase in the more challenging leave-users-out experimental setting when we use the user-personalized training scheme.
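    The abstract describes the architecture only at a high level. As a rough, hypothetical sketch (not the authors' released code), the core idea of feeding per-frame facial-affect embeddings from a pretrained model into a recurrent network that predicts the problem outcome could look like the following; the embedding size, hidden size, and number of outcome classes are assumptions for illustration.

```python
# Hypothetical sketch of an affect-to-outcome recurrent predictor.
# Assumes per-frame affect embeddings (e.g., from a pretrained facial-affect
# network) are already extracted; all dimensions below are illustrative only.
import torch
import torch.nn as nn

class OutcomePredictor(nn.Module):
    def __init__(self, affect_dim=256, hidden_dim=128, num_outcomes=3):
        super().__init__()
        self.rnn = nn.GRU(affect_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_outcomes)

    def forward(self, affect_seq):
        # affect_seq: (batch, frames, affect_dim) sequence of affect embeddings
        _, last_hidden = self.rnn(affect_seq)      # last_hidden: (1, batch, hidden_dim)
        return self.head(last_hidden.squeeze(0))   # logits over outcome classes

# Example: a batch of 8 clips, each with 60 frames of 256-d affect embeddings
logits = OutcomePredictor()(torch.randn(8, 60, 256))
```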

    FACE READERS: The Frontier of Computer Vision and Math Learning

    The future of AI-assisted individualized learning includes computer vision to inform intelligent tutors and teachers about student affect, motivation, and performance. Facial expression recognition is essential for recognizing subtle differences when students ask for hints or fail to solve problems. Facial features and classification labels enable intelligent tutors to predict students' performance and recommend activities. Videos can capture students' faces and model their effort and progress; machine learning classifiers can support intelligent tutors in providing interventions. One goal of this research is to support deep dives by teachers to identify students' individual needs through facial expression and to provide immediate feedback. Another goal is to develop data-directed education to gauge students' pre-existing knowledge and analyze real-time data that will engage both teachers and students in more individualized and precise teaching and learning. This paper identifies three phases in the process of recognizing and predicting student progress based on analyzing facial features: Phase I: collecting datasets and identifying salient labels for facial features and student attention/engagement; Phase II: building and training deep learning models of facial features; and Phase III: predicting student problem-solving outcomes.

    Personalized face and gesture analysis using hierarchical neural networks

    The video-based computational analyses of human face and gesture signals encompass a myriad of challenging research problems involving computer vision, machine learning, and human-computer interaction. In this thesis, we focus on the following challenges: a) the classification of hand and body gestures along with the temporal localization of their occurrence in a continuous stream, b) the recognition of facial expressivity levels in people with Parkinson's disease using multimodal feature representations, c) the prediction of student learning outcomes in intelligent tutoring systems using affect signals, and d) the personalization of machine learning models, which can adapt to subject- and group-specific nuances in facial and gestural behavior. Specifically, we first conduct a quantitative comparison of two approaches to the problem of segmenting and classifying gestures on two benchmark gesture datasets: a method that simultaneously segments and classifies gestures versus a cascaded method that performs the tasks sequentially. Second, we introduce a framework that computationally predicts an accurate score for facial expressivity and validate it on a dataset of interview videos of people with Parkinson's disease. Third, based on a unique dataset of videos of students interacting with MathSpring, an intelligent tutoring system, collected by our collaborative research team, we build models to predict learning outcomes from their facial affect signals. Finally, we propose a novel solution to a relatively unexplored area in automatic face and gesture analysis research: personalization of models to individuals and groups. We develop hierarchical Bayesian neural networks to overcome the challenges posed by group- or subject-specific variations in face and gesture signals. We successfully validate our formulation on the problems of personalized subject-specific gesture classification, context-specific facial expressivity recognition, and student-specific learning outcome prediction. We demonstrate the flexibility of our hierarchical framework by validating the utility of both fully connected and recurrent neural architectures.
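    The personalization idea in this thesis abstract (hierarchical Bayesian neural networks that adapt to subjects or groups) can be illustrated with a simplified, non-Bayesian stand-in: shared weights plus per-subject offsets that are regularized toward the shared model. The class below is a hypothetical sketch under that assumption, not the thesis implementation; all names and dimensions are illustrative.

```python
# Hypothetical sketch: personalization via shared weights plus per-subject
# offsets, with an L2 penalty standing in for the hierarchical prior.
import torch
import torch.nn as nn

class PersonalizedHead(nn.Module):
    def __init__(self, feat_dim, num_classes, num_subjects, prior_strength=1.0):
        super().__init__()
        self.shared = nn.Linear(feat_dim, num_classes)               # group-level weights
        self.offsets = nn.Embedding(num_subjects, num_classes * feat_dim)
        nn.init.zeros_(self.offsets.weight)                          # start at the shared model
        self.prior_strength = prior_strength
        self.feat_dim, self.num_classes = feat_dim, num_classes

    def forward(self, x, subject_ids):
        # x: (batch, feat_dim) features; subject_ids: (batch,) integer subject indices
        delta = self.offsets(subject_ids).view(-1, self.num_classes, self.feat_dim)
        w = self.shared.weight.unsqueeze(0) + delta                  # subject-specific weights
        return torch.einsum('bcf,bf->bc', w, x) + self.shared.bias

    def prior_penalty(self):
        # Pulls subject offsets toward the shared weights (a crude stand-in
        # for the hierarchical Bayesian prior described in the abstract)
        return self.prior_strength * self.offsets.weight.pow(2).mean()

# Example: 32 samples of 64-d features from 5 subjects, 3 outcome classes
head = PersonalizedHead(feat_dim=64, num_classes=3, num_subjects=5)
x, ids, y = torch.randn(32, 64), torch.randint(0, 5, (32,)), torch.randint(0, 3, (32,))
loss = nn.functional.cross_entropy(head(x, ids), y) + head.prior_penalty()
```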

    Affect-driven learning outcomes prediction in intelligent tutoring systems

    Equipping an Intelligent Tutoring System (ITS) with the ability to interpret affective signals from students could potentially improve the learning experience by enabling the tutor to monitor students' progress, provide timely interventions, and present appropriate affective reactions via a virtual tutor. Most ITSs equipped with affect modeling capabilities attempt to predict the emotional state of users. The focus in this work is instead on directly predicting the learning outcomes of students from a video stream capturing their faces as they work on a set of math problems. Using facial features extracted from the video stream, we train classifiers to predict the success or failure of a student's attempt to answer a question shortly after the student has begun working on the problem. In this work, we first introduce a novel dataset of student interactions with MathSpring, a popular ITS. We provide an exploratory analysis of the different problem outcome classes using typical facial action unit activations. We then develop baseline models to predict the problem outcome labels of students solving math problems and discuss how early problem outcome labels can be forecasted and used to provide possible interventions.
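    As a concrete but hypothetical illustration of the kind of baseline the abstract describes (not the paper's code or data), a clip of per-frame facial action unit intensities could be summarized into a fixed-length feature vector and fed to an off-the-shelf classifier that predicts success or failure; the number of action units, sequence lengths, and placeholder data below are assumptions.

```python
# Hypothetical baseline: predict problem outcome (success vs. failure) from
# per-frame facial action unit (AU) intensities, e.g., as produced by a
# face-analysis tool, aggregated into one descriptor per student attempt.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def summarize_clip(au_frames):
    """au_frames: (num_frames, num_AUs) per-frame AU intensities.
    Aggregate to a fixed-length clip descriptor (per-AU mean and max)."""
    return np.concatenate([au_frames.mean(axis=0), au_frames.max(axis=0)])

# Placeholder data: 200 attempts, 90 frames each, 17 AU channels, binary outcome
clips = [np.random.rand(90, 17) for _ in range(200)]
labels = np.random.randint(0, 2, size=200)

X = np.stack([summarize_clip(c) for c in clips])
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=5).mean())
```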