
    Affect-driven Engagement Measurement from Videos

    In education and intervention programs, a person's engagement has been identified as a major factor in successful program completion. Automatic measurement of engagement provides useful information for instructors to meet program objectives and individualize program delivery. In this paper, we present a novel approach for video-based engagement measurement in virtual learning programs. We propose to use affect states, continuous values of valence and arousal extracted from consecutive video frames, along with a new latent affective feature vector and behavioral features, for engagement measurement. Deep-learning-based temporal models and traditional machine-learning-based non-temporal models are trained and validated on frame-level and video-level features, respectively. In addition to conventional centralized learning, we also implement the proposed method in a decentralized federated learning setting and study the effect of model personalization on engagement measurement. We evaluated the performance of the proposed method on the only two publicly available video engagement measurement datasets, DAiSEE and EmotiW, which contain videos of students in online learning programs. Our experiments show a state-of-the-art engagement-level classification accuracy of 63.3%, with disengagement videos correctly classified, on the DAiSEE dataset, and a regression mean squared error of 0.0673 on the EmotiW dataset. Our ablation study shows the effectiveness of incorporating affect states in engagement measurement. We interpret the findings from the experimental results based on psychological concepts in the field of engagement. Comment: 13 pages, 8 figures, 7 tables
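A minimal sketch of the video-level feature step described above, assuming per-frame valence and arousal values have already been extracted (the function name and the specific statistics are illustrative, not the paper's actual feature set):

```python
from statistics import mean, pstdev

def affect_video_features(valence, arousal):
    """Summarize per-frame affect states into a fixed-length,
    video-level feature vector for a non-temporal classifier."""
    feats = {}
    for name, sig in (("valence", [float(v) for v in valence]),
                      ("arousal", [float(a) for a in arousal])):
        feats[name + "_mean"] = mean(sig)             # central tendency of affect
        feats[name + "_std"] = pstdev(sig)            # variability across frames
        feats[name + "_range"] = max(sig) - min(sig)  # affect excursion
    return feats
```

Features of this kind would feed the non-temporal models, while the raw frame-level affect sequences would feed the temporal ones.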

    A conceptual framework for an affective tutoring system using unobtrusive affect sensing for enhanced tutoring outcomes

    PhD thesis. Affect plays a pivotal role in influencing a student's motivation and learning achievements. The ability of expert human tutors to achieve enhanced learning outcomes is widely attributed to their ability to sense the affect of their tutees and to continually adapt their tutoring strategies in response to the dynamically changing affect throughout the tutoring session. In this thesis, I explore the feasibility of building an Affective Tutoring System (ATS) which senses the student's affect on a moment-to-moment basis with the use of unobtrusive sensors in the context of computer programming tutoring. The novel use of keystrokes and mouse clicks for affect sensing is proposed here, as they are ubiquitous and unobtrusive. I first establish the viability of using keystrokes and contextual logs for affect sensing, initially at the level of a whole exercise session and then on a more granular basis of 30 seconds. Subsequently, I investigate the use of multiple sensing channels, e.g., facial expressions, keystrokes, mouse clicks, contextual logs, and head postures, to enhance the availability and accuracy of sensing. The results indicate that it is viable to use keystrokes for affect sensing, and that combining multiple sensor modes enhances its accuracy. The sensor modes most significant for affect sensing are the head-posture and facial modes; nevertheless, keystrokes compensate for the periods when the former are unavailable. With affect sensing (of both frustration and disengagement) in place, I architected and designed the ATS and conducted an experimental study and a series of focus group discussions to evaluate it. The results show that the ATS is rated positively by the participants for usability and acceptance, and that it is effective in enhancing the students' learning. Nanyang Polytechnic
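As an illustration of why keystrokes are attractive for unobtrusive affect sensing, timing features such as dwell and flight times can be computed from raw key events alone (a hedged sketch; the thesis's actual feature set is not reproduced here):

```python
def keystroke_features(events):
    """events: list of (press_time, release_time) tuples in seconds,
    ordered by press time. Returns simple timing features."""
    dwell = [release - press for press, release in events]  # key hold times
    flight = [events[i + 1][0] - events[i][1]               # gaps between keys
              for i in range(len(events) - 1)]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    duration = events[-1][1] - events[0][0]
    return {"mean_dwell": avg(dwell),
            "mean_flight": avg(flight),
            "keys_per_second": len(events) / duration}
```

Windowing such features over, say, 30-second spans would match the granular sensing level investigated in the thesis.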

    Handling Missing Data For Sleep Monitoring Systems

    Sensor-based sleep monitoring systems can be used to track sleep behavior on a daily basis and provide feedback to their users to promote health and well-being. Such systems can provide data visualizations to enable self-reflection on sleep habits, or a sleep coaching service to improve sleep quality. To provide useful feedback, sleep monitoring systems must be able to recognize whether an individual is sleeping or awake. Existing approaches to infer sleep-wake phases, however, typically assume continuous streams of data to be available at inference time. In real-world settings, though, data streams or data samples may be missing, causing severe performance degradation of models trained on complete data streams. In this paper, we investigate the impact of missing data on recognizing sleep and wake, and use regression- and interpolation-based imputation strategies to mitigate the errors that might be caused by incomplete data. To evaluate our approach, we use a data set that includes physiological traces (collected using wristbands), behavioral data (gathered using smartphones), and self-reports from 16 participants over 30 days. Our results show that the presence of missing sensor data degrades the balanced accuracy of the classifier by, on average, 10-35 percentage points for detecting sleep and wake, depending on the missing data rate. The imputation strategies explored in this work increase the performance of the classifier by 4-30 percentage points. These results open up new opportunities to improve the robustness of sleep monitoring systems against missing data.
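As a hedged sketch of the interpolation-based strategy mentioned above, short gaps in a 1-D sensor stream can be filled linearly between the nearest observed samples (illustrative code, not the paper's implementation; NumPy is assumed):

```python
import numpy as np

def interpolate_missing(stream):
    """Fill NaN gaps in a 1-D sensor stream by linear interpolation
    between the nearest valid samples on either side."""
    x = np.array(stream, dtype=float)
    missing = np.isnan(x)
    x[missing] = np.interp(np.flatnonzero(missing),   # positions to fill
                           np.flatnonzero(~missing),  # positions observed
                           x[~missing])               # observed values
    return x
```

One edge case worth noting: np.interp clamps at the boundaries, so leading or trailing gaps are filled with the nearest observed value rather than extrapolated; whether that is acceptable depends on the downstream sleep/wake classifier.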

    Approaches, applications, and challenges in physiological emotion recognition — a tutorial overview

    An automatic emotion recognition system can serve as a fundamental framework for various applications in daily life, from monitoring emotional well-being to improving quality of life through better emotion regulation. Understanding the process of emotion manifestation is crucial for building emotion recognition systems. An emotional experience results in changes not only in interpersonal behavior but also in physiological responses. Physiological signals are one of the most reliable means of recognizing emotions, since individuals cannot consciously manipulate them for a long duration. These signals can be captured by medical-grade wearable devices as well as commercial smartwatches and smart bands. With the shift in research direction from the laboratory to unrestricted daily life, commercial devices have been employed ubiquitously. However, this shift has introduced several challenges, such as low data quality, dependency on subjective self-reports, unconstrained movement-related changes, and artifacts in physiological signals. This tutorial provides an overview of practical aspects of emotion recognition, such as experiment design, the properties of different physiological modalities, existing datasets, machine learning algorithms suited to physiological data, and several applications. It aims to provide the necessary psychological and physiological background through various emotion theories and the physiological manifestation of emotions, thereby laying a foundation for emotion recognition. Finally, the tutorial discusses open research directions and possible solutions.

    A usability study of physiological measurement in school using wearable sensors

    Measuring psychophysiological signals of adolescents using unobtrusive wearable sensors may contribute to understanding the development of emotional disorders. This study investigated the feasibility of measuring high-quality physiological data and examined the validity of signal processing in a school setting. Among 86 adolescents, a total of more than 410 h of electrodermal activity (EDA) data were recorded using a wrist-worn sensor with gelled electrodes, and over 370 h of heart rate data were recorded using a chest-strap sensor. The results support the feasibility of monitoring physiological signals at school. We describe specific challenges and provide recommendations for signal analysis, including dealing with invalid signals due to loose sensors, and with quantization noise, which can be caused by limitations of the analog-to-digital conversion in wearable devices and be mistaken for physiological responses. Importantly, our results show that using toolboxes for automatic signal preprocessing, decomposition, and artifact detection with default parameters, while neglecting differences between devices and measurement contexts, yields misleading results. Time courses of students' physiological signals throughout a class were found to be clearer after applying our proposed preprocessing steps.
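One of the pitfalls described above, quantization noise being mistaken for physiological responses, can be screened for with a simple heuristic: estimate the effective ADC step from the smallest nonzero jump between consecutive samples (an illustrative sketch, not the study's actual procedure):

```python
def quantization_step(signal):
    """Estimate the effective quantization step of a recorded signal
    as the smallest nonzero difference between consecutive samples."""
    jumps = sorted(abs(b - a) for a, b in zip(signal, signal[1:]) if b != a)
    return jumps[0] if jumps else 0.0
```

Candidate skin-conductance responses whose amplitude is close to this step size are more likely staircase artifacts of the analog-to-digital conversion than genuine physiological events.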

    Recognizing Multidimensional Engagement of E-learners Based on Multi-channel Data in E-learning Environment

    Despite recent advances in MOOCs, a 'lack of supervision' problem remains: an e-learner's learning unit state (LUS) cannot be supervised automatically, even though current e-learning systems have the advantage of alleviating the barriers posed by time differences and the geographical separation between teachers and students. In this paper, we present a fusion framework considering three channels of data: 1) videos/images from a camera, 2) eye movement information tracked by a low-resolution eye tracker, and 3) mouse movement. Based on these data modalities, we propose a novel multi-channel data fusion approach to learning unit state recognition. We also propose a method for building a learning state recognition model without manually labeling image data. The experiments were carried out on our online learning prototype system, and we chose CART, Random Forest, and GBDT regression models to predict the e-learner's learning state. The results show that the multi-channel data fusion model achieves better recognition performance than single-channel models. In addition, the best recognition performance is reached when image, eye movement, and mouse movement features are fused. Comment: 4 pages, 4 figures, 2 tables
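Feature-level fusion of the three channels can be sketched as a concatenation of per-channel feature vectors before they reach the regression model (hypothetical channel names; the paper's exact fusion scheme may differ):

```python
def fuse_features(channels):
    """channels: dict mapping channel name (e.g. 'image', 'eye', 'mouse')
    to its feature vector. Returns one fused vector with a stable
    channel ordering so train and test vectors stay aligned."""
    fused = []
    for name in sorted(channels):   # deterministic order across samples
        fused.extend(channels[name])
    return fused
```

The single-channel baselines correspond to passing a dict with one entry; the fused model simply receives the longer concatenated vector.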

    Logging Stress and Anxiety Using a Gamified Mobile-based EMA Application, and Emotion Recognition Using a Personalized Machine Learning Approach

    According to the American Psychological Association (APA), more than 9 in 10 (94 percent of) adults believe that stress can contribute to the development of major health problems, such as heart disease, depression, and obesity. Due to the subjective nature of stress and anxiety, it has been difficult to measure these psychological issues accurately by relying on objective means alone. In recent years, researchers have increasingly utilized computer vision techniques and machine learning algorithms to develop scalable and accessible solutions for remote mental health monitoring via web and mobile applications. To further enhance accuracy in the field of digital health and precision diagnostics, there is a need for personalized machine-learning approaches that focus on recognizing mental states based on individual characteristics, rather than relying solely on general-purpose solutions. This thesis focuses on conducting experiments aimed at recognizing and assessing levels of stress and anxiety in participants. In the initial phase of the study, a mobile application with broad applicability (compatible with both Android and iPhone platforms), which we call STAND, is introduced. This application serves the purpose of Ecological Momentary Assessment (EMA). Participants receive daily notifications through this smartphone-based app, which redirects them to a screen consisting of three components: a question that prompts participants to indicate their current levels of stress and anxiety, a rating scale ranging from 1 to 10 for quantifying their response, and the ability to capture a selfie. The responses to the stress and anxiety questions, along with the corresponding selfie photographs, are then analyzed on an individual basis.
This analysis focuses on exploring the relationships between self-reported stress and anxiety levels and potential facial expressions indicative of stress and anxiety, eye features such as pupil size variation and eye closure, and specific action units (AUs) observed in the frames over time. In addition to its primary functions, the mobile app also gathers sensor data, including accelerometer and gyroscope readings, on a daily basis. These data hold potential for further analysis related to stress and anxiety. Furthermore, apart from capturing selfie photographs, participants have the option to upload video recordings of themselves while engaging in two neuropsychological games. These recorded videos are then analyzed to extract pertinent features that can be used for binary classification of stress and anxiety (i.e., stress and anxiety recognition). The participants selected for this phase are students aged between 18 and 38 who have received recent clinical diagnoses indicating specific stress and anxiety levels. To enhance user engagement in the intervention, gamified elements, an emerging trend for influencing user behavior and lifestyle, have been utilized. Incorporating gamified elements into non-game contexts (e.g., health-related ones) has gained overwhelming popularity during the last few years and has made such interventions more delightful, engaging, and motivating. In the subsequent phase of this research, we conducted an AI experiment employing a personalized machine learning approach to perform emotion recognition on an established dataset called Emognition. This experiment served as a simulation of the future analysis that will be conducted as part of a more comprehensive study focusing on stress and anxiety recognition.
The outcomes of the emotion recognition experiment in this study highlight the effectiveness of personalized machine learning techniques and bear significance for future diagnostic endeavors. For training purposes, we selected three models, namely KNN, Random Forest, and MLP. The preliminary accuracy results for the experiment were 93%, 95%, and 87%, respectively, for these models.
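The essence of the personalized approach, training and evaluating within each participant's own data rather than across the whole population, can be sketched with a minimal 1-nearest-neighbour stand-in for the KNN model (illustrative only; the actual experiments used full KNN, Random Forest, and MLP models):

```python
def one_nn_predict(train_X, train_y, x):
    """Predict the label of x from its single nearest training sample."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_X]
    return train_y[dists.index(min(dists))]

def personalized_accuracy(per_subject):
    """per_subject: {subject_id: (X, y)}. For each subject, run
    leave-one-sample-out evaluation using only that subject's data,
    so each model is personalized to one individual."""
    accs = {}
    for subject, (X, y) in per_subject.items():
        hits = 0
        for i in range(len(X)):
            rest_X = X[:i] + X[i + 1:]   # hold out sample i
            rest_y = y[:i] + y[i + 1:]
            hits += one_nn_predict(rest_X, rest_y, X[i]) == y[i]
        accs[subject] = hits / len(X)
    return accs
```

A general-purpose baseline would instead pool all subjects' samples into one training set; comparing the two per-subject accuracy profiles is what quantifies the benefit of personalization.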

    Psychophysiology in games

    Psychophysiology is the study of the relationship between psychology and its physiological manifestations. That relationship is of particular importance for both game design and, ultimately, gameplay. Players' psychophysiology offers a gateway towards a better understanding of playing behavior and experience. That knowledge can, in turn, be beneficial for the player, as it allows designers to make better games for them, either explicitly by altering the game during play or implicitly during the game design process. This chapter argues for the importance of physiology for the investigation of player affect in games, reviews the current state of the art in sensor technology, and outlines the key phases for the application of psychophysiology in games. The work is supported, in part, by the EU-funded FP7 ICT iLearnRW project (project no. 318803). Peer-reviewed.