    Driver frustration detection from audio and video in the wild

    We present a method for detecting driver frustration from both video and audio streams captured during the driver's interaction with an in-vehicle voice-based navigation system. The video is of the driver's face when the machine is speaking, and the audio is of the driver's voice when he or she is speaking. We analyze a dataset of 20 drivers that contains 596 audio epochs (audio clips, with duration from 1 sec to 15 sec) and 615 video epochs (video clips, with duration from 1 sec to 45 sec). The dataset is balanced across 2 age groups, 2 vehicle systems, and both genders. The model was subject-independently trained and tested using 4-fold cross-validation. We achieve an accuracy of 77.4% for detecting frustration from a single audio epoch and 81.2% from a single video epoch. We then treat the video and audio epochs as a sequence of interactions and use decision fusion to characterize the trade-off between decision time and classification accuracy, which improves the prediction accuracy to 88.5% after 9 epochs.
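
    The abstract does not spell out the exact fusion rule, so the following is a minimal sketch of one common decision-fusion scheme: per-epoch classifier posteriors are averaged as epochs accumulate and thresholded into a running decision, which is where the decision-time versus accuracy trade-off comes from. The function name, threshold, and probability values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_epoch_decisions(epoch_probs, threshold=0.5):
    """Decision-level fusion over a sequence of per-epoch frustration
    probabilities (audio or video epochs). Returns the fused decision
    after each epoch, illustrating how accuracy can improve as more
    epochs are observed at the cost of a longer decision time.

    epoch_probs : iterable of floats in [0, 1], one classifier output per epoch.
    """
    decisions, running = [], []
    for p in epoch_probs:
        running.append(p)
        # Simple fusion rule: average the per-epoch posteriors seen so far.
        fused = np.mean(running)
        decisions.append(fused >= threshold)
    return decisions

# Example: posteriors from 9 interleaved audio/video epochs (hypothetical values).
probs = [0.55, 0.40, 0.62, 0.70, 0.58, 0.66, 0.72, 0.61, 0.69]
print(fuse_epoch_decisions(probs))  # fused decision after 1, 2, ..., 9 epochs
```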

    A Multimodal Approach for Monitoring Driving Behavior and Emotions

    Studies have indicated that emotions can be significantly influenced by environmental factors; these factors can also significantly influence drivers' emotional state and, accordingly, their driving behavior. Furthermore, as the demand for autonomous vehicles is expected to increase significantly within the next decade, a proper understanding of drivers'/passengers' emotions, behavior, and preferences will be needed in order to create an acceptable level of trust with humans. This paper proposes a novel semi-automated approach for understanding the effect of environmental factors on drivers' emotions and behavioral changes through a naturalistic driving study. The setup includes a front-facing road camera and a facial camera, a smart watch for tracking physiological measurements, and a Controller Area Network (CAN) serial data logger. The results suggest that the driver's affect is highly influenced by the type of road and the weather conditions, which have the potential to change driving behaviors. For instance, when emotional metrics are defined as valence and engagement, the results reveal significant differences in human emotion across weather conditions and road types. Participants' engagement was higher in rainy and clear weather compared to cloudy weather. Moreover, engagement was higher on city streets and highways compared to one-lane roads and two-lane highways.
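
    As a concrete illustration of the kind of per-condition comparison the paper reports (engagement across weather conditions and road types), the sketch below groups a hypothetical segment-level table by weather and runs a one-way ANOVA. The column names and values are placeholders, not the study's data or its exact statistical procedure.

```python
import pandas as pd
from scipy import stats

# Hypothetical segment-level table: one row per driving segment with an
# engagement score and the annotated weather / road-type labels.
df = pd.DataFrame({
    "engagement": [0.71, 0.65, 0.52, 0.68, 0.49, 0.73, 0.55, 0.60],
    "weather":    ["rainy", "clear", "cloudy", "rainy", "cloudy", "clear", "cloudy", "clear"],
    "road_type":  ["city", "highway", "one-lane", "city", "two-lane", "highway", "one-lane", "city"],
})

# Mean engagement per weather condition.
print(df.groupby("weather")["engagement"].mean())

# One-way ANOVA: does engagement differ across weather conditions?
groups = [g["engagement"].values for _, g in df.groupby("weather")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```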

    Automatically Detecting Confusion and Conflict During Collaborative Learning Using Linguistic, Prosodic, and Facial Cues

    During collaborative learning, confusion and conflict emerge naturally. However, persistent confusion or conflict has the potential to generate frustration and significantly impede learners' performance. Early automatic detection of confusion and conflict would allow us to support early interventions, which can in turn improve students' experience with and outcomes from collaborative learning. Despite extensive studies modeling confusion during solo learning, there is a need for further work in collaborative learning. This paper presents a multimodal machine-learning framework that automatically detects confusion and conflict during collaborative learning. We used data from 38 elementary school learners who collaborated on a series of programming tasks in classrooms. We trained deep multimodal learning models to detect confusion and conflict using features that were automatically extracted from learners' collaborative dialogues, including (1) language-derived features including TF-IDF, lexical semantics, and sentiment, (2) audio-derived features including acoustic-prosodic features, and (3) video-derived features including eye gaze, head pose, and facial expressions. Our results show that multimodal models that combine semantics, pitch, and facial expressions detected confusion and conflict with the highest accuracy, outperforming all unimodal models. We also found that prosodic cues are more predictive of conflict, and facial cues are more predictive of confusion. This study contributes to the automated modeling of collaborative learning processes and the development of real-time adaptive support to enhance learners' collaborative learning experience in classroom contexts.
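
    The abstract names the three feature streams but not the fusion architecture, so here is a minimal late-fusion sketch in PyTorch: each modality's feature vector passes through its own small branch, the branch outputs are concatenated, and a shared linear head produces the confusion/conflict logits. The class name, feature dimensions, and layer sizes are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class MultimodalFusionClassifier(nn.Module):
    """Late fusion of language, audio, and video feature vectors for
    binary confusion/conflict detection. Dimensions are placeholders."""

    def __init__(self, lang_dim=300, audio_dim=88, video_dim=136, hidden=64, n_classes=2):
        super().__init__()
        self.lang_branch = nn.Sequential(nn.Linear(lang_dim, hidden), nn.ReLU())
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.video_branch = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(3 * hidden, n_classes)

    def forward(self, lang_x, audio_x, video_x):
        # Concatenate the per-modality embeddings and classify jointly.
        fused = torch.cat([
            self.lang_branch(lang_x),
            self.audio_branch(audio_x),
            self.video_branch(video_x),
        ], dim=-1)
        return self.classifier(fused)

model = MultimodalFusionClassifier()
logits = model(torch.randn(4, 300), torch.randn(4, 88), torch.randn(4, 136))
print(logits.shape)  # torch.Size([4, 2])
```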

    Confirmation Report: Modelling Interlocutor Confusion in Situated Human Robot Interaction

    Human-Robot Interaction (HRI) is an important but challenging field focused on improving the interaction between humans and robots so as to make the interaction more intelligent and effective. However, building a natural conversational HRI is an interdisciplinary challenge for scholars, engineers, and designers. It is generally assumed that the pinnacle of human-robot interaction will be fluid, naturalistic conversational interaction that in important ways mimics how humans interact with each other. This is challenging at a number of levels, and in particular there are considerable difficulties when it comes to naturally monitoring and responding to the user's mental state. On the topic of mental states, one field that has received little attention to date is monitoring the user for possible confusion states. Confusion is a non-trivial mental state which can be seen as having at least two substates, associated with either positive or negative emotions. When people are productively confused, they have the drive to resolve their current difficulties; people in unproductive confusion, meanwhile, may lose their engagement and motivation to overcome those difficulties, which in turn may even lead them to drop the current conversation. While there has been some research on confusion monitoring and detection, it has been limited, with most work focused on evaluating confusion states in online learning tasks. The central hypothesis of this research is that the monitoring and detection of confusion states in users is essential to fluid, task-centric HRI and that it should be possible to detect such confusion and adjust policies to mitigate it. In this report, I expand on this hypothesis and set out several research questions. I also provide a comprehensive literature review before outlining work done to date towards my research hypothesis and setting out plans for future experimental work.

    Frustration recognition from speech during game interaction using wide residual networks

    Background: Although frustration is a common emotional reaction while playing games, an excessive level of frustration can harm users' experience, discouraging them from undertaking further game interactions. The automatic detection of players' frustration enables the development of adaptive systems which, through real-time difficulty adjustment, would adapt the game to the user's specific needs, thus maximising players' experience and guaranteeing the game's success. To this end, we present our speech-based approach for the automatic detection of frustration during game interactions, a task still under-explored in research. Method: The experiments were performed on the Multimodal Game Frustration Database (MGFD), an audiovisual dataset collected within the Wizard-of-Oz framework and specially tailored to investigate verbal and facial expressions of frustration during game interactions. We explored the performance of a variety of acoustic feature sets, including Mel-spectrograms and Mel-Frequency Cepstral Coefficients (MFCCs), as well as the low-dimensional knowledge-based acoustic feature set eGeMAPS. Given the steady improvements achieved by Convolutional Neural Networks (CNNs) in speech recognition tasks, and unlike the MGFD baseline based on a Long Short-Term Memory (LSTM) architecture and a Support Vector Machine (SVM) classifier, in the present work we consider typically used CNNs, including ResNets, VGG, and AlexNet. Furthermore, given the still-open debate on the suitability of shallow versus deep networks, we also examine the performance of two of the latest deep CNNs, i.e., WideResNets and EfficientNet. Results: Our best result, achieved with WideResNets and Mel-spectrogram features, increases the system performance from 58.8% Unweighted Average Recall (UAR) to 93.1% UAR for speech-based automatic frustration recognition.
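
    To make the best-performing pipeline concrete (a log-Mel-spectrogram fed into a wide residual network), the following is a rough sketch using librosa and torchvision's wide_resnet50_2 adapted to single-channel spectrograms and two classes. The file name, sampling rate, number of Mel bands, and the choice of the torchvision model are assumptions; the paper's exact WideResNet configuration and training setup are not reproduced here.

```python
import librosa
import torch
import torch.nn as nn
from torchvision.models import wide_resnet50_2

# Load an audio clip and compute a log-Mel-spectrogram (file name is a placeholder).
y, sr = librosa.load("game_clip.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel)

# Shape into a (batch, channel, freq, time) tensor for the CNN.
x = torch.tensor(log_mel, dtype=torch.float32)[None, None, :, :]

# Wide residual network adapted to 1-channel spectrogram input and a
# binary frustration / no-frustration output.
model = wide_resnet50_2(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)

with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 2])
```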