    A Dual-Modality Emotion Recognition System of EEG and Facial Images and its Application in Educational Scene

    With the development of computer science, people interact with computers, and with each other through computers, more and more frequently: online chat, online banking services, facial recognition functions, and so on. When communication is limited to text messages, however, the effectiveness of information transfer can drop to around 30% of that of face-to-face communication. Communication becomes truly efficient when we can see one another's reactions and feel each other's emotions. This issue is especially noticeable in education. In classic offline teaching, teachers can judge a student's present emotional state from facial expressions and adjust their teaching methods accordingly. With the advancement of computing and the impact of Covid-19, an increasing number of schools and educational institutions are adopting online or video-based instruction, in which it is difficult for teachers to get such feedback from students. This thesis therefore proposes an emotion recognition method for educational scenarios that can help teachers quantify the emotional state of students in class and guide them in exploring or adjusting teaching methods. Text, physiological signals, gestures, facial images, and other data types are commonly used for emotion recognition. Among these, data collection for facial-image emotion recognition is particularly convenient and fast, although people may deliberately conceal their true emotions, which leads to inaccurate recognition results. Emotion recognition based on EEG signals can compensate for this drawback. Taking these issues into account, this thesis first employs SVM-PCA to classify emotions in EEG data, then employs a deep CNN to classify emotions in the subjects' facial images; finally, Dempster-Shafer (D-S) evidence theory fuses the two classification results, yielding a final emotion recognition accuracy of 92%. The specific research content of this thesis is as follows:
    1) The background of emotion recognition systems in teaching scenarios is discussed, along with the use of various single-modality systems for emotion recognition.
    2) EEG emotion recognition based on SVM is analyzed in detail. The theory of EEG signal generation, frequency-band characteristics, and emotional dimensions is introduced. The EEG signal is first filtered and cleaned of artifacts, features are then extracted with wavelet transforms, and the result is fed into the proposed SVM-PCA classifier, which reaches an accuracy of 64%.
    3) The proposed deep CNN is used to recognize emotions in facial images. The AdaBoost algorithm first detects and crops the face region in each image, and gray-level balancing is applied to the cropped image. The preprocessed images are then used to train and test the deep CNN, with an average accuracy of 88%.
    4) A fusion method at the decision-making layer. The EEG and facial-expression recognition results are fused at the decision level using D-S evidence theory, which yields the final dual-modality recognition result and a system accuracy of 92% (see the sketch after this abstract).
    5) A data collection procedure for the dual-modality system is designed. Following this procedure, real data from an educational scene are collected and analyzed; on these data the dual-modality system reaches an accuracy of 82%. Teachers can use the emotion recognition results as guidance and reference to improve their teaching efficacy.
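
    The abstract names D-S evidence theory as the decision-level fusion step but does not give its form. Below is a minimal sketch of Dempster's rule of combination for two mass functions restricted to singleton emotion classes; the class names, mass values, and the `dempster_fuse` helper are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch: Dempster's rule of combination for decision-level fusion,
# assuming each classifier outputs one mass (probability) per emotion class.

def dempster_fuse(m1: dict, m2: dict) -> dict:
    """Combine two mass functions defined on singleton emotion classes."""
    classes = set(m1) | set(m2)
    # Conflict K: total mass the two sources place on different classes.
    K = sum(m1.get(a, 0.0) * m2.get(b, 0.0)
            for a in classes for b in classes if a != b)
    if K >= 1.0:
        raise ValueError("Total conflict; the sources cannot be combined.")
    # Agreeing mass, renormalized by the non-conflicting share (1 - K).
    return {c: m1.get(c, 0.0) * m2.get(c, 0.0) / (1.0 - K) for c in classes}

# Hypothetical outputs from the EEG (SVM-PCA) and facial (deep CNN) branches:
eeg_mass = {"happy": 0.5, "neutral": 0.3, "sad": 0.2}
face_mass = {"happy": 0.7, "neutral": 0.2, "sad": 0.1}
fused = dempster_fuse(eeg_mass, face_mass)
print(max(fused, key=fused.get))  # the emotion both modalities support
```

    Because the combined mass rewards agreement between modalities, a confident facial prediction can compensate for a noisier EEG one, which is the intuition behind the fused accuracy exceeding either single modality.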

    Extracting time-frequency feature of single-channel vastus medialis EMG signals for knee exercise pattern recognition

    © 2017 Zhang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. The EMG signal indicates the electrophysiological response to activities of daily living, particularly lower-limb knee exercises. Literature reports have shown numerous benefits of wavelet analysis in EMG feature extraction for pattern recognition. However, its application to typical knee exercises using only a single EMG channel is limited. In this study, three types of knee exercises, i.e., flexion of the leg up (standing), hip extension from a sitting position (sitting), and gait (walking), are investigated in 14 healthy untrained subjects, while EMG signals from the vastus medialis muscle group and a goniometer on the knee joint of the monitored leg are synchronously recorded. Four types of lower-limb motion, namely standing, sitting, the stance phase of walking, and the swing phase of walking, are segmented. A Wavelet Transform (WT) based Singular Value Decomposition (SVD) approach is proposed for classifying the four lower-limb motions from a single-channel EMG signal of the vastus medialis. Based on the lower-limb motions from all subjects, the combination of five-level wavelet decomposition and SVD is used to form the feature vector. A Support Vector Machine (SVM) is then configured to build a multiple-subject classifier, for which the subject-independent accuracy is reported across all subjects for the classification of the four types of lower-limb motion. To properly benchmark classification performance, EMG features from the time domain (e.g., Mean Absolute Value (MAV), Root-Mean-Square (RMS), integrated EMG (iEMG), and Zero Crossing (ZC)) and the frequency domain (e.g., Mean Frequency (MNF) and Median Frequency (MDF)) are also used to classify the lower-limb motions. Five-fold cross-validation is performed and repeated fifty times to obtain a robust subject-independent accuracy. Results show that the proposed WT-based SVD approach achieves a classification accuracy of 91.85% ± 0.88%, outperforming the other feature models.
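
    The abstract describes the feature pipeline (five-level wavelet decomposition, SVD over the subband coefficients, SVM with five-fold cross-validation) but not its concrete form. The sketch below is one plausible reading under stated assumptions: the 'db4' wavelet, zero-padding of subbands to equal length, an RBF-kernel SVM, and the synthetic segments are all illustrative, not the paper's exact choices.

```python
# Minimal sketch of a WT-plus-SVD feature pipeline for single-channel EMG,
# assuming one fixed-length segment per lower-limb motion instance.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def wt_svd_features(segment: np.ndarray, wavelet: str = "db4",
                    level: int = 5) -> np.ndarray:
    """Five-level wavelet decomposition -> singular values as features."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    width = max(len(c) for c in coeffs)
    # Zero-pad the subbands to equal length so they stack into a matrix.
    mat = np.stack([np.pad(c, (0, width - len(c))) for c in coeffs])
    return np.linalg.svd(mat, compute_uv=False)  # compact descriptor

# Hypothetical data: 200 one-second EMG segments at 1 kHz, 4 motion labels.
rng = np.random.default_rng(0)
X = np.array([wt_svd_features(s) for s in rng.standard_normal((200, 1000))])
y = rng.integers(0, 4, size=200)  # standing / sitting / stance / swing
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)  # five-fold CV
print(scores.mean())
```

    Stacking the six subbands (one approximation plus five details) into a matrix means the SVD yields six singular values per segment, a compact time-frequency descriptor that does not depend on segment length.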

    Accessible Integration of Physiological Adaptation in Human-Robot Interaction

    Technological advancements in creating and commercializing novel unobtrusive wearable physiological sensors have generated new opportunities to develop adaptive human-robot interaction (HRI). Detecting complex human states such as engagement and stress when interacting with social agents could bring numerous advantages to creating meaningful interactive experiences. Bodily signals have classically been used for post-interaction analysis in HRI. In other research domains, however, real-time measurement of autonomic responses has been used to develop physiologically adaptive systems with great success, improving user experience and task performance while reducing cognitive workload. This thesis presents the HRI Physio Lib, a conceptual framework and open-source software library that facilitates the development of physiologically adaptive HRI scenarios. Both the framework and the architecture of the library are described in depth, along with additional software tools developed to make the inclusion of physiological signals easier for robotics frameworks. The framework is structured around four main components for designing physiologically adaptive experimental scenarios: signal acquisition; processing and analysis; social robot and communication; and scenario and adaptation. Open-source software tools have been developed to assist in creating each of these components. To showcase the framework and test the software library, we developed, as a proof of concept, a simple scenario revolving around a physiologically aware exercise coach that modulates the speed and intensity of the activity to promote effective cardiorespiratory exercise. We employed the socially assistive QT robot for the exercise scenario, as it provides a comprehensive ROS interface that makes prototyping behavioral responses fast and simple. The exercise routine was designed following guidelines from the American College of Sports Medicine. We describe our physiologically adaptive algorithm and propose an alternative second version with stochastic elements. Finally, we discuss other HRI domains where a physiologically adaptive mechanism could yield novel advances in interaction quality, as future extensions of this work. From the literature, we identified improving engagement, building deeper social connections, health-care scenarios, and applications in self-driving vehicles as promising avenues for future research where a physiologically adaptive social robot could improve user experience.
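
    The abstract does not spell out the adaptation rule, so the following is only a plausible sketch of the kind of closed loop such an exercise coach could run: keep the user's heart rate inside a moderate-intensity zone by nudging the activity tempo. The 220-minus-age HR-max estimate and the 64-76% zone follow common ACSM guidance, while `read_heart_rate` and `set_exercise_tempo` are hypothetical stand-ins for the sensor and robot interfaces.

```python
# Sketch of a physiologically adaptive exercise loop (assumed, not the
# thesis's algorithm): adjust tempo to hold heart rate in a target zone.

def target_zone(age: int, low: float = 0.64, high: float = 0.76):
    hr_max = 220 - age  # widely used estimate of maximal heart rate
    return low * hr_max, high * hr_max

def adapt_tempo(hr: float, tempo: float, zone, step: float = 0.05) -> float:
    lo, hi = zone
    if hr < lo:        # under-exerting: speed the routine up slightly
        return tempo * (1 + step)
    if hr > hi:        # over-exerting: slow it down
        return tempo * (1 - step)
    return tempo       # in the zone: hold steady

# Illustrative control loop, one iteration per exercise repetition
# (sensor/robot calls are hypothetical, hence commented out):
# tempo = 1.0
# zone = target_zone(age=30)            # ~ (121.6, 144.4) bpm
# while session_active():
#     tempo = adapt_tempo(read_heart_rate(), tempo, zone)
#     set_exercise_tempo(tempo)
```

    The multiplicative step keeps adjustments proportional and small, which matters when the controlled variable (heart rate) responds to tempo changes only after a physiological lag.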

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of the book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section (Chapters 17 to 22) presents applications related to affective computing.

    Facial Emotion Recognition Feature Extraction: A Survey

    Facial emotion recognition automatically identifies an individual's emotion from their facial expression. Automatic recognition here means creating computer systems able to simulate the natural human ability to detect, analyze, and determine emotion from facial expression. Natural human recognition draws on various points of observation to reach a conclusion about the emotion expressed by the person in front of us. Efficiently extracted facial features improve both classifier performance and application efficiency. Many feature extraction methods based on shape, texture, and other local features have been proposed in the literature, and this chapter reviews them. The chapter surveys recent and established feature extraction methods for video and image data and classifies them according to their efficiency and application.
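
    As a concrete instance of the texture-based local features such surveys cover, the sketch below computes a Local Binary Pattern (LBP) histogram for a face crop, a descriptor commonly fed to an emotion classifier. The parameter choices (P=8 neighbors, radius R=1, 'uniform' coding) and the 48x48 crop are illustrative assumptions, not specific to this survey.

```python
# A common texture descriptor for facial emotion recognition: an LBP
# histogram pooled over a grayscale face crop (illustrative parameters).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    """Return a normalized LBP histogram usable as classifier input."""
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    # 'uniform' LBP with P neighbors produces P + 2 distinct code values.
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Example on a hypothetical 48x48 grayscale face crop:
rng = np.random.default_rng(0)
face = (rng.random((48, 48)) * 255).astype(np.uint8)
print(lbp_histogram(face).shape)  # -> (10,)
```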

    Expressive and response dimensions of human emotion.

    This thesis is about the neural mechanisms that underpin the expression of emotion in the human face and the emotional modulation of behavioural responses. I designed five integrated studies and used functional magnetic resonance imaging (fMRI) to address specifically the neural mechanisms underlying human facial expression and emotional response. This work complements studies of emotion perception and subjective affective experience to provide a more comprehensive understanding of human emotions. I examined the neural underpinnings of emotional facial expression in three studies. I first demonstrated that emotional (compared to non-emotional) facial expression is not a purely motoric process but engages affective centres, including the amygdala and rostral cingulate gyrus. In a second study I developed the concept of emotion contagion to demonstrate and verify a new interference effect (emotion expression interference, EEI): there is a cost, in reaction time and effort, to over-riding the prepotent tendency to mirror the emotional expressions of others. Several neural centres supporting EEI were identified (inferior frontal gyrus, superior temporal sulcus, and insula), with their activity across subjects predicting individual differences in personal empathy and emotion regulation. In a third study I examined an interesting phenomenon in our daily social life: how our own emotional facial expressions influence our judgment of the emotional signals of other people. I explored this issue experimentally by examining the behavioural and neural consequences of posing positive (smiling) and negative (frowning) emotional expressions on judgments of perceived facial expressions. Reciprocal interactions between an emotion centre (the amygdala) and a social signal processing region (superior temporal sulcus) were quantified. My analysis further revealed that the biasing of emotion judgments by one's own facial expression works through changes in connectivity between posterior brain regions (specifically from the superior temporal sulcus to post-central cortex). I further developed two versions of an emotion GO/NOGO task to probe the impact of affective processing on behavioural responses, where GO represents response execution and NOGO represents response inhibition. I thereby investigated how different emotions modulate these two complementary response dimensions (i.e., execution and inhibition). This line of research is pertinent to a major theme within emotion theory, in which emotion is defined in terms of response patterns (e.g., approach and withdrawal). My results confirmed that both emotional processing and induced emotional states have robust modulatory effects on the neural centres supporting response execution and response inhibition. Importantly, my results argue for emotion as a context for response control. My work extends our understanding of human emotion in terms of the nature and effect of its expression and its influence on response systems.

    Logging Stress and Anxiety Using a Gamified Mobile-based EMA Application, and Emotion Recognition Using a Personalized Machine Learning Approach

    According to the American Psychological Association (APA), more than 9 in 10 (94 percent of) adults believe that stress can contribute to the development of major health problems, such as heart disease, depression, and obesity. Because stress and anxiety are subjective in nature, measuring them accurately by relying only on objective means has been challenging. In recent years, researchers have increasingly utilized computer vision techniques and machine learning algorithms to develop scalable and accessible solutions for remote mental health monitoring via web and mobile applications. To further enhance accuracy in the field of digital health and precision diagnostics, there is a need for personalized machine learning approaches that recognize mental states based on individual characteristics, rather than relying solely on general-purpose solutions. This thesis focuses on experiments aimed at recognizing and assessing levels of stress and anxiety in participants. In the initial phase of the study, a broadly applicable mobile application (compatible with both Android and iPhone platforms), which we call STAND, is introduced. The application serves the purpose of Ecological Momentary Assessment (EMA). Participants receive daily notifications through the smartphone app, which redirects them to a screen with three components: a question prompting participants to indicate their current levels of stress and anxiety, a rating scale from 1 to 10 for quantifying their response, and the ability to capture a selfie. The responses to the stress and anxiety questions, along with the corresponding selfie photographs, are then analyzed on an individual basis. This analysis explores the relationships between self-reported stress and anxiety levels and potential facial expressions indicative of stress and anxiety, eye features such as pupil size variation and eye closure, and specific action units (AUs) observed in the frames over time. The mobile app also gathers daily sensor data, including accelerometer and gyroscope readings, which hold potential for further stress- and anxiety-related analysis. In addition to capturing selfie photographs, participants have the option to upload video recordings of themselves while engaging in two neuropsychological games; these videos are analyzed to extract features for binary classification of stress and anxiety (i.e., stress and anxiety recognition). The participants selected for this phase are students aged between 18 and 38 who have received recent clinical diagnoses indicating specific stress and anxiety levels. To enhance user engagement in the intervention, gamified elements, an emerging means of influencing user behavior and lifestyle, have been utilized. Incorporating gamified elements into non-game contexts (e.g., health-related ones) has gained overwhelming popularity in recent years and has made such interventions more delightful, engaging, and motivating.

    In the subsequent phase of this research, we conducted an AI experiment employing a personalized machine learning approach to perform emotion recognition on an established dataset called Emognition. This experiment serves as a simulation of the future analysis that will be conducted as part of a more comprehensive study of stress and anxiety recognition. The outcomes of the emotion recognition experiment highlight the effectiveness of personalized machine learning techniques and bear significance for future diagnostic endeavors. For training, we selected three models: KNN, Random Forest, and MLP. The preliminary accuracies for the experiment were 93%, 95%, and 87%, respectively, for these models.
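
    The abstract names the personalized (per-subject) approach and the three model families but not the evaluation setup. Below is a minimal sketch of the per-subject idea under stated assumptions: one model is fit per participant on that participant's own data and the accuracies are averaged. The synthetic data, feature dimensionality, split, and hyperparameters are all illustrative, not the thesis's protocol or the Emognition dataset.

```python
# Minimal sketch: personalized (per-subject) emotion recognition with the
# three model families named in the abstract; all data are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
subjects = {f"s{i}": (rng.standard_normal((120, 16)),   # 120 samples, 16 feats
                      rng.integers(0, 2, 120))          # binary emotion label
            for i in range(5)}

models = {
    "KNN": lambda: KNeighborsClassifier(n_neighbors=5),
    "RandomForest": lambda: RandomForestClassifier(n_estimators=100),
    "MLP": lambda: MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
}

for name, make in models.items():
    accs = []
    for X, y in subjects.values():          # one model per participant
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
        accs.append(make().fit(Xtr, ytr).score(Xte, yte))
    print(name, round(float(np.mean(accs)), 3))  # mean per-subject accuracy
```

    The contrast with a general-purpose model is the point: pooling all subjects into one training set typically blurs individual baselines in expression and physiology, which per-subject models avoid.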

    Intelligent Sensors for Human Motion Analysis

    The book, "Intelligent Sensors for Human Motion Analysis," contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects related to the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems