
    Multi-scale Entropy and Multiclass Fisher’s Linear Discriminant for Emotion Recognition Based on Multimodal Signal

    Emotion recognition using physiological signals has been a frequently discussed topic among researchers and practitioners over the past decade. However, the use of SpO2 and pulse rate signals for emotion recognition is very limited, and previous results have shown low accuracy, owing to the low complexity of these signals' characteristics. This study therefore proposes multi-scale entropy and multiclass Fisher's linear discriminant analysis for feature extraction and dimensionality reduction of these physiological signals, with the aim of improving emotion recognition accuracy in the elderly. The dimensionality reduction process was grouped into three experimental schemes: dimensionality reduction using only SpO2 signals, only pulse rate signals, and multimodal signals (a combination of the feature vectors of the SpO2 and pulse rate signals). The three schemes were then classified into three emotion classes (happy, sad, and angry) using Support Vector Machine and Linear Discriminant Analysis methods. The results showed that the Support Vector Machine with the third scheme achieved the best performance, with an accuracy of 95.24%, a significant increase of more than 22% over previous works.
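
    As a rough illustration of the multi-scale entropy features described above, the sketch below coarse-grains a 1-D physiological signal (e.g., SpO2 or pulse rate) at increasing scales and computes sample entropy at each scale. This follows the standard coarse-graining formulation rather than the paper's exact implementation; the function names, the embedding dimension m=2, and the tolerance r=0.2·SD are illustrative defaults.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D signal; r is a fraction of the signal's SD."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def count_matches(length):
        # All overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to every later template.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=10, m=2, r=0.2):
    """Coarse-grain the signal at scales 1..max_scale; SampEn at each scale."""
    x = np.asarray(x, dtype=float)
    mse = []
    for tau in range(1, max_scale + 1):
        n_win = len(x) // tau
        coarse = x[:n_win * tau].reshape(n_win, tau).mean(axis=1)
        mse.append(sample_entropy(coarse, m=m, r=r))
    return np.array(mse)
```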

    Training Pattern Classifiers with Physiological Cepstral Features to Recognise Human Emotion

    The choice of a suitable set of features based on physiological signals to be utilised in enhancing the recognition of human emotion remains a burning issue in affective computing research. In this study, using the MAHNOB-HCI corpus, we extracted cepstral features from the physiological signals of galvanic skin response, electrocardiogram, electroencephalogram, skin temperature, and respiration amplitude to train two state-of-the-art pattern classifiers to recognise seven classes of human emotion. Since emotion recognition is largely considered a classification problem, we carried out experiments in which the extracted physiological cepstral features were fed to Gaussian Radial Basis Function (RBF) neural network and Support Vector Machine (SVM) pattern classifiers. The RBF neural network classifier gave a recognition accuracy of 99.5%, while the SVM classifier posted 75.0%. These results indicate the suitability of cepstral features extracted from fused-modality physiological signals, combined with the Gaussian RBF neural network classifier, for efficient recognition of human emotion in affective computing systems.
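
    The abstract does not spell out which cepstral variant is used; a minimal sketch of one common choice, the real cepstrum (inverse FFT of the log-magnitude spectrum) applied to a 1-D physiological signal, is shown below. The function name and the number of retained coefficients are assumptions.

```python
import numpy as np

def cepstral_features(signal, n_coeffs=13):
    """Real cepstrum of a 1-D physiological signal: inverse FFT of the
    log-magnitude spectrum; keep the first n_coeffs coefficients."""
    spectrum = np.fft.fft(np.asarray(signal, dtype=float))
    log_mag = np.log(np.abs(spectrum) + 1e-10)  # epsilon avoids log(0)
    cepstrum = np.real(np.fft.ifft(log_mag))
    return cepstrum[:n_coeffs]
```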

    An EEG-Based Multi-Modal Emotion Database With Both Posed And Authentic Facial Actions For Emotion Analysis

    Emotion is an experience associated with a particular pattern of physiological activity along with various physiological, behavioral, and cognitive changes. One behavioral change is facial expression, which has been studied extensively over the past few decades. Facial behavior varies with a person's emotion according to differences in culture, personality, age, context, and environment. In recent years, physiological activity has also been used to study emotional responses; a typical signal is the electroencephalogram (EEG), which measures brain activity. Most existing EEG-based emotion analysis has overlooked the role of facial expression changes, and there exists little research on the relationship between facial behavior and brain signals, owing to the lack of datasets measuring both EEG and facial action signals simultaneously. To address this problem, we propose to develop a new database by collecting facial expressions, action units, and EEG simultaneously. We recorded the EEG and face videos of both posed facial actions and spontaneous expressions from 29 participants of different ages, genders, and ethnic backgrounds. Differing from existing approaches, we designed a protocol to capture the EEG signals by explicitly evoking participants' individual action units. We also investigated the relation between the EEG signals and facial action units. As a baseline, the database has been evaluated through experiments on both posed and spontaneous emotion recognition with images alone, EEG alone, and EEG fused with images. The database will be released to the research community to advance the state of the art in automatic emotion recognition.
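
    The baseline above evaluates images alone, EEG alone, and EEG fused with images, without stating the fusion scheme. A minimal sketch of one common option, decision-level (late) fusion of per-class probabilities, follows; the weighting and the example probabilities are hypothetical.

```python
import numpy as np

def late_fusion(p_eeg, p_img, w=0.5):
    """Decision-level fusion: weighted average of per-class probabilities
    from an EEG classifier and an image classifier (w is hypothetical)."""
    return w * np.asarray(p_eeg) + (1.0 - w) * np.asarray(p_img)

# Hypothetical per-class probabilities for a 3-class problem.
fused = late_fusion([0.2, 0.5, 0.3], [0.1, 0.3, 0.6])
predicted_class = int(np.argmax(fused))
```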

    Investigating the use of pretrained convolutional neural network on cross-subject and cross-dataset EEG emotion recognition

    The electroencephalogram (EEG) is highly attractive in emotion recognition studies due to its resistance to deceptive actions, one of the most significant advantages of brain signals over visual or speech signals in the emotion recognition context. A major challenge in EEG-based emotion recognition is that EEG recordings exhibit varying distributions for different people, as well as for the same person at different times. This non-stationary nature of EEG limits its accuracy when subject independence is the priority. The aim of this study is to increase subject-independent recognition accuracy by exploiting pretrained state-of-the-art Convolutional Neural Network (CNN) architectures. Unlike similar studies that extract spectral band-power features from the EEG readings, our study uses raw EEG data after windowing, pre-adjustment, and normalization. Removing manual feature extraction from the training pipeline avoids the risk of discarding hidden features in the raw data and helps leverage the deep neural network's power in uncovering unknown features. To further improve classification accuracy, a median filter is used to eliminate false detections along a prediction interval. This method yields mean cross-subject accuracies of 86.56% and 78.34% on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) for two and three emotion classes, respectively. It also yields mean cross-subject accuracies of 72.81% on the Database for Emotion Analysis using Physiological Signals (DEAP) and 81.8% on the Loughborough University Multimodal Emotion Dataset (LUMED) for two emotion classes. Furthermore, the recognition model trained on the SEED dataset was tested with the DEAP dataset, yielding a mean prediction accuracy of 58.1% across all subjects and emotion classes. The results show that, in terms of classification accuracy, the proposed approach is superior to, or on par with, the reference subject-independent EEG emotion recognition studies identified in the literature, and has limited complexity due to the elimination of feature extraction.
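
    Two of the preprocessing steps named above, windowing with per-channel normalization of the raw EEG and median filtering of the per-window predictions, can be sketched as follows. The window length, step, and kernel size are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np
from scipy.signal import medfilt

def window_and_normalize(eeg, win=200, step=100):
    """Slice raw multichannel EEG (channels x samples) into overlapping
    windows and z-score each window per channel."""
    windows = []
    for start in range(0, eeg.shape[1] - win + 1, step):
        w = eeg[:, start:start + win]
        w = (w - w.mean(axis=1, keepdims=True)) / (w.std(axis=1, keepdims=True) + 1e-8)
        windows.append(w)
    return np.stack(windows)  # (n_windows, channels, win)

def smooth_predictions(labels, kernel=9):
    """Median-filter a sequence of per-window emotion labels to suppress
    isolated false detections; the kernel size must be odd."""
    return medfilt(np.asarray(labels, dtype=float), kernel_size=kernel).astype(int)
```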

    Recognition of Human Emotion using Radial Basis Function Neural Networks with Inverse Fisher Transformed Physiological Signals

    Emotion is a complex state of the human mind, influenced by physiological changes in the body and interdependent external events, which makes automatic recognition of emotional state a challenging task. A number of recognition methods have been applied in recent years to recognize human emotion. The motivation for this study is therefore to discover a combination of emotion features and a recognition method that produces the best result for building an efficient emotion recognizer in an affective system. We introduced a shifted-tanh normalization scheme to realize the inverse Fisher transformation, applied it to the DEAP physiological dataset, and consequently performed a series of experiments using Radial Basis Function Artificial Neural Networks (RBFANN). In our experiments, we compared the performance of image-based feature extraction techniques such as the Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and the Histogram of Images (HIM), which were used to extract discriminatory features from the multimodal DEAP dataset of physiological signals. The experimental results indicate that the best recognition accuracy was achieved with the EEG modality using the HIM feature extraction technique, with classification along the dominance emotion dimension. The result is remarkable when compared with existing results in the literature, including deep learning studies that have utilized the DEAP corpus, and is also applicable to diverse fields of engineering study.
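
    A minimal sketch of the shifted-tanh (inverse Fisher) normalization idea described above: z-score a feature vector, squash it with tanh, then shift into the unit interval. The exact shift and scale used in the study are not given here, so this particular mapping to (0, 1) is an assumption.

```python
import numpy as np

def inverse_fisher_normalize(x):
    """Shifted-tanh (inverse Fisher) normalization: z-score the features,
    squash with tanh into (-1, 1), then shift and scale into (0, 1)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / (x.std() + 1e-8)
    return 0.5 * (np.tanh(z) + 1.0)
```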

    Fine-Grained Emotion Recognition Using Brain-Heart Interplay Measurements and eXplainable Convolutional Neural Networks

    Emotion recognition from electrophysiological signals is an important research topic in multiple scientific domains. While multimodal input may provide additional information that increases emotion recognition performance, an optimal processing pipeline for such a vectorial input is yet undefined. Moreover, algorithm performance often trades off the ability to generalize over an emotional dimension against the explainability associated with its recognition accuracy. This study proposes a novel explainable artificial intelligence architecture for 9-level valence recognition from electroencephalographic (EEG) and electrocardiographic (ECG) signals. Synchronous EEG-ECG information is combined to derive vectorial brain-heart interplay features, which are rearranged into a sparse matrix (image) and then classified through an explainable convolutional neural network. The proposed architecture is tested on the publicly available MAHNOB dataset, also against the use of a vectorial EEG input. The results, also expressed in terms of confusion matrices, outperform the current state of the art, especially in terms of recognition accuracy. In conclusion, we demonstrate the effectiveness of the proposed approach in embedding multimodal brain-heart dynamics in an explainable fashion.
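
    A minimal sketch of the rearrangement step described above, placing a vectorial feature set into a 2-D matrix (image) for a CNN. The paper's actual layout of the brain-heart interplay features is not specified here, so the row-major fill and the 16x16 shape are assumptions.

```python
import numpy as np

def features_to_image(features, shape=(16, 16)):
    """Place a vectorial feature set into a sparse 2-D matrix (image)
    for a CNN; positions not covered by features remain zero."""
    features = np.asarray(features, dtype=float)
    img = np.zeros(shape)
    flat = img.ravel()          # view into img, so writes land in img
    n = min(features.size, flat.size)
    flat[:n] = features[:n]     # row-major fill (an assumption)
    return img
```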

    Emotion Recognition from Electroencephalogram Signals based on Deep Neural Networks

    Emotion recognition using deep learning methods through electroencephalogram (EEG) analysis has made significant progress. Nevertheless, the complexity and time-intensive nature of EEG analysis present challenges. This study proposes an efficient EEG analysis method that forgoes feature extraction and sliding windows, instead employing one-dimensional neural networks for emotion classification. The analysis uses EEG signals from the Database for Emotion Analysis using Physiological Signals (DEAP) and focuses on thirteen EEG electrode positions closely associated with emotion changes. Three neural models are explored for emotion classification: two Convolutional Neural Networks (CNN) and a combined Convolutional Neural Network and Long Short-Term Memory approach (CNN-LSTM). Two labeling schemes are considered: four emotional ranges, comprising low arousal and low valence (LALV), low arousal and high valence (LAHV), high arousal and high valence (HAHV), and high arousal and low valence (HALV); and high valence (HV) versus low valence (LV). Results show CNN_1 achieving an average accuracy of 97.7% for classifying the four emotional ranges, CNN_2 reaching 97.1%, and CNN-LSTM an impressive 99.5%. Notably, in classifying the HV and LV labels, our methods attained accuracies of 100%, 98.8%, and 99.7% for CNN_1, CNN_2, and CNN-LSTM, respectively. The performance of our models surpasses that of previously reported studies, showcasing their potential as highly effective classifiers for emotion recognition from EEG signals.
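
    A minimal sketch of a 1-D CNN-LSTM of the kind described above, written in PyTorch, taking raw multichannel EEG shaped (batch, channels, samples). The layer sizes and kernel widths are illustrative; only the thirteen input channels and four emotion-range classes come from the abstract.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Minimal 1-D CNN-LSTM over raw EEG shaped (batch, channels, samples)."""

    def __init__(self, n_channels=13, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):
        x = self.conv(x)          # (batch, 64, reduced_time)
        x = x.permute(0, 2, 1)    # (batch, reduced_time, 64)
        _, (h, _) = self.lstm(x)  # final hidden state
        return self.fc(h[-1])     # (batch, n_classes)

# Example: a batch of 8 windows, 13 channels, 512 samples each.
logits = CNNLSTM()(torch.randn(8, 13, 512))
```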

    Emotion recognition techniques using physiological signals and video games –Systematic review–

    Emotion recognition systems based on physiological signals are innovative techniques for studying the behavior and reactions of an individual exposed to information that may evoke emotional responses through multimedia tools, for example, video games. This type of approach is used to identify an individual's behavior in different fields, such as medicine, education, and psychology, in order to assess the effect that content has on the individual interacting with it. This article presents a systematic review of articles reporting studies on emotion recognition with physiological signals and video games, published between January 2010 and April 2016. We searched eight databases and found 15 articles that met the selection criteria. Through this systematic review, we found that the use of video games as emotion stimulation tools has become an innovative field of study, due to their potential to involve stories and multimedia tools that can interact directly with a person in fields like rehabilitation. We detected clear examples where video games and physiological signal measurement became an important approach in rehabilitation processes, for example, in Posttraumatic Stress Disorder (PTSD) treatments.