
    EMOTION RECOGNITION BASED ON VARIOUS PHYSIOLOGICAL SIGNALS - A REVIEW

    Emotion recognition is one of the biggest challenges in human-human and human-computer interaction. There are various approaches to recognizing emotions, such as facial expressions, audio signals, body poses, and gestures. Physiological signals play a vital role in emotion recognition because they cannot be consciously controlled and respond immediately. In this paper, we discuss the research done on emotion recognition using skin conductance, skin temperature, electrocardiogram (ECG), electromyography (EMG), and electroencephalogram (EEG) signals. Broadly, the same methodology has been adopted across emotion recognition techniques based on these physiological signals. The survey shows that none of these methods is fully efficient on its own, but efficiency can be improved by combining physiological signals. This paper provides an insight into the current state of research and the challenges faced in emotion recognition using physiological signals, so that research can be advanced towards better recognition.

    Multiple Instance Learning for Emotion Recognition using Physiological Signals

    The problem of continuous emotion recognition has been the subject of several studies. The proposed affective computing approaches employ sequential machine learning algorithms for improving the classification stage, accounting for the time ambiguity of emotional responses. Modeling and predicting the affective state over time is not a trivial problem, because continuous data labeling is costly and not always feasible. This is a crucial issue in real-life applications, where data labeling is sparse and possibly captures only the most important events rather than the typical continuous subtle affective changes that occur. In this work, we introduce a framework from the machine learning literature called Multiple Instance Learning, which is able to model time intervals by capturing the presence or absence of relevant states, without the need to label the affective responses continuously (as required by standard sequential learning approaches). This choice offers a viable and natural solution for learning in a weakly supervised setting, taking into account the ambiguity of affective responses. We demonstrate the reliability of the proposed approach in a gold-standard scenario and towards real-world usage by employing an existing dataset (DEAP) and a purposely built one (Consumer). We also outline the advantages of this method with respect to standard supervised machine learning algorithms.
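The Multiple Instance Learning assumption the abstract relies on (a time interval is labeled positive if at least one sub-window exhibits the relevant state) can be sketched in a few lines. The linear scorer, weights, and threshold below are illustrative placeholders, not the paper's actual model:

```python
def instance_score(features, weights, bias=0.0):
    """Linear relevance score for one instance (sub-window) of a bag."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def predict_bag(bag, weights, threshold=0.5):
    """Max-pooling MIL: the whole interval is positive if any instance is."""
    return max(instance_score(x, weights) for x in bag) >= threshold

# One time interval (bag) of three feature vectors; a single salient
# instance is enough to label the whole interval positive.
bag = [[0.1, 0.2], [0.0, 0.1], [0.9, 0.8]]
print(predict_bag(bag, weights=[0.5, 0.5]))  # → True
```

Only the bag-level label is ever needed for training, which is what makes this a weakly supervised setting.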

    Multi-scale Entropy and Multiclass Fisher’s Linear Discriminant for Emotion Recognition Based on Multimodal Signal

    Emotion recognition using physiological signals has been a special topic frequently discussed by researchers and practitioners in the past decade. However, the use of SpO2 and pulse rate signals for emotion recognition is very limited, and the results still show low accuracy, owing to the low complexity of SpO2 and pulse rate signal characteristics. Therefore, this study proposes Multiscale Entropy and Multiclass Fisher's Linear Discriminant Analysis for feature extraction and dimensionality reduction of these physiological signals to improve emotion recognition accuracy in elders. In this study, the dimensionality reduction process was grouped into three experimental schemes, namely dimensionality reduction using only SpO2 signals, only pulse rate signals, and multimodal signals (a combination of the feature vectors of the SpO2 and pulse rate signals). The three schemes were then classified into three emotion classes (happy, sad, and angry) using Support Vector Machine and Linear Discriminant Analysis methods. The results showed that the Support Vector Machine with the third scheme achieved optimal performance with an accuracy of 95.24%, a significant increase of more than 22% over previous works.
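A minimal, dependency-free sketch of the multiscale entropy feature named above: coarse-grain the signal at each scale, then compute sample entropy. The parameter choices m=2 and r=0.2 are common defaults assumed here, not taken from the paper:

```python
import math

def coarse_grain(signal, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy with embedding dimension m and tolerance r."""
    def matches(dim):
        t = [x[i:i + dim] for i in range(len(x) - dim + 1)]
        return sum(
            1
            for i in range(len(t))
            for j in range(i + 1, len(t))
            if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
        )
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

def multiscale_entropy(signal, scales=(1, 2, 3), m=2, r=0.2):
    """One entropy value per time scale; the curve is the feature vector."""
    return [sample_entropy(coarse_grain(signal, s), m, r) for s in scales]

mse = multiscale_entropy([0, 1] * 20)  # alternating toy "signal"
```

The resulting per-scale entropy values form the feature vector that a dimensionality reduction step such as Fisher's Linear Discriminant would then project before classification.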

    Classification of Physiological Signals for Emotion Recognition using IoT

    Emotion recognition has gained huge popularity nowadays. Physiological signals provide an appropriate way to detect human emotion with the help of IoT. In this paper, a novel system is proposed that is capable of determining emotional status from physiological parameters, including the design specification and software implementation of the system. This system may have vivid uses in medicine (especially for emotionally challenged people), smart homes, etc. The physiological parameters to be measured include heart rate (HR), galvanic skin response (GSR), and skin temperature. To construct the proposed system, the measured physiological parameters were fed to neural networks, which classify the data into various emotional states, mainly anger, happiness, sadness, and joy. This work recognized the correlation between human emotions and changes in physiological parameters.
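As a toy stand-in for the neural classifier described above, the mapping from measured parameters to an emotion label can be illustrated with a nearest-centroid rule. Both the centroid values and the choice of classifier are invented for illustration only:

```python
import math

# Hypothetical emotion centroids in (heart rate bpm, GSR µS, skin temp °C)
# feature space; the values are illustrative placeholders, not measured data.
CENTROIDS = {
    "anger": (95.0, 8.0, 33.0),
    "happy": (85.0, 6.0, 34.5),
    "sad":   (70.0, 3.0, 32.5),
}

def classify(sample):
    """Assign the emotion whose centroid is nearest in Euclidean distance."""
    return min(CENTROIDS, key=lambda e: math.dist(sample, CENTROIDS[e]))

print(classify((72.0, 3.5, 32.0)))  # → sad
```

In the actual system, a trained neural network would replace this rule and learn the decision boundaries from labeled sensor data.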

    Cross-Subject Emotion Recognition with Sparsely-Labeled Peripheral Physiological Data Using SHAP-Explained Tree Ensembles

    There are still many challenges in emotion recognition using physiological data despite the substantial progress made recently. In this paper, we attempted to address two major challenges. First, to deal with sparsely-labeled physiological data, we decomposed the raw physiological data using signal spectrum analysis, based on which we extracted both complexity and energy features. This procedure helped reduce noise and improve feature extraction effectiveness. Second, to improve the explainability of machine learning models in emotion recognition with physiological data, we proposed the Light Gradient Boosting Machine (LightGBM) and SHapley Additive exPlanations (SHAP) for emotion prediction and model explanation, respectively. The LightGBM model outperformed the eXtreme Gradient Boosting (XGBoost) model on the public Database for Emotion Analysis using Physiological signals (DEAP), with F1-scores of 0.814, 0.823, and 0.860 for binary classification of valence, arousal, and liking, respectively, under cross-subject validation using eight peripheral physiological signals. Furthermore, SHAP was able to identify the most important features in emotion recognition and revealed the relationships between the predictor and response variables in terms of their main effects and interaction effects. Therefore, the proposed model not only performed well using peripheral physiological data, but also gave more insight into the underlying mechanisms of recognizing emotions.
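SHAP's core idea, averaging each feature's marginal contribution over all feature orderings, can be shown exactly on a toy three-feature model. The model, input, and baseline below are invented for illustration; real TreeSHAP computes the same quantities far more efficiently for LightGBM ensembles:

```python
from itertools import permutations

def model(present, x, baseline):
    """Toy model: two main effects plus one interaction. Features not in
    `present` are replaced by their baseline values."""
    v = [x[i] if i in present else baseline[i] for i in range(3)]
    return 2 * v[0] + v[1] + v[0] * v[2]

def shapley_values(x, baseline):
    """Exact Shapley values: average marginal contribution of each
    feature over all 3! feature orderings."""
    phi = [0.0, 0.0, 0.0]
    orders = list(permutations(range(3)))
    for order in orders:
        present = set()
        for i in order:
            before = model(present, x, baseline)
            present.add(i)
            phi[i] += model(present, x, baseline) - before
    return [p / len(orders) for p in phi]

phi = shapley_values(x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # → [3.5, 2.0, 1.5]; sums to f(x) - f(baseline) = 7.0
```

Note how the interaction term's credit (3.0) is split equally between features 0 and 2, which is exactly the kind of main-effect versus interaction-effect decomposition the abstract describes.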

    Emotion Recognition with Pre-Trained Transformers Using Multimodal Signals

    In this paper, we address the problem of multimodal emotion recognition from multiple physiological signals. We demonstrate that a Transformer-based approach is suitable for this task. In addition, we present how such models may be pre-trained in a multimodal scenario to improve emotion recognition performance. We evaluate the benefits of using multimodal inputs and pre-training with our approach on a state-of-the-art dataset.

    On-body Sensing and Signal Analysis for User Experience Recognition in Human-Machine Interaction

    In this paper, a new algorithm is proposed for recognizing user experience through emotion detection using physiological signals, for application in human-machine interaction. The algorithm continuously recognizes the quality and intensity of the user's emotion in a two-dimensional emotion space. Continuous recognition of the user's emotion during human-machine interaction will enable the machine to adapt its activity to the user's emotion in real time, thus improving user experience. The emotion model underlying the proposed algorithm is one of the most recent emotion models, representing emotion intensity and quality in a continuous two-dimensional space with valence and arousal axes. Using only two physiological signals, correlated with the valence and arousal axes of the emotion space, is among the contributions of this paper. Predicting emotion from physiological signals has the advantage of eliminating social masking, making the prediction more reliable. The key advantage of the proposed algorithm over other algorithms presented to date is the use of the smallest number of modalities (only two physiological signals) to predict the quality and intensity of emotion continuously in time, while using the most recent widely accepted emotion model.
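Reading quality and intensity off the valence-arousal plane can be sketched as an angle and a radius, assuming the two signals have already been mapped to valence and arousal values in [-1, 1] (that mapping is the algorithm's real work and is not reproduced here):

```python
import math

def emotion_state(valence, arousal):
    """Quality = angle on the circumplex (degrees), intensity = radius."""
    intensity = math.hypot(valence, arousal)
    quality = math.degrees(math.atan2(arousal, valence))
    return quality, intensity

q, i = emotion_state(valence=0.6, arousal=0.6)
print(q, round(i, 3))  # ≈ 45 degrees (excited/elated region), intensity ≈ 0.849
```

Because both outputs are continuous, the machine can react to gradual affective drift rather than only to discrete emotion-class switches.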

    Training Pattern Classifiers with Physiological Cepstral Features to Recognise Human Emotion

    The choice of a suitable set of features based on physiological signals for enhancing the recognition of human emotion remains a burning issue in affective computing research. In this study, using the MAHNOB-HCI corpus, we extracted cepstral features from the physiological signals of galvanic skin response, electrocardiogram, electroencephalogram, skin temperature, and respiration amplitude to train two state-of-the-art pattern classifiers to recognise seven classes of human emotion. Emotion recognition is largely considered a classification problem, and on this basis we carried out experiments in which the extracted physiological cepstral features were fed to Gaussian Radial Basis Function (RBF) neural network and Support Vector Machine (SVM) pattern classifiers. The RBF neural network classifier gave a recognition accuracy of 99.5%, while the SVM classifier posted 75.0%. These results indicate the suitability of cepstral features extracted from fused physiological modalities, together with the Gaussian RBF neural network classifier, for efficient recognition of human emotion in affective computing systems.
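The real cepstrum referred to above is conventionally the inverse Fourier transform of the log magnitude spectrum. A minimal sketch, using a naive DFT to stay dependency-free and a toy sine wave standing in for a physiological signal:

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def real_cepstrum(x, eps=1e-8):
    """Inverse DFT of the log magnitude spectrum of x."""
    n = len(x)
    log_mag = [math.log(abs(s) + eps) for s in dft(x)]  # eps guards log(0)
    return [sum(log_mag[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

signal = [math.sin(2 * math.pi * t / 8) for t in range(32)]  # toy waveform
ceps = real_cepstrum(signal)
```

In practice the first few cepstral coefficients of each modality would be concatenated into the fused feature vector fed to the RBF or SVM classifier.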

    Finding Patterns in Biological Parameters

    Changes or variations occur in the physiological parameters of the body when a person is going through a tough time or is extremely happy. These changes in physiological parameters can be used to detect emotions. Emotional computing is a field of Human-Computer Interaction (HCI) in which we detect human emotions. Emotion recognition based on affective physiological changes is a pattern recognition problem, and selecting specific physiological signals is necessary and helpful for recognizing emotions. In this paper, we discuss various research papers, analysing how emotions are detected from physiological signals using non-invasive methods. Developers use various data mining techniques to obtain such results. Heart Rate Variability (HRV), Skin Temperature (ST), and Blood Volume Pulse (BVP) are the main highlights, as these are key parameters among physiological signals.
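Two of the standard time-domain HRV features alluded to above, SDNN (overall variability) and RMSSD (short-term variability), are straightforward to compute from RR intervals. The interval values below are illustrative, not real recordings:

```python
import math

def sdnn(rr):
    """Standard deviation of the RR intervals (ms)."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 845, 830, 800]  # hypothetical RR intervals in milliseconds
print(round(sdnn(rr), 2), round(rmssd(rr), 2))
```

Lower short-term variability (RMSSD) is commonly associated with stress responses, which is why such features are useful inputs for emotion detection.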

    Emotion Recognition from Electroencephalogram Signals based on Deep Neural Networks

    Emotion recognition using deep learning methods through electroencephalogram (EEG) analysis has seen significant progress. Nevertheless, the complexity and time-intensive nature of EEG analysis present challenges. This study proposes an efficient EEG analysis method that foregoes feature extraction and sliding windows, instead employing one-dimensional neural networks for emotion classification. The analysis utilizes EEG signals from the Database for Emotion Analysis using Physiological Signals (DEAP) and focuses on thirteen EEG electrode positions closely associated with emotion changes. Three distinct neural models are explored for emotion classification: two Convolutional Neural Networks (CNN) and a combined approach using Convolutional Neural Networks and Long Short-Term Memory (CNN-LSTM). Additionally, two labeling schemes are considered: four emotional ranges encompassing low arousal and low valence (LALV), low arousal and high valence (LAHV), high arousal and high valence (HAHV), and high arousal and low valence (HALV); and a binary split into high valence (HV) and low valence (LV). Results demonstrate CNN_1 achieving an average accuracy of 97.7% for classifying the four emotional ranges, CNN_2 97.1%, and CNN-LSTM an impressive 99.5%. Notably, in classifying the HV and LV labels, our methods attained remarkable accuracies of 100%, 98.8%, and 99.7% for CNN_1, CNN_2, and CNN-LSTM, respectively. The performance of our models surpasses that of previously reported studies, showcasing their potential as highly effective classifiers for emotion recognition using EEG signals.
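The four-range labels (LALV, LAHV, HAHV, HALV) partition DEAP's 1-9 valence and arousal ratings into quadrants. The midpoint threshold of 5 used below is the usual convention in DEAP-based studies, assumed here rather than taken from the paper:

```python
def quadrant_label(valence, arousal, threshold=5.0):
    """Map 1-9 self-assessment ratings to a quadrant label like 'HAHV'."""
    v = "HV" if valence > threshold else "LV"
    a = "HA" if arousal > threshold else "LA"
    return a + v

print(quadrant_label(7.2, 6.5))  # → HAHV
print(quadrant_label(3.0, 8.0))  # → HALV
```

The binary HV/LV scheme simply keeps the valence half of this mapping and discards the arousal dimension.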