4 research outputs found

    Evaluation of Machine Learning Algorithms for Emotions Recognition using Electrocardiogram

    In recent studies, researchers have focused on using various modalities to recognize emotions for different applications. A major challenge is identifying emotions correctly with only the electrocardiogram (ECG) as the modality. The main objective is to reduce costs by using single-modality ECG signals to predict human emotional states. This paper presents an emotion recognition approach that uses heart rate variability features obtained from ECG, together with feature selection techniques (exhaustive feature selection (EFS) and Pearson’s correlation), to train the classification models. Seven machine learning (ML) models are used to classify emotional state: multi-layer perceptron (MLP), Support Vector Machine (SVM), Decision Tree (DT), Gradient Boosting Decision Tree (GBDT), Logistic Regression, AdaBoost and the Extra Tree (ET) classifier. Two public datasets, DREAMER and SWELL, are used for evaluation. The results show that no single ML model works best for all data. For DREAMER with EFS, the best models for predicting valence, arousal, and dominance are Extra Tree (74.6%), MLP and DT (74.6%), and GBDT and DT (69.8%), respectively. Extra Tree with Pearson’s correlation is the best method for the ECG SWELL dataset, providing 100% accuracy. The use of the Extra Tree classifier with a feature selection technique contributes to improved model accuracy. Moreover, the Friedman test shows that ET performs at least as well as the other classification models for predicting human emotional state and ranks highest. DOI: 10.28991/ESJ-2023-07-01-011
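
    As a rough illustration of the kind of pipeline described above (an HRV feature matrix from ECG, Pearson-correlation-based feature selection, and an Extra Tree classifier), the Python sketch below uses scikit-learn on placeholder data. The top-k selection rule, the feature dimensions and all names are assumptions for illustration, not the authors' implementation.

        # Hypothetical sketch: Pearson-correlation feature selection + Extra Trees.
        import numpy as np
        from sklearn.ensemble import ExtraTreesClassifier
        from sklearn.model_selection import cross_val_score

        def top_k_by_pearson(X, y, k=10):
            """Keep the k HRV features with the largest absolute Pearson
            correlation to the label (k is an assumed value)."""
            corrs = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
            return np.argsort(corrs)[-k:]

        # X: (n_samples, n_hrv_features) HRV features extracted from ECG windows.
        # y: binary high/low labels for one affect dimension (e.g., valence).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 20))        # placeholder data, not DREAMER/SWELL
        y = rng.integers(0, 2, size=200)

        kept = top_k_by_pearson(X, y)
        clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X[:, kept], y, cv=5)
        print(f"Extra Trees cross-validated accuracy: {scores.mean():.3f}")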

    ERTNet: an interpretable transformer-based framework for EEG emotion recognition

    Background: Emotion recognition using EEG signals enables clinicians to assess patients’ emotional states with precision and immediacy. However, the complexity of EEG signal data poses challenges for traditional recognition methods. Deep learning techniques effectively capture the nuanced emotional cues within these signals by leveraging extensive data. Nonetheless, most deep learning techniques lack interpretability while maintaining accuracy.
    Methods: We developed an interpretable end-to-end EEG emotion recognition framework rooted in a hybrid CNN and transformer architecture. Specifically, temporal convolution isolates salient information from EEG signals while filtering out potential high-frequency noise. Spatial convolution discerns the topological connections between channels. Subsequently, the transformer module processes the feature maps to integrate high-level spatiotemporal features, enabling identification of the prevailing emotional state.
    Results: Experimental results demonstrated that our model excels in diverse emotion classification, achieving an accuracy of 74.23% ± 2.59% on the dimensional model (DEAP) and 67.17% ± 1.70% on the discrete model (SEED-V). These results surpass the performance of both CNN- and LSTM-based counterparts. Through interpretive analysis, we ascertained that the beta and gamma bands in the EEG signals exert the most significant impact on emotion recognition performance. Notably, our model can independently tailor a Gaussian-like convolution kernel, effectively filtering high-frequency noise from the input EEG data.
    Discussion: Given its robust performance and interpretative capabilities, our proposed framework is a promising tool for EEG-driven emotion brain-computer interfaces.
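
    The architecture described in the Methods (temporal convolution, then spatial convolution across channels, then a transformer encoder over the resulting feature sequence) can be sketched in PyTorch as below. Layer sizes, kernel lengths and class names are assumptions chosen for illustration, not the authors' ERTNet implementation.

        # Minimal sketch of a hybrid CNN + transformer EEG classifier (assumed sizes).
        import torch
        import torch.nn as nn

        class HybridEEGNet(nn.Module):
            def __init__(self, n_channels=32, n_classes=2, d_model=64):
                super().__init__()
                # Temporal convolution: filters along the time axis of each channel.
                self.temporal = nn.Conv2d(1, d_model, kernel_size=(1, 25), padding=(0, 12))
                # Spatial convolution: mixes information across EEG channels.
                self.spatial = nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1))
                self.pool = nn.AvgPool2d(kernel_size=(1, 4))
                # Transformer encoder integrates the pooled spatiotemporal tokens.
                layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=2)
                self.head = nn.Linear(d_model, n_classes)

            def forward(self, x):                   # x: (batch, 1, channels, time)
                x = torch.relu(self.temporal(x))    # (batch, d_model, channels, time)
                x = torch.relu(self.spatial(x))     # (batch, d_model, 1, time)
                x = self.pool(x).squeeze(2)         # (batch, d_model, time/4)
                x = x.permute(0, 2, 1)              # tokens: (batch, time/4, d_model)
                x = self.encoder(x)
                return self.head(x.mean(dim=1))     # pool tokens, then classify

        logits = HybridEEGNet()(torch.randn(8, 1, 32, 512))  # e.g. 32 channels, 512 samples
        print(logits.shape)                                  # torch.Size([8, 2])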

    Emotion and Stress Recognition Related Sensors and Machine Learning Technologies

    This book includes impactful chapters that present scientific concepts, frameworks, architectures and ideas on sensing technologies and machine learning techniques. These are relevant in tackling the following challenges:
    (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG sensor systems, ECG sensor systems and electrodermal activity sensor systems;
    (ii) the quality assessment and management of sensor data;
    (iii) data preprocessing, noise filtering and calibration concepts for biosignals;
    (iv) the field readiness and use of nonintrusive sensor technologies, including visual sensors, acoustic sensors, vibration sensors and piezoelectric sensors;
    (v) emotion recognition using mobile phones and smartwatches;
    (vi) body area sensor networks for emotion and stress studies;
    (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance, and emotion elicitation material and concepts;
    (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition;
    (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based systems, speech-based systems, EEG-based systems, ECG-based systems, electrodermal activity-based systems, multimodal recognition systems and sensor fusion concepts; and
    (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective.