
    Investigating the use of pretrained convolutional neural network on cross-subject and cross-dataset EEG emotion recognition

    The electroencephalogram (EEG) is attractive in emotion recognition studies due to its resistance to deceptive actions of humans. This is one of the most significant advantages of brain signals in comparison to visual or speech signals in the emotion recognition context. A major challenge in EEG-based emotion recognition is that EEG recordings exhibit varying distributions for different people as well as for the same person at different time instances. This nonstationary nature of EEG limits its accuracy when subject independence is the priority. The aim of this study is to increase subject-independent recognition accuracy by exploiting pretrained state-of-the-art Convolutional Neural Network (CNN) architectures. Unlike similar studies that extract spectral band power features from the EEG readings, our study uses raw EEG data after applying windowing, pre-adjustments, and normalization. Removing manual feature extraction from the training system avoids the risk of discarding hidden features in the raw data and helps leverage the deep neural network's power in uncovering unknown features. To improve the classification accuracy further, a median filter is used to eliminate false detections along the prediction interval of emotions. This method yields a mean cross-subject accuracy of 86.56% and 78.34% on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) for two and three emotion classes, respectively. It also yields a mean cross-subject accuracy of 72.81% on the Database for Emotion Analysis using Physiological Signals (DEAP) and 81.8% on the Loughborough University Multimodal Emotion Dataset (LUMED) for two emotion classes. Furthermore, the recognition model trained on the SEED dataset was tested with the DEAP dataset, yielding a mean prediction accuracy of 58.1% across all subjects and emotion classes. Results show that in terms of classification accuracy, the proposed approach is superior to, or on par with, the reference subject-independent EEG emotion recognition studies identified in the literature, and has limited complexity due to the elimination of the need for feature extraction.
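
    As a minimal sketch of the post-processing step described above, the snippet below median-filters a sequence of per-window emotion predictions to suppress isolated false detections. The function name, the window length of five, and the integer label encoding are illustrative assumptions, not values reported in the paper.

```python
# Hedged sketch: median filtering of per-window emotion predictions.
# Kernel length and label encoding are assumptions, not the paper's values.
import numpy as np

def median_smooth(labels: np.ndarray, kernel: int = 5) -> np.ndarray:
    """Replace each predicted label by the median of its neighborhood,
    suppressing isolated false detections along the prediction interval."""
    assert kernel % 2 == 1, "kernel must be odd so the window is centered"
    half = kernel // 2
    padded = np.pad(labels, half, mode="edge")  # repeat edge labels outward
    smoothed = np.empty_like(labels)
    for i in range(len(labels)):
        smoothed[i] = np.median(padded[i:i + kernel])
    return smoothed

# A spurious class-0 spike inside a run of class-1 predictions is removed:
preds = np.array([1, 1, 0, 1, 1, 1, 2, 2, 2])
print(median_smooth(preds))  # -> [1 1 1 1 1 1 2 2 2]
```

    Note that a median is only well defined for ordinally encoded labels, which suffices for the two- and three-class settings the abstract reports.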

    An EEG-Based Multi-Modal Emotion Database With Both Posed And Authentic Facial Actions For Emotion Analysis

    Emotion is an experience associated with a particular pattern of physiological activity and is accompanied by physiological, behavioral, and cognitive changes. One behavioral change is facial expression, which has been studied extensively over the past few decades. Facial behavior varies with a person's emotion according to differences in culture, personality, age, context, and environment. In recent years, physiological activities have been used to study emotional responses. A typical signal is the electroencephalogram (EEG), which measures brain activity. Most existing EEG-based emotion analysis has overlooked the role of facial expression changes. There exists little research on the relationship between facial behavior and brain signals due to the lack of datasets measuring both EEG and facial action signals simultaneously. To address this problem, we propose to develop a new database by collecting facial expressions, action units, and EEGs simultaneously. We recorded the EEGs and face videos of both posed facial actions and spontaneous expressions from 29 participants of different ages, genders, and ethnic backgrounds. Differing from existing approaches, we designed a protocol to capture the EEG signals by explicitly evoking participants' individual action units. We also investigated the relation between the EEG signals and facial action units. As a baseline, the database has been evaluated through experiments on both posed and spontaneous emotion recognition with images alone, EEG alone, and EEG fused with images, respectively. The database will be released to the research community to advance the state of the art in automatic emotion recognition.
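
    The abstract does not specify how EEG is fused with images in the baseline evaluation; one common, minimal choice is decision-level (late) fusion of the two classifiers' per-class probabilities. The sketch below assumes softmax-style outputs and an equal weighting, both hypothetical.

```python
# Hedged sketch of decision-level fusion: the weighting and both sets of
# class probabilities are placeholders, not the authors' actual models.
import numpy as np

def late_fusion(p_eeg: np.ndarray, p_img: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Combine per-class probabilities from an EEG model and an image
    model by a weighted average, then pick the most likely emotion."""
    fused = w * p_eeg + (1.0 - w) * p_img
    return fused.argmax(axis=-1)

# Two hypothetical 4-class posteriors for one trial:
p_eeg = np.array([[0.1, 0.6, 0.2, 0.1]])
p_img = np.array([[0.3, 0.3, 0.3, 0.1]])
print(late_fusion(p_eeg, p_img))  # -> [1]
```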

    ERTNet: an interpretable transformer-based framework for EEG emotion recognition

    Background: Emotion recognition using EEG signals enables clinicians to assess patients' emotional states with precision and immediacy. However, the complexity of EEG signal data poses challenges for traditional recognition methods. Deep learning techniques effectively capture the nuanced emotional cues within these signals by leveraging extensive data. Nonetheless, most deep learning techniques lack interpretability while maintaining accuracy. Methods: We developed an interpretable end-to-end EEG emotion recognition framework rooted in a hybrid CNN and transformer architecture. Specifically, temporal convolution isolates salient information from EEG signals while filtering out potential high-frequency noise. Spatial convolution discerns the topological connections between channels. Subsequently, the transformer module processes the feature maps to integrate high-level spatiotemporal features, enabling the identification of the prevailing emotional state. Results: Experimental results demonstrated that our model excels in diverse emotion classification, achieving an accuracy of 74.23% ± 2.59% on the dimensional model (DEAP) and 67.17% ± 1.70% on the discrete model (SEED-V). These results surpass the performance of both CNN- and LSTM-based counterparts. Through interpretive analysis, we ascertained that the beta and gamma bands in the EEG signals exert the most significant impact on emotion recognition performance. Notably, our model can independently tailor a Gaussian-like convolution kernel, effectively filtering high-frequency noise from the input EEG data. Discussion: Given its robust performance and interpretative capabilities, our proposed framework is a promising tool for EEG-driven emotion brain-computer interfaces.
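
    The architecture described in the Methods can be approximated by the following PyTorch sketch: a temporal convolution, a spatial convolution across electrodes, then a transformer encoder over the resulting feature sequence. All layer sizes (kernel lengths, filter count, head count, pooling factor) and the class count are illustrative guesses, not ERTNet's published hyperparameters.

```python
# Rough sketch of a hybrid CNN + transformer EEG classifier; every
# hyperparameter here is an assumption for illustration only.
import torch
import torch.nn as nn

class HybridEEGNet(nn.Module):
    def __init__(self, n_channels: int = 32, n_classes: int = 4, d_model: int = 16):
        super().__init__()
        # temporal conv: filters along the time axis, one electrode at a time
        self.temporal = nn.Conv2d(1, d_model, kernel_size=(1, 25), padding=(0, 12))
        # spatial conv: mixes all electrodes at each time step
        self.spatial = nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1))
        self.pool = nn.AvgPool2d(kernel_size=(1, 8))  # downsample the time axis
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, electrodes, time samples)
        h = self.pool(self.spatial(self.temporal(x)))  # (B, d, 1, T')
        h = h.squeeze(2).transpose(1, 2)               # (B, T', d)
        h = self.transformer(h)                        # high-level features
        return self.head(h.mean(dim=1))                # pool over time, classify

logits = HybridEEGNet()(torch.randn(2, 1, 32, 512))
print(logits.shape)  # torch.Size([2, 4])
```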

    An approach to emotion recognition in single-channel EEG signals: a mother child interaction

    In this work, we present a first approach to emotion recognition from single-channel EEG signals recorded in a developmental psychology experiment with four (4) mother-child dyads. The single-channel EEG signals are analyzed and processed using several window sizes by performing a statistical analysis over features in the time and frequency domains. Finally, a neural network achieved an average classification accuracy of 99% for two emotional states, happiness and sadness. Presented at the 20th Argentinean Bioengineering Society Congress, SABI 2015 (XX Congreso Argentino de Bioingeniería y IX Jornadas de Ingeniería Clínica), 28–30 October 2015, San Nicolás de los Arroyos, Argentina.
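
    The windowed feature extraction described above can be sketched as follows: slice the single-channel recording into fixed-length windows and compute simple time-domain statistics plus spectral power per window, which would then feed the neural network classifier. The 128 Hz sampling rate, the 2-second window, and the particular feature set are assumptions for illustration, not the study's settings.

```python
# Hedged sketch: per-window time- and frequency-domain EEG features.
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis, skew

FS = 128            # sampling rate (Hz), assumed
WIN = 2 * FS        # 2-second analysis window, assumed

def window_features(signal: np.ndarray) -> np.ndarray:
    """Return one feature row per window: mean, std, skewness,
    kurtosis, and total spectral power from Welch's periodogram."""
    rows = []
    for start in range(0, len(signal) - WIN + 1, WIN):
        w = signal[start:start + WIN]
        freqs, psd = welch(w, fs=FS, nperseg=WIN)
        rows.append([w.mean(), w.std(), skew(w), kurtosis(w), psd.sum()])
    return np.array(rows)

feats = window_features(np.random.randn(10 * FS))  # 10 s of synthetic "EEG"
print(feats.shape)  # (5, 5): five 2-second windows, five features each
```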

    A real time classification algorithm for EEG-based BCI driven by self-induced emotions

    Background and objective: The aim of this paper is to provide an efficient, parametric, general, and completely automatic real-time classification method for electroencephalography (EEG) signals obtained from self-induced emotions. The particular characteristics of the considered low-amplitude signals (a self-induced emotion produces a signal whose amplitude is about 15% of that of a genuinely experienced emotion) require exploring and adapting strategies such as the Wavelet Transform, Principal Component Analysis (PCA), and the Support Vector Machine (SVM) for signal processing, analysis, and classification. Moreover, the method is intended for use in a multi-emotion Brain Computer Interface (BCI), and ad hoc design choices are therefore adopted. Method: The peculiarity of the brain activation requires dedicated signal processing by wavelet decomposition, and the definition of a set of features for signal characterization in order to discriminate different self-induced emotions. The proposed method is a two-stage, fully parameterized algorithm aimed at multi-class classification and may be considered within the framework of machine learning. The first stage, calibration, is performed offline and is devoted to signal processing, feature determination, and classifier training. The second, real-time stage tests the classifier on new data. PCA is applied to avoid redundancy in the set of features, whereas the classification of the selected features, and therefore of the signals, is obtained by the SVM. Results: Experimental tests have been conducted on EEG signals for a binary BCI based on the self-induced disgust produced by remembering an unpleasant odor. Since the literature has shown that this emotion mainly involves the right hemisphere, and in particular the T8 channel, the classification procedure is tested using just T8, though the average accuracy is also calculated and reported for the whole set of measured channels. Conclusions: The obtained classification results are encouraging, with a success rate that, averaged over all examined subjects, exceeds 90%. Ongoing work applies the proposed procedure to map a larger set of emotions with EEG and to establish the EEG headset with the minimal number of channels needed to recognize a significant range of emotions, both in the field of affective computing and in the development of auxiliary communication tools for subjects affected by severe disabilities.
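
    The two-stage pipeline outlined in the Method can be sketched with standard tooling: wavelet decomposition for features, PCA to remove redundancy, and an SVM for the binary decision. The db4 wavelet, the four decomposition levels, the component count, and the synthetic training data are illustrative assumptions, not the paper's calibrated parameters.

```python
# Hedged sketch of a calibrate-then-classify wavelet/PCA/SVM pipeline.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def wavelet_features(trial: np.ndarray) -> np.ndarray:
    """Energy of each wavelet sub-band of one single-channel trial."""
    coeffs = pywt.wavedec(trial, "db4", level=4)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Stage 1 (calibration, offline): fit PCA + SVM on labeled training trials.
X_train = np.array([wavelet_features(t) for t in np.random.randn(40, 512)])
y_train = np.repeat([0, 1], 20)  # e.g. rest vs. self-induced disgust
clf = make_pipeline(PCA(n_components=3), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# Stage 2 (real time): classify each incoming trial as it arrives.
print(clf.predict([wavelet_features(np.random.randn(512))]))
```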

    Noise Reduction of EEG Signals Using Autoencoders Built Upon GRU based RNN Layers

    Understanding the cognitive and functional behaviour of the brain through its electrical activity is an important area of research. Electroencephalography (EEG) is a method that measures and records the electrical activity of the brain from the scalp. It has been used for pathology analysis, emotion recognition, clinical and cognitive research, diagnosing various neurological and psychiatric disorders, and other applications. Since EEG signals are sensitive to activities other than brain activity, such as eye blinking, eye movement, and head movement, it is not possible to record EEG signals without any noise. Thus, it is very important to use an efficient noise reduction technique to obtain more accurate recordings. Numerous traditional techniques such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), wavelet transformations, and machine learning techniques have been proposed for reducing the noise in EEG signals. The aim of this paper is to investigate the effectiveness of stacked autoencoders built upon Gated Recurrent Unit (GRU) based Recurrent Neural Network (RNN) layers (GRU-AE) against PCA. To achieve this, Harrell-Davis decile values for the reconstructed signals' signal-to-noise ratio distributions were compared, and it was found that the GRU-AE outperformed PCA for noise reduction of EEG signals.
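
    A minimal PyTorch sketch of a denoising autoencoder stacked from GRU layers, in the spirit of the GRU-AE the abstract evaluates against PCA, is given below. The hidden size, the stacking depth, and the synthetic noise model are assumptions, not the paper's configuration.

```python
# Hedged sketch: a denoising autoencoder built from stacked GRU layers.
import torch
import torch.nn as nn

class GRUAutoencoder(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        # encoder/decoder each stack two GRU layers over the time axis
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden,
                              num_layers=2, batch_first=True)
        self.decoder = nn.GRU(input_size=hidden, hidden_size=hidden,
                              num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, 1)  # back to one amplitude per time step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1) noisy single-channel EEG segments
        z, _ = self.encoder(x)
        y, _ = self.decoder(z)
        return self.out(y)

model = GRUAutoencoder()
clean = torch.randn(8, 256, 1)                   # placeholder clean segments
noisy = clean + 0.1 * torch.randn_like(clean)    # add synthetic noise
loss = nn.MSELoss()(model(noisy), clean)         # reconstruct the clean signal
print(loss.item())
```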