14 research outputs found

    Mini review: Challenges in EEG emotion recognition

    Electroencephalography (EEG) stands as a pioneering tool at the intersection of neuroscience and technology, offering unprecedented insights into human emotions. Through this comprehensive review, we explore the challenges and opportunities associated with EEG-based emotion recognition. While recent literature suggests promising high accuracy rates, these claims necessitate critical scrutiny for their authenticity and applicability. The article highlights the significant challenges in generalizing findings across the multitude of EEG devices and data sources, as well as the difficulties in data collection. Furthermore, the disparity between controlled laboratory settings and genuine emotional experiences presents a paradox within the paradigm of emotion research. We advocate for a balanced approach, emphasizing the importance of critical evaluation, methodological standardization, and acknowledging the dynamism of emotions for a more holistic understanding of the human emotional landscape.

    Feature Fusion-Based Capsule Network for Cross-Subject Mental Workload Classification

    In a complex human-computer interaction system, estimating mental workload from the electroencephalogram (EEG) plays a vital role in adapting the system to the user's mental state. Compared to within-subject classification, cross-subject classification is more challenging due to the larger variation across subjects. In this paper, we targeted cross-subject mental workload classification and attempted to improve its performance. A capsule network capturing structural relationships between power spectral density and brain connectivity features was proposed. The comparison results showed that it achieved a cross-subject classification accuracy of 45.11%, which was superior to the compared methods (e.g., convolutional neural network and support vector machine). The results also demonstrated that feature fusion contributed positively to cross-subject workload classification. Our study could benefit the future development of a real-time workload detection system that is not specific to individual subjects.
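    The cross-subject evaluation protocol the abstract contrasts with within-subject classification is usually implemented as leave-one-subject-out cross-validation. A minimal sketch, assuming generic feature/label arrays and a caller-supplied training routine (all names here are hypothetical, not from the paper):

```python
import numpy as np

def leave_one_subject_out(features, labels, subject_ids, train_and_eval):
    """Evaluate a classifier with leave-one-subject-out cross-validation.

    features:    (n_trials, n_features) array
    labels:      (n_trials,) array
    subject_ids: (n_trials,) array mapping each trial to a subject
    train_and_eval: callable(train_X, train_y, test_X, test_y) -> accuracy
    """
    accuracies = []
    for subject in np.unique(subject_ids):
        test_mask = subject_ids == subject
        # train on every other subject, test on the held-out one
        acc = train_and_eval(features[~test_mask], labels[~test_mask],
                             features[test_mask], labels[test_mask])
        accuracies.append(acc)
    return float(np.mean(accuracies))
```

    Averaging the per-held-out-subject accuracies gives the cross-subject score; variation between folds reflects exactly the between-subject variability the paper targets.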

    Oversampling Approach Using Radius-SMOTE for Imbalance Electroencephalography Datasets

    Several studies of emotion recognition based on electroencephalogram (EEG) signals have addressed feature extraction, feature representation, and classification. However, emotion recognition is strongly influenced by the distribution, or balance, of the EEG data: the limited data that can be collected often yields imbalanced EEG datasets, which in turn lowers recognition accuracy. To address this problem, this research proposes the Radius-SMOTE method to overcome the imbalance of the DEAP dataset in the emotion recognition process. Besides the EEG data oversampling step, several other processes are vital in EEG-based emotion recognition, including feature extraction and emotion classification. This study uses the Differential Entropy (DE) method for EEG feature extraction. For classification, two methods are compared: a Decision Tree and a Convolutional Neural Network. With the Decision Tree, oversampling with Radius-SMOTE yielded accuracies of 78.78% and 75.14% for arousal and valence, respectively. With the Convolutional Neural Network, the corresponding accuracies were 82.10% and 78.99%. Doi: 10.28991/ESJ-2022-06-02-013
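    The core of any SMOTE variant is interpolating new minority samples between existing ones; Radius-SMOTE additionally restricts synthesis to a "safe" radius around each seed sample. A minimal sketch of the plain interpolation step (not the paper's exact radius-constrained algorithm):

```python
import numpy as np

def smote_oversample(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by SMOTE-style
    interpolation between a random sample and one of its k nearest
    minority neighbours."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        # distances from sample i to all minority samples
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)
```

    Every synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside the region the original data occupies.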

    The challenges of emotion recognition methods based on electroencephalogram signals: a literature review

    Electroencephalogram (EEG) signals offer several advantages for emotion recognition. Success, however, is strongly influenced by: i) the distribution of the data used, ii) differences in participant characteristics, and iii) the characteristics of the EEG signals themselves. In response to these issues, this study examines three points that affect the success of emotion recognition, framed as research questions: i) What factors need to be considered when generating and distributing EEG data? ii) How can EEG signals be processed to account for differences in participant characteristics? iii) How do the characteristics present in EEG signal features support emotion recognition? The results indicate several important challenges for further study in EEG-based emotion recognition research: i) determining robust methods for imbalanced EEG signal data, ii) determining an appropriate smoothing method to eliminate disturbances in baseline signals, iii) determining the best baseline reduction methods to reduce the influence of participant characteristics on EEG signals, and iv) determining a robust capsule network architecture that avoids the loss of knowledge information and applying it to more diverse datasets.

    Modified Weighted Mean Filter to Improve the Baseline Reduction Approach for Emotion Recognition

    Participants' emotional reactions are strongly influenced by several factors, such as personality traits, intellectual abilities, and gender. Several studies have examined the baseline reduction approach for emotion recognition using electroencephalogram signal patterns containing external and internal interference, which prevented the baseline from representing participants' neutral state. Therefore, this study proposes two solutions to this problem. First, it offers a modified weighted mean filter method to eliminate interference in the electroencephalogram baseline signal. Second, it determines an appropriate baseline reduction method to characterize emotional reactions after the smoothing process. Data collected from four scenarios conducted on three datasets were used to reduce the interference and amplitude of the electroencephalogram signals. The results showed that the smoothing process can eliminate interference and lower the signal's amplitude. Of the three baseline reduction methods considered, the Relative Difference method is the most appropriate for characterizing emotional reactions across different electroencephalogram signal patterns and has higher accuracy. On the DEAP dataset, the proposed methods achieved accuracies of 97.14%, 99.70%, and 96.70% for the four categories of emotions, the two categories of arousal, and the two categories of valence, respectively. On the DREAMER dataset, they achieved 89.71%, 97.63%, and 96.58%, and on the AMIGOS dataset, 99.59%, 98.20%, and 99.96%, for the same three tasks. Doi: 10.28991/ESJ-2022-06-06-03
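    The two steps the abstract describes, weighted-mean smoothing followed by baseline reduction, can be sketched as below. The window weights and the exact Relative Difference formula are illustrative assumptions, not the paper's published modified-filter design:

```python
import numpy as np

def weighted_mean_filter(signal, weights=(1, 2, 3, 2, 1)):
    """Smooth an EEG signal with a weighted moving average; the centre
    sample gets the largest weight. Window shape is illustrative."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                       # normalise so a flat signal is preserved
    return np.convolve(signal, w, mode="same")

def relative_difference(trial, baseline):
    """Relative Difference baseline reduction: express the trial signal
    relative to the mean of the participant's neutral baseline segment."""
    base = np.mean(baseline)
    return (trial - base) / base
```

    Smoothing first removes high-frequency interference so that the baseline mean actually reflects the neutral state; the relative form then normalises away per-participant amplitude differences.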

    CDBA: a novel multi-branch feature fusion model for EEG-based emotion recognition

    EEG-based emotion recognition through artificial intelligence is a major area of biomedical and machine learning research, playing a key role in understanding brain activity and developing decision-making systems. However, traditional EEG-based emotion recognition uses a single feature input mode, which cannot capture multiple kinds of feature information and cannot meet the requirements of an intelligent, highly real-time brain-computer interface. Moreover, because the EEG signal is nonlinear, traditional time-domain or frequency-domain methods alone are not suitable. In this paper, a CNN-DSC-Bi-LSTM-Attention (CDBA) model for automatic emotion recognition from EEG signals is presented, containing three feature-extraction channels. The normalized EEG signals are used as input; features are extracted by the multiple branches and then concatenated, and each channel's feature weight is assigned through an attention mechanism layer. Finally, Softmax is used to classify the EEG signals. To evaluate the performance of the proposed CDBA model, experiments were performed on the SEED and DREAMER datasets separately. The validation results show that the proposed CDBA model is effective in classifying EEG emotions. For the three-category (positive, neutral, and negative) and four-category (happiness, sadness, fear, and neutrality) tasks, the classification accuracies on SEED were 99.44% and 99.99%, respectively. For five-class classification (Valence 1 to Valence 5) on DREAMER, the accuracy was 84.49%. To further verify the model's accuracy and credibility, multi-classification experiments based on ten-fold cross-validation were conducted; their evaluation indexes are all higher than those of other models. The results show that the multi-branch feature fusion deep learning model based on an attention mechanism has strong fitting and generalization ability and can solve nonlinear modeling problems, making it an effective emotion recognition method. It is therefore helpful for the diagnosis and treatment of nervous system diseases and is expected to be applied in emotion-based brain-computer interface systems.
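    The attention step the abstract describes, assigning a weight to each branch's features before classification, reduces to a softmax-weighted sum. A minimal numpy sketch (the real CDBA scores are learned by the network; here they are passed in):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))   # subtract max for numerical stability
    return e / e.sum()

def attention_fuse(branch_features, scores):
    """Fuse per-branch feature vectors with softmax attention weights,
    mirroring the idea of weighting each extraction channel before the
    Softmax classifier. `scores` stand in for learned attention logits."""
    weights = softmax(np.asarray(scores, dtype=float))
    stacked = np.stack(branch_features)   # (n_branches, dim)
    return weights @ stacked              # weighted sum, shape (dim,)
```

    With equal scores every branch contributes equally; as one branch's score grows, the fused vector converges to that branch's features, which is how the attention layer lets the model emphasise the most informative feature channel.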

    Continuous Capsule Network Method for Improving Electroencephalogram-Based Emotion Recognition

    Although the Capsule Network method can characterize spatial information from Electroencephalogram signals, its convolution process can cause a loss of spatial data. Therefore, this study applied the Continuous Capsule Network method to overcome problems associated with emotion recognition based on Electroencephalogram signals, using an optimal architecture with (1) 1st, 2nd, 3rd, and 4th Continuous Convolution layers of 64, 128, 256, and 64 filters, respectively, and (2) kernel sizes of 2×2×4, 2×2×64, and 2×2×128 for the 1st, 2nd, and 3rd Continuous Convolution layers, and 1×1×256 for the 4th. Several methods were also used to support the Continuous Capsule Network process, such as the Differential Entropy and 3D Cube methods for feature extraction and representation. These methods were chosen for their ability to characterize spatial and low-frequency information from Electroencephalogram signals. On the DEAP dataset, the proposed methods achieved accuracies of 91.35%, 93.67%, and 92.82% for the four categories of emotions, the two categories of arousal, and the two categories of valence, respectively. On the DREAMER dataset, they achieved 94.23%, 96.66%, and 96.05%, and on the AMIGOS dataset, 96.20%, 97.96%, and 97.32%, for the same three tasks. Doi: 10.28991/ESJ-2023-07-01-09
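    The two supporting methods named here are straightforward to sketch. Differential Entropy of a band-filtered EEG segment is usually computed under a Gaussian assumption, and the 3D Cube representation scatters per-channel DE values onto a 2D electrode grid, one plane per frequency band. Both the grid size and channel layout below are hypothetical, not the standard 10-20 map:

```python
import numpy as np

def differential_entropy(signal):
    """DE of a band signal assumed Gaussian: 0.5 * ln(2 * pi * e * var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(signal))

def build_3d_cube(de_features, channel_positions, grid=(9, 9)):
    """Arrange per-channel DE features into a sparse 2D electrode grid,
    one plane per frequency band, producing the 3D-cube input that
    convolutional/capsule models consume.

    de_features:       (n_channels, n_bands) array
    channel_positions: list of (row, col) grid coordinates per channel
    """
    n_channels, n_bands = de_features.shape
    cube = np.zeros((grid[0], grid[1], n_bands))
    for ch, (r, c) in enumerate(channel_positions):
        cube[r, c, :] = de_features[ch]   # place channel at its scalp location
    return cube
```

    Placing channels at their scalp coordinates is what lets the convolution kernels listed in the abstract (2×2×4 and so on) pick up spatially adjacent electrodes.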

    ERTNet: an interpretable transformer-based framework for EEG emotion recognition

    Background: Emotion recognition using EEG signals enables clinicians to assess patients' emotional states with precision and immediacy. However, the complexity of EEG signal data poses challenges for traditional recognition methods. Deep learning techniques effectively capture the nuanced emotional cues within these signals by leveraging extensive data. Nonetheless, most deep learning techniques lack interpretability even when they maintain accuracy. Methods: We developed an interpretable end-to-end EEG emotion recognition framework rooted in a hybrid CNN-and-transformer architecture. Specifically, temporal convolution isolates salient information from EEG signals while filtering out potential high-frequency noise, and spatial convolution discerns the topological connections between channels. Subsequently, the transformer module processes the feature maps to integrate high-level spatiotemporal features, enabling identification of the prevailing emotional state. Results: Experimental results demonstrated that our model excels at diverse emotion classification, achieving an accuracy of 74.23% ± 2.59% on the dimensional model (DEAP) and 67.17% ± 1.70% on the discrete model (SEED-V). These results surpass the performance of both CNN- and LSTM-based counterparts. Through interpretive analysis, we ascertained that the beta and gamma bands in the EEG signals exert the most significant influence on emotion recognition performance. Notably, our model can independently learn a Gaussian-like convolution kernel, effectively filtering high-frequency noise from the input EEG data. Discussion: Given its robust performance and interpretative capabilities, our proposed framework is a promising tool for EEG-driven emotion brain-computer interfaces.
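    The interpretability finding, that beta and gamma bands dominate, can be checked independently with a plain band-power estimate. A rough FFT-periodogram sketch (a stand-in for the learned filtering, not the paper's analysis code; band edges are conventional approximations):

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average power of `signal` in the [low, high) Hz band,
    estimated with a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].mean()
```

    Comparing, say, band_power(x, fs, 13, 30) against band_power(x, fs, 30, 45) gives the beta-versus-gamma contrast the interpretive analysis examines.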

    Cross-subject EEG-based emotion recognition through dynamic optimization of random forest with sparrow search algorithm

    The objective of EEG-based emotion recognition is to classify emotions by decoding signals, with potential applications in artificial intelligence and bioinformatics. Cross-subject emotion recognition is more difficult than intra-subject emotion recognition, and the poor adaptability of classification model parameters is a significant cause of its low accuracy. We propose a dynamically optimized Random Forest model based on the Sparrow Search Algorithm (SSA-RF). The number of decision trees (DTN) and the minimum number of leaves (LMN) of the RF are dynamically optimized by the SSA. Twelve features are used to construct feature combinations in order to select the optimal one. The DEAP and SEED datasets are employed to test the performance of SSA-RF. The experimental results show that SSA-RF achieves an accuracy of 76.81% for binary classification on DEAP and 75.96% for three-class classification on SEED, both higher than those of a traditional RF. This study provides new insights for the development of cross-subject emotion recognition and has significant theoretical value.
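    The optimization loop amounts to searching the (DTN, LMN) plane for the hyperparameter pair that maximizes cross-validated accuracy. The sketch below is a deliberately simplified population-based search standing in for the SSA, which additionally splits the flock into producer and scrounger roles; the objective would normally wrap RF training:

```python
import numpy as np

def optimize_hyperparams(objective, bounds, pop=10, iters=20, seed=0):
    """Population search over (DTN, LMN). Simplified stand-in for the
    Sparrow Search Algorithm.

    objective: callable(dtn, lmn) -> score to maximise
    bounds:    ((dtn_min, dtn_max), (lmn_min, lmn_max))
    """
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    flock = rng.uniform(lo, hi, size=(pop, 2))
    best, best_score = None, -np.inf
    for _ in range(iters):
        for dtn, lmn in flock:
            score = objective(int(dtn), int(lmn))
            if score > best_score:
                best, best_score = (int(dtn), int(lmn)), score
        # move the flock toward the current best, with random jitter
        flock += 0.5 * (np.array(best) - flock) + rng.normal(0, 1, flock.shape)
        flock = np.clip(flock, lo, hi)
    return best, best_score
```

    In the paper's setting, objective(dtn, lmn) would train an RF with those parameters and return cross-subject validation accuracy; the dynamic re-optimization is what compensates for the parameter-adaptability problem the abstract identifies.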

    Feature selection model based on EEG signals for assessing the cognitive workload in drivers

    In recent years, research has focused on mechanisms for assessing subjects' cognitive workload when performing activities that demand high concentration, such as driving a vehicle. These mechanisms have employed several tools for analyzing cognitive workload, and electroencephalographic (EEG) signals have been used most frequently due to their high precision. However, one of the main challenges in working with EEG signals is finding the information that is appropriate for identifying cognitive states. Here, we present GALoRIS, a new feature selection model for pattern recognition from EEG signals based on machine learning techniques. GALoRIS combines Genetic Algorithms and Logistic Regression in a new fitness function that identifies and selects the critical EEG features contributing to the recognition of high and low cognitive workloads, and structures a new dataset that optimizes the model's predictive process. We found that GALoRIS identifies data related to high and low cognitive workloads of subjects driving a vehicle using information extracted from multiple EEG signals, reducing the original dataset by more than 50% and maximizing the model's predictive capacity, achieving a precision rate greater than 90%. This work has been funded by the Ministry of Science, Innovation and Universities of Spain under grant number TRA2016-77012-R.
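    Genetic-algorithm feature selection of this kind evolves binary masks over the feature set. A tiny sketch of the loop (the actual GALoRIS fitness couples a logistic-regression model's accuracy with subset size; here the fitness function is supplied by the caller, and all parameter values are illustrative):

```python
import numpy as np

def ga_feature_select(fitness, n_features, pop=20, gens=30, seed=0):
    """Tiny genetic algorithm over binary feature masks.
    Returns the best mask found; fitness(mask) -> score to maximise."""
    rng = np.random.default_rng(seed)
    popn = rng.integers(0, 2, size=(pop, n_features))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in popn])
        order = np.argsort(scores)[::-1]
        parents = popn[order[:pop // 2]]          # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < 0.05  # bit-flip mutation
            child = np.where(flip, 1 - child, child)
            children.append(child)
        popn = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in popn])
    return popn[np.argmax(scores)]
```

    A mask with ones for roughly half the features reproduces the >50% dataset reduction the abstract reports, provided the fitness rewards small, accurate subsets.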