152 research outputs found

    Mini review: Challenges in EEG emotion recognition

    Get PDF
    Electroencephalography (EEG) stands as a pioneering tool at the intersection of neuroscience and technology, offering unprecedented insights into human emotions. Through this comprehensive review, we explore the challenges and opportunities associated with EEG-based emotion recognition. While recent literature suggests promising high accuracy rates, these claims necessitate critical scrutiny for their authenticity and applicability. The article highlights the significant challenges in generalizing findings from a multitude of EEG devices and data sources, as well as the difficulties in data collection. Furthermore, the disparity between controlled laboratory settings and genuine emotional experiences presents a paradox within the paradigm of emotion research. We advocate for a balanced approach, emphasizing the importance of critical evaluation, methodological standardization, and acknowledging the dynamism of emotions for a more holistic understanding of the human emotional landscape.

    EEG Based Emotion Prediction with Neural Network Models

    Get PDF
    The term "emotion" refers to an individual's response to an event, person, or condition. In recent years, there has been an increase in the number of papers studying emotion estimation. In this study, a dataset based on three different emotions, used to classify feelings from EEG brainwaves, has been analysed. In the dataset, six film clips were used to elicit positive and negative emotions from a male and a female subject; no trigger was used to elicit a neutral mood. Various classification approaches have been applied to the dataset, including MLP, SVM, PNN, KNN, and decision tree methods. According to the researchers, the bagged tree technique, applied to this dataset for the first time, achieved a 98.60 percent success rate. In addition, the dataset was classified using the PNN approach, which achieved a success rate of 94.32 percent.
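    The bagged tree approach described above can be sketched with scikit-learn; the feature matrix and labels below are synthetic stand-ins (not the paper's EEG dataset), so the numbers are illustrative only:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for EEG brainwave features with three emotion labels
# (positive / negative / neutral); the real dataset is not reproduced here.
X = rng.normal(size=(300, 20))
y = rng.integers(0, 3, size=300)
X[y == 1] += 1.5   # shift the classes apart so there is signal to learn
X[y == 2] -= 1.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# BaggingClassifier defaults to decision-tree base learners, i.e. a bagged tree.
clf = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"bagged-tree test accuracy: {acc:.2f}")
```

    On real EEG features the inputs would typically be per-channel band powers or similar descriptors rather than raw samples.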

    Multi-modal Approach for Affective Computing

    Full text link
    Throughout the past decade, many studies have classified human emotions using only a single sensing modality such as face video, electroencephalogram (EEG), electrocardiogram (ECG), or galvanic skin response (GSR). The results of these studies are constrained by the limitations of these modalities, such as the absence of physiological biomarkers in face-video analysis, the poor spatial resolution of EEG, and the poor temporal resolution of GSR. Scant research has been conducted to compare the merits of these modalities and understand how best to use them individually and jointly. Using the multi-modal AMIGOS dataset, this study compares the performance of human emotion classification using multiple computational approaches applied to face videos and various bio-sensing modalities. Using a novel method for compensating the physiological baseline, we show an increase in the classification accuracy of the various approaches we use. Finally, we present a multi-modal emotion-classification approach in the domain of affective computing research. (Published in the IEEE 40th International Engineering in Medicine and Biology Conference (EMBC).)
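    The paper's exact baseline-compensation method is not reproduced here; as a hedged sketch, one simple form of the idea is subtracting per-feature means computed over a pre-stimulus baseline window, so that features reflect the stimulus response rather than the subject's resting physiology. All names and data below are illustrative:

```python
import numpy as np

def baseline_compensate(trial_feats, baseline_feats):
    """Subtract the per-feature baseline mean from each trial's features.
    A simple stand-in; the paper's actual compensation method may differ."""
    return trial_feats - baseline_feats.mean(axis=0, keepdims=True)

rng = np.random.default_rng(1)
baseline = rng.normal(loc=5.0, size=(10, 4))       # pre-stimulus feature windows
trials = rng.normal(loc=5.0, size=(30, 4)) + 0.8   # stimulus adds an offset
corrected = baseline_compensate(trials, baseline)
print(round(float(corrected.mean()), 2))           # roughly the stimulus offset
```

    After compensation, the classifier sees deviations from each subject's own resting level, which is what makes the correction personalized.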

    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    Get PDF
    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement as measured by brain–computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electric or hemodynamic brain activity indicators) into a command to execute an action in the BCI application (e.g., a wheelchair, the cursor on the screen, a spelling device or a game). These tools have the advantage of having real-time access to the ongoing brain activity of the individual, which can provide insight into the user's emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research relies on the fact that they allow one to "think outside the lab". The integration of technological solutions, artificial intelligence and cognitive science has allowed, and will continue to allow, researchers to envision more and more applications for the future. The clinical and everyday uses are described with the aim of inviting readers to imagine potential further developments.

    EEG-based multi-modal emotion recognition using bag of deep features: An optimal feature selection approach

    Get PDF
    Much attention has been paid to the recognition of human emotions with the help of electroencephalogram (EEG) signals based on machine learning technology. Recognizing emotions is a challenging task due to the non-linear property of the EEG signal. This paper presents an advanced signal processing method using a deep neural network (DNN) for emotion recognition based on EEG signals. The spectral and temporal components of the raw EEG signal are first retained in a 2D spectrogram before feature extraction. The pre-trained AlexNet model is used to extract raw features from the 2D spectrogram for each channel. To reduce the feature dimensionality, a spatial- and temporal-based bag of deep features (BoDF) model is proposed. A vocabulary consisting of 10 cluster centers per class is computed using the k-means clustering algorithm. Lastly, the emotion of each subject is represented using a histogram over the vocabulary set collected from the raw features of a single channel. Features extracted with the proposed BoDF model have considerably smaller dimensions. The proposed model achieves better classification accuracy than recently reported work when validated on the SJTU SEED and DEAP data sets. For optimal classification performance, we use a support vector machine (SVM) and k-nearest neighbor (k-NN) to classify the extracted features for the different emotional states of the two data sets. The BoDF model achieves 93.8% accuracy on the SEED data set and 77.4% accuracy on the DEAP data set, which is more accurate than other state-of-the-art methods of human emotion recognition. © 2019 by the authors. Licensee MDPI, Basel, Switzerland. Funding: This research was funded by the Higher Education Commission (HEC): Tdf/67/2017.
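    The vocabulary-and-histogram step of a BoDF pipeline can be sketched as follows; the deep features here are random stand-ins for the per-channel AlexNet activations, with 10 cluster centers mirroring the paper's setup:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for deep features (e.g., AlexNet activations of per-channel spectrograms).
deep_feats = rng.normal(size=(500, 64))

# Build a 10-center vocabulary with k-means, as in the per-class cluster count above.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(deep_feats)

def bodf_histogram(channel_feats, kmeans):
    """Encode a channel's deep features as a normalized histogram of
    vocabulary (cluster-center) assignments - the bag-of-deep-features vector."""
    words = kmeans.predict(channel_feats)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

vec = bodf_histogram(deep_feats[:100], kmeans)
print(vec.shape)   # a compact 10-dimensional descriptor
```

    The dimensionality reduction comes from replacing hundreds of 64-dimensional activations with a single fixed-length histogram per channel, which is then what the SVM or k-NN classifies.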

    Exploring Emotion Recognition for VR-EBT Using Deep Learning on a Multimodal Physiological Framework

    Get PDF
    Post-Traumatic Stress Disorder (PTSD) is a mental health condition that affects a growing number of people. A variety of PTSD treatment methods exist; however, current research indicates that virtual reality exposure-based treatment has become more prominent. Yet the treatment method can be costly and time-consuming for clinicians and ultimately for the healthcare system. PTSD treatment can be delivered in a more sustainable way using virtual reality. This is accomplished by using machine learning to autonomously adapt virtual reality scene changes. The use of machine learning will also support a more efficient way of inserting positive stimuli into virtual reality scenes. Machine learning has been used in medical areas such as rare diseases, oncology, medical data classification and psychiatry. This research used a public dataset containing physiological recordings and emotional responses. The dataset was used to train a deep neural network and a convolutional neural network to predict an individual's valence, arousal and dominance. The results presented indicate that the deep neural network had the highest overall mean bounded regression accuracy and the lowest computational time.
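    Predicting valence, arousal and dominance jointly is a multi-output regression problem. As an illustrative sketch (synthetic data, and a small multilayer perceptron standing in for the paper's deep network):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Stand-in physiological feature windows and valence/arousal/dominance targets.
X = rng.normal(size=(200, 12))
W = rng.normal(size=(12, 3))
Y = X @ W + 0.1 * rng.normal(size=(200, 3))   # synthetic VAD ratings

# MLPRegressor handles multi-output targets natively: one network, three outputs.
dnn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
dnn.fit(X, Y)
print(dnn.predict(X[:1]).shape)   # one (valence, arousal, dominance) triple
```

    In a VR exposure-therapy loop, such predictions would drive scene adaptation in real time, which is why computational time matters alongside accuracy.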

    Emotion classification in Parkinson's disease by higher-order spectra and power spectrum features using EEG signals: A comparative study

    Get PDF
    Deficits in the ability to process emotions characterize several neuropsychiatric disorders and are traits of Parkinson's disease (PD), and there is a need for a method of quantifying emotion, which is currently performed by clinical diagnosis. Electroencephalogram (EEG) signals, being an activity of the central nervous system (CNS), can reflect the underlying true emotional state of a person. This study applied machine-learning algorithms to categorize EEG emotional states in PD patients, classifying six basic emotions (happiness, sadness, fear, anger, surprise and disgust) in comparison with healthy controls (HC). Emotional EEG data were recorded from 20 PD patients and 20 healthy age-, education level- and sex-matched controls using multimodal (audio-visual) stimuli. The use of nonlinear features motivated by the higher-order spectra (HOS) has been reported to be a promising approach to classifying emotional states. In this work, we compared the performance of k-nearest neighbor (kNN) and support vector machine (SVM) classifiers using features derived from HOS and from the power spectrum. Analysis of variance (ANOVA) showed that both power spectrum and HOS based features were statistically significant across the six emotional states (p < 0.0001). Classification results show that using the selected HOS based features instead of power spectrum based features provided comparatively better accuracy for all six classes, with overall accuracies of 70.10% ± 2.83% for PD patients and 77.29% ± 1.73% for HC in the beta (13-30 Hz) band using the SVM classifier. Moreover, PD patients achieved lower accuracy in the processing of negative emotions (sadness, fear, anger and disgust) than in the processing of positive emotions (happiness, surprise) compared with HC. These results demonstrate the effectiveness of applying machine learning techniques to the classification of emotional states in PD patients in a user-independent manner using EEG signals. The accuracy of the system could be improved by investigating other HOS based features. This study might lead to a practical system for noninvasive assessment of the emotional impairments associated with neurological disorders.
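    To make the power-spectrum side of such a pipeline concrete, here is a hedged sketch of extracting beta-band (13-30 Hz) Welch power per channel and training an SVM. The epochs, sampling rate and channel count below are synthetic assumptions, not the study's recordings:

```python
import numpy as np
from scipy.signal import welch
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs = 128  # assumed sampling rate (Hz); not taken from the paper
rng = np.random.default_rng(0)

def beta_band_power(eeg_epoch, fs):
    """Mean Welch power in the beta band (13-30 Hz) for each channel."""
    freqs, psd = welch(eeg_epoch, fs=fs, nperseg=fs)
    band = (freqs >= 13) & (freqs <= 30)
    return psd[:, band].mean(axis=1)

# Synthetic epochs (n_epochs, n_channels, n_samples); the second half gets
# extra 20 Hz activity so the two "emotional states" are separable.
X_raw = rng.normal(size=(60, 14, fs * 2))
t = np.arange(fs * 2) / fs
X_raw[30:] += 0.8 * np.sin(2 * np.pi * 20 * t)
y = np.array([0] * 30 + [1] * 30)

X = StandardScaler().fit_transform([beta_band_power(ep, fs) for ep in X_raw])
clf = SVC(kernel="rbf").fit(X, y)
print(round(clf.score(X, y), 2))
```

    HOS-based features (e.g., from the bispectrum) would replace `beta_band_power` here while the classifier stage stays the same.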

    EEG based Major Depressive disorder and Bipolar disorder detection using Neural Networks: A review

    Full text link
    Mental disorders represent critical public health challenges: they are leading contributors to the global burden of disease and strongly affect the social and financial welfare of individuals. This comprehensive review concentrates on two mental disorders, Major Depressive Disorder (MDD) and Bipolar Disorder (BD), covering noteworthy publications from the last ten years. There is a pressing need for phenotypic characterization of psychiatric disorders with biomarkers. Electroencephalography (EEG) signals could offer a rich signature for MDD and BD and could thus improve understanding of the pathophysiological mechanisms underlying these mental disorders. In this review, we focus on studies adopting neural networks fed by EEG signals. Among those studies, we discuss a variety of EEG-based protocols, biomarkers and public datasets for depression and bipolar disorder detection. We conclude with a discussion and recommendations that will help improve the reliability of developed models and lead to more accurate and more deterministic computational-intelligence-based systems in psychiatry. This review will prove to be a structured and valuable starting point for researchers working on depression and bipolar disorder recognition using EEG signals. (29 pages, 2 figures and 18 tables.)

    Hybrid Classification Model for Emotion Prediction from EEG Signals: A Comparative Study

    Get PDF
    This paper introduces a novel hybrid algorithm for emotion classification based on electroencephalogram (EEG) signals. The proposed hybrid model consists of two layers: the first layer includes three parallel adaptive neuro-fuzzy inference systems (ANFIS), and the second layer, called the adaptive network, comprises models such as the radial basis function neural network (RBFNN), probabilistic neural network (PNN), and ANFIS. We examined the feature distribution graphs of the dataset, which includes three emotion classes (positive, negative, and neutral), and selected the most appropriate features for classification. The three parallel ANFIS structures were trained using the selected features as input vectors, and the outputs of these models were combined to obtain a new feature vector. This feature vector was then used as the input to the adaptive network, which produced the emotion prediction. In addition, we evaluated the accuracy of the network trained using only the original features of the dataset. The hybrid structure was designed to enhance the system's performance, and the best accuracy result of 96.51% was achieved using the ANFIS-ANFIS model. Overall, this study provides a promising approach for emotion classification based on EEG signals.
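    ANFIS layers are not available in mainstream Python libraries, so as a loose structural analogue only (generic learners standing in for the three parallel ANFIS models), the two-layer idea of feeding first-layer outputs into a second-layer model can be sketched with stacking:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Three-class problem standing in for positive/negative/neutral EEG features.
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)

# First layer: three parallel base models (generic stand-ins for ANFIS);
# second layer: a model trained on the concatenated first-layer outputs.
stack = StackingClassifier(
    estimators=[("mlp", MLPClassifier(max_iter=1000, random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=3,
)
stack.fit(X, y)
print(round(stack.score(X, y), 2))
```

    The design choice in both cases is the same: the second layer learns how to weigh and combine the first-layer predictions rather than the raw features.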

    Revealing Real-Time Emotional Responses: a Personalized Assessment based on Heartbeat Dynamics

    Get PDF
    Emotion recognition through computational modeling and analysis of physiological signals has been widely investigated in the last decade. Most of the proposed emotion recognition systems require relatively long multivariate time series of records and do not provide accurate real-time characterizations from short time series. To overcome these limitations, we propose a novel personalized probabilistic framework able to characterize the emotional state of a subject through the analysis of heartbeat dynamics exclusively. The study includes thirty subjects presented with a set of standardized images gathered from the International Affective Picture System, alternating levels of arousal and valence. Due to the intrinsic nonlinearity and nonstationarity of the RR interval series, a specific point-process model was devised for instantaneous identification, considering autoregressive nonlinearities up to the third order according to the Wiener-Volterra representation, thus tracking very fast stimulus-response changes. Features from the instantaneous spectrum and bispectrum, as well as the dominant Lyapunov exponent, were extracted and used as input features to a support vector machine for classification. Results, estimating emotions every 10 seconds, achieve an overall accuracy of 79.29% in recognizing four emotional states based on the circumplex model of affect, with 79.15% on the valence axis and 83.55% on the arousal axis.
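    The full point-process Wiener-Volterra model is beyond a short sketch; as a heavily simplified illustration of the underlying idea (classifying emotional state from short windows of heartbeat dynamics), here are crude RR-interval features and an SVM, on entirely synthetic data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def rr_window_features(rr):
    """Crude short-window HRV features standing in for the paper's
    instantaneous spectrum/bispectrum descriptors: mean RR, SDNN, RMSSD."""
    rr = np.asarray(rr)
    return np.array([rr.mean(), rr.std(), np.sqrt(np.mean(np.diff(rr) ** 2))])

# Two synthetic "states": higher arousal -> faster, less variable heartbeat.
calm = [rng.normal(0.9, 0.08, size=12) for _ in range(40)]     # ~12 beats / 10 s
aroused = [rng.normal(0.6, 0.03, size=16) for _ in range(40)]  # ~16 beats / 10 s
X = np.array([rr_window_features(w) for w in calm + aroused])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```

    The point-process formulation in the paper goes much further: it yields instantaneous (per-heartbeat) estimates rather than window averages, which is what enables the 10-second update rate.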