
    Detection of emotions in Parkinson's disease using higher order spectral features from brain's electrical activity

    Non-motor symptoms of Parkinson's disease (PD) involving cognition and emotion have received growing attention in recent years. Electroencephalogram (EEG) signals, which record the electrical activity of the central nervous system, can reflect a person's underlying emotional state. This paper presents a computational framework for distinguishing PD patients from healthy controls (HC) using emotional information extracted from the brain's electrical activity.
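
    The abstract does not specify which higher order spectral features are used; the sketch below shows one common choice in the EEG literature, a direct FFT-based bispectrum estimate summarised by a few scalar statistics. The function name, segment length and the three summary features are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bispectrum_features(eeg, seg_len=256):
    """Direct (FFT-based) bispectrum estimate averaged over short segments,
    summarised with a few scalar higher-order-spectral features. These are
    common choices in the literature, not the paper's exact feature set."""
    n_seg = len(eeg) // seg_len          # assumes len(eeg) >= seg_len
    half = seg_len // 2
    B = np.zeros((half, half), dtype=complex)
    win = np.hanning(seg_len)
    for s in range(n_seg):
        x = eeg[s * seg_len:(s + 1) * seg_len]
        X = np.fft.fft(x * win, seg_len)
        for f1 in range(half):
            f2 = np.arange(half - f1)
            # B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)], accumulated over segments
            B[f1, :half - f1] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    mag = np.abs(B) / n_seg
    mag = mag[mag > 0]                    # keep the filled triangular region
    p = mag / mag.sum()
    return {
        "sum_log_amplitude": float(np.sum(np.log(mag))),      # overall non-Gaussianity
        "bispectral_entropy": float(-np.sum(p * np.log(p))),  # spread of the bispectrum
        "max_amplitude": float(mag.max()),
    }

# Toy usage on 4 s of synthetic single-channel "EEG" at 128 Hz.
print(bispectrum_features(np.random.default_rng(0).standard_normal(512)))
```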

    CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis

    Recognizing the emotional state of humans from brain signals is an active research domain with several open challenges. In this research, we propose a spectrogram-image-based CNN-XGBoost fusion method for recognizing three dimensions of emotion, namely arousal (calm or excitement), valence (positive or negative feeling) and dominance (without control or empowered). We used a benchmark dataset called DREAMER, where the EEG signals were collected across multiple stimuli along with self-evaluation ratings. In our proposed method, we first calculate the Short-Time Fourier Transform (STFT) of the EEG signals and convert them into RGB images to obtain the spectrograms. Then we use a two-dimensional Convolutional Neural Network (CNN) to train the model on the spectrogram images and retrieve the features from the trained layers of the CNN using a dense layer of the neural network. We apply an Extreme Gradient Boosting (XGBoost) classifier to the extracted CNN features to classify the signals into arousal, valence and dominance levels of human emotion. We compare our results with feature fusion-based state-of-the-art approaches to emotion recognition. To do this, we applied various feature extraction techniques to the signals, including the Fast Fourier Transform, Discrete Cosine Transform, Poincaré features, Power Spectral Density, Hjorth parameters and some statistical features. Additionally, we use Chi-square and Recursive Feature Elimination techniques to select the discriminative features. We form the feature vectors by applying feature-level fusion, and apply Support Vector Machine (SVM) and Extreme Gradient Boosting (XGBoost) classifiers to the fused features to classify different emotion levels. The performance study shows that the proposed spectrogram-image-based CNN-XGBoost fusion method outperforms the feature fusion-based SVM and XGBoost methods. The proposed method obtained accuracies of 99.712% for arousal, 99.770% for valence and 99.770% for dominance in human emotion detection.
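
    A minimal sketch of the described pipeline follows: STFT spectrograms of the EEG, a small 2D CNN whose penultimate dense layer serves as a feature extractor, and an XGBoost classifier trained on those features. Layer sizes, the use of a single channel with grayscale (rather than RGB) spectrograms, and the synthetic stand-in for DREAMER are all assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from scipy.signal import stft
import tensorflow as tf
from xgboost import XGBClassifier

def eeg_to_spectrogram(signal, fs=128, img_size=64):
    """STFT magnitude of one EEG channel, resized to a square 'image'.
    Window length and image size are illustrative choices."""
    _, _, Z = stft(signal, fs=fs, nperseg=64)
    spec = np.log1p(np.abs(Z))
    return tf.image.resize(spec[..., None], (img_size, img_size)).numpy()

def build_cnn(img_size=64, n_classes=2):
    """Small 2D CNN; the penultimate dense layer is used as a feature extractor."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input((img_size, img_size, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu", name="feat"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

# --- toy end-to-end run on synthetic data (stand-in for DREAMER) ---
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 128 * 4))           # 40 trials, 4 s at 128 Hz
y = rng.integers(0, 2, 40)                           # e.g. low/high arousal
X_img = np.stack([eeg_to_spectrogram(x) for x in X_raw])

cnn = build_cnn()
cnn.fit(X_img, y, epochs=2, verbose=0)

# Extract dense-layer features and hand them to XGBoost, as in the fusion scheme.
feat_model = tf.keras.Model(cnn.input, cnn.get_layer("feat").output)
X_feat = feat_model.predict(X_img, verbose=0)
clf = XGBClassifier(n_estimators=50).fit(X_feat, y)
print("train accuracy:", clf.score(X_feat, y))
```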

    Emotions Detection based on a Single-electrode EEG Device

    The study of emotions using multiple EEG channels is a widespread practice in research related to brain-computer interfaces (BCIs). To date, few studies in the literature have used a reduced number of channels, and those that detect emotions with fewer channels tend to report lower accuracy than the rest. Detecting emotions from a single EEG channel while classifying them with an accuracy comparable to studies using a large number of channels is therefore of particular interest in this research framework. This article uses the NeuroSky MindWave device, which has a single electrode to acquire the EEG signal, together with Matlab software and IBM SPSS Modeler, which process and classify the signals respectively. The accuracy obtained in emotion detection is remarkable relative to the cost of the hardware dedicated to acquiring the EEG signal.

    Assessing the Effectiveness of Automated Emotion Recognition in Adults and Children for Clinical Investigation

    Recent success stories in automated object or face recognition, partly fuelled by deep learning artificial neural network (ANN) architectures, have led to the advancement of biometric research platforms and, to some extent, the resurrection of Artificial Intelligence (AI). In line with this general trend, inter-disciplinary approaches have been taken to automate the recognition of emotions in adults or children for the benefit of various applications, such as the identification of children's emotions prior to a clinical investigation. Within this context, it turns out that automating emotion recognition is far from straightforward, with several challenges arising for both science (e.g., methodology underpinned by psychology) and technology (e.g., the iMotions biometric research platform). In this paper, we present a methodology, an experiment and interesting findings, which raise the following research questions for the recognition of emotions and attention in humans: a) the adequacy of well-established techniques such as the International Affective Picture System (IAPS), b) the adequacy of state-of-the-art biometric research platforms, c) the extent to which emotional responses may differ between children and adults. Our findings, and first attempts to answer some of these research questions, are all based on a mixed sample of adults and children who took part in the experiment, resulting in a statistical analysis of numerous variables. These variables relate to participants' responses, captured both automatically and interactively, to a sample of IAPS pictures.

    Automatic recognition of personality profiles using EEG functional connectivity during emotional processing

    Personality is the characteristic set of an individual's behavioral and emotional patterns that evolves from biological and environmental factors. Recognizing personality profiles is crucial in making human-computer interaction (HCI) applications realistic, more focused, and user friendly. The ability to recognize personality using neuroscientific data underpins the neurobiological basis of personality. This paper aims to automatically recognize personality by combining scalp electroencephalogram (EEG) data and machine learning techniques. As resting-state EEG has not so far proven effective for predicting personality, we used EEG recordings elicited during emotion processing. This study was based on data from the AMIGOS dataset, reflecting the responses of 37 healthy participants. Brain networks and graph-theoretical parameters were extracted from cleaned EEG signals, while each trait score was dichotomized into low and high levels using the k-means algorithm. A feature selection algorithm was then used to reduce the feature set to the 10 best features describing each trait separately. Support vector machines (SVM) were finally employed to classify each instance. Our method achieved a classification accuracy of 83.8% for extraversion, 86.5% for agreeableness, 83.8% for conscientiousness, 83.8% for neuroticism, and 73% for openness.
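
    A minimal sketch of the overall pipeline described above, using synthetic data in place of AMIGOS: a functional-connectivity graph per trial, a few graph-theoretical features, k-means dichotomisation of a trait score into low/high, feature selection, and an SVM. The correlation-based connectivity, the threshold, and the chosen graph measures are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def connectivity_graph_features(trial, thresh=0.3):
    """Build a functional-connectivity graph from one trial (channels x samples)
    using absolute Pearson correlation, then summarise it with a few
    graph-theoretical measures. Metric, threshold and measures are assumptions."""
    C = np.abs(np.corrcoef(trial))
    np.fill_diagonal(C, 0.0)
    G = nx.from_numpy_array((C > thresh).astype(int))
    return [nx.density(G), nx.average_clustering(G), nx.global_efficiency(G)]

# --- toy end-to-end run with synthetic data standing in for AMIGOS ---
rng = np.random.default_rng(1)
n_subj, n_ch, n_samp = 37, 14, 512

def synth_trial():
    # Channels share a common source with random gains, so correlations vary.
    common = rng.standard_normal(n_samp)
    gains = rng.uniform(0.0, 1.5, (n_ch, 1))
    return gains * common + rng.standard_normal((n_ch, n_samp))

X = np.array([connectivity_graph_features(synth_trial()) for _ in range(n_subj)])
trait = rng.uniform(1, 7, n_subj)                   # e.g. an extraversion score

# Dichotomise the continuous trait score into low/high with k-means, as described.
y = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(trait.reshape(-1, 1))

# The paper keeps the 10 best features; only 3 are computed in this sketch.
X_sel = SelectKBest(f_classif, k=min(3, X.shape[1])).fit_transform(X, y)
print("CV accuracy:", cross_val_score(SVC(), X_sel, y, cv=3).mean())
```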