1,936 research outputs found

    A real time classification algorithm for EEG-based BCI driven by self-induced emotions

    Background and objective: The aim of this paper is to provide an efficient, parametric, general, and fully automatic real-time classification method for electroencephalography (EEG) signals obtained from self-induced emotions. The particular characteristics of the low-amplitude signals considered (a self-induced emotion produces a signal whose amplitude is about 15% of that of a genuinely experienced emotion) require exploring and adapting strategies such as the Wavelet Transform, Principal Component Analysis (PCA), and the Support Vector Machine (SVM) for signal processing, analysis, and classification. Moreover, the method is intended for use in a multi-emotion Brain Computer Interface (BCI) and is designed with this constraint in mind. Method: The peculiarity of the brain activation requires ad hoc signal processing by wavelet decomposition, and the definition of a set of features for signal characterization in order to discriminate different self-induced emotions. The proposed method is a fully parameterized two-stage algorithm aimed at multi-class classification and may be considered within the framework of machine learning. The first stage, calibration, runs offline and is devoted to signal processing, feature extraction, and training of a classifier. The second stage runs in real time and tests the trained classifier on new data. PCA is applied to remove redundancy from the feature set, while classification of the selected features, and therefore of the signals, is performed by the SVM. Results: Experimental tests were conducted on EEG signals for a binary BCI based on self-induced disgust produced by remembering an unpleasant odor. Since the literature shows that this emotion mainly involves the right hemisphere, and in particular the T8 channel, the classification procedure is tested using T8 alone, although the average accuracy is also calculated and reported for the whole set of measured channels. Conclusions: The classification results are encouraging, with an average success rate above 90% across the examined subjects. Ongoing work applies the proposed procedure to map a larger set of emotions with EEG and to establish the EEG headset with the minimal number of channels needed to recognize a significant range of emotions, both for affective computing and for auxiliary communication tools for subjects affected by severe disabilities.
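The calibration/real-time pipeline described above (wavelet features, PCA for redundancy removal, SVM classification) can be sketched as follows. This is a minimal illustration on synthetic single-channel epochs, not the authors' implementation; the Haar decomposition, band-energy feature set, and all parameters are assumptions made for the sketch:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def haar_dwt(signal):
    """One-level Haar wavelet split into approximation and detail coefficients."""
    pairs = signal[: len(signal) // 2 * 2].reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

def band_energies(signal):
    """Energy of each sub-band after a two-level Haar decomposition."""
    a1, d1 = haar_dwt(signal)
    a2, d2 = haar_dwt(a1)
    return np.array([np.sum(c ** 2) for c in (a2, d2, d1)])

# Synthetic stand-ins for single-channel (e.g. T8) epochs of the two classes.
n, length = 40, 256
t = np.linspace(0, 40 * np.pi, length)
rest = rng.normal(0.0, 1.0, (n, length))
disgust = rng.normal(0.0, 1.0, (n, length)) + 2.0 * np.sin(t)

# Calibration stage (offline): features -> PCA -> SVM training.
X = np.vstack([band_energies(s) for s in np.vstack([rest, disgust])])
y = np.array([0] * n + [1] * n)
pca = PCA(n_components=2).fit(X)
clf = SVC(kernel="rbf").fit(pca.transform(X), y)

# Real-time stage: transform and classify a new, unseen epoch.
new_epoch = rng.normal(0.0, 1.0, length) + 2.0 * np.sin(t)
pred = int(clf.predict(pca.transform(band_energies(new_epoch)[None, :]))[0])
```

Only the cheap transform-and-predict path runs in the real-time stage; all the expensive fitting happens once during calibration, which is what makes per-epoch classification fast.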

    Brain electrical activity discriminant analysis using Reproducing Kernel Hilbert spaces

    A deep and adequate understanding of human brain function has long been an objective for interdisciplinary teams of scientists. Different types of acquisition technologies allow capturing particular data related to brain activity. The most commonly used strategies rely on the brain's electrical activity, where neuronal interactions are reflected on the scalp and recorded via electrode arrays as time series. Processing this type of brain electrical activity (BEA) data poses challenges that should be addressed carefully due to its intrinsic properties: BEA is known to have nonstationary behavior and a high degree of variability depending on the stimulus or response being addressed.
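The Reproducing Kernel Hilbert space idea in the title can be made concrete: a kernel function implicitly maps each BEA segment into an RKHS, and discriminant analysis there reduces to computations on the kernel (Gram) matrix. A minimal sketch on synthetic segments, assuming an RBF kernel and an arbitrary bandwidth:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Synthetic stand-ins for short BEA time-series segments (6 segments, 64 samples).
segments = rng.normal(0.0, 1.0, (6, 64))

# k(x, y) = exp(-gamma * ||x - y||^2) implicitly embeds each segment in an
# infinite-dimensional RKHS; discriminant computations then operate on this
# Gram matrix rather than on any explicit feature map.
K = rbf_kernel(segments, gamma=1.0 / 64)

# A valid kernel matrix must be symmetric positive semi-definite.
eigvals = np.linalg.eigvalsh(K)
```

The positive semi-definiteness checked here is what guarantees that an RKHS with this kernel actually exists (Mercer's condition).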

    EEG-based multi-modal emotion recognition using bag of deep features: An optimal feature selection approach

    Much attention has been paid to recognizing human emotions from electroencephalogram (EEG) signals using machine learning. Recognizing emotions is a challenging task due to the non-linear nature of the EEG signal. This paper presents an advanced signal processing method using a deep neural network (DNN) for emotion recognition based on EEG signals. The spectral and temporal components of the raw EEG signal are first retained in a 2D spectrogram before feature extraction. The pre-trained AlexNet model is used to extract raw features from the 2D spectrogram of each channel. To reduce the feature dimensionality, a spatial- and temporal-based bag of deep features (BoDF) model is proposed. A vocabulary consisting of 10 cluster centers per class is computed using the k-means clustering algorithm. Lastly, the emotion of each subject is represented by a histogram over the vocabulary set collected from the raw features of a single channel. Features extracted with the proposed BoDF model have considerably smaller dimensions. The proposed model achieves better classification accuracy than recently reported work when validated on the SJTU SEED and DEAP data sets. For optimal classification performance, a support vector machine (SVM) and k-nearest neighbor (k-NN) are used to classify the extracted features for the different emotional states of the two data sets. The BoDF model achieves 93.8% accuracy on the SEED data set and 77.4% accuracy on the DEAP data set, which is more accurate than other state-of-the-art methods of human emotion recognition. © 2019 by the authors. Licensee MDPI, Basel, Switzerland. Funding: Higher Education Commission (HEC), Tdf/67/2017.
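The BoDF encoding described above (per-class k-means vocabularies, then a histogram of codeword assignments per recording) can be sketched as follows. This is a toy reconstruction under stated assumptions: the "deep features" are random synthetic vectors standing in for AlexNet spectrogram features, and the dimensions and counts are illustrative, not the paper's:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-ins for per-window deep features (the paper extracts these
# with pretrained AlexNet from 2D spectrograms).
def fake_deep_features(center, n_windows=30, dim=8):
    return rng.normal(center, 0.5, (n_windows, dim))

classes = [0.0, 3.0, 6.0]  # three hypothetical emotion classes
train = {c: [fake_deep_features(c) for _ in range(5)] for c in classes}

# Vocabulary: 10 k-means centers per class, concatenated into one codebook.
codebook = np.vstack([
    KMeans(n_clusters=10, n_init=10, random_state=0)
    .fit(np.vstack(train[c])).cluster_centers_
    for c in classes
])

def bodf_histogram(feats):
    """Assign each feature vector to its nearest codeword; return the
    normalized histogram (the fixed-size BoDF representation)."""
    d = np.linalg.norm(feats[:, None, :] - codebook[None, :, :], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(codebook)).astype(float)
    return counts / counts.sum()

X = np.vstack([bodf_histogram(f) for c in classes for f in train[c]])
y = np.repeat(np.arange(3), 5)
clf = SVC(kernel="linear").fit(X, y)
```

Whatever the number of feature windows per recording, the histogram always has one bin per codeword, which is how BoDF shrinks and fixes the feature dimensionality before the SVM/k-NN stage.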

    Investigating the use of pretrained convolutional neural network on cross-subject and cross-dataset EEG emotion recognition

    The electroencephalogram (EEG) holds great attraction in emotion recognition studies due to its resistance to deceptive actions, one of the most significant advantages of brain signals over visual or speech signals in the emotion recognition context. A major challenge in EEG-based emotion recognition is that EEG recordings exhibit varying distributions across different people, as well as for the same person at different times. This nonstationary nature of EEG limits its accuracy when subject independence is the priority. The aim of this study is to increase subject-independent recognition accuracy by exploiting pretrained state-of-the-art Convolutional Neural Network (CNN) architectures. Unlike similar studies that extract spectral band-power features from the EEG readings, our study uses raw EEG data after windowing, pre-adjustments, and normalization. Removing manual feature extraction from the training system avoids the risk of discarding hidden features in the raw data and helps leverage the deep neural network's power to uncover unknown features. To improve classification accuracy further, a median filter is used to eliminate false detections along a prediction interval of emotions. This method yields a mean cross-subject accuracy of 86.56% and 78.34% on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) for two and three emotion classes, respectively. It also yields a mean cross-subject accuracy of 72.81% on the Database for Emotion Analysis using Physiological Signals (DEAP) and 81.8% on the Loughborough University Multimodal Emotion Dataset (LUMED) for two emotion classes. Furthermore, the recognition model trained on the SEED dataset was tested on the DEAP dataset, yielding a mean prediction accuracy of 58.1% across all subjects and emotion classes. The results show that, in terms of classification accuracy, the proposed approach is superior to, or on par with, the reference subject-independent EEG emotion recognition studies identified in the literature, and has limited complexity due to eliminating the need for feature extraction.
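The median-filter post-processing mentioned above is simple to illustrate: isolated misclassifications in a sequence of per-window emotion predictions are replaced by the local majority. A minimal sketch (the window size and edge handling are assumptions, not taken from the paper):

```python
import numpy as np

def median_smooth(preds, k=5):
    """Median-filter a sequence of per-window class predictions to suppress
    isolated false detections (k must be odd)."""
    pad = k // 2
    padded = np.pad(preds, pad, mode="edge")  # repeat edge values at the borders
    return np.array([int(np.median(padded[i:i + k])) for i in range(len(preds))])

# Two spurious detections (0 and 2) inside a run of class 1.
raw = np.array([1, 1, 0, 1, 1, 1, 2, 1, 1])
print(median_smooth(raw))  # → [1 1 1 1 1 1 1 1 1]
```

Because emotions evolve slowly relative to the windowing rate, smoothing over a short prediction interval removes glitches without delaying genuine transitions by more than k//2 windows.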

    Automatic recognition of personality profiles using EEG functional connectivity during emotional processing

    Personality is the characteristic set of an individual's behavioral and emotional patterns that evolves from biological and environmental factors. Recognizing personality profiles is crucial to making human-computer interaction (HCI) applications realistic, more focused, and user friendly. The ability to recognize personality from neuroscientific data underpins the neurobiological basis of personality. This paper aims to recognize personality automatically by combining scalp electroencephalogram (EEG) recordings and machine learning techniques. As resting-state EEG has so far not proven efficient for predicting personality, we used EEG recordings elicited during emotion processing. This study was based on data from the AMIGOS dataset, reflecting the responses of 37 healthy participants. Brain networks and graph-theoretical parameters were extracted from the cleaned EEG signals, while each trait score was dichotomized into low and high levels using the k-means algorithm. A feature selection algorithm was then used to reduce the feature-set size to the 10 best features describing each trait separately. Support vector machines (SVM) were finally employed to classify each instance. Our method achieved a classification accuracy of 83.8% for extraversion, 86.5% for agreeableness, 83.8% for conscientiousness, 83.8% for neuroticism, and 73% for openness.
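The k-means dichotomization step described above (turning continuous trait scores into low/high labels) can be sketched as follows. The score values are hypothetical, generated for illustration, since the study's per-participant scores are not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical trait scores for 37 participants: two loose groups of values.
scores = np.concatenate([
    rng.normal(2.5, 0.4, 18),
    rng.normal(5.5, 0.4, 19),
]).reshape(-1, 1)

# Two-cluster k-means finds a data-driven cut point instead of a fixed median split.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)

# Relabel so that label 1 is always the high-score group, regardless of the
# arbitrary cluster numbering k-means returns.
high_cluster = km.cluster_centers_.ravel().argmax()
labels = (km.labels_ == high_cluster).astype(int)
```

A data-driven split like this adapts to the actual score distribution of the cohort, which matters when trait scores are skewed rather than symmetric about their median.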

    Revealing Real-Time Emotional Responses: a Personalized Assessment based on Heartbeat Dynamics

    Emotion recognition through computational modeling and analysis of physiological signals has been widely investigated in the last decade. Most of the proposed emotion recognition systems require relatively long multivariate time series and do not provide accurate real-time characterizations from short ones. To overcome these limitations, we propose a novel personalized probabilistic framework able to characterize the emotional state of a subject through the analysis of heartbeat dynamics exclusively. The study includes thirty subjects presented with a set of standardized images gathered from the International Affective Picture System, alternating levels of arousal and valence. Due to the intrinsic nonlinearity and nonstationarity of the RR-interval series, a specific point-process model was devised for instantaneous identification, considering autoregressive nonlinearities up to the third order according to the Wiener-Volterra representation and thus tracking very fast stimulus-response changes. Features from the instantaneous spectrum and bispectrum, as well as the dominant Lyapunov exponent, were extracted and used as input features to a support vector machine for classification. The results, estimating emotions every 10 seconds, achieve an overall accuracy of 79.29% in recognizing four emotional states based on the circumplex model of affect, with 79.15% on the valence axis and 83.55% on the arousal axis.
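The two binary axes reported above (valence and arousal) combine into the four states of the circumplex model. A trivial sketch of that combination; the quadrant names are illustrative labels commonly associated with the circumplex model, not terms taken from the paper:

```python
def circumplex_state(valence_positive: bool, arousal_high: bool) -> str:
    """Map the two binary axis decisions onto the four circumplex quadrants."""
    quadrants = {
        (True, True): "happy/excited",     # positive valence, high arousal
        (False, True): "angry/afraid",     # negative valence, high arousal
        (False, False): "sad/depressed",   # negative valence, low arousal
        (True, False): "relaxed/calm",     # positive valence, low arousal
    }
    return quadrants[(valence_positive, arousal_high)]
```

This decomposition is why the paper can report per-axis accuracies (79.15% and 83.55%) alongside the joint four-state accuracy: the four-class decision is correct only when both binary decisions are.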