
    Optimal set of EEG features for emotional state classification and trajectory visualization in Parkinson's disease

    In addition to classic motor signs and symptoms, individuals with Parkinson's disease (PD) are characterized by emotional deficits. Ongoing brain activity can be recorded by electroencephalography (EEG) to discover the links between emotional states and brain activity. This study utilized machine-learning algorithms to categorize emotional states in PD patients compared with healthy controls (HC) using EEG. Twenty non-demented PD patients and 20 healthy age-, gender-, and education-matched controls viewed happiness, sadness, fear, anger, surprise, and disgust emotional stimuli while fourteen-channel EEG was being recorded. Multimodal stimuli (combined audio and visual) were used to evoke the emotions. To classify the EEG-based emotional states and visualize the changes of emotional states over time, this paper compares four kinds of EEG features for emotional state classification and proposes an approach to track the trajectory of emotion changes with manifold learning. From the experimental results using our EEG data set, we found that (a) the bispectrum feature is superior to the other three kinds of features, namely power spectrum, wavelet packet, and nonlinear dynamical analysis; (b) higher frequency bands (alpha, beta, and gamma) play a more important role in emotion activities than lower frequency bands (delta and theta) in both groups; and (c) the trajectory of emotion changes can be visualized by reducing subject-independent features with manifold learning. This provides a promising way of visualizing a patient's emotional state in real time and leads to a practical system for noninvasive assessment of the emotional impairments associated with neurological disorders
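    As a concrete illustration of the bispectrum feature the abstract singles out, the sketch below estimates the mean bispectrum magnitude of one EEG epoch by the direct FFT method with segment averaging. This is a minimal sketch, not the authors' exact pipeline; the sampling rate, segment length, and frequency grid are illustrative assumptions.

```python
# Minimal sketch of a bispectrum-derived EEG feature (direct FFT method).
# Segment length, frequency grid, and sampling rate are assumptions.
import numpy as np

def mean_bispectrum_magnitude(x, seg_len=256, n_freqs=64):
    """Average |B(f1, f2)| over a low-frequency grid, averaged across segments."""
    n_segs = len(x) // seg_len
    acc = np.zeros((n_freqs, n_freqs), dtype=complex)
    for k in range(n_segs):
        seg = x[k * seg_len:(k + 1) * seg_len]
        seg = seg - seg.mean()                      # remove DC offset
        X = np.fft.fft(seg * np.hanning(seg_len))   # windowed spectrum
        # Direct bispectrum estimate: B(f1, f2) = X(f1) X(f2) X*(f1 + f2)
        for f1 in range(n_freqs):
            for f2 in range(n_freqs):
                acc[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(acc / max(n_segs, 1)).mean()

# Example on one synthetic 10 s epoch at an assumed 128 Hz sampling rate
rng = np.random.default_rng(0)
epoch = rng.standard_normal(1280)
print(mean_bispectrum_magnitude(epoch))
```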

    Spectro-spatial Profile for Gender Identification using Emotional-based EEG Signals

    Identifying gender has become essential, especially to support automatic human-computer interface applications and to customize interactions based on affective responses. The electroencephalogram (EEG) has been adopted for recording neuronal information as waveforms from the scalp. The objective of this study was twofold: first, to identify gender from four different emotional states using spectral relative power biomarkers; second, to develop spectro-spatial profiles that afford additional information for gender identification using emotion-based EEGs. The dataset was collected from ten healthy volunteer students from the University of Vienna while they watched short emotional audio-visual clips evoking anger, happiness, sadness, and neutral emotions. The wavelet transform (WT) was used as a denoising technique, and the spectral relative power features of the delta (δ), theta (θ), alpha (α), beta (β), and gamma (γ) bands were extracted from each recorded EEG channel. In the subsequent steps, analysis of variance (ANOVA) and Pearson's correlation analysis were performed to characterize the emotion-based EEG biomarkers towards developing the spectro-spatial profile to identify gender differences. The results show that the spectral set of features may provide reliable biomarkers for identifying spectro-spatial profiles from four different emotional states. EEG biomarkers and profiles enable more comprehensive insights into various effects on human behavior and interventions on the brain. The results revealed that relative powers were generally higher in females than in males across all emotional states, with distinct bands most prominent for each emotion. The correlation analysis showed that, in females, the neutral-state band powers were correlated mainly with those of the anger state and, for one band, with sadness, whereas in males the neutral-state band powers correlated very strongly with those of the sadness and happiness states. The classification results were 89.46% for the SVM and 90% for the KNN. Therefore, the proposed system, combining WT denoising, spectral relative power markers, the spectro-spatial profile, and the SVM and KNN classifiers, plays a crucial role in characterizing emotion-based EEG signals for gender identification
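    The spectral relative power feature used above is simple to reproduce: the fraction of total power spectral density falling in each classic EEG band. A minimal sketch follows, assuming a 256 Hz sampling rate and Welch's method with 2-second windows; neither parameter comes from the paper.

```python
# Minimal sketch of per-band spectral relative power (Welch PSD).
# Sampling rate and window length are assumptions, not the paper's values.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def relative_band_powers(x, fs=256.0):
    freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))  # 2-second windows
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = psd[mask].sum() / psd.sum()        # fraction of total power
    return out

rng = np.random.default_rng(1)
print(relative_band_powers(rng.standard_normal(2560)))  # 10 s of synthetic data
```

    Computing this per channel yields the channel-by-band matrix from which a spectro-spatial profile can be assembled.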

    Integration of Wavelet and Recurrence Quantification Analysis in Emotion Recognition of Bilinguals

    Background: This study offers a robust framework for the classification of autonomic signals into five affective states during picture viewing. To this end, the following emotion categories were studied: five classes of the arousal-valence plane (5C), three classes of arousal (3A), and three categories of valence (3V). For the first time, linguality information was also incorporated into the recognition procedure. Precisely, the main objective of this paper was to present a fundamental approach for evaluating and classifying the emotions of monolingual and bilingual college students. Methods: Utilizing nonlinear dynamics, recurrence quantification measures of the wavelet coefficients were extracted. To optimize the feature space, different feature selection approaches, including generalized discriminant analysis (GDA), principal component analysis (PCA), kernel PCA, and linear discriminant analysis (LDA), were examined. Finally, considering linguality information, the classification was performed using a probabilistic neural network (PNN). Results: Using LDA and the PNN, the highest recognition rates of 95.51%, 95.7%, and 95.98% were attained for the 5C, 3A, and 3V, respectively. Considering the linguality information, a further improvement of the classification rates was accomplished. Conclusion: The proposed methodology can provide a valuable tool for discriminating affective states in practical applications within the area of human-computer interfaces
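    To make the wavelet-plus-RQA combination concrete, the sketch below decomposes a signal with a discrete wavelet transform and computes the recurrence rate, the simplest recurrence quantification measure, of each subband. This is a minimal sketch under assumed choices (db4 wavelet, 4 levels, 1-D embedding, quantile threshold), not the paper's configuration.

```python
# Minimal sketch: DWT subbands (PyWavelets) + recurrence rate per subband.
# Wavelet, decomposition level, embedding, and threshold are assumptions.
import numpy as np
import pywt

def recurrence_rate(x, eps_quantile=0.1):
    """Fraction of point pairs closer than a distance threshold."""
    d = np.abs(x[:, None] - x[None, :])          # pairwise distances (1-D embedding)
    eps = np.quantile(d, eps_quantile)           # data-driven threshold
    return (d <= eps).mean()

rng = np.random.default_rng(2)
signal = rng.standard_normal(1024)
coeffs = pywt.wavedec(signal, "db4", level=4)     # [cA4, cD4, cD3, cD2, cD1]
features = [recurrence_rate(c) for c in coeffs]   # one RQA feature per subband
print(features)
```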

    Greedy Search for Descriptive Spatial Face Features

    Facial expression recognition methods use a combination of geometric and appearance-based features. Spatial features are derived from displacements of facial landmarks and carry geometric information. These features are either selected based on prior knowledge or dimension-reduced from a large pool. In this study, we produce a large number of potential spatial features using two combinations of facial landmarks. Among these, we search for a descriptive subset of features using sequential forward selection. The chosen feature subset is used to classify facial expressions in the extended Cohn-Kanade dataset (CK+), delivering 88.7% recognition accuracy without using any appearance-based features.
    Comment: International Conference on Acoustics, Speech and Signal Processing (ICASSP), 201
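    Sequential forward selection itself is a short greedy loop: repeatedly add whichever feature most improves cross-validated accuracy, and stop when no candidate helps. Below is a minimal sketch; the linear SVM and synthetic data are illustrative stand-ins, not the paper's classifier or features.

```python
# Minimal sketch of sequential forward selection (greedy wrapper method).
# Classifier and data are illustrative stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=30, random_state=0)
clf = SVC(kernel="linear")

selected, best_score = [], 0.0
remaining = list(range(X.shape[1]))
while remaining:
    scores = [(cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean(), f)
              for f in remaining]
    score, f = max(scores)
    if score <= best_score:       # stop when no candidate improves the score
        break
    selected.append(f)
    remaining.remove(f)
    best_score = score
print(selected, round(best_score, 3))
```

    Being a wrapper method, this evaluates the actual classifier at every step, which is what makes the chosen subset "descriptive" for the task rather than merely decorrelated.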

    Data-driven multivariate and multiscale methods for brain computer interface

    This thesis focuses on the development of data-driven multivariate and multiscale methods for brain-computer interface (BCI) systems. The electroencephalogram (EEG), the most convenient means of measuring neurophysiological activity due to its noninvasive nature, is mainly considered. The nonlinearity and nonstationarity inherent in EEG and its multichannel recording nature require a new set of data-driven multivariate techniques to estimate features more accurately for enhanced BCI operation. A long-term goal is also to enable an alternative EEG recording strategy for long-term and portable monitoring. Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully data-driven adaptive tools, are considered to decompose the nonlinear and nonstationary EEG signal into a set of components which are highly localised in time and frequency. It is shown that the complex and multivariate extensions of EMD, which can exploit common oscillatory modes within multivariate (multichannel) data, can be used to accurately estimate and compare the amplitude and phase information among multiple sources, a key step in feature extraction for BCI systems. A complex extension of local mean decomposition is also introduced and its operation is illustrated on two-channel neuronal spike streams. Common spatial pattern (CSP), a standard feature extraction technique for BCI applications, is also extended to the complex domain using augmented complex statistics. Depending on the circularity or noncircularity of a complex signal, one of the complex CSP algorithms can be chosen to produce the best classification performance between two different EEG classes. Using these complex and multivariate algorithms, two cognitive brain studies are investigated for more natural and intuitive design of advanced BCI systems. Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user's attention to a sound source among a mixture of sound stimuli, aimed at improving the usefulness of hearing instruments such as hearing aids. Secondly, emotion experiments elicited by taste and taste recall are examined to determine the pleasure and displeasure evoked by a food for the implementation of affective computing. The separation between the two emotional responses is examined using real- and complex-valued common spatial pattern methods. Finally, we introduce a novel approach to brain monitoring based on EEG recordings from within the ear canal, embedded on a custom-made hearing-aid earplug. The new platform promises the possibility of both short- and long-term continuous use for standard brain monitoring and interfacing applications
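    The real-valued CSP baseline that the thesis extends can be stated in a few lines: find spatial filters maximising the variance ratio between two EEG classes via a generalised eigendecomposition of the class covariance matrices. The sketch below shows that baseline only (not the complex extension); trial counts, channel counts, and shapes are assumptions.

```python
# Minimal sketch of real-valued common spatial patterns (CSP).
# Trial shapes are assumptions; this is the classic baseline, not the
# thesis's complex-domain extension.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [np.cov(t) for t in trials]        # channel covariance per trial
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Solve Ca w = lambda (Ca + Cb) w; eigenvectors sorted by eigenvalue
    vals, vecs = eigh(Ca, Ca + Cb)
    return vecs[:, np.argsort(vals)[::-1]]        # columns = spatial filters

rng = np.random.default_rng(3)
a = rng.standard_normal((20, 8, 512))             # 20 trials, 8 channels
b = rng.standard_normal((20, 8, 512))
W = csp_filters(a, b)
log_var = np.log(np.var(W.T @ a[0], axis=1))      # typical CSP feature per trial
print(log_var.shape)
```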

    Investigation of window size in classification of EEG-emotion signal with wavelet entropy and support vector machine

    © 2015 IEEE. When dealing with patients with psychological or emotional symptoms, medical practitioners are often faced with the problem of objectively recognizing their patients' emotional state. In this paper, we approach this problem using a computer program that automatically extracts emotions from EEG signals. We extend the finding of Koelstra et al. [IEEE Trans. Affective Comput., vol. 3, no. 1, pp. 18-31, 2012] using the same dataset (i.e. DEAP: a dataset for emotion analysis using electroencephalogram, physiological and video signals), where we observed that the accuracy can be further improved using wavelet features extracted from shorter time segments. More precisely, we achieved an accuracy of 65% for both valence and arousal using the wavelet entropy of 3- to 12-second signal segments. This improvement in accuracy entails an important discovery: information on emotions contained in the EEG signal may be better described in terms of wavelets and in shorter time segments
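    The windowed wavelet-entropy feature is straightforward to sketch: split a channel into short windows, wavelet-decompose each, and take the Shannon entropy of the relative subband energies. The 6-second window, db4 wavelet, and 4-level decomposition below are assumptions in the spirit of the 3-12 s segments reported, not the paper's exact settings; DEAP's EEG is distributed at 128 Hz.

```python
# Minimal sketch of windowed wavelet entropy.
# Window length, wavelet, and level are assumptions.
import numpy as np
import pywt

def wavelet_entropy(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()                 # relative subband energies
    return -np.sum(p * np.log2(p + 1e-12))        # Shannon entropy

fs, win_s = 128, 6                                # DEAP EEG: 128 Hz (downsampled)
rng = np.random.default_rng(4)
channel = rng.standard_normal(60 * fs)            # one synthetic 60 s trial
windows = channel.reshape(-1, win_s * fs)         # non-overlapping 6 s windows
features = np.array([wavelet_entropy(w) for w in windows])
print(features.shape)                             # one entropy value per window
```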

    Emotion recognition using electroencephalogram signal

    Emotion plays an essential role in human life and is not consciously controlled. Some emotions can be easily expressed through facial expressions, speech, behavior, and gesture, but some cannot. This study investigates emotion recognition using the electroencephalogram (EEG) signal. EEG signals can detect human brain activity accurately with a high-resolution data acquisition device, as compared to other biological signals. Changes in the human brain's electrical activity occur very quickly, so a high-resolution device is required to determine the emotion precisely. In this study, we demonstrate the strength and reliability of EEG signals as an emotion recognition mechanism for four different emotions: happy, sad, fear, and calm. Data from six subjects were collected using the BrainMarker EXG device, which consists of 19 channels. The pre-processing stage was performed using a second-order low-pass Butterworth filter to remove unwanted signals. Then, two frequency bands, alpha and beta, were extracted from the signals. Finally, these samples were classified using an MLP neural network. Classification accuracy of up to 91% was achieved, and the average accuracies for calm, fear, happy, and sad were 83.5%, 87.3%, 85.83%, and 87.6%, respectively. As a proof of concept, this study proposes a system for recognizing four states of emotion (happy, sad, fear, and calm) using the EEG signal
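    The band-extraction step can be sketched with second-order Butterworth band-pass filters isolating the alpha (8-13 Hz) and beta (13-30 Hz) bands. This is a hedged sketch: the paper describes a low-pass denoising filter followed by band extraction, while the code below shows direct band-pass filtering, and the 256 Hz sampling rate is an assumption.

```python
# Minimal sketch: second-order Butterworth band-pass extraction of the
# alpha and beta bands, applied with zero-phase filtering.
# Sampling rate is an assumption.
import numpy as np
from scipy.signal import butter, filtfilt

def band_signal(x, lo, hi, fs=256.0, order=2):
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)                      # zero-phase filtering

rng = np.random.default_rng(5)
eeg = rng.standard_normal(10 * 256)               # 10 s of one synthetic channel
alpha = band_signal(eeg, 8.0, 13.0)
beta = band_signal(eeg, 13.0, 30.0)
print(alpha.var(), beta.var())                    # band variances as crude features
```

    Features of the two band signals (e.g. their variances per channel) would then feed the MLP classifier the study describes.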

    A Computation Method/Framework for High Level Video Content Analysis and Segmentation Using Affective Level Information

    Video segmentation facilitates efficient video indexing and navigation in large digital video archives. It is an important process in a content-based video indexing and retrieval (CBVIR) system. Many automated solutions perform segmentation by utilizing information about the "facts" of the video. These "facts" come in the form of labels that describe the objects captured by the camera. This type of solution was able to achieve good and consistent results for some video genres such as news programs and informational presentations. The content format of this type of video is generally quite standard, and automated solutions were designed to follow these format rules. For example, in [1], the presence of news anchor persons was used as a cue to determine the start and end of a meaningful news segment. The same cannot be said for video genres such as movies and feature films. This is because makers of this type of video utilize different filming techniques to design their videos in order to elicit certain affective responses from their targeted audience. Humans usually perform manual video segmentation by trying to relate changes in time and locale to discontinuities in meaning [2]. As a result, viewers usually have doubts about the boundary locations of a meaningful video segment due to their different affective responses. This thesis presents an entirely new view of the problem of high-level video segmentation. We developed a novel probabilistic method for affective-level video content analysis and segmentation. Our method has two stages. In the first stage, affective content labels were assigned to video shots by means of a dynamic Bayesian network (DBN). A novel hierarchical-coupled dynamic Bayesian network (HCDBN) topology was proposed for this stage. The topology was based on the pleasure-arousal-dominance (P-A-D) model of affect representation [3]. In principle, this model can represent a large number of emotions. In the second stage, the visual, audio, and affective information of the video was used to compute a statistical feature vector to represent the content of each shot. Affective-level video segmentation was achieved by applying spectral clustering to the feature vectors. We evaluated the first stage of our proposal by comparing its emotion detection ability with the existing works in the field of affective video content analysis. To evaluate the second stage, we used the time adaptive clustering (TAC) algorithm as our performance benchmark. The TAC algorithm was the best high-level video segmentation method [2]. However, it is a very computationally intensive algorithm. To accelerate its computation, we developed a modified TAC (modTAC) algorithm designed to be mapped easily onto a field programmable gate array (FPGA) device. Both the TAC and modTAC algorithms were used as performance benchmarks for our proposed method. Since affective video content is a perceptual concept, segmentation performance and human agreement rates were used as our evaluation criteria. To obtain our ground truth data and viewer agreement rates, a pilot panel study based on the work of Gross et al. [4] was conducted. Experiment results show the feasibility of our proposed method. For the first stage of our proposal, the results show that an average improvement of as high as 38% was achieved over previous works. As for the second stage, an improvement of as high as 37% was achieved over the TAC algorithm
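    The second-stage idea can be illustrated with a short sketch: spectral clustering over per-shot feature vectors, with contiguous runs of one cluster label treated as a segment. The feature dimensionality, cluster count, and boundary rule below are illustrative assumptions, not the thesis's actual configuration.

```python
# Minimal sketch: spectral clustering of per-shot feature vectors into
# segments. Feature dimensions and cluster count are assumptions.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(6)
shot_features = rng.standard_normal((120, 16))    # 120 shots x 16-dim A/V/affect features

labels = SpectralClustering(n_clusters=5, affinity="rbf",
                            random_state=0).fit_predict(shot_features)

# Segment boundaries fall where consecutive shots change cluster label
boundaries = np.flatnonzero(np.diff(labels)) + 1
print(boundaries)
```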