28 research outputs found

    Emotion classification in Parkinson's disease by higher-order spectra and power spectrum features using EEG signals: A comparative study

    Deficits in the ability to process emotions characterize several neuropsychiatric disorders and are traits of Parkinson's disease (PD), and there is a need for a method of quantifying emotion, which is currently assessed by clinical diagnosis. Electroencephalogram (EEG) signals, being an activity of the central nervous system (CNS), can reflect the underlying true emotional state of a person. This study applied machine-learning algorithms to categorize EEG emotional states in PD patients, classifying six basic emotions (happiness, sadness, fear, anger, surprise, and disgust) in comparison with healthy controls (HC). Emotional EEG data were recorded from 20 PD patients and 20 healthy age-, education level- and sex-matched controls using multimodal (audio-visual) stimuli. The use of nonlinear features derived from the higher-order spectra (HOS) has been reported to be a promising approach for classifying emotional states. In this work, we compared the performance of k-nearest neighbor (kNN) and support vector machine (SVM) classifiers using features derived from HOS and from the power spectrum. Analysis of variance (ANOVA) showed that power spectrum and HOS based features were statistically significant among the six emotional states (p < 0.0001). Classification results show that using the selected HOS based features instead of power spectrum based features provided comparatively better accuracy for all six classes, with an overall accuracy of 70.10% ± 2.83% for PD patients and 77.29% ± 1.73% for HC in the beta (13-30 Hz) band using the SVM classifier. In addition, PD patients achieved lower accuracy in the processing of negative emotions (sadness, fear, anger, and disgust) than of positive emotions (happiness, surprise) compared with HC. These results demonstrate the effectiveness of applying machine-learning techniques to the classification of emotional states in PD patients in a user-independent manner using EEG signals. The accuracy of the system could be improved by investigating other HOS based features. This study might lead to a practical system for noninvasive assessment of the emotional impairments associated with neurological disorders.
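    As a rough illustration of the power-spectrum branch of the pipeline described above, the sketch below computes beta-band (13-30 Hz) power per EEG channel and feeds it to an SVM. It is not the authors' implementation; the sampling rate, window length, and classifier settings are assumptions.

    import numpy as np
    from scipy.signal import welch
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    FS = 128              # assumed EEG sampling rate (Hz)
    BETA = (13.0, 30.0)   # beta band reported in the abstract

    def beta_band_power(epoch):
        """epoch: (n_channels, n_samples) EEG segment -> one feature per channel."""
        freqs, psd = welch(epoch, fs=FS, nperseg=FS * 2, axis=-1)
        band = (freqs >= BETA[0]) & (freqs <= BETA[1])
        return psd[:, band].mean(axis=-1)

    def emotion_classification_accuracy(epochs, labels):
        """epochs: (n_trials, n_channels, n_samples); labels: one of the six emotions."""
        X = np.array([beta_band_power(e) for e in epochs])
        return cross_val_score(SVC(kernel="rbf", C=1.0), X, labels, cv=5).mean()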

    Emotion Recognition using Wireless Signals

    This paper demonstrates a new technology that can infer a person's emotions from RF signals reflected off his body. EQ-Radio transmits an RF signal and analyzes its reflections off a person's body to recognize his emotional state (happy, sad, etc.). The key enabler underlying EQ-Radio is a new algorithm for extracting individual heartbeats from the wireless signal at an accuracy comparable to on-body ECG monitors. The resulting beats are then used to compute emotion-dependent features which feed a machine-learning emotion classifier. We describe the design and implementation of EQ-Radio and demonstrate through a user study that its emotion recognition accuracy is on par with state-of-the-art emotion recognition systems that require a person to be hooked to an ECG monitor. Keywords: Wireless Signals; Wireless Sensing; Emotion Recognition; Affective Computing; Heart Rate Variability. Funding: National Science Foundation (U.S.); United States Air Force.
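    The heartbeat-extraction algorithm itself is EQ-Radio's core contribution and is not reproduced here; the sketch below only illustrates the downstream step of turning a recovered inter-beat-interval series into common heart-rate-variability features for an emotion classifier. The feature names and the 50 ms threshold are standard HRV conventions, assumed rather than taken from the paper.

    import numpy as np

    def hrv_features(ibi_ms):
        """ibi_ms: 1-D array of inter-beat intervals in milliseconds."""
        ibi = np.asarray(ibi_ms, dtype=float)
        diffs = np.diff(ibi)
        return {
            "mean_ibi": ibi.mean(),
            "sdnn": ibi.std(ddof=1),                 # overall variability
            "rmssd": np.sqrt(np.mean(diffs ** 2)),   # short-term variability
            "pnn50": np.mean(np.abs(diffs) > 50.0),  # fraction of large beat-to-beat changes
        }

    # e.g. hrv_features([812, 798, 845, 820, 790]) -> feature dict for one recording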

    Inter-hemispheric EEG coherence analysis in Parkinson's disease: Assessing brain activity during emotion processing

    Parkinson’s disease (PD) is not only characterized by its prominent motor symptoms but is also associated with disturbances in cognitive and emotional functioning. The objective of the present study was to investigate the influence of emotion processing on inter-hemispheric electroencephalography (EEG) coherence in PD. Multimodal emotional stimuli (happiness, sadness, fear, anger, surprise, and disgust) were presented to 20 PD patients and 30 age-, education level-, and gender-matched healthy controls (HC) while EEG was recorded. Inter-hemispheric coherence was computed from seven homologous EEG electrode pairs (AF3–AF4, F7–F8, F3–F4, FC5–FC6, T7–T8, P7–P8, and O1–O2) for the delta, theta, alpha, beta, and gamma frequency bands. In addition, subjective ratings were obtained for a representative set of the emotional stimuli. Inter-hemispherically, PD patients showed significantly lower coherence in the theta, alpha, beta, and gamma frequency bands than HC during emotion processing. No significant changes were found in delta-band coherence. We also found that PD patients were more impaired in recognizing negative emotions (sadness, fear, anger, and disgust) than relatively positive emotions (happiness and surprise). Behaviorally, PD patients did not show impairment in emotion recognition as measured by subjective ratings. These findings suggest that PD patients may have an impairment of inter-hemispheric functional connectivity (i.e., a decline in cortical connectivity) during emotion processing. This study may increase the awareness of EEG emotional-response studies in clinical practice to uncover potential neurophysiologic abnormalities.
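    For readers unfamiliar with the measure, the sketch below shows one way to compute magnitude-squared coherence for a single homologous electrode pair (e.g. F3-F4) and average it within the canonical bands; the sampling rate and window length are illustrative assumptions, not the study's settings.

    import numpy as np
    from scipy.signal import coherence

    FS = 128  # assumed sampling rate (Hz)
    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}

    def interhemispheric_coherence(left, right):
        """left, right: 1-D signals from a homologous pair, e.g. F3 and F4."""
        freqs, coh = coherence(left, right, fs=FS, nperseg=FS * 2)
        return {name: coh[(freqs >= lo) & (freqs < hi)].mean()
                for name, (lo, hi) in BANDS.items()}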

    Multimodal database of emotional speech, video and gestures

    People express emotions through different modalities. Integration of verbal and non-verbal communication channels creates a system in which the message is easier to understand. Expanding the focus to several expression forms can facilitate research on emotion recognition as well as human-machine interaction. In this article, the authors present a Polish emotional database composed of three modalities: facial expressions, body movement and gestures, and speech. The corpus contains recordings registered in studio conditions, acted out by 16 professional actors (8 male and 8 female). The data are labeled with six basic emotion categories, according to Ekman’s classification. To check the quality of performance, all recordings were evaluated by experts and volunteers. The database is available to the academic community and may be useful in studies on audio-visual emotion recognition.
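    A hypothetical metadata record for one recording in a corpus organized as described above might look like the following; the field names are illustrative assumptions, not the database's actual schema.

    from dataclasses import dataclass

    EKMAN_EMOTIONS = ("happiness", "sadness", "fear", "anger", "surprise", "disgust")

    @dataclass
    class Recording:
        actor_id: int       # 1-16 (8 male, 8 female professional actors)
        emotion: str        # one of EKMAN_EMOTIONS
        face_video: str     # path to the facial-expression recording
        body_video: str     # path to the body-movement/gesture recording
        speech_audio: str   # path to the speech recording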

    Time domain analysis of electroencephalogram (EEG) signals for word level comprehension in deaf graduates with congenital and acquired hearing loss

    No full text
    Deafness can be classified, on the basis of onset, as congenital or acquired hearing loss. The brain is a sensitive part of our body; electrical pulses from the neurons interact with each other, generating brain signals. EEG signals are extensively used for clinical diagnosis of brain anomalies, language comprehension, and performance-measurement studies. This study focuses on analysing word-level comprehension in deaf adults in the 21-25 years age group using EEG signals. The raw EEG signals were pre-processed, and the relevant time-domain linear and nonlinear features were extracted and classified using machine-learning algorithms. The approximate entropy feature was found to be best suited for assessing the comprehension of both congenital and acquired deaf adults, achieving the best classification rate for the ISL word stimuli with a maximum average accuracy of 96% in both congenital and acquired deaf adults using the SVM classifier.
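    The abstract singles out approximate entropy as the most discriminative time-domain feature; a generic implementation is sketched below, with embedding dimension m = 2 and tolerance r = 0.2 times the signal's standard deviation as commonly used defaults rather than the study's parameters.

    import numpy as np

    def approximate_entropy(x, m=2, r=None):
        """Approximate entropy of a 1-D signal x."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        if r is None:
            r = 0.2 * x.std()

        def phi(m):
            # All overlapping templates of length m
            templates = np.array([x[i:i + m] for i in range(n - m + 1)])
            # Chebyshev distance between every pair of templates
            dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
            counts = np.mean(dist <= r, axis=1)  # self-matches included, per the classic definition
            return np.mean(np.log(counts))

        return phi(m) - phi(m + 1)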

    Recognition of valence using QRS complex in children with Autism Spectrum Disorder (ASD)

    No full text
    In children with a diagnosis of autism spectrum disorder (ASD), emotional inexpressiveness continues to be a consistent issue, which can lead to unexpected emotional outbursts and meltdowns. This study utilizes the QRS complex derived from electrocardiogram (ECG) signals to investigate positive (“Like”) and negative (“Dislike”) valence using a personalized emotion-elicitation protocol with audio and audio-visual cues in children with ASD aged between 5 and 11 years. The sample consisted of 15 controls and 15 children with ASD. The acquired raw ECG signals were cleaned using various digital filters, and the valence-specific features were extracted from the QRS complex and classified using the k-nearest neighbour (KNN) and Ensemble learners. The control children exhibited clear differences between the valence states, whereas the children with ASD did not show much difference between their valence states; this was also reflected by the Ensemble learner, which achieved a maximum mean accuracy of 75.5% in controls and 70.5% in children with ASD.
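    As a rough illustration of the kind of QRS-based features involved, the sketch below detects R peaks in a filtered ECG segment and derives simple RR-interval and amplitude statistics for a KNN classifier; the sampling rate and peak-detection thresholds are assumptions, not the study's settings.

    import numpy as np
    from scipy.signal import find_peaks
    from sklearn.neighbors import KNeighborsClassifier

    FS = 256  # assumed ECG sampling rate (Hz)

    def qrs_features(ecg):
        """ecg: 1-D filtered ECG segment -> small per-trial feature vector."""
        peaks, props = find_peaks(ecg, distance=int(0.4 * FS),
                                  height=np.percentile(ecg, 90))
        rr = np.diff(peaks) / FS  # RR intervals in seconds
        return np.array([rr.mean(), rr.std(), props["peak_heights"].mean()])

    # One feature vector per trial, then e.g.:
    # clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)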

    ASYNCHRONOUS OFFSET STACKING (OS) SPREADED DATA LINK LAYER FOR WMSN UPLINK

    No full text