Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring
How to fuse multi-channel neurophysiological signals for emotion recognition is emerging as a hot research topic in the Computational Psychophysiology community. Nevertheless, prior feature-engineering-based approaches require extracting various domain-knowledge-related features at a high time cost. Moreover, traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a convolutional neural network (CNN) is used to extract task-related features and to mine inter-channel and inter-frequency correlations, while a recurrent neural network (RNN) is appended to integrate contextual information from the frame-cube sequence. Experiments are carried out on a trial-level emotion recognition task on the DEAP benchmark dataset. The results demonstrate that the proposed framework outperforms classical methods on both the valence and arousal emotional dimensions.
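The abstract above describes arranging multi-channel EEG into a sequence of "frame cubes" before the CNN/RNN stages. A minimal numpy sketch of one plausible way to build such a sequence is shown below; the channel count, sampling rate, and band edges are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Hypothetical configuration (illustrative, not the paper's exact setup):
# 32 EEG channels, 128 Hz sampling, 1-second frames.
FS = 128
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_frame(window):
    """window: (channels, samples) -> (channels, bands) band-power frame."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / FS)
    psd = np.abs(np.fft.rfft(window, axis=1)) ** 2
    return np.stack(
        [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in BANDS.values()],
        axis=1,
    )

def frame_cube_sequence(eeg, frame_len=FS):
    """eeg: (channels, total_samples) -> (n_frames, channels, bands).
    A CNN would consume each channels-by-bands frame; an RNN would then
    integrate contextual information over the frame sequence."""
    n_frames = eeg.shape[1] // frame_len
    return np.stack(
        [band_power_frame(eeg[:, i * frame_len:(i + 1) * frame_len])
         for i in range(n_frames)]
    )

rng = np.random.default_rng(0)
seq = frame_cube_sequence(rng.standard_normal((32, 5 * FS)))
print(seq.shape)  # (5, 32, 4)
```

Because inter-channel and inter-frequency structure is preserved in each frame, a 2-D convolution over the (channels, bands) axes can exploit exactly the correlations the abstract mentions.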
An Automated System for Epilepsy Detection using EEG Brain Signals based on Deep Learning Approach
Epilepsy is a neurological disorder, and electroencephalography (EEG) is a
commonly used clinical approach for its detection. Manual inspection of EEG
brain signals is a time-consuming and laborious process, which places a heavy
burden on neurologists and affects their performance. Several automatic
techniques based on traditional approaches have been proposed to assist
neurologists in detecting binary epilepsy scenarios, e.g., seizure vs.
non-seizure or normal vs. ictal. These methods do not perform well on the
ternary case, e.g., ictal vs. normal vs. inter-ictal, for which the maximum
accuracy of state-of-the-art methods is 97±1%. To overcome this problem, we propose a
system based on deep learning, which is an ensemble of pyramidal
one-dimensional convolutional neural network (P-1D-CNN) models. In a CNN model,
the bottleneck is the large number of learnable parameters. P-1D-CNN is built
on a refinement approach and has 60% fewer parameters than traditional CNN
models. Further, to overcome the limitation of a small amount of data, we
propose augmentation schemes for learning the P-1D-CNN model. In almost all
epilepsy-detection cases, the proposed system achieves an accuracy of 99.1±0.9%
on the University of Bonn dataset.
Comment: 18 pages
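The abstract above proposes augmentation schemes to compensate for small EEG datasets. A common scheme for 1-D EEG segments is overlapping-window slicing, sketched below; the window length, stride, and the 4097-sample recording length (typical of the Bonn dataset) are assumptions for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def sliding_window_augment(signal, win_len, stride):
    """Overlapping-window augmentation for a 1-D EEG segment.
    Each recording yields many shorter, partially overlapping training
    examples, multiplying the effective dataset size.
    signal: (samples,) -> (n_windows, win_len)"""
    n = (len(signal) - win_len) // stride + 1
    return np.stack([signal[i * stride:i * stride + win_len] for i in range(n)])

x = np.arange(4097, dtype=float)  # one Bonn recording is 4097 samples long
aug = sliding_window_augment(x, win_len=512, stride=64)
print(aug.shape)  # (57, 512) -- 57 training examples from one recording
```

All windows inherit the label of the source recording, so the split into train and test sets must be done at the recording level to avoid leakage between overlapping windows.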
Multi-modal Approach for Affective Computing
Throughout the past decade, many studies have classified human emotions using
only a single sensing modality such as face video, electroencephalogram (EEG),
electrocardiogram (ECG), galvanic skin response (GSR), etc. The results of
these studies are constrained by the limitations of these modalities such as
the absence of physiological biomarkers in the face-video analysis, poor
spatial resolution in EEG, the poor temporal resolution of GSR, etc. Scant
research has been conducted to compare the merits of these modalities and
understand how to best use them individually and jointly. Using the multi-modal
AMIGOS dataset, this study compares the performance of human emotion
classification using multiple computational approaches applied to face videos
and various bio-sensing modalities. Using a novel method for compensating
for the physiological baseline, we show an increase in the classification accuracy of
various approaches that we use. Finally, we present a multi-modal
emotion-classification approach in the domain of affective computing research.
Comment: Published in IEEE 40th International Engineering in Medicine and Biology Conference (EMBC) 201
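The abstract above credits a physiological-baseline compensation method for the accuracy gain. One simple form of such compensation is subtracting each trial's pre-stimulus baseline feature vector from its stimulus-period features; the sketch below uses this subtraction on synthetic data as an illustrative assumption, not the paper's exact method.

```python
import numpy as np

def compensate_baseline(stimulus_feats, baseline_feats):
    """Remove per-trial physiological offsets by subtracting the baseline
    feature vector measured before the stimulus.
    stimulus_feats, baseline_feats: (n_trials, n_feats)"""
    return stimulus_feats - baseline_feats

# Synthetic demo: a large trial-to-trial physiological offset masks a
# small, consistent emotion-related shift of +1.0.
rng = np.random.default_rng(1)
offset = rng.normal(5.0, 1.0, size=(10, 1))            # per-trial baseline level
baseline = offset + rng.normal(0, 0.1, size=(10, 3))   # pre-stimulus measurement
stimulus = offset + 1.0 + rng.normal(0, 0.1, size=(10, 3))  # stimulus period
corrected = compensate_baseline(stimulus, baseline)
print(corrected.std(axis=0) < stimulus.std(axis=0))    # offset variance removed
```

After subtraction, the trial-to-trial variance that came from the baseline disappears, leaving the emotion-related shift as the dominant signal a classifier sees.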
Exploring EEG Features in Cross-Subject Emotion Recognition
Recognizing cross-subject emotions based on brain imaging data, e.g., EEG, has always been difficult due to the poor generalizability of features across subjects. Thus, systematically exploring the ability of different EEG features to identify emotional information across subjects is crucial. Prior related work has explored this question based only on one or two kinds of features, and different findings and conclusions have been presented. In this work, we aim at a more comprehensive investigation of this question with a wider range of feature types, including 18 kinds of linear and non-linear EEG features. The effectiveness of these features was examined on two publicly accessible datasets, namely, the dataset for emotion analysis using physiological signals (DEAP) and the SJTU emotion EEG dataset (SEED). We adopted the support vector machine (SVM) approach and the "leave-one-subject-out" verification strategy to evaluate recognition performance. Using automatic feature selection methods, the highest mean recognition accuracies of 59.06% (AUC = 0.605) on the DEAP dataset and 83.33% (AUC = 0.904) on the SEED dataset were reached. Furthermore, using manually operated feature selection on the SEED dataset, we explored the importance of different EEG features in cross-subject emotion recognition from multiple perspectives, including different channels, brain regions, rhythms, and feature types. For example, we found that the Hjorth parameter of mobility in the beta rhythm achieved the best mean recognition accuracy compared to the other features. Through a pilot correlation analysis, we further examined the highly correlated features, for a better understanding of the implications hidden in those features that allow for differentiating cross-subject emotions. Various remarkable observations have been made. The results of this paper validate the possibility of exploring robust EEG features in cross-subject emotion recognition.
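The abstract above names two concrete ingredients: the Hjorth mobility feature (sqrt of the variance ratio of a signal's first derivative to the signal itself) and SVM evaluation with a leave-one-subject-out protocol. Both can be sketched compactly; the data below is synthetic, and its shapes and label scheme are illustrative assumptions, not SEED's real layout.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hjorth_mobility(x, axis=-1):
    """Hjorth mobility: sqrt(var(dx/dt) / var(x)) -- the feature reported
    to perform best (in the beta rhythm) in this study."""
    dx = np.diff(x, axis=axis)
    return np.sqrt(dx.var(axis=axis) / x.var(axis=axis))

# Synthetic stand-in for per-trial, per-channel band-filtered EEG.
rng = np.random.default_rng(0)
n_subjects, trials_per_subj, n_channels, n_samples = 5, 20, 8, 256
X, y, groups = [], [], []
for subj in range(n_subjects):
    for trial in range(trials_per_subj):
        label = trial % 2
        sig = rng.standard_normal((n_channels, n_samples))
        if label:  # make class 1 "faster" so its mobility is higher
            sig = np.diff(sig, axis=1, prepend=0.0)
        X.append(hjorth_mobility(sig, axis=1))  # one mobility value per channel
        y.append(label)
        groups.append(subj)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# Leave-one-subject-out: each fold trains on 4 subjects, tests on the held-out one,
# so the reported accuracy reflects cross-subject generalization.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)
print(scores.mean())
```

Grouping folds by subject (rather than random shuffling) is the crucial design choice here: trials from one subject never appear in both train and test sets, which is exactly why cross-subject accuracies such as 59.06% on DEAP are far below within-subject results.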