
    Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring

    How to fuse multi-channel neurophysiological signals for emotion recognition is emerging as a hot research topic in the community of Computational Psychophysiology. However, prior feature-engineering approaches require extracting various domain-knowledge-related features at a high time cost, and traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) extracts task-related features and mines inter-channel and inter-frequency correlations, while a concatenated Recurrent Neural Network (RNN) integrates contextual information from the frame-cube sequence. Experiments are carried out on a trial-level emotion recognition task using the DEAP benchmark dataset. The results demonstrate that the proposed framework outperforms classical methods on both the Valence and Arousal emotional dimensions.
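    The hybrid architecture described above can be sketched compactly. Below is a minimal, illustrative PyTorch version (not the authors' code): a small CNN summarises each channel-by-frequency frame cube, and an LSTM integrates the resulting sequence. All layer sizes, the number of frequency bands, and the 9x9 electrode-grid layout are assumptions.

```python
# Minimal sketch of a CNN+RNN hybrid over EEG "frame cubes" (assumed shapes).
import torch
import torch.nn as nn

class CnnRnnEmotionNet(nn.Module):
    def __init__(self, in_bands=4, hidden=64, n_classes=2):
        super().__init__()
        # CNN over each frame: in_bands frequency maps on a 9x9 electrode grid
        self.cnn = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (batch*time, 32, 1, 1)
        )
        # RNN integrates contextual information across the frame sequence
        self.rnn = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # e.g. low/high valence

    def forward(self, x):                  # x: (batch, time, bands, 9, 9)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))  # fold time into batch for the CNN
        feats = feats.view(b, t, 32)       # back to (batch, time, features)
        _, (h, _) = self.rnn(feats)
        return self.head(h[-1])            # logits from the last hidden state

logits = CnnRnnEmotionNet()(torch.randn(8, 20, 4, 9, 9))  # smoke test
print(logits.shape)  # torch.Size([8, 2])
```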

    Feature extraction based on bio-inspired model for robust emotion recognition

    Emotional state identification is an important issue in achieving more natural speech-interactive systems. Ideally, these systems should also be able to work in real environments, in which some kind of noise is generally present. Several bio-inspired representations have been applied to artificial systems for speech processing under noise conditions. In this work, an auditory signal representation is used to obtain a novel bio-inspired set of features for emotional speech signals. These characteristics, together with other spectral and prosodic features, are used for emotion recognition under noise conditions. Neural models were trained as classifiers and the results were compared to the well-known mel-frequency cepstral coefficients. Results show that the proposed representations make it possible to significantly improve the robustness of an emotion recognition system. The results were also validated in a speaker-independent scheme and with two emotional speech corpora.
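    For context, here is a minimal sketch of the MFCC baseline that the bio-inspired features are compared against, assuming librosa is available; the auditory representation itself is not reproduced, and the file name, sample rate, and summary statistics are placeholders.

```python
# Utterance-level MFCC baseline features (illustrative only).
import numpy as np
import librosa

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    # Summarise frame-level coefficients into one utterance-level vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

feats = mfcc_features("utterance.wav")  # hypothetical input file
print(feats.shape)                      # (26,)
```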

    Early detection of dysphoria using electroencephalogram affective modelling

    Dysphoria is a trigger point for maladjusted individuals who cannot cope with disappointments and crushed expectations, resulting in negative emotions if it is not detected early. Individuals who suffer from dysphoria tend to deny their mental state; they try to hide, suppress, or ignore the symptoms, which makes them feel worse, unwanted, and unloved. Psychologists and psychiatrists identify dysphoria using standardized instruments such as questionnaires and interviews. These methods boast a high success rate, but the limited number of trained psychologists and psychiatrists and the small number of health institutions focused on mental health limit access to early detection. In addition, the negative connotations and taboo surrounding dysphoria discourage the public from openly seeking help. An alternative approach to collecting 'pure' data is proposed in this paper: brain signals are captured using the electroencephalogram and used as input to a machine learning approach that detects negative emotions. The experimental results show that participants who scored as severely dysphoric registered the 'fear' emotion even before stimuli were presented, during the eyes-closed phase. This finding is crucial to further understanding the effects of dysphoria and can be used to study the correlation between dysphoria and negative emotions.

    ACII 2009: Affective Computing and Intelligent Interaction. Proceedings of the Doctoral Consortium 2009

    Multimodal Emotion Recognition Model using Physiological Signals

    As an important field of research in human-machine interaction, emotion recognition based on physiological signals has become a research hotspot. Motivated by the outstanding performance of deep learning approaches in recognition tasks, we propose a Multimodal Emotion Recognition Model consisting of a 3D convolutional neural network, a 1D convolutional neural network, and a biologically inspired multimodal fusion model that integrates multimodal information at the decision level. We use this model to classify four emotional regions of the arousal-valence plane, i.e., low arousal and low valence (LALV), high arousal and low valence (HALV), low arousal and high valence (LAHV), and high arousal and high valence (HAHV), in the DEAP and AMIGOS datasets. The 3D CNN and 1D CNN perform emotion recognition from electroencephalogram (EEG) signals and peripheral physiological signals respectively, achieving accuracies of 93.53% and 95.86% with the original EEG signals in the two datasets. Compared with single-modal recognition, the multimodal fusion model improves the accuracy of emotion recognition by 5%-25%, and the fusion of EEG signals (decomposed into four frequency bands) with peripheral physiological signals achieves accuracies of 95.77% and 97.27%, and 91.07% and 99.74%, in the two datasets respectively. Integrating EEG signals and peripheral physiological signals, the model reaches the highest accuracy of about 99% in both datasets, which shows that the proposed method has certain advantages in solving emotion recognition tasks.
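    The decision-level fusion step can be illustrated with a short sketch. The version below is an assumption, not the paper's biologically inspired fusion model: it simply takes a weighted combination of the per-modality class probabilities over the four arousal/valence regions, with made-up weights and inputs.

```python
# Illustrative decision-level fusion of two modality-specific classifiers.
import numpy as np

def fuse_decisions(p_eeg, p_periph, w_eeg=0.6, w_periph=0.4):
    """Weighted sum of softmax outputs over LALV, HALV, LAHV, HAHV."""
    fused = w_eeg * p_eeg + w_periph * p_periph
    return fused.argmax(axis=-1)  # 0..3 -> LALV, HALV, LAHV, HAHV

p_eeg = np.array([[0.1, 0.2, 0.6, 0.1]])     # dummy 3D-CNN (EEG) output
p_periph = np.array([[0.2, 0.1, 0.5, 0.2]])  # dummy 1D-CNN (peripheral) output
print(fuse_decisions(p_eeg, p_periph))       # [2] -> LAHV
```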

    A LightGBM-Based EEG Analysis Method for Driver Mental States Classification

    Fatigue driving can easily lead to road traffic accidents and bring great harm to individuals and families. Recently, electroencephalography (EEG)-based physiological and brain activities for fatigue detection have been increasingly investigated. However, finding an effective method or model to timely and efficiently detect drivers' mental states still remains a challenge. In this paper, we combine common spatial pattern (CSP) features with a proposed lightweight classifier, LightFD, based on the gradient boosting framework, for EEG mental-state identification. Comparable results with traditional classifiers, such as support vector machine (SVM), convolutional neural network (CNN), gated recurrent unit (GRU), and large margin nearest neighbor (LMNN), show that the proposed model achieves better classification performance as well as better decision efficiency. Furthermore, we test and validate that LightFD has better transfer learning performance in EEG classification of driver mental states. In summary, our proposed LightFD classifier performs better in real-time EEG mental-state prediction, and it is expected to have broad application prospects in practical brain-computer interaction (BCI).
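    A minimal sketch of a CSP-plus-gradient-boosting pipeline of this kind is shown below, assuming mne and lightgbm are available; LightFD itself is not public here, so a stock LGBMClassifier stands in, and the data, shapes, and labels are synthetic.

```python
# Illustrative CSP feature extraction followed by gradient boosting.
import numpy as np
from mne.decoding import CSP
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32, 256))  # epochs x EEG channels x samples
y = rng.integers(0, 2, 100)              # alert vs. fatigued (synthetic)

csp = CSP(n_components=4)                # common spatial pattern features
feats = csp.fit_transform(X, y)          # -> (100, 4) log-variance features
clf = LGBMClassifier(n_estimators=100).fit(feats, y)
print(clf.score(feats, y))               # training accuracy of the sketch
```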