
    A dataset of continuous affect annotations and physiological signals for emotion analysis

    From a computational viewpoint, emotions continue to be intriguingly hard to understand. In research, direct, real-time inspection of emotions in realistic settings is not possible; discrete, indirect, post-hoc recordings are therefore the norm. As a result, proper emotion assessment remains a problematic issue. The Continuously Annotated Signals of Emotion (CASE) dataset provides a solution, as it focuses on real-time continuous annotation of emotions, as experienced by the participants, while watching various videos. For this purpose, a novel, intuitive joystick-based annotation interface was developed that allowed simultaneous reporting of valence and arousal, dimensions that are otherwise often annotated independently. In parallel, eight high-quality, synchronized physiological recordings (1000 Hz, 16-bit ADC) were made of ECG, BVP, EMG (3x), GSR (or EDA), respiration and skin temperature. The dataset consists of the physiological and annotation data from 30 participants, 15 male and 15 female, who watched several validated video stimuli. The validity of the emotion induction, as exemplified by the annotation and physiological data, is also presented. Comment: Dataset available at: https://rmc.dlr.de/download/CASE_dataset/CASE_dataset.zi
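    As an illustration of how the kind of data this abstract describes might be handled, the Python sketch below aligns 1000 Hz physiological channels with slower, continuous joystick annotations of valence and arousal. The file names and column names are assumptions made for illustration only, not the CASE dataset's actual layout.

# Minimal sketch of loading and aligning the kind of data the CASE abstract
# describes: 1000 Hz physiological channels plus continuous joystick
# annotations of valence and arousal. File and column names are assumptions.
import pandas as pd

PHYS_COLS = ["ecg", "bvp", "emg_zygo", "emg_corr", "emg_trap",
             "gsr", "respiration", "skin_temp"]   # eight channels (assumed names)

def load_participant(phys_csv: str, annot_csv: str) -> pd.DataFrame:
    """Merge physiological samples with the nearest preceding annotation."""
    phys = pd.read_csv(phys_csv).sort_values("time_ms")    # 'time_ms' + PHYS_COLS
    annot = pd.read_csv(annot_csv).sort_values("time_ms")  # 'time_ms', 'valence', 'arousal'
    # Annotations are sampled more slowly than 1000 Hz, so align each
    # physiological sample with the most recent annotation value.
    return pd.merge_asof(phys, annot, on="time_ms", direction="backward")

if __name__ == "__main__":
    df = load_participant("sub_01_physiology.csv", "sub_01_annotations.csv")
    print(df[PHYS_COLS + ["valence", "arousal"]].describe())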

    Exploring Emotion Recognition for VR-EBT Using Deep Learning on a Multimodal Physiological Framework

    Post-Traumatic Stress Disorder (PTSD) is a mental health condition that affects a growing number of people. A variety of PTSD treatment methods exist; however, current research indicates that virtual reality exposure-based treatment (VR-EBT) has become more prominent in its use. Yet the treatment method can be costly and time-consuming for clinicians and, ultimately, for the healthcare system. PTSD treatment can be delivered in a more sustainable way using virtual reality. This is accomplished by using machine learning to autonomously adapt virtual reality scene changes. The use of machine learning will also support a more efficient way of inserting positive stimuli into virtual reality scenes. Machine learning has been used in medical areas such as rare diseases, oncology, medical data classification and psychiatry. This research used a public dataset that contained physiological recordings and emotional responses. The dataset was used to train a deep neural network and a convolutional neural network to predict an individual's valence, arousal and dominance. The results presented indicate that the deep neural network had the highest overall mean bounded regression accuracy and the lowest computational time.
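    As a rough illustration of the kind of model this abstract mentions, the following Python/PyTorch sketch shows a small deep neural network that regresses valence, arousal and dominance from a vector of physiological features. The feature dimension, layer sizes and sigmoid-bounded outputs are assumptions for illustration; the paper's actual architecture and training setup are not specified here.

# Minimal sketch (not the paper's architecture) of a deep neural network that
# regresses valence, arousal and dominance from physiological features.
import torch
import torch.nn as nn

class AffectRegressor(nn.Module):
    def __init__(self, n_features: int = 32):       # feature size is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 3),        # outputs: valence, arousal, dominance
            nn.Sigmoid(),            # bounded regression: targets scaled to [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Toy training loop on random data, just to show the wiring.
model = AffectRegressor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(256, 32)             # 256 windows x 32 physiological features
y = torch.rand(256, 3)               # valence/arousal/dominance in [0, 1]
for _ in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()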

    How Does the Body Affect the Mind? Role of Cardiorespiratory Coherence in the Spectrum of Emotions

    The brain is considered to be the primary generator and regulator of emotions; however, afferent signals originating throughout the body are detected by the autonomic nervous system (ANS) and brainstem and, in turn, can modulate emotional processes. During stress and negative emotional states, levels of cardiorespiratory coherence (CRC) decrease, and a shift occurs toward sympathetic dominance. In contrast, CRC levels increase during more positive emotional states, and a shift occurs toward parasympathetic dominance. The dynamic changes in CRC that accompany different emotions can provide insights into how the activity of the limbic system and afferent feedback manifest as emotions. The authors propose that the brainstem and CRC are involved in important feedback mechanisms that modulate emotions and higher cortical areas. This mechanism may be one of many that underlie the physiological and neurological changes experienced during pranayama and meditation, and may support the use of those techniques to treat various mood disorders and reduce stress.
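    Cardiorespiratory coherence is commonly quantified as the spectral coherence between an instantaneous heart-rate series and the respiration signal around the breathing frequency. The Python sketch below illustrates one such computation on synthetic signals; the sampling rate, frequency band and signals are assumptions and do not necessarily match the authors' definition of CRC.

# One generic way to quantify cardiorespiratory coherence (CRC): the
# magnitude-squared coherence between an instantaneous heart-rate series and
# respiration in a band around typical breathing rates. Synthetic signals;
# all parameter choices are assumptions for illustration.
import numpy as np
from scipy.signal import coherence

fs = 4.0                                    # Hz, resampled HR and respiration
t = np.arange(0, 300, 1 / fs)               # five minutes of data
breathing = np.sin(2 * np.pi * 0.1 * t)     # ~6 breaths/min respiration proxy
heart_rate = 70 + 3 * np.sin(2 * np.pi * 0.1 * t + 0.5) + np.random.randn(t.size)

f, cxy = coherence(heart_rate, breathing, fs=fs, nperseg=256)
band = (f >= 0.04) & (f <= 0.26)            # band spanning typical breathing rates
print(f"Mean coherence in band: {cxy[band].mean():.2f}")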

    Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring

    How to fuse multi-channel neurophysiological signals for emotion recognition is emerging as a hot research topic in the Computational Psychophysiology community. Nevertheless, prior feature-engineering-based approaches require extracting various domain-knowledge-related features at a high time cost. Moreover, traditional fusion methods cannot fully utilise correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) is utilised to extract task-related features and to mine inter-channel and inter-frequency correlations, while a Recurrent Neural Network (RNN) is cascaded after it to integrate contextual information from the frame-cube sequence. Experiments are carried out in a trial-level emotion recognition task on the DEAP benchmark dataset. Experimental results demonstrate that the proposed framework outperforms classical methods with regard to both emotional dimensions, Valence and Arousal.
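    The following Python/PyTorch sketch illustrates the general CNN-plus-RNN fusion idea described above: a small CNN encodes each per-frame cube, and an LSTM integrates the sequence of frame encodings before classifying an emotional dimension. Input dimensions, layer sizes and the binary output are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch of CNN + RNN fusion for frame-cube sequences: a CNN encodes
# each per-frame channel-by-frequency map, an LSTM integrates the sequence.
import torch
import torch.nn as nn

class CnnRnnEmotion(nn.Module):
    def __init__(self, n_classes: int = 2):          # e.g. low/high valence
        super().__init__()
        self.cnn = nn.Sequential(                     # per-frame feature extractor
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                  # -> (batch*time, 32, 1, 1)
        )
        self.rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, height, width) frame-cube sequence
        b, t, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, 1, h, w)).reshape(b, t, 32)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])                  # classify from the last time step

# Example: 8 trials, 20 frames of 9x9 channel-frequency maps each.
logits = CnnRnnEmotion()(torch.randn(8, 20, 9, 9))
print(logits.shape)                                   # torch.Size([8, 2])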