
    Reduced emotion recognition from nonverbal cues in anorexia nervosa

    Objective: Recent models of anorexia nervosa (AN) emphasise the role of reduced emotion recognition ability (ERA) in the development and maintenance of the disorder. However, methodological limitations impede conclusions from prior research. The current study seeks to overcome these limitations by examining ERA with an audio-visual measure that focuses strictly on multimodal nonverbal cues and allows ERA to be differentiated across emotion categories. Method: Forty women with AN and 40 healthy women completed the Geneva Emotion Recognition Test. This test includes 83 video clips in which 10 actors express 14 different emotions while saying a pseudo-linguistic sentence without semantic meaning. All clips contain multimodal nonverbal cues (i.e., prosody, facial expression, gestures, and posture). Results: Patients with AN showed poorer ERA than the healthy control group (d = 0.71), particularly for emotions of negative valence (d = 0.26). Furthermore, lower body weight (r = 0.41) and longer illness duration (ρ = -0.32) were associated with poorer ERA in the AN group. Conclusions: Using an ecologically valid instrument, the findings of the study support illness models that emphasise poor ERA in AN. Directly addressing ERA in the treatment of AN with targeted interventions may be promising. Keywords: eating disorder; emotion recognition; social cognition; socio-emotional processing; theory of mind.

    Fusion of Learned Multi-Modal Representations and Dense Trajectories for Emotional Analysis in Videos

    When designing a video affective content analysis algorithm, one of the most important steps is the selection of discriminative features for the effective representation of video segments. The majority of existing affective content analysis methods either use low-level audio-visual features or generate handcrafted higher-level representations based on these low-level features. In this work, we propose to use deep learning methods, in particular convolutional neural networks (CNNs), to automatically learn and extract mid-level representations from raw data. To this end, we exploit the audio and visual modalities of videos by employing Mel-Frequency Cepstral Coefficients (MFCC) and color values in the HSV color space. We also incorporate dense-trajectory-based motion features to further enhance the performance of the analysis. By means of multi-class support vector machines (SVMs) and fusion mechanisms, music video clips are classified into one of four affective categories representing the four quadrants of the Valence-Arousal (VA) space. Results obtained on a subset of the DEAP dataset show (1) that higher-level representations perform better than low-level features, and (2) that incorporating motion information leads to a notable performance gain, independently of the chosen representation.
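
    As an illustrative aside (not taken from the paper), the sketch below shows one plausible form of the SVM-based late fusion the abstract describes: one multi-class SVM per modality, with averaged class probabilities selecting a Valence-Arousal quadrant. The feature dimensions, the random placeholder data, and the use of scikit-learn are assumptions, not the authors' implementation.

        # Hedged sketch of SVM late fusion over per-modality representations.
        # Placeholder features stand in for CNN-learned audio/visual codes and
        # dense-trajectory encodings; their dimensions are illustrative only.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_clips = 200
        quadrants = rng.integers(0, 4, size=n_clips)       # 4 VA quadrants

        audio_feat = rng.normal(size=(n_clips, 128))       # e.g., CNN on MFCCs
        visual_feat = rng.normal(size=(n_clips, 256))      # e.g., CNN on HSV frames
        motion_feat = rng.normal(size=(n_clips, 96))       # e.g., dense trajectories

        def train_modality_svm(features, labels):
            """Train one multi-class SVM per modality, with probabilities for fusion."""
            clf = SVC(kernel="rbf", probability=True, random_state=0)
            clf.fit(features, labels)
            return clf

        modalities = (audio_feat, visual_feat, motion_feat)
        classifiers = [train_modality_svm(f, quadrants) for f in modalities]

        # Late fusion: average per-modality class probabilities, take the argmax.
        probs = np.mean([clf.predict_proba(f)
                         for clf, f in zip(classifiers, modalities)], axis=0)
        predicted = probs.argmax(axis=1)
        print("training-set fusion accuracy:", (predicted == quadrants).mean())

    In practice one would evaluate on held-out clips; the point here is only the shape of the per-modality training and the score-level fusion.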

    How major depressive disorder affects the ability to decode multimodal dynamic emotional stimuli

    Most studies investigating the processing of emotions in depressed patients have reported impairments in the decoding of negative emotions. However, these studies adopted static stimuli (mostly stereotypical facial expressions corresponding to basic emotions), which do not reflect the way people experience emotions in everyday life. For this reason, this work investigates the decoding of emotional expressions in patients affected by Recurrent Major Depressive Disorder (RMDDs) using dynamic audio/video stimuli. RMDDs’ performance is compared with that of patients with Adjustment Disorder with Depressed Mood (ADs) and healthy controls (HCs). The experiments involve 27 RMDDs (16 with acute depression, RMDD-A, and 11 in a compensation phase, RMDD-C), 16 ADs and 16 HCs. The ability to decode emotional expressions is assessed through an emotion recognition task based on short audio-only, video-only and audio/video clips. The results show that ADs are significantly less accurate than HCs in decoding fear, anger, happiness, surprise and sadness. RMDD-As are significantly less accurate than HCs in decoding happiness, sadness and surprise. Finally, no significant differences were found between HCs and RMDD-Cs. Both the communication channel and the type of emotion play a significant role in limiting decoding accuracy.

    Automatic emotional state detection using facial expression dynamic in videos

    In this paper, an automatic emotion detection system is built so that a computer or machine can detect the emotional state from facial expressions in human-computer communication. First, dynamic motion features are extracted from facial expression videos; advanced machine learning methods for classification and regression are then used to predict the emotional states. The system is evaluated on two publicly available datasets, i.e. GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can automatically read the facial expressions of its user. This technique can be integrated into applications such as smart robots, interactive games and smart surveillance systems.
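
    For illustration only, and not the system described above, the sketch below pairs a simple dynamic motion descriptor (statistics of dense optical flow between consecutive frames, via OpenCV's Farnebäck method) with a support vector regressor predicting a continuous emotional dimension. The synthetic frames, the two-value descriptor, and the SVR choice are assumptions standing in for the paper's actual features and learners.

        # Hedged sketch: dynamic motion features from a face video, then regression.
        import numpy as np
        import cv2
        from sklearn.svm import SVR

        def motion_descriptor(frames):
            """Mean and std of dense optical-flow magnitude between consecutive frames."""
            mags = []
            for prev, curr in zip(frames[:-1], frames[1:]):
                flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                mags.append(np.linalg.norm(flow, axis=2))
            mags = np.stack(mags)
            return np.array([mags.mean(), mags.std()])

        # Synthetic grayscale clips stand in for facial-expression videos.
        rng = np.random.default_rng(1)
        videos = [rng.integers(0, 256, size=(10, 64, 64), dtype=np.uint8)
                  for _ in range(30)]
        valence = rng.uniform(-1.0, 1.0, size=30)          # placeholder labels

        X = np.stack([motion_descriptor(v) for v in videos])
        reg = SVR(kernel="rbf").fit(X, valence)            # continuous prediction
        print(reg.predict(X[:3]))

    A classification variant (e.g., discrete expression labels with an SVM) follows the same pattern; only the targets and the estimator change.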