Exploring the contextual factors affecting multimodal emotion recognition in videos
Emotional expressions form a key part of user behavior on today's digital
platforms. While multimodal emotion recognition techniques are gaining research
attention, there is limited understanding of why visual and non-visual features
recognize emotions well in some contexts but not in others. This study analyzes
how multimodal emotion features derived from facial expressions, tone, and text
interact with two key contextual factors: i) the gender of the speaker, and ii) the duration of
the emotional episode. Using a large public dataset of 2,176 manually annotated
YouTube videos, we found that while multimodal features consistently
outperformed bimodal and unimodal features, their performance varied
significantly across different emotion, gender, and duration contexts.
Multimodal features performed markedly better for male speakers in recognizing
most emotions. Furthermore, multimodal features performed markedly better for
shorter videos than for longer ones in recognizing neutrality and happiness, but
not sadness and anger. These findings offer new insights
towards the development of more context-aware emotion recognition and
empathetic systems.
Comment: Accepted version at IEEE Transactions on Affective Computing
A Blast From the Past: Personalizing Predictions of Video-Induced Emotions using Personal Memories as Context
A key challenge in the accurate prediction of viewers' emotional responses to
video stimuli in real-world applications is accounting for person- and
situation-specific variation. An important contextual influence shaping
individuals' subjective experience of a video is the personal memories that it
triggers in them. Prior research has found that this memory influence explains
more variation in video-induced emotions than other contextual variables
commonly used for personalizing predictions, such as viewers' demographics or
personality. In this article, we show that (1) automatic analysis of text
describing viewers' video-triggered memories can account for variation in their
emotional responses, and (2) combining such an analysis with that of a
video's audiovisual content enhances the accuracy of automatic predictions. We
discuss the relevance of these findings for improving on state-of-the-art
approaches to automated affective video analysis in personalized contexts.