The Many Moods of Emotion
This paper presents a novel approach to the facial expression generation
problem. Building upon the assumption of the psychological community that
emotion is intrinsically continuous, we first design our own continuous emotion
representation with a 3-dimensional latent space derived from a neural network
trained on discrete emotion classification. The resulting representation can
be used to annotate large in-the-wild datasets and later to train a
Generative Adversarial Network. We first show that our model is able to map
back to discrete emotion classes with objectively and subjectively better
image quality than usual discrete approaches. We also show that we are able
to pave the larger space of possible facial expressions, generating the many
moods of emotion. Moreover, two axes in this space can be found that generate
expression changes similar to those of traditional continuous representations
such as arousal-valence. Finally, we show through visual interpretation that
the third remaining dimension is highly related to the well-known dominance
dimension from psychology.
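To make the idea concrete, here is a minimal sketch (not the authors' code) of how a continuous emotion representation can fall out of a discrete classifier: a low-dimensional bottleneck is trained under a standard classification loss, and its activations then serve as the continuous code. The architecture, layer sizes, and names below are illustrative assumptions.

```python
# Minimal sketch: a 3-D bottleneck trained on discrete emotion classification
# doubles as a continuous emotion representation. All sizes are assumptions.
import torch
import torch.nn as nn

class EmotionEncoder(nn.Module):
    def __init__(self, num_classes=7, latent_dim=3):
        super().__init__()
        self.backbone = nn.Sequential(              # stand-in for a face CNN
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 3-D bottleneck: once trained, these activations are the
        # continuous emotion code used to annotate in-the-wild data
        self.to_latent = nn.Linear(64, latent_dim)
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        z = self.to_latent(self.backbone(x))        # continuous emotion code
        return self.classifier(z), z

model = EmotionEncoder()
logits, z = model(torch.randn(8, 3, 64, 64))        # z: (8, 3) emotion codes
```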
Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives
Over the past few years, adversarial training has become an extremely active
research topic and has been successfully applied to various Artificial
Intelligence (AI) domains. Because it is a potentially crucial technique for
the development of the next generation of emotional AI systems, we herein
provide a comprehensive overview of the application of adversarial training to
affective computing and sentiment analysis. Various representative adversarial
training algorithms are explained and discussed with respect to the diverse
challenges they tackle in emotional AI systems. Further, we highlight a range
of potential future research directions. We expect that this overview will help
facilitate the development of adversarial training for affective computing and
sentiment analysis in both the academic and industrial communities.
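As one concrete instance of the family of algorithms such a survey covers, the following is a minimal sketch of FGSM-style adversarial training (Goodfellow et al.), in which inputs are perturbed along the loss gradient; for text-based sentiment models the perturbation is usually applied to embeddings rather than raw inputs. The model, data, and epsilon below are placeholders.

```python
# Minimal sketch of one FGSM-style adversarial training step; model, x, y,
# optimizer, and epsilon are placeholder assumptions for illustration.
import torch
import torch.nn.functional as F

def adversarial_step(model, x, y, optimizer, epsilon=0.01):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = (x + epsilon * grad.sign()).detach()    # perturb toward higher loss
    optimizer.zero_grad()
    # train jointly on the clean and the adversarial example
    total = (F.cross_entropy(model(x.detach()), y)
             + F.cross_entropy(model(x_adv), y))
    total.backward()
    optimizer.step()
    return total.item()
```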
Emotion Recognition by Video: A review
Video emotion recognition is an important branch of affective computing, and
its solutions can be applied in different fields such as human-computer
interaction (HCI) and intelligent medical treatment. Although the number of
papers published in the field of emotion recognition is increasing, there are
few comprehensive literature reviews covering related research on video emotion
recognition. Therefore, this paper selects articles published from 2015 to 2023
to systematize the existing trends in video emotion recognition research. In
this paper, we first introduce two typical emotion models and then review
databases frequently used for video emotion recognition, including unimodal
and multimodal databases. Next, we examine and classify the structure and
performance of modern unimodal and multimodal video emotion recognition
methods, discuss the benefits and drawbacks of each, and compare them in
detail in tables. Further, we summarize the primary difficulties currently
faced by video emotion recognition and point out the most promising future
directions, such as establishing an open benchmark database and developing
better multimodal fusion strategies. The essential objective of this paper is
to help academic and industrial researchers stay up to date with the most
recent advances and new developments in this fast-moving, high-impact field of
video emotion recognition.
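To illustrate what a multimodal fusion strategy means at its simplest, here is a toy weighted late-fusion sketch; this is an illustration assumed for context, not a method from the review, and the class labels and weights are made up.

```python
# Toy late fusion: combine per-modality class probabilities with fixed
# weights. Modalities, classes, and weights are illustrative assumptions.
import numpy as np

def late_fusion(modality_probs, weights=None):
    """modality_probs: list of (num_classes,) probability vectors,
    one per modality (e.g., visual, audio)."""
    probs = np.stack(modality_probs)
    w = np.ones(len(probs)) if weights is None else np.asarray(weights)
    fused = (w[:, None] * probs).sum(0) / w.sum()   # weighted average
    return fused.argmax(), fused

visual = np.array([0.7, 0.2, 0.1])   # e.g., P(happy), P(sad), P(neutral)
audio  = np.array([0.4, 0.5, 0.1])
print(late_fusion([visual, audio], weights=[2.0, 1.0]))
```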
Reading Between the Frames: Multi-Modal Depression Detection in Videos from Non-Verbal Cues
Depression, a prominent contributor to global disability, affects a
substantial portion of the population. Efforts to detect depression from social
media texts have been prevalent, yet only a few works have explored depression
detection from user-generated video content. In this work, we address this
research gap by proposing a simple and flexible multi-modal temporal model
capable of discerning non-verbal depression cues from diverse modalities in
noisy, real-world videos. We show that, for in-the-wild videos, using
additional high-level non-verbal cues is crucial to achieving good performance;
to this end, we extract and process audio speech embeddings, face emotion
embeddings, face, body, and hand landmarks, and gaze and blinking information.
Through extensive experiments, we show that our model achieves state-of-the-art
results by a substantial margin on three key benchmark datasets for depression
detection from video. Our code is publicly available on GitHub.
Comment: Accepted at the 46th European Conference on Information Retrieval
(ECIR 2024).
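A minimal sketch, under assumed feature sizes and architecture choices (not the released code), of a multi-modal temporal model in this spirit: per-frame features from each non-verbal cue are projected to a shared width, concatenated, and summarized over time by a GRU.

```python
# Sketch of a multi-modal temporal model: project each modality's per-frame
# features, concatenate, and pool with a GRU. All dims are assumptions.
import torch
import torch.nn as nn

class TemporalDepressionModel(nn.Module):
    def __init__(self, modality_dims, hidden=128):
        super().__init__()
        # one projection per modality (audio embeddings, face emotion
        # embeddings, landmarks, gaze/blink stats, ...)
        self.proj = nn.ModuleList(nn.Linear(d, hidden) for d in modality_dims)
        self.temporal = nn.GRU(hidden * len(modality_dims), hidden,
                               batch_first=True)
        self.head = nn.Linear(hidden, 1)        # depression logit

    def forward(self, feats):                   # feats: list of (B, T, d_i)
        fused = torch.cat([p(f) for p, f in zip(self.proj, feats)], dim=-1)
        _, h = self.temporal(fused)             # summarize the sequence
        return self.head(h[-1]).squeeze(-1)     # one logit per video

model = TemporalDepressionModel([768, 256, 134])     # made-up feature sizes
logits = model([torch.randn(2, 50, d) for d in (768, 256, 134)])
```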
MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
In this work we propose a novel model-based deep convolutional autoencoder
that addresses the highly challenging problem of reconstructing a 3D human face
from a single in-the-wild color image. To this end, we combine a convolutional
encoder network with an expert-designed generative model that serves as
decoder. The core innovation is our new differentiable parametric decoder that
encapsulates image formation analytically based on a generative model. Our
decoder takes as input a code vector with exactly defined semantic meaning that
encodes detailed face pose, shape, expression, skin reflectance and scene
illumination. Due to this new way of combining CNN-based with model-based face
reconstruction, the CNN-based encoder learns to extract semantically meaningful
parameters from a single monocular input image. For the first time, a CNN
encoder and an expert-designed generative model can be trained end-to-end in an
unsupervised manner, which renders training on very large (unlabeled)
real-world data feasible. The obtained reconstructions compare favorably to
current state-of-the-art approaches in terms of quality and richness of
representation.
Comment: International Conference on Computer Vision (ICCV) 2017 (Oral), 13
pages.
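A simplified sketch of the parametric-decoder idea: the semantic code vector is split into its parts, and geometry follows a linear morphable model so that gradients flow back to the code. The bases and dimensions below are random placeholders, and the paper's decoder additionally models skin reflectance, illumination, and a full analytic image-formation (rendering) step.

```python
# Sketch of a differentiable parametric decoder: split a semantic code into
# shape/expression/pose and decode geometry with a linear morphable model.
# Bases, dimensions, and statistics here are placeholder assumptions.
import torch

N_VERTS, DIM_SHAPE, DIM_EXPR, DIM_POSE = 5000, 80, 64, 6
mean_shape = torch.randn(3 * N_VERTS)             # placeholder model stats
shape_basis = torch.randn(3 * N_VERTS, DIM_SHAPE)
expr_basis = torch.randn(3 * N_VERTS, DIM_EXPR)

def decode(code):
    """code: (DIM_SHAPE + DIM_EXPR + DIM_POSE,) semantic parameter vector."""
    alpha, delta, pose = torch.split(code, [DIM_SHAPE, DIM_EXPR, DIM_POSE])
    # linear morphable model: mean + identity and expression offsets
    geometry = mean_shape + shape_basis @ alpha + expr_basis @ delta
    return geometry.view(N_VERTS, 3), pose        # pose would drive rendering

code = torch.randn(DIM_SHAPE + DIM_EXPR + DIM_POSE, requires_grad=True)
verts, pose = decode(code)
verts.sum().backward()      # differentiable: gradients reach the code vector
```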
M3ER: Multiplicative Multimodal Emotion Recognition Using Facial, Textual, and Speech Cues
We present M3ER, a learning-based method for emotion recognition from
multiple input modalities. Our approach combines cues from multiple
co-occurring modalities (such as face, text, and speech) and is more robust
than other methods to sensor noise in any of the individual modalities. M3ER
uses a novel, data-driven multiplicative fusion method to combine the
modalities, which learns to emphasize the more reliable cues and suppress the
others on a per-sample basis. By introducing a check step that uses Canonical
Correlation Analysis (CCA) to differentiate between ineffective and effective
modalities, M3ER is robust to sensor noise. M3ER also generates proxy features
in place of the ineffectual modalities. We demonstrate the effectiveness of
our network through experiments on two benchmark datasets, IEMOCAP and
CMU-MOSEI. We report a mean accuracy of 82.7% on IEMOCAP and 89.0% on
CMU-MOSEI, which, collectively, is an improvement of about 5% over prior work.