Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives
Over the past few years, adversarial training has become an extremely active
research topic and has been successfully applied to various Artificial
Intelligence (AI) domains. Because adversarial training is a potentially
crucial technique for the development of the next generation of emotional AI
systems, we herein provide a comprehensive overview of its application to
affective computing and sentiment analysis. Various representative adversarial
training algorithms are explained and discussed, each aimed at tackling a
different challenge associated with emotional AI systems. Further, we highlight
a range of potential future research directions. We expect that this overview
will help facilitate the development of adversarial training for affective
computing and sentiment analysis in both the academic and industrial
communities.
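To make the surveyed idea concrete, below is a minimal sketch of one common adversarial training recipe for text sentiment models: perturbing word embeddings along the loss gradient, in the spirit of Miyato et al.'s adversarial training for text classification. The network architecture, dimensions, and perturbation budget eps are illustrative assumptions, not details taken from the overview above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SentimentNet(nn.Module):
        """Toy sentiment classifier; all sizes are illustrative."""
        def __init__(self, vocab_size=10000, emb_dim=128, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.GRU(emb_dim, 128, batch_first=True)
            self.head = nn.Linear(128, num_classes)

        def forward_from_embeddings(self, emb):
            _, h = self.encoder(emb)         # final hidden state: (1, batch, 128)
            return self.head(h.squeeze(0))   # logits: (batch, num_classes)

    def adversarial_step(model, tokens, labels, optimizer, eps=0.01):
        # Pass 1: gradient of the loss w.r.t. the word embeddings only.
        emb = model.embed(tokens).detach().requires_grad_(True)
        loss = F.cross_entropy(model.forward_from_embeddings(emb), labels)
        grad = torch.autograd.grad(loss, emb)[0]
        delta = eps * grad.sign()            # FGSM-style perturbation
        # Pass 2: train on clean and adversarially perturbed embeddings.
        optimizer.zero_grad()
        emb = model.embed(tokens)
        clean = F.cross_entropy(model.forward_from_embeddings(emb), labels)
        adv = F.cross_entropy(model.forward_from_embeddings(emb + delta), labels)
        (clean + adv).backward()
        optimizer.step()
        return clean.item(), adv.item()

Other variants surveyed in this space, such as norm-bounded L2 perturbations or virtual adversarial training, would slot into the same two-pass structure.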
A Comprehensive Review of Data-Driven Co-Speech Gesture Generation
Gestures that accompany speech are an essential part of natural and efficient
embodied human communication. The automatic generation of such co-speech
gestures is a long-standing problem in computer animation and is considered an
enabling technology in film, games, virtual social spaces, and for interaction
with social robots. The problem is made challenging by the idiosyncratic and
non-periodic nature of human co-speech gesture motion, and by the great
diversity of communicative functions that gestures encompass. Gesture
generation has seen surging interest recently, owing to the emergence of more
and larger datasets of human gesture motion, combined with strides in
deep-learning-based generative models that benefit from the growing
availability of data. This review article summarizes co-speech gesture
generation research, with a particular focus on deep generative models. First,
we articulate the theory describing human gesticulation and how it complements
speech. Next, we briefly discuss rule-based and classical statistical gesture
synthesis, before delving into deep learning approaches. We employ the choice
of input modalities as an organizing principle, examining systems that generate
gestures from audio, text, and non-linguistic input. We also chronicle the
evolution of the related training data sets in terms of size, diversity, motion
quality, and collection method. Finally, we identify key research challenges in
gesture generation, including data availability and quality; producing
human-like motion; grounding the gesture in the co-occurring speech in
interaction with other speakers, and in the environment; performing gesture
evaluation; and integrating gesture synthesis into applications. We
highlight recent approaches to tackling the various key challenges, as well as
the limitations of these approaches, and point toward areas of future
development.
Comment: Accepted for EUROGRAPHICS 2023
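As an illustration of the audio-conditioned systems this review organizes by input modality, here is a minimal deterministic audio-to-pose regression baseline. It is a hedged sketch, not a method from the review; systems in this literature are typically generative precisely because such regression averages over the many plausible gestures for a given speech segment. All feature and pose dimensions are assumed for illustration.

    import torch
    import torch.nn as nn

    class AudioToGesture(nn.Module):
        def __init__(self, audio_dim=26, hidden=256, pose_dim=45):
            super().__init__()
            # Bidirectional recurrence gives each output frame context from
            # both past and upcoming speech (gesture preparation/retraction).
            self.rnn = nn.GRU(audio_dim, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, pose_dim)

        def forward(self, audio_feats):
            # audio_feats: (batch, frames, audio_dim), e.g., MFCCs per frame.
            h, _ = self.rnn(audio_feats)
            return self.out(h)               # (batch, frames, pose_dim)

    model = AudioToGesture()
    mfcc = torch.randn(1, 300, 26)           # ten seconds of dummy features at 30 fps
    poses = model(mfcc)                      # predicted joint rotations per frame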
Reversible Graph Neural Network-based Reaction Distribution Learning for Multiple Appropriate Facial Reactions Generation
Generating facial reactions in a human-human dyadic interaction is complex
and highly dependent on the context, since more than one facial reaction can be
appropriate for the speaker's behaviour. This has challenged existing machine
learning (ML) methods, whose training strategies force models to reproduce a
specific (not multiple) facial reaction from each input speaker behaviour. This
paper proposes the first multiple appropriate facial reaction generation
framework that re-formulates the one-to-many mapping facial reaction generation
problem as a one-to-one mapping problem. This means that we approach this
problem by considering the generation of a distribution of the listener's
appropriate facial reactions instead of multiple different appropriate facial
reactions, i.e., 'many' appropriate facial reaction labels are summarised as
'one' distribution label during training. Our model consists of a perceptual
processor, a cognitive processor, and a motor processor. The motor processor is
implemented with a novel Reversible Multi-dimensional Edge Graph Neural Network
(REGNN). This allows us to obtain a distribution of appropriate real facial
reactions during the training process, enabling the cognitive processor to be
trained to predict the appropriate facial reaction distribution. At the
inference stage, the REGNN decodes an appropriate facial reaction by using this
distribution as input. Experimental results demonstrate that our approach
outperforms existing models in generating more appropriate, realistic, and
synchronized facial reactions. The improved performance is largely attributed
to the proposed appropriate facial reaction distribution learning strategy and
the use of a REGNN. The code is available at
https://github.com/TongXu-05/REGNN-Multiple-Appropriate-Facial-Reaction-Generation
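The core reformulation, summarising 'many' appropriate reaction labels as 'one' distribution label, can be sketched with a simple Gaussian stand-in. The code below is an illustrative simplification: it uses a plain MLP and a diagonal Gaussian, whereas the paper's motor processor is the reversible graph network (REGNN) itself; all dimensions are assumed.

    import torch
    import torch.nn as nn

    class ReactionDistributionPredictor(nn.Module):
        """Stands in for the cognitive processor: speaker features -> Gaussian."""
        def __init__(self, speaker_dim=128, reaction_dim=64):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(speaker_dim, 256), nn.ReLU())
            self.mu = nn.Linear(256, reaction_dim)
            self.log_var = nn.Linear(256, reaction_dim)

        def forward(self, speaker_feats):
            h = self.trunk(speaker_feats)
            return self.mu(h), self.log_var(h)

    def distribution_label(appropriate_reactions):
        # Summarise 'many' appropriate reaction labels, shape
        # (num_reactions, reaction_dim), as 'one' Gaussian label.
        mu = appropriate_reactions.mean(dim=0)
        var = appropriate_reactions.var(dim=0, unbiased=False).clamp_min(1e-6)
        return mu, var.log()

    def kl_loss(pred_mu, pred_log_var, tgt_mu, tgt_log_var):
        # KL divergence between predicted and target diagonal Gaussians.
        var_p, var_t = pred_log_var.exp(), tgt_log_var.exp()
        kl = 0.5 * (tgt_log_var - pred_log_var
                    + (var_p + (pred_mu - tgt_mu) ** 2) / var_t - 1)
        return kl.sum()

    # At inference, one appropriate reaction is decoded by sampling:
    # reaction = pred_mu + (0.5 * pred_log_var).exp() * torch.randn_like(pred_mu)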
Conditional Adversarial Synthesis of 3D Facial Action Units
Employing deep learning-based approaches for fine-grained facial expression
analysis, such as those involving the estimation of Action Unit (AU)
intensities, is difficult due to the lack of a large-scale dataset of real
faces with sufficiently diverse AU labels for training. In this paper, we
consider how AU-level facial image synthesis can be used to substantially
augment such a dataset. We propose an AU synthesis framework that combines the
well-known 3D Morphable Model (3DMM), which intrinsically disentangles
expression parameters from other face attributes, with models that
adversarially generate 3DMM expression parameters conditioned on given target
AU labels, in contrast to the more conventional approach of generating facial
images directly. In this way, we are able to synthesize new combinations of
expression parameters and facial images from desired AU labels. Extensive
quantitative and qualitative results on the benchmark DISFA dataset demonstrate
the effectiveness of our method on 3DMM facial expression parameter synthesis
and data augmentation for deep learning-based AU intensity estimation.
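A minimal sketch of the adversarial conditioning step described above: a generator maps noise plus target AU intensities to a 3DMM expression-parameter vector, while a discriminator scores (parameters, labels) pairs. Layer sizes and the 29-dimensional expression space are illustrative assumptions; only the 12 AU intensities mirror DISFA's annotation.

    import torch
    import torch.nn as nn

    NOISE_DIM, AU_DIM, EXPR_DIM = 64, 12, 29   # AU_DIM mirrors DISFA's 12 AUs

    G = nn.Sequential(                         # (noise, AU labels) -> expression params
        nn.Linear(NOISE_DIM + AU_DIM, 256), nn.ReLU(),
        nn.Linear(256, EXPR_DIM),
    )
    D = nn.Sequential(                         # (expression params, AU labels) -> score
        nn.Linear(EXPR_DIM + AU_DIM, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),
    )
    bce = nn.BCEWithLogitsLoss()

    def d_step(real_expr, au_labels, opt_d):
        z = torch.randn(au_labels.size(0), NOISE_DIM)
        fake = G(torch.cat([z, au_labels], dim=1)).detach()
        real_score = D(torch.cat([real_expr, au_labels], dim=1))
        fake_score = D(torch.cat([fake, au_labels], dim=1))
        loss = (bce(real_score, torch.ones_like(real_score))
                + bce(fake_score, torch.zeros_like(fake_score)))
        opt_d.zero_grad(); loss.backward(); opt_d.step()
        return loss.item()

    def g_step(au_labels, opt_g):
        z = torch.randn(au_labels.size(0), NOISE_DIM)
        fake = G(torch.cat([z, au_labels], dim=1))
        score = D(torch.cat([fake, au_labels], dim=1))
        loss = bce(score, torch.ones_like(score))  # try to fool the discriminator
        opt_g.zero_grad(); loss.backward(); opt_g.step()
        return loss.item()

Generated parameter vectors can then be rendered through the 3DMM to obtain synthetic face images paired with the target AU labels, as the abstract describes.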