Group-Level Emotion Recognition Using a Unimodal Privacy-Safe Non-Individual Approach
This article presents our unimodal privacy-safe and non-individual proposal
for the audio-video group emotion recognition subtask at the Emotion
Recognition in the Wild (EmotiW) Challenge 2020. This sub-challenge aims to
classify in-the-wild videos into three categories: Positive, Neutral and
Negative. Recent deep learning models have shown tremendous advances in
analyzing interactions between people, predicting human behavior and affective
evaluation. Nonetheless, their performance rests on individual-based
analysis: scores from individual detections are summed and averaged, which
inevitably raises privacy issues. In this research, we
investigated a frugal approach towards a model able to capture the global moods
from the whole image without using face or pose detection, or any
individual-based feature as input. The proposed methodology mixes
state-of-the-art and dedicated synthetic corpora as training sources. With an
in-depth exploration of neural network architectures for group-level emotion
recognition, we built a VGG-based model achieving 59.13% accuracy on the VGAF
test set (eleventh place in the challenge). Given that the analysis is unimodal,
based only on global features, and that the performance is evaluated on a
real-world dataset, these results are promising and let us envision extending
this model to multimodality for classroom ambiance evaluation, our final target
application.
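As a rough illustration of the non-individual idea, the sketch below averages hypothetical frame-level, scene-wide class scores into a single video label, with no per-person detections involved. The class order and score values are assumptions made for the example, not taken from the paper.

```python
# Toy sketch: aggregate per-frame, whole-scene class scores into one
# video-level prediction. No face or pose detection enters the pipeline.
CLASSES = ["Positive", "Neutral", "Negative"]

def classify_video(frame_scores):
    """frame_scores: list of per-frame softmax vectors over the three classes.
    Returns the class whose mean score across all frames is highest."""
    n = len(frame_scores)
    mean = [sum(f[i] for f in frame_scores) / n for i in range(len(CLASSES))]
    return CLASSES[max(range(len(CLASSES)), key=mean.__getitem__)]

scores = [
    [0.6, 0.3, 0.1],  # frame 1: mostly positive scene
    [0.5, 0.4, 0.1],  # frame 2
    [0.2, 0.5, 0.3],  # frame 3: ambiguous
]
print(classify_video(scores))  # -> Positive
```

Averaging scene-level scores over time is one straightforward way to produce a clip label without ever scoring individuals.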
Timing is everything: A spatio-temporal approach to the analysis of facial actions
This thesis presents a fully automatic facial expression analysis system based on the Facial Action
Coding System (FACS). FACS is the best known and the most commonly used system to describe
facial activity in terms of facial muscle actions (i.e., action units, AUs). We will present our research
on the analysis of the morphological, spatio-temporal and behavioural aspects of facial expressions.
In contrast with most other researchers in the field, who use appearance-based techniques, we use a
geometric-feature-based approach. We will argue that this approach is more suitable for analysing
facial expression temporal dynamics. Our system is capable of explicitly exploring the temporal
aspects of facial expressions from an input colour video in terms of their onset (start), apex (peak)
and offset (end).
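A simple rule-based sketch conveys what onset/apex/offset segmentation means on a per-frame AU intensity track (the actual system described below uses learned classifiers); the thresholds `eps` and `floor` are illustrative assumptions, not values from the thesis.

```python
def label_phases(intensity, eps=0.05, floor=0.1):
    """Label each frame of an AU intensity track as neutral/onset/apex/offset.
    Toy rule: rising -> onset, falling -> offset, stable and high -> apex,
    stable and near zero -> neutral. eps/floor are assumed thresholds."""
    labels = []
    for i, v in enumerate(intensity):
        delta = intensity[i + 1] - v if i + 1 < len(intensity) else 0.0
        if v < floor and abs(delta) <= eps:
            labels.append("neutral")
        elif delta > eps:
            labels.append("onset")
        elif delta < -eps:
            labels.append("offset")
        else:
            labels.append("apex")
    return labels

track = [0.0, 0.0, 0.3, 0.7, 0.9, 0.9, 0.6, 0.2, 0.0]
print(label_phases(track))
```

On the toy track, the rising frames are labelled onset, the plateau apex, and the decay offset, which is the per-frame structure the temporal-phase classifiers must recover.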
The fully automatic system presented here detects 20 facial points in the first frame and tracks them
throughout the video. From the tracked points we compute geometry-based features, which serve as
the input to the remainder of our system. The AU activation detection system uses GentleBoost
feature selection and a Support Vector Machine (SVM) classifier to find which AUs were present in an
expression. Temporal dynamics of active AUs are recognised by a hybrid GentleBoost-SVM-Hidden
Markov model classifier. The system is capable of analysing 23 out of 27 existing AUs with high
accuracy.
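To make the feature-selection step concrete, here is a minimal toy GentleBoost selector over regression stumps: each round fits the stump with the lowest weighted squared error, records its feature index, and reweights the samples. This is a pure-Python sketch on assumed toy data, not the thesis implementation (which feeds the selected features to an SVM classifier).

```python
import math

def fit_stump(X, y, w, j, t):
    """Fit a regression stump on feature j with threshold t:
    predict a if x[j] > t else b, by weighted means of y on each side."""
    hi = [(yi, wi) for xi, yi, wi in zip(X, y, w) if xi[j] > t]
    lo = [(yi, wi) for xi, yi, wi in zip(X, y, w) if xi[j] <= t]
    a = sum(yi * wi for yi, wi in hi) / sum(wi for _, wi in hi) if hi else 0.0
    b = sum(yi * wi for yi, wi in lo) / sum(wi for _, wi in lo) if lo else 0.0
    err = sum(wi * (yi - (a if xi[j] > t else b)) ** 2
              for xi, yi, wi in zip(X, y, w))
    return a, b, err

def gentleboost_select(X, y, rounds=3):
    """Return feature indices picked by GentleBoost rounds; y in {-1, +1}."""
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n
    picked = []
    for _ in range(rounds):
        best = None
        for j in range(d):
            for t in sorted(set(x[j] for x in X)):
                a, b, err = fit_stump(X, y, w, j, t)
                if best is None or err < best[0]:
                    best = (err, j, t, a, b)
        _, j, t, a, b = best
        picked.append(j)
        # Reweight: samples the stump fits poorly gain weight.
        w = [wi * math.exp(-yi * (a if xi[j] > t else b))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return picked

X = [[0.0, 5.0], [1.0, 3.0], [2.0, 9.0], [3.0, 1.0]]  # toy geometric features
y = [-1, -1, +1, +1]                                   # toy AU present/absent labels
print(gentleboost_select(X, y, rounds=2))  # -> [0, 0]: only feature 0 is informative
```

In a full pipeline the distinct indices in `picked` would define the reduced feature set passed on to the SVM stage.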
The main contributions of the work presented in this thesis are the following: we have created a
method for fully automatic AU analysis with state-of-the-art recognition results. We have proposed
for the first time a method for recognising the four temporal phases of an AU. We have built the
largest comprehensive database of facial expressions to date. We also present the first two studies
in the literature on automatic distinction between posed and spontaneous expressions.