Group-Level Emotion Recognition Using a Unimodal Privacy-Safe Non-Individual Approach
This article presents our unimodal, privacy-safe, and non-individual proposal
for the audio-video group emotion recognition subtask at the Emotion
Recognition in the Wild (EmotiW) Challenge 2020. This sub-challenge aims to
classify in-the-wild videos into three categories: Positive, Neutral, and
Negative. Recent deep learning models have shown tremendous advances in
analyzing interactions between people, predicting human behavior, and
evaluating affect. Nonetheless, their performance comes from individual-based
analysis, i.e., summing up and averaging scores from individual detections,
which inevitably leads to privacy issues. In this research, we investigated a
frugal approach towards a model able to capture the global mood from the whole
image without using face or pose detection, or any individual-based feature,
as input. The proposed methodology mixes state-of-the-art corpora and
dedicated synthetic corpora as training sources. After an in-depth exploration
of neural network architectures for group-level emotion recognition, we built
a VGG-based model achieving 59.13% accuracy on the VGAF test set (eleventh
place in the challenge). Given that the analysis is unimodal, based only on
global features, and that the performance is evaluated on a real-world
dataset, these results are promising and let us envision extending this model
to multimodality for classroom ambiance evaluation, our final target
application.
Towards A Robust Group-level Emotion Recognition via Uncertainty-Aware Learning
Group-level emotion recognition (GER) is an inseparable part of human
behavior analysis, aiming to recognize an overall emotion in a multi-person
scene. However, existing methods are devoted to combining diverse emotion
cues while ignoring the inherent uncertainties of unconstrained
environments, such as congestion and occlusion occurring within a group.
Additionally, since only group-level labels are available, inconsistent emotion
predictions among individuals in one group can confuse the network. In this
paper, we propose an uncertainty-aware learning (UAL) method to extract more
robust representations for GER. By explicitly modeling the uncertainty of each
individual, we utilize stochastic embeddings drawn from a Gaussian distribution
instead of deterministic point embeddings. This representation captures the
probabilities of different emotions and, through this stochasticity, generates
diverse predictions during the inference stage. Furthermore,
uncertainty-sensitive scores are adaptively assigned as the fusion weights of
individuals' faces within each group. Moreover, we develop an image enhancement
module to strengthen the model's robustness against severe noise. The overall
three-branch model, encompassing face, object, and scene components, is guided
by a proportional-weighted fusion strategy and integrates the proposed
uncertainty-aware method to produce the final group-level output. Experimental
results demonstrate the effectiveness and generalization ability of our method
across three widely used databases. (Comment: 11 pages, 3 figures.)
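The core idea above — stochastic Gaussian embeddings with uncertainty-weighted fusion — might be sketched as follows. This is a minimal NumPy illustration; the function names, the softmax weighting over mean variances, and the toy dimensions are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_embedding(mu, log_var, n_samples=10):
    """Sample embeddings z ~ N(mu, diag(exp(log_var))) instead of using a
    deterministic point; sampling at inference yields diverse predictions."""
    std = np.exp(0.5 * log_var)
    eps = rng.standard_normal((n_samples,) + mu.shape)
    return mu + std * eps                      # n_samples x dim

def fusion_weights(log_vars):
    """Uncertainty-sensitive fusion: faces with lower predicted variance
    receive larger weights (softmax over negative mean variance)."""
    scores = -np.array([np.exp(lv).mean() for lv in log_vars])
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Toy group of 3 faces with 4-d embeddings
mus = [rng.standard_normal(4) for _ in range(3)]
log_vars = [rng.standard_normal(4) * 0.1 for _ in range(3)]
w = fusion_weights(log_vars)
group = sum(wi * stochastic_embedding(mu, lv).mean(axis=0)
            for wi, mu, lv in zip(w, mus, log_vars))
print(round(w.sum(), 6), group.shape)  # 1.0 (4,)
```

The weights sum to one, so the group representation stays on the same scale as the individual embeddings regardless of group size.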
EmotiW 2018: Audio-Video, Student Engagement and Group-Level Affect Prediction
This paper details the sixth Emotion Recognition in the Wild (EmotiW)
challenge. EmotiW 2018 is a grand challenge in the ACM International Conference
on Multimodal Interaction 2018, Colorado, USA. The challenge aims at providing
a common platform to researchers working in the affective computing community
to benchmark their algorithms on `in the wild' data. This year, EmotiW
comprises three sub-challenges: a) Audio-video based emotion recognition;
b) Student engagement prediction; and c) Group-level emotion recognition. The
databases, protocols, and baselines are discussed in detail.
Affective Image Content Analysis: Two Decades Review and New Perspectives
Images can convey rich semantics and induce various emotions in viewers.
Recently, with the rapid advancement of emotional intelligence and the
explosive growth of visual data, extensive research efforts have been dedicated
to affective image content analysis (AICA). In this survey, we
comprehensively review the development of AICA over the past two decades,
especially focusing on the state-of-the-art methods with respect to three main
challenges -- the affective gap, perception subjectivity, and label noise and
absence. We begin with an introduction to the key emotion representation
models widely employed in AICA and a description of the available evaluation
datasets, with a quantitative comparison of label noise and dataset bias. We
then summarize and compare the representative approaches to
(1) emotion feature extraction, including both handcrafted and deep features,
(2) learning methods on dominant emotion recognition, personalized emotion
prediction, emotion distribution learning, and learning from noisy data or few
labels, and (3) AICA based applications. Finally, we discuss some challenges
and promising research directions in the future, such as image content and
context understanding, group emotion clustering, and viewer-image interaction. (Accepted by IEEE TPAMI.)
Challenging Social Media Threats using Collective Well-being Aware Recommendation Algorithms and an Educational Virtual Companion
Social media (SM) have become an integral part of our lives, expanding our
inter-linking capabilities to new levels. There is plenty to be said about
their positive effects. However, some serious negative implications of SM
have repeatedly been highlighted in recent years, pointing
at various SM threats for society, and its teenagers in particular: from common
issues (e.g. digital addiction and polarization) and manipulative influences of
algorithms to teenager-specific issues (e.g. body stereotyping). The full
impact of current SM platform design -- both at an individual and societal
level -- asks for a comprehensive evaluation and conceptual improvement. We
extend measures of Collective Well-Being (CWB) to SM communities. As users'
relationships and interactions are a central component of CWB, education is
crucial to improving CWB. We thus propose a framework based on an adaptive
"social media virtual companion" for educating and supporting the entire
student community in interacting with SM. The virtual companion will be
powered by a Recommender System (CWB-RS) that optimizes a CWB metric instead
of the engagement or platform profit that currently drives recommender
systems while disregarding societal collateral effects. CWB-RS will optimize
CWB both in the short term, by balancing the level of SM threats students are
exposed to, and in the long term, by adopting an Intelligent Tutoring System
role and enabling adaptive, personalized sequencing of playful learning
activities. This framework offers an initial step toward understanding how to
design SM systems and embedded educational interventions that favor a
healthier and more positive society.
Image-set, Temporal and Spatiotemporal Representations of Videos for Recognizing, Localizing and Quantifying Actions
This dissertation addresses the problem of learning video representations, which is defined here as transforming the video so that its essential structure is made more visible or accessible for action recognition and quantification. In the literature, a video can be represented by a set of images, by modeling motion or temporal dynamics, or by a 3D graph with pixels as nodes. This dissertation contributes by proposing a set of models to localize, track, segment, recognize, and assess actions: (1) image-set models via aggregating subset features given by regularizing normalized CNNs; (2) image-set models via inter-frame principal recovery and sparse coding of residual actions; (3) temporally local models with spatially global motion estimated by robust feature matching and local motion estimated by action detection with a motion model added; (4) spatiotemporal models, namely 3D graphs and 3D CNNs, that model time as a space dimension; and (5) supervised hashing by jointly learning embedding and quantization. State-of-the-art performance is achieved for tasks such as quantifying facial pain and human diving.
Primary conclusions of this dissertation are categorized as follows: (i) image sets can capture facial actions as a collective representation; (ii) sparse and low-rank representations can untangle expression, identity, and pose cues, and can be learned via an image-set model as well as a linear model; (iii) norm is related to recognizability, and similarity metrics and loss functions matter; (iv) combining the MIL-based boosting tracker with the Particle Filter motion model yields a good trade-off between appearance similarity and motion consistency; (v) segmenting objects locally makes it amenable to assign shape priors, and it is feasible to learn knowledge such as shape priors online from Web data with weak supervision; (vi) representing videos as 3D graphs works locally in both space and time, and 3D CNNs work effectively when fed temporally meaningful clips; (vii) rich labeled images or videos help to learn better hash functions, after learning binary embedded codes, than random projections. In addition, the models proposed for videos can be adapted to other sequential images, such as volumetric medical images, which are not covered in this dissertation.
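Conclusion (vii) — learned binary embeddings outperforming random projections — rests on hashing features via projection plus sign. A minimal sketch of that quantization step (the projection matrix here is a random stand-in for the learned one; names are illustrative):

```python
import numpy as np

def binary_codes(features, W):
    """Hash features to binary codes via linear projection + sign.
    In supervised hashing, W is learned jointly with the quantization from
    labeled data; a random W corresponds to the unsupervised baseline."""
    return (features @ W > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 16))   # 4 samples, 16-d features
W = rng.standard_normal((16, 8))   # stand-in projection (random baseline)
codes = binary_codes(X, W)
print(codes.shape, hamming(codes[0], codes[0]))  # (4, 8) 0
```

Retrieval then compares codes by Hamming distance, which is where a learned W pays off: it is fitted so that same-label samples land at small Hamming distances.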
Soft Biometric Analysis: Multi-Person and Real-Time Pedestrian Attribute Recognition in Crowded Urban Environments
Traditionally, recognition systems were based only on human hard biometrics.
However, ubiquitous CCTV cameras have raised the desire to analyze human
biometrics from far distances, without people's cooperation in the acquisition
process. High-resolution face close-shots are rarely available at far
distances, such that face-based systems cannot provide reliable results in
surveillance applications. Human soft biometrics, such as body and clothing
attributes, are believed to be more effective in analyzing human data
collected by security cameras.
This thesis contributes to human soft biometric analysis in uncontrolled
environments and mainly focuses on two tasks: Pedestrian Attribute Recognition
(PAR) and person re-identification (re-id). We first review the literature of
both tasks and highlight the history of advancements, recent developments, and
the existing benchmarks. PAR and person re-id difficulties are due to
significant distances between intra-class samples, which originate from
variations in several factors such as body pose, illumination, background,
occlusion, and data resolution. Recent state-of-the-art approaches present
end-to-end models that can extract discriminative and comprehensive feature
representations from people. The correlation between different regions of the
body, and dealing with limited learning data, are also the objectives of many
recent works. Moreover, class imbalance and correlation between human
attributes are specific challenges associated with the PAR problem.
We collect a large surveillance dataset to train a novel gender recognition
model suitable for uncontrolled environments. We propose a deep residual
network that extracts several pose-wise patches from samples and obtains a
comprehensive feature representation. In the next step, we develop a model for
recognizing multiple attributes at once. Considering the correlation between
human semantic attributes and class imbalance, we use a multi-task model and a
weighted loss function, respectively. We also propose a multiplication layer
on top of the backbone feature extraction layers to exclude background
features from the final representation of samples and draw the model's
attention to the foreground area.
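The weighted loss and multiplication layer described above can be sketched as follows. This is a minimal NumPy illustration; the inverse-frequency weighting scheme and the mask shapes are assumptions for illustration, not the thesis' exact formulation:

```python
import numpy as np

def weighted_bce(y_true, y_pred, pos_ratio, eps=1e-7):
    """Weighted binary cross-entropy for imbalanced attributes: rare
    positive attributes receive larger weights (inverse-frequency here)."""
    w_pos = 1.0 / (pos_ratio + eps)
    w_neg = 1.0 / (1.0 - pos_ratio + eps)
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    loss = -(w_pos * y_true * np.log(y_pred)
             + w_neg * (1.0 - y_true) * np.log(1.0 - y_pred))
    return float(loss.mean())

def multiply_foreground(feature_map, fg_mask):
    """'Multiplication layer': element-wise mask on backbone features so
    background activations are excluded from the final representation."""
    return feature_map * fg_mask[None, ...]   # mask broadcast over channels

# Toy check: background positions are zeroed out
feat = np.ones((8, 4, 4))                     # C x H x W feature map
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                          # 2x2 foreground region
out = multiply_foreground(feat, mask)
print(out.sum())                              # 8 channels * 4 positions = 32.0
```

Because the mask multiplies activations rather than pixels, the background is suppressed after feature extraction, which is what steers the representation toward the foreground person.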
We address the problem of person re-id by implicitly defining the receptive
fields of deep learning classification frameworks. The receptive fields of
deep learning models determine the most significant regions of the input data
for providing correct decisions. Therefore, we synthesize a set of learning
data in which the destructive regions (e.g., background) in each pair of
instances are interchanged. A segmentation module determines destructive and
useful regions in each sample, and the label of each synthesized instance is
inherited from the sample that contributed the useful regions to the
synthesized image. The synthesized learning data are then used in the learning
phase and help the model rapidly learn that identity and background regions
are not correlated. Meanwhile, the proposed solution can be seen as a data
augmentation approach that fully preserves the label information and is
compatible with other data augmentation techniques.
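The background-interchange step can be sketched as follows, assuming the segmentation module has already produced binary foreground masks (the function names and toy shapes are illustrative, not the thesis' implementation):

```python
import numpy as np

def swap_backgrounds(img_a, img_b, fg_a, fg_b):
    """Synthesize a training pair by interchanging background regions.
    Each synthesized image keeps the foreground (person) of one source,
    so it inherits that source's identity label."""
    m_a = fg_a.astype(bool)[..., None]        # H x W x 1, broadcast over RGB
    m_b = fg_b.astype(bool)[..., None]
    synth_a = np.where(m_a, img_a, img_b)     # person A on B's background
    synth_b = np.where(m_b, img_b, img_a)     # person B on A's background
    return synth_a, synth_b

# Toy 2x2 "images": A is all 1s, B is all 2s; A's foreground is the left column
img_a = np.ones((2, 2, 3))
img_b = 2 * np.ones((2, 2, 3))
fg_a = np.array([[1, 0], [1, 0]])
fg_b = np.array([[0, 1], [0, 1]])
synth_a, _ = swap_backgrounds(img_a, img_b, fg_a, fg_b)
print(synth_a[:, 0].sum(), synth_a[:, 1].sum())  # 6.0 (A's pixels) 12.0 (B's pixels)
```

Since only the background changes while the labeled foreground is preserved, the transformation is label-preserving and can be stacked with standard augmentations such as flips and crops.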
When re-id methods are learned in scenarios where the target person appears
with identical garments in the gallery, the visual appearance of clothes is
given the most importance in the final feature representation. Cloth-based
representations are not reliable in long-term re-id settings, as people may
change their clothes. Therefore, solutions that ignore clothing cues and focus
on identity-relevant features are in demand. We transform the original data
such that the identity-relevant information of people (e.g., face and body
shape) is removed, while the identity-unrelated cues (i.e., the color and
texture of clothes) remain unchanged. A model learned on the synthesized
dataset predicts the identity-unrelated (short-term) cues. We then train a
second model, coupled with the first, that learns embeddings of the original
data such that the similarity between the embeddings of the original and
synthesized data is minimized. This way, the second model predicts based on
the identity-related (long-term) representation of people.
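The second model's training signal can be sketched as follows: penalize similarity between its embedding of the original image and the first model's clothes-only embedding. The additive combination with an identity loss and the cosine similarity choice are assumptions for illustration:

```python
import numpy as np

def cosine(u, v, eps=1e-9):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v) / (float(np.linalg.norm(u) * np.linalg.norm(v)) + eps)

def long_term_objective(emb_original, emb_clothes, id_loss, alpha=1.0):
    """Combined objective for the second model: keep the usual identity
    loss while penalizing similarity to the clothes-only (short-term)
    embedding produced by the first model."""
    return id_loss + alpha * max(0.0, cosine(emb_original, emb_clothes))

# Toy check: orthogonal embeddings incur no similarity penalty
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
print(long_term_objective(e1, e2, id_loss=0.5))  # 0.5
```

Driving this term to zero pushes the second model's embedding away from clothing cues, leaving it to rely on the identity-related (long-term) information.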
To evaluate the performance of the proposed models, we use PAR and person
re-id datasets, namely BIODI, PETA, RAP, Market-1501, MSMT-V2, PRCC, LTCC,
and MIT, and compare our experimental results with state-of-the-art methods
in the field.
In conclusion, the data collected from surveillance cameras have low
resolution, such that the extraction of hard biometric features is not
possible and face-based approaches produce poor results. In contrast, soft
biometrics are robust to variations in data quality. We therefore propose
approaches for both PAR and person re-id to learn discriminative features
from each instance, and evaluate our proposed solutions on several publicly
available benchmarks. This thesis was prepared at the University of Beira
Interior, IT (Instituto de Telecomunicações), Soft Computing and Image
Analysis Laboratory (SOCIA Lab), Covilhã Delegation, and was submitted to the
University of Beira Interior for defense in a public examination session.