Discriminatively Trained Latent Ordinal Model for Video Classification
We study the problem of video classification for facial analysis and human
action recognition. We propose a novel weakly supervised learning method that
models the video as a sequence of automatically mined, discriminative
sub-events (e.g., the onset and offset phases for "smile", or running and
jumping for "high jump"). The proposed model is inspired by recent work on
Multiple Instance Learning and latent SVM/HCRF -- it extends such frameworks
to approximately model the ordinal aspect of videos. We obtain consistent
improvements over relevant competitive baselines on four challenging,
publicly available video-based facial analysis datasets for prediction of
expression, clinical pain and intent in dyadic conversations, and on three
challenging human action datasets. We also validate the method with
qualitative results and show that they largely support the intuitions behind
the method.
Comment: Paper accepted in IEEE TPAMI. arXiv admin note: substantial text overlap with arXiv:1604.0150
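The core computation such a latent ordinal model needs is to score a video as the best temporally ordered assignment of K discriminative sub-event templates to frames. Below is a minimal sketch of that inference; the linear per-frame templates, array shapes, and function name are illustrative assumptions, not the authors' code:

```python
import numpy as np

def score_video(frames, templates):
    """Score a video under a latent ordinal model.

    frames:    (T, d) array of per-frame features.
    templates: (K, d) array of sub-event weight vectors (e.g. onset,
               offset), constrained to fire in this temporal order.

    Returns the best total score over all temporally ordered
    assignments of the K sub-events to frames, via dynamic programming.
    """
    T, K = len(frames), len(templates)
    resp = frames @ templates.T                # (T, K) per-frame responses
    dp = np.full((T, K), -np.inf)
    dp[:, 0] = resp[:, 0]                      # sub-event 0 can fire anywhere
    for k in range(1, K):
        # best way to place sub-events 0..k-1 strictly before frame t
        best_prev = np.maximum.accumulate(dp[:, k - 1])[:-1]
        dp[1:, k] = best_prev + resp[1:, k]
    return dp[:, K - 1].max()
```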
LOMo: Latent Ordinal Model for Facial Analysis in Videos
We study the problem of facial analysis in videos. We propose a novel weakly
supervised learning method that models the video event (expression, pain,
etc.) as a sequence of automatically mined, discriminative sub-events (e.g.,
the onset and offset phases for smile, or brow lower and cheek raise for
pain). The proposed model is inspired by recent work on Multiple Instance
Learning and latent SVM/HCRF -- it extends such frameworks to approximately
model the ordinal or temporal aspect of videos. We obtain consistent
improvements over relevant competitive baselines on four challenging,
publicly available video-based facial analysis datasets for prediction of
expression, clinical pain and intent in dyadic conversations. In combination
with complementary features, we report state-of-the-art results on these
datasets.
Comment: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
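Because the sub-events are mined automatically, qualitative analysis of such a model also needs the inference to report which frames each sub-event fired on. A hedged sketch of that ordered-assignment recovery, using the same dynamic program as the scorer above plus backtracking (again assuming linear templates; not the released implementation):

```python
import numpy as np

def best_assignment(frames, templates):
    """Return the ordered frame indices at which each latent sub-event
    fires, via dynamic programming with back-pointers.

    frames: (T, d) per-frame features; templates: (K, d) sub-events.
    """
    T, K = len(frames), len(templates)
    resp = frames @ templates.T
    dp = np.full((T, K), -np.inf)
    arg = np.zeros((T, K), dtype=int)          # back-pointers
    dp[:, 0] = resp[:, 0]
    for k in range(1, K):
        run_best, run_arg = dp[0, k - 1], 0    # running max over t' < t
        for t in range(1, T):
            dp[t, k] = run_best + resp[t, k]
            arg[t, k] = run_arg
            if dp[t, k - 1] > run_best:
                run_best, run_arg = dp[t, k - 1], t
    t = int(dp[:, K - 1].argmax())             # best frame for last sub-event
    picks = [t]
    for k in range(K - 1, 0, -1):              # walk the back-pointers
        t = arg[t, k]
        picks.append(t)
    return picks[::-1]                         # one frame index per sub-event
```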
Personalized Automatic Estimation of Self-reported Pain Intensity from Facial Expressions
Pain is a personal, subjective experience that is commonly evaluated through
visual analog scales (VAS). While this is often convenient and useful,
automatic pain detection systems can reduce pain score acquisition efforts in
large-scale studies by estimating it directly from the participants' facial
expressions. In this paper, we propose a novel two-stage learning approach for
VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs)
to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels
from face images. The estimated scores are then fed into personalized Hidden
Conditional Random Fields (HCRFs), used to estimate the VAS provided by each
person. Personalization of the model is performed using a newly introduced
facial expressiveness score, unique to each person. To the best of our
knowledge, this is the first approach to automatically estimate VAS from face
images. We show the benefits of the proposed personalized approach over a
traditional non-personalized one on a benchmark dataset for pain analysis
from face images.
Comment: Computer Vision and Pattern Recognition Conference, The 1st International Workshop on Deep Affective Learning and Context Modeling
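A rough sketch of the two-stage pipeline as described: an RNN regresses frame-level PSPI, and a personalized second stage maps the PSPI sequence to VAS. The personalized HCRF itself is too involved for a short example, so a per-person calibration scaled by the expressiveness score stands in for it here; the layer sizes, summary statistics, and 0-10 VAS range are assumptions:

```python
import torch
import torch.nn as nn

class PSPIRegressor(nn.Module):
    """Stage 1: an RNN mapping per-frame face descriptors to
    frame-level PSPI scores (layer sizes are assumptions)."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                   # x: (B, T, feat_dim)
        h, _ = self.rnn(x)
        return self.head(h).squeeze(-1)     # (B, T) PSPI per frame

def estimate_vas(pspi_seq, expressiveness, w=1.0, b=0.0):
    """Stage 2 stand-in: the paper uses personalized HCRFs; here a
    per-person linear calibration, normalized by the subject's
    expressiveness score, maps summarized PSPI to a VAS estimate."""
    summary = pspi_seq.mean() + pspi_seq.max()      # sequence summary
    vas = (w * summary + b) / max(expressiveness, 1e-6)
    return float(vas.clamp(0.0, 10.0))              # assumed 0-10 VAS scale
```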
Multiple Instance Learning for Emotion Recognition using Physiological Signals
The problem of continuous emotion recognition has been the subject of several studies. The proposed affective computing approaches employ sequential machine learning algorithms to improve the classification stage, accounting for the time ambiguity of emotional responses. Modeling and predicting the affective state over time is not a trivial problem, because continuous data labeling is costly and not always feasible. This is a crucial issue in real-life applications, where data labeling is sparse and may capture only the most important events rather than the continuous, subtle affective changes that typically occur. In this work, we introduce a framework from the machine learning literature called Multiple Instance Learning, which models time intervals by capturing the presence or absence of relevant states, without the need to label the affective responses continuously (as required by standard sequential learning approaches). This choice offers a viable and natural solution for learning in a weakly supervised setting, taking into account the ambiguity of affective responses. We demonstrate the reliability of the proposed approach in a gold-standard scenario and towards real-world usage by employing an existing dataset (DEAP) and a purposely built one (Consumer). We also outline the advantages of this method with respect to standard supervised machine learning algorithms.
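The MIL setup described above is straightforward to sketch: each labeled time interval is a bag of short signal windows (instances), instance scores are max-pooled into a bag score, and only bag labels supervise training. A minimal, hypothetical PyTorch version (feature dimensions are assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class MILEmotionClassifier(nn.Module):
    """Bag-level classifier: a bag is one labeled time interval, an
    instance is a short physiological-signal window. Max-pooling the
    instance scores encodes the standard MIL assumption that a bag is
    positive if at least one instance expresses the target state."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.instance_scorer = nn.Sequential(
            nn.Linear(feat_dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, bag):                  # bag: (n_instances, feat_dim)
        scores = self.instance_scorer(bag)   # one logit per instance
        return scores.max()                  # bag logit via max-pooling

# Training needs only one binary label per interval, e.g.:
# loss = nn.BCEWithLogitsLoss()(model(bag), torch.tensor(1.0))
```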
Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention
Automatic emotion recognition (ER) has recently gained a lot of interest due
to its potential in many real-world applications. In this context, multimodal
approaches have been shown to improve performance (over unimodal approaches)
by combining diverse and complementary sources of information, providing some
robustness to noisy and missing modalities. In this paper, we focus on
dimensional ER based on the fusion of facial and vocal modalities extracted
from videos, where complementary audio-visual (A-V) relationships are
explored to predict an individual's emotional states in the valence-arousal
space. Most state-of-the-art fusion techniques rely on recurrent networks or
conventional attention mechanisms that do not effectively leverage the
complementary nature of A-V modalities. To address this problem, we introduce
a joint cross-attentional model for A-V fusion that extracts the salient
features across A-V modalities, allowing it to effectively leverage the
inter-modal relationships while retaining the intra-modal relationships. In
particular, it computes the cross-attention weights based on the correlation
between the joint feature representation and that of the individual
modalities. By deploying the joint A-V feature representation into the
cross-attention module, it simultaneously leverages both the intra- and
inter-modal relationships, thereby significantly improving the performance of
the system over the vanilla cross-attention module. The effectiveness of our
proposed approach is validated experimentally on challenging videos from the
RECOLA and AffWild2 datasets. Results indicate that our joint
cross-attentional A-V fusion model provides a cost-effective solution that
can outperform state-of-the-art approaches, even when the modalities are
noisy or absent.
Comment: arXiv admin note: substantial text overlap with arXiv:2203.14779, arXiv:2111.0522
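A hedged sketch of the joint cross-attention idea as stated: attention weights for each modality come from the correlation between that modality's features and the joint (concatenated) A-V representation. The dimensions, the tanh correlation, and fusion by residual concatenation are assumptions, not necessarily the authors' exact architecture:

```python
import torch
import torch.nn as nn

class JointCrossAttention(nn.Module):
    """Cross-attention in which each modality attends according to its
    correlation with the joint (concatenated) A-V representation."""
    def __init__(self, d=64):
        super().__init__()
        self.Wa = nn.Linear(2 * d, d, bias=False)   # joint -> audio space
        self.Wv = nn.Linear(2 * d, d, bias=False)   # joint -> visual space

    def forward(self, Xa, Xv):               # each: (B, T, d)
        J = torch.cat([Xa, Xv], dim=-1)      # joint representation (B, T, 2d)
        # correlation between each modality and the joint representation
        Ca = torch.tanh(Xa @ self.Wa(J).transpose(1, 2))   # (B, T, T)
        Cv = torch.tanh(Xv @ self.Wv(J).transpose(1, 2))
        Ha = torch.softmax(Ca, dim=-1) @ Xa  # attended audio features
        Hv = torch.softmax(Cv, dim=-1) @ Xv  # attended visual features
        # fuse, keeping intra-modal information via residual connections
        return torch.cat([Ha + Xa, Hv + Xv], dim=-1)
```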
Timestamp-supervised Wearable-based Activity Segmentation and Recognition with Contrastive Learning and Order-Preserving Optimal Transport
Human activity recognition (HAR) with wearables is one of the enabling
technologies of ubiquitous and mobile computing applications. The widely
adopted sliding-window scheme suffers from the multi-class window problem. As
a result, there is a growing focus on joint segmentation and recognition with
deep-learning methods, aiming at simultaneously dealing with HAR and
time-series segmentation issues. However, obtaining full activity annotations
of wearable data sequences is resource-intensive and time-consuming, while
unsupervised methods yield poor performance. To address these challenges, we
propose a novel method for joint activity segmentation and recognition with
timestamp supervision, in which only a single annotated sample is needed in
each activity segment. However, the limited information in such sparse
annotations widens the gap between the recognition and segmentation tasks,
leading to sub-optimal model performance. We therefore estimate prototypes
from class-activation maps to form a sample-to-prototype contrast module that
yields well-structured embeddings. Moreover, using optimal transport theory,
our approach generates sample-level pseudo-labels that exploit the unlabeled
data between timestamp annotations for further performance improvement, as
sketched below. Comprehensive experiments on four public HAR datasets
demonstrate that our model trained with timestamp supervision is superior to
state-of-the-art weakly-supervised methods and achieves performance
comparable to fully-supervised approaches.
Comment: Under Review (submitted to IEEE TMC)
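To illustrate the pseudo-labeling step: between two timestamp annotations, labels must respect temporal order. The paper formulates this with order-preserving optimal transport; the sketch below keeps only the ordering constraint and searches for the single most likely change point between the two annotated classes, which conveys the idea without the full OT machinery (all names are hypothetical):

```python
import numpy as np

def pseudo_labels_between(probs, c_left, c_right):
    """Pseudo-label the span between two timestamp annotations.

    probs: (T, C) model class probabilities for the samples from the
    left annotated sample to the right one (inclusive); c_left/c_right
    are their annotated activity classes. Picks the change point that
    maximizes total log-likelihood while keeping c_left strictly
    before c_right (the ordering constraint).
    """
    log_l = np.cumsum(np.log(probs[:, c_left] + 1e-9))            # prefix sums
    log_r = np.cumsum(np.log(probs[::-1, c_right] + 1e-9))[::-1]  # suffix sums
    # change point k: samples [0, k] -> c_left, [k+1, end] -> c_right
    k = int(np.argmax([log_l[i] + log_r[i + 1]
                       for i in range(len(probs) - 1)]))
    labels = np.empty(len(probs), dtype=int)
    labels[:k + 1] = c_left
    labels[k + 1:] = c_right
    return labels
```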