Towards Learning Affective Body Gesture
Robots are assuming an increasingly important role in our society. They now serve as pets and help support children's healing; in other words, they are now expected to engage in active, affective communication with human agents. However, until now, such systems have relied primarily on the human agents' ability to empathize with the system: changes in the behavior of the system could therefore result in changes of mood or behavior in the human partner. This paper describes experiments we carried out to study the importance of body language in affective communication. The results of the experiments led us to develop a system that can incrementally learn to recognize affective messages conveyed by body postures.
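A minimal sketch of what such incremental posture learning could look like is shown below; the joint-angle features, affect label set, and classifier choice are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch: incremental recognition of affective posture classes.
# Feature extraction and label set here are assumptions, not the paper's.
import numpy as np
from sklearn.linear_model import SGDClassifier

LABELS = ["angry", "happy", "sad", "neutral"]  # hypothetical affect classes

# Linear classifier trained online, one batch of posture vectors at a time.
clf = SGDClassifier(loss="log_loss", random_state=0)

def learn_increment(clf, posture_features, labels):
    """Update the model with a new batch of posture feature vectors.

    posture_features: (n_samples, n_features) array, e.g. joint angles.
    labels: integer indices into LABELS.
    """
    clf.partial_fit(posture_features, labels, classes=np.arange(len(LABELS)))
    return clf

# Usage with synthetic data standing in for motion-capture posture vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 12))            # 12 joint-angle features (assumed)
y = rng.integers(0, len(LABELS), size=32)
clf = learn_increment(clf, X, y)
print(LABELS[int(clf.predict(X[:1])[0])])
```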
Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging
Automatically monitoring and quantifying stress-induced thermal dynamic information in real-world settings is an extremely important but challenging problem. In this paper, we explore whether mobile thermal imaging can be used to measure the rich physiological cues of mental stress that can be deduced from a person's nose temperature. To answer this question we build i) a framework for continuously monitoring nasal thermal variability patterns and ii) a novel set of thermal variability metrics to capture the richness of the dynamic information. We evaluated our approach in a series of studies including laboratory-based psychosocial stress-induction tasks and real-world factory settings. We demonstrate that our approach has the potential for assessing stress responses beyond controlled laboratory settings.
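The abstract does not spell out the variability metrics themselves, so the following is a hedged sketch of one plausible metric over a nose-temperature time series; the windowed statistic, frame rate, and window length are assumptions.

```python
# Illustrative sketch of one possible "thermal variability" metric over a
# nose-temperature time series; the windowed statistic below is an
# assumption, not the paper's published metric set.
import numpy as np

def thermal_variability(temps, fps=8.0, window_s=10.0):
    """Sliding-window standard deviation of nose-tip temperature.

    temps: 1-D array of per-frame nose-region temperatures (deg C).
    fps: thermal camera frame rate (assumed).
    window_s: window length in seconds.
    """
    w = max(2, int(fps * window_s))
    out = np.empty(len(temps) - w + 1)
    for i in range(len(out)):
        out[i] = np.std(temps[i : i + w])
    return out

# Usage: a synthetic trace where stress onset lowers and destabilizes
# temperature after the 60-second mark.
t = np.linspace(0, 120, int(8 * 120))
temps = 34.0 - 0.5 * (t > 60) + np.random.default_rng(1).normal(0, 0.05, t.size)
print(thermal_variability(temps).mean())
```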
Self-adversarial Multi-scale Contrastive Learning for Semantic Segmentation of Thermal Facial Images
Segmentation of thermal facial images is a challenging task. This is because facial features often lack salience due to high-dynamic thermal range scenes and occlusion issues. Limited availability of datasets from unconstrained settings further limits the use of state-of-the-art segmentation networks, loss functions and learning strategies which have been built and validated for RGB images. To address this challenge, we propose the Self-Adversarial Multi-scale Contrastive Learning (SAM-CL) framework as a new training strategy for thermal image segmentation. The SAM-CL framework consists of a SAM-CL loss function and a thermal image augmentation (TiAug) module as a domain-specific augmentation technique. We use the Thermal-Face-Database to demonstrate the effectiveness of our approach. Experiments conducted on existing segmentation networks (UNET, Attention-UNET, DeepLabV3 and HRNetv2) show consistent performance gains from the SAM-CL framework. Furthermore, we present a qualitative analysis with the UBComfort and DeepBreath datasets to discuss how our proposed methods perform in handling unconstrained situations. Accepted at the British Machine Vision Conference (BMVC), 2022.
Time-delay neural network for continuous emotional dimension prediction from facial expression sequences
"(c) 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works."Automatic continuous affective state prediction from naturalistic facial expression is a very challenging research topic but very important in human-computer interaction. One of the main challenges is modeling the dynamics that characterize naturalistic expressions. In this paper, a novel two-stage automatic system is proposed to continuously predict affective dimension values from facial expression videos. In the first stage, traditional regression methods are used to classify each individual video frame, while in the second stage, a Time-Delay Neural Network (TDNN) is proposed to model the temporal relationships between
consecutive predictions. The two-stage approach separates the emotional state dynamics modeling from an individual emotional state prediction step based on input features. In doing so, the temporal information used by the TDNN is not biased by the high variability between features of consecutive frames and allows the network to more easily exploit the slow changing dynamics between emotional states. The system was fully tested and evaluated on three different facial expression video datasets. Our experimental results demonstrate that the use of a two-stage approach combined with the TDNN to take into account previously classified frames significantly improves the overall performance of continuous emotional state estimation in naturalistic
facial expressions. The proposed approach has won the affect recognition sub-challenge of the third international Audio/Visual Emotion Recognition Challenge (AVEC2013)1
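A minimal sketch of the second stage follows: a small time-delay network realized as a 1-D temporal convolution over a window of per-frame affect predictions. The window size, layer shapes, and two-dimensional (valence/arousal) output are assumptions; the paper's exact architecture is not given in this abstract.

```python
# Illustrative second-stage TDNN: a 1-D temporal convolution over a window
# of per-frame affect predictions. Window size and layer shapes are assumed.
import torch
import torch.nn as nn

class TDNN(nn.Module):
    def __init__(self, dims=2, delay=7, hidden=16):
        super().__init__()
        # Each output frame sees `delay` past first-stage prediction frames.
        self.net = nn.Sequential(
            nn.Conv1d(dims, hidden, kernel_size=delay),
            nn.Tanh(),
            nn.Conv1d(hidden, dims, kernel_size=1),
        )

    def forward(self, frame_preds):
        # frame_preds: (batch, dims, time) first-stage per-frame predictions.
        return self.net(frame_preds)

# Usage: refine 100 noisy per-frame valence/arousal predictions.
model = TDNN()
stage1 = torch.randn(1, 2, 100)   # stand-in first-stage outputs
refined = model(stage1)           # shape (1, 2, 94): 100 - 7 + 1 frames
print(refined.shape)
```

The design point the abstract argues for is visible here: the TDNN consumes the first stage's low-dimensional predictions rather than raw frame features, so its temporal smoothing is not swamped by frame-to-frame feature variability.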