Towards Learning Affective Body Gesture
Robots are assuming an increasingly important role in our society. They now serve as pets and help support children's healing; in other words, they now engage in active, affective communication with human agents. However, up to now, such systems have relied primarily on the human agents' ability to empathize with the system: changes in the system's behavior could therefore result in changes of mood or behavior in the human partner. This paper describes experiments we carried out to study the importance of body language in affective communication. The results of the experiments led us to develop a system that can incrementally learn to recognize affective messages conveyed by body postures.
Self-adversarial Multi-scale Contrastive Learning for Semantic Segmentation of Thermal Facial Images
Segmentation of thermal facial images is a challenging task. This is because
facial features often lack salience due to high-dynamic thermal range scenes
and occlusion issues. Limited availability of datasets from unconstrained
settings further limits the use of the state-of-the-art segmentation networks,
loss functions and learning strategies which have been built and validated for
RGB images. To address the challenge, we propose Self-Adversarial Multi-scale
Contrastive Learning (SAM-CL) framework as a new training strategy for thermal
image segmentation. SAM-CL framework consists of a SAM-CL loss function and a
thermal image augmentation (TiAug) module as a domain-specific augmentation
technique. We use the Thermal-Face-Database to demonstrate the effectiveness of
our approach. Experiments conducted on existing segmentation networks (UNET,
Attention-UNET, DeepLabV3 and HRNetv2) show consistent performance gains from
the SAM-CL framework. Furthermore, we present a qualitative analysis with the
UBComfort and DeepBreath datasets to discuss how our proposed methods perform
in handling unconstrained situations. Comment: Accepted at the British Machine Vision Conference (BMVC), 202
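The abstract does not detail the TiAug module, but the idea of a domain-specific thermal augmentation can be illustrated with a minimal sketch: randomly rescale the image's dynamic range (to mimic high-dynamic thermal range scenes) and insert an occluding patch. The function name and parameter choices below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def thermal_augment(img, rng=None):
    """Illustrative thermal-image augmentation (hypothetical sketch,
    not the paper's TiAug module): randomly rescales the dynamic
    range and inserts a rectangular occlusion patch."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.astype(np.float32).copy()

    # Mimic varying thermal dynamic range via random contrast scaling,
    # anchored at the coldest pixel so the floor temperature is preserved.
    lo = out.min()
    scale = rng.uniform(0.6, 1.4)
    out = (out - lo) * scale + lo

    # Mimic occlusion (e.g. glasses, hair, hands) with a cold patch
    # covering a quarter of each image dimension.
    h, w = out.shape
    ph, pw = max(1, h // 4), max(1, w // 4)
    y = rng.integers(0, h - ph)
    x = rng.integers(0, w - pw)
    out[y:y + ph, x:x + pw] = lo
    return out
```

In a training loop, such an augmentation would be applied on the fly to each thermal frame before it is fed to the segmentation network.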
Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging
Automatically monitoring and quantifying stress-induced thermal dynamic
information in real-world settings is an extremely important but challenging
problem. In this paper, we explore whether we can use mobile thermal imaging to
measure the rich physiological cues of mental stress that can be deduced from a
person's nose temperature. To answer this question we build i) a framework for
monitoring nasal thermal variability patterns continuously and ii) a novel set
of thermal variability metrics to capture the richness of the dynamic information.
We evaluated our approach in a series of studies including laboratory-based
psychosocial stress-induction tasks and real-world factory settings. We
demonstrate our approach has the potential for assessing stress responses
beyond controlled laboratory settings.
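The abstract does not specify the variability metrics, but the general idea of quantifying dynamic information in a nasal temperature time series can be sketched with standard signal statistics. The metric names and the drift estimate below are illustrative assumptions, not the paper's actual metric set.

```python
import numpy as np

def thermal_variability(temps, fs=1.0):
    """Toy variability metrics for a nasal-temperature time series
    (illustrative only; not the paper's metrics). `temps` is a 1-D
    sequence of temperatures sampled at `fs` Hz."""
    temps = np.asarray(temps, dtype=float)
    diffs = np.diff(temps)
    t = np.arange(temps.size) / fs
    return {
        "mean": temps.mean(),                    # average temperature
        "std": temps.std(),                      # overall spread
        "rmssd": np.sqrt(np.mean(diffs ** 2)),   # sample-to-sample variability
        "slope": np.polyfit(t, temps, 1)[0],     # linear drift per second
    }
```

Computed over sliding windows, metrics like these could track how nasal temperature dynamics change between rest and stress-induction periods.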
Movement sonification expectancy model: leveraging musical expectancy theory to create movement-altering sonifications
When designing movement sonifications, their effect on people's movement must be considered. Recent work has shown how real-time sonification can be designed to alter the way people move. However, the mechanisms through which these sonifications alter people's expectations of their movement are not well explained. This is especially important when considering musical sonifications, to which people bring their own associations and musical expectations, and which can, in turn, alter their perception of the sonification. This paper presents a Movement Expectation Sonification Model, based on theories of motor feedback and expectation, to explore how musical sonification can impact the way people perceive their movement. We then present a study that validates the predictions of this model by exploring how harmonic stability within sonification interacts with contextual cues in the environment to impact movement behaviour and perceptions. We show how musical expectancy can be built to either reward or encourage movement, and how such an effect is mediated through the presence of additional cues. This model offers a way for sonification designers to create movement sonifications that not only inform movement but can also be used to encourage progress and reward successes.