
    Emotion capture based on body postures and movements

    In this paper we present a preliminary study for designing interactive systems that are sensitive to human emotions expressed through body movements. To do so, we first review the literature on the various approaches to defining and characterizing human emotions. After justifying the adopted characterization space for emotions, we then focus on the movement characteristics that the system must capture in order to recognize human emotions.

    Classification of human motion based on affective state descriptors

    Human body movements and postures carry emotion-specific information. On the basis of this motivation, the objective of this study is to analyze this information in the spatial and temporal structure of motion capture data and to extract features that are indicative of certain emotions in terms of affective state descriptors. Our contribution comprises identifying the descriptors that are directly or indirectly related to emotion classification in human motion, and conducting a comprehensive analysis of these descriptors (features), which fall into three categories: posture descriptors, dynamic descriptors, and frequency-based descriptors, in order to measure their performance in predicting the affective state of an input motion. The classification results demonstrate that no single category is sufficient by itself; the best prediction performance is achieved when all categories are combined. Copyright © 2013 John Wiley & Sons, Ltd.
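    The three descriptor categories named in the abstract above can be illustrated with a toy feature extractor. This is not the authors' implementation; it is a minimal sketch assuming motion-capture input as an array of 3D joint positions per frame, with one simplified descriptor standing in for each category (bounding-box volume for posture, speed/acceleration for dynamics, dominant oscillation frequency for the frequency-based group):

    ```python
    import numpy as np

    def extract_descriptors(joints, fps=30):
        """Compute simplified posture, dynamic, and frequency descriptors
        from a motion-capture clip of shape (frames, num_joints, 3)."""
        # Posture descriptor: mean bounding-box volume of the pose
        extent = joints.max(axis=1) - joints.min(axis=1)      # (frames, 3)
        posture = extent.prod(axis=1).mean()

        # Dynamic descriptors: mean joint speed and acceleration magnitude
        vel = np.diff(joints, axis=0) * fps                   # finite differences
        acc = np.diff(vel, axis=0) * fps
        mean_speed = np.linalg.norm(vel, axis=2).mean()
        mean_accel = np.linalg.norm(acc, axis=2).mean()

        # Frequency-based descriptor: dominant frequency of vertical motion
        z = joints[:, :, 2].mean(axis=1)                      # mean joint height
        spectrum = np.abs(np.fft.rfft(z - z.mean()))
        freqs = np.fft.rfftfreq(len(z), d=1.0 / fps)
        dominant_freq = freqs[spectrum.argmax()]

        return np.array([posture, mean_speed, mean_accel, dominant_freq])
    ```

    A classifier trained on such a vector would mirror the paper's finding only if all three groups are concatenated; dropping any slice of the vector corresponds to using a single category alone.
    
    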

    Positive/Negative Emotion Detection from RGB-D upper Body Images

    The ability to identify users' mental states represents a valuable asset for improving human-computer interaction. Considering that spontaneous emotions are conveyed mostly through facial expressions and upper body movements, we propose to use these modalities together for the purpose of negative/positive emotion classification. A method that allows the recognition of mental states from videos is proposed. Based on a dataset composed of RGB-D videos, a set of indicators of positive and negative emotion is extracted from the 2D (RGB) information. In addition, a geometric framework is proposed to model depth flows and capture human body dynamics from the depth data. Because spontaneous emotions are characterized by temporal changes in pixel and depth intensity, the depth features are used to define the relation between changes in upper body movements and affect. We describe a space of depth and texture information to detect people's mood from upper body postures and their evolution across time. The experimentation was performed on the Cam3D dataset and showed promising results.

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of both Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, which people often use to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, in turn, plays a key role in the action recognition and affective computing fields. The former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements; both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first, a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) is proposed for the recognition of sign language and semaphoric hand gestures; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, in the last module, a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs) is provided. The performance of LSTM-RNNs is explored in depth, owing to their ability to model the long-term contextual information of temporal sequences, which makes them suitable for analysing body movements. All the modules were tested on challenging datasets that are well known in the state of the art, showing remarkable results compared to current literature methods.
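    The two-branch skeleton model described in the abstract above can be sketched in miniature. The following is not the thesis code; it is an illustrative NumPy forward pass under assumed shapes, where one LSTM branch reads raw 2D joint coordinates, a second reads frame-to-frame motion differences, and their final hidden states are concatenated for classification:

    ```python
    import numpy as np

    def lstm_forward(x, params):
        """Run a single-layer LSTM over x of shape (T, input_dim);
        return the final hidden state."""
        W, U, b = params                       # gate weights: (4H, D), (4H, H), (4H,)
        H = U.shape[1]
        h, c = np.zeros(H), np.zeros(H)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        for x_t in x:
            gates = W @ x_t + U @ h + b
            i, f, g, o = np.split(gates, 4)    # input, forget, cell, output gates
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return h

    def two_branch_classify(skeleton, params_pos, params_mot, W_out):
        """Classify a 2D-skeleton sequence of shape (T, joints, 2):
        one branch reads joint positions, the other their frame-to-frame
        differences; final states are concatenated and linearly scored."""
        pos = skeleton.reshape(skeleton.shape[0], -1)   # (T, 2*joints)
        mot = np.diff(pos, axis=0)                      # motion branch input
        h = np.concatenate([lstm_forward(pos, params_pos),
                            lstm_forward(mot, params_mot)])
        return int((W_out @ h).argmax())
    ```

    The stacked variant in the thesis would feed each branch's hidden sequence into a further LSTM layer; the single-cell loop here only shows the recurrence and the late concatenation of the two branches.
    
    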

    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion, and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detecting human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict their ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one. By combining body posture, visual attention, and emotion, the multimodal approach reaches up to 93% accuracy when determining a player's chess expertise, while the unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
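    The multimodal-versus-unimodal comparison above is typically realized by late fusion of per-modality classifier outputs. The paper does not specify its fusion rule, so the following is a hypothetical sketch using weighted averaging of class-probability vectors from the three modalities it names (posture, visual attention, emotion):

    ```python
    import numpy as np

    def late_fusion(prob_posture, prob_gaze, prob_emotion,
                    weights=(1 / 3, 1 / 3, 1 / 3)):
        """Fuse per-modality class-probability vectors by weighted
        averaging; return the fused distribution and its argmax class."""
        probs = np.stack([prob_posture, prob_gaze, prob_emotion])
        fused = np.average(probs, axis=0, weights=weights)
        fused = fused / fused.sum()            # renormalize for safety
        return fused, int(fused.argmax())
    ```

    A unimodal baseline corresponds to setting two of the weights to zero; the reported gain (93% vs. 86%) comes from the complementary errors the modalities make, which averaging can cancel.
    
    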

    The role of human body movements in mate selection

    It is common scientific knowledge that most of what we say within a conversation is expressed not through the words' meaning alone, but also through our gestures, postures, and body movements. This non-verbal mode is possibly rooted firmly in our human evolutionary heritage, and as such, some scientists argue that it serves as a fundamental assessment and expression tool for our inner qualities. Studies of nonverbal communication have established that a universal, culture-free, non-verbal sign system exists that is available to all individuals for negotiating social encounters. Thus, it is not only the kind of gestures and expressions humans use in social communication that matters, but also the way these movements are performed, as this seems to convey key information about an individual's quality. Dance, for example, is a special form of movement, which can be observed in human courtship displays. Recent research suggests that people are sensitive to the variation in dance movements, and that dance performance provides information about an individual's mate quality in terms of health and strength. This article reviews the role of body movement in human non-verbal communication, and highlights its significance in human mate preferences in order to promote future work in this research area within the evolutionary psychology framework.

    Preface: Facial and Bodily Expressions for Control and Adaptation of Games
