715 research outputs found

    Face and Body gesture recognition for a vision-based multimodal analyser

    Full text link
    For the computer to interact intelligently with human users, computers should be able to recognize emotions by analyzing the human's affective state, physiology and behavior. In this paper, we present a survey of research conducted on face and body gesture analysis and recognition. In order to make human-computer interfaces truly natural, we need to develop technology that tracks human movement, body behavior and facial expression, and interprets these movements in an affective way. Accordingly, in this paper we present a framework for a vision-based multimodal analyzer that combines face and body gesture and further discuss relevant issues.

    Machine Understanding of Human Behavior

    Get PDF
    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.

    Face and body gesture analysis for multimodal HCI

    Full text link
    Humans use their faces, hands and body as an integral part of their communication with others. For the computer to interact intelligently with human users, computers should be able to recognize emotions, by analyzing the human's affective state, physiology and behavior. Multimodal interfaces allow humans to interact with machines through multiple modalities such as speech, facial expression, gesture, and gaze. In this paper, we present an overview of research conducted on face and body gesture analysis and recognition. In order to make human-computer interfaces truly natural, we need to develop technology that tracks human movement, body behavior and facial expression, and interprets these movements in an affective way. Accordingly, in this paper we present a vision-based framework that combines face and body gesture for multimodal HCI. © Springer-Verlag Berlin Heidelberg 2004

    On the simulation of interactive non-verbal behaviour in virtual humans

    Get PDF
    Development of virtual humans has focused mainly on two broad areas: conversational agents and computer game characters. Computer game characters have traditionally been action-oriented, focused on game-play, while conversational agents have focused on sensible, intelligent conversation. Although virtual humans have incorporated some form of non-verbal behaviour, it has been quite limited and, more importantly, only loosely connected (or not connected at all) to the behaviour of the real human interacting with the virtual human, owing to a lack of sensor data and of any system to respond to that data. The interactional aspect of non-verbal behaviour is highly important in human-human interaction, and previous research has demonstrated that people treat media (and therefore virtual humans) as real people, so interactive non-verbal behaviour is also important in the development of virtual humans. This paper presents the challenges in creating virtual humans that are non-verbally interactive and, drawing corollaries with the development history of control systems in robotics, presents some approaches to solving these challenges, specifically behaviour-based systems. It shows how an order-of-magnitude improvement in the response time of virtual humans in conversation can be obtained, and that the development of rapidly responding non-verbal behaviours can start with just a few behaviours, with more added without difficulty later in development.
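
    Purely as an illustration of the behaviour-based approach mentioned in the abstract above, the sketch below shows a reactive control loop in which a few prioritized behaviours respond directly to sensor input on every cycle. The sensor fields, behaviour names and priorities are assumptions for the example, not the paper's actual system.

```python
# Hypothetical behaviour-based (reactive) controller for non-verbal behaviour,
# in the spirit of robotics-style behaviour-based systems. Sensor fields and
# behaviours are illustrative assumptions only.
import random

def gaze_follow(sensors):
    # Highest priority: orient gaze toward the tracked face, if any.
    if sensors.get("face_position") is not None:
        return ("look_at", sensors["face_position"])
    return None

def nod_on_pause(sensors):
    # Back-channel nod when the user pauses speaking.
    return ("nod", None) if sensors.get("speech_pause") else None

def idle_blink(sensors):
    # Low-priority idle behaviour keeps the character looking alive.
    return ("blink", None) if random.random() < 0.05 else None

BEHAVIOURS = [gaze_follow, nod_on_pause, idle_blink]  # ordered by priority

def tick(sensors):
    """One fast control cycle: the first behaviour that fires wins."""
    for behaviour in BEHAVIOURS:
        action = behaviour(sensors)
        if action is not None:
            return action
    return ("rest", None)

print(tick({"face_position": (0.1, 0.4), "speech_pause": True}))
```

    Because each cycle only evaluates a few cheap rules against the latest sensor data, such a loop can run at sensor rate, which is the intuition behind the responsiveness gain the paper attributes to behaviour-based control.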

    Machine Analysis of Facial Expressions

    Get PDF
    No abstract

    A Multimodal Perception Framework for Users Emotional State Assessment in Social Robotics

    Get PDF
    In this work, we present an unobtrusive and non-invasive perception framework based on the synergy between two main acquisition systems: the Touch-Me Pad, consisting of two electronic patches for physiological signal extraction and processing, and the Scene Analyzer, a visual-auditory perception system specifically designed for the detection of social and emotional cues. We explain how the information extracted by this kind of framework is particularly suitable for social robotics applications and how the system has been designed for use in human-robot interaction scenarios.
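
    As a rough illustration of how two such acquisition channels might be combined, the sketch below merges assumed physiological readings (as from the patches) with audio-visual cues (as from a scene analyzer) into a simple valence-arousal estimate. All signal names, ranges and weights are invented for the example and do not reflect the Touch-Me Pad or Scene Analyzer interfaces.

```python
# Hypothetical fusion of physiological and audio-visual cues into a crude
# (valence, arousal) estimate. Ranges and weights are assumptions only.

def estimate_arousal(heart_rate_bpm, skin_conductance_us):
    """Map assumed physiological readings to an arousal score in [0, 1]."""
    hr = min(max((heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    sc = min(max(skin_conductance_us / 20.0, 0.0), 1.0)
    return 0.5 * hr + 0.5 * sc

def estimate_valence(smile_score, voice_pitch_var):
    """Map assumed audio-visual cues to a valence score in [-1, 1]."""
    return max(min(smile_score - 0.3 * voice_pitch_var, 1.0), -1.0)

def fuse_emotional_state(physio, audiovisual):
    """Combine both channels into a single emotional-state estimate."""
    return {
        "arousal": estimate_arousal(**physio),
        "valence": estimate_valence(**audiovisual),
    }

print(fuse_emotional_state(
    physio={"heart_rate_bpm": 95, "skin_conductance_us": 8.0},
    audiovisual={"smile_score": 0.7, "voice_pitch_var": 0.2},
))
```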

    Vision-based interaction within a multimodal framework

    Get PDF
    Our contribution is to the field of video-based interaction techniques and is integrated in the home environment of the EMBASSI project. This project addresses innovative methods of man-machine interaction achieved through the development of intelligent assistance and anthropomorphic user interfaces. Within this project, multimodal techniques are a basic requirement, especially those related to the integration of modalities. We use a stereoscopic approach to allow the natural selection of devices via pointing gestures: the pointing hand is segmented from the video images, and the 3D position and orientation of the forefinger are calculated. This modality is subsequently integrated with speech in the context of a multimodal interaction infrastructure. In a first phase, we use semantic fusion with amodal input, considering the modalities in a so-called late-fusion state.
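
    To make the late (semantic-level) fusion of pointing and speech concrete, the sketch below resolves a 3D pointing ray against known device positions and combines the result with a speech intent. Device names, coordinates and the command format are assumptions for the example, not the EMBASSI interfaces.

```python
# Hypothetical semantic-level (late) fusion of a pointing gesture and a speech
# command: speech supplies the action, pointing supplies the target device.
import numpy as np

# Assumed device positions in room coordinates (metres).
DEVICES = {"tv": np.array([2.0, 1.0, 0.5]), "lamp": np.array([0.5, 2.5, 1.2])}

def resolve_pointing(finger_pos, finger_dir):
    """Return the device whose position lies closest to the pointing ray."""
    d = finger_dir / np.linalg.norm(finger_dir)
    best, best_dist = None, float("inf")
    for name, pos in DEVICES.items():
        v = pos - finger_pos
        dist = np.linalg.norm(v - np.dot(v, d) * d)  # distance from the ray
        if dist < best_dist:
            best, best_dist = name, dist
    return best

def fuse(speech_intent, finger_pos, finger_dir):
    """Late fusion: merge the spoken action with the pointed-at target."""
    return {"action": speech_intent,
            "target": resolve_pointing(finger_pos, finger_dir)}

# "Switch that on" while pointing roughly toward the lamp.
print(fuse("switch_on",
           np.array([1.0, 1.0, 1.5]),
           np.array([-0.3, 1.0, -0.2])))
```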

    Fusing face and body gesture for machine recognition of emotions

    Full text link
    Research shows that humans are more likely to consider computers to be human-like when those computers understand and display appropriate nonverbal communicative behavior. Most existing systems attempting to analyze human nonverbal behavior focus only on the face; research that aims to integrate gesture as an expressive means has only recently emerged. This paper presents an approach to automatic visual recognition of expressive face and upper-body action units (FAUs and BAUs) suitable for use in a vision-based affective multimodal framework. After describing the feature extraction techniques, classification results from three subjects are presented. First, individual classifiers are trained separately with face and body features for classification into FAU and BAU categories. Second, the same procedure is applied for classification into labeled emotion categories. Finally, we fuse face and body information for classification into combined emotion categories. In our experiments, emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using the face modality alone. © 2005 IEEE
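
    As a minimal sketch of the kind of bimodal fusion described above, the example below trains one classifier per modality and fuses them at decision level by averaging class posteriors; a feature-level variant simply concatenates the two feature sets. The feature arrays, label set and classifier choice are placeholders, not the authors' FAU/BAU data or method.

```python
# Hypothetical late (decision-level) and early (feature-level) fusion of face
# and body features for emotion classification. Data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 120
face_feats = rng.normal(size=(n_samples, 30))   # e.g. facial action unit features
body_feats = rng.normal(size=(n_samples, 20))   # e.g. body action unit features
labels = rng.integers(0, 6, size=n_samples)     # six assumed emotion categories

# One classifier per modality.
face_clf = SVC(probability=True).fit(face_feats, labels)
body_clf = SVC(probability=True).fit(body_feats, labels)

# Decision-level fusion: average per-class posteriors, then pick the best class
# (both classifiers were trained on the same label set, so classes_ match).
avg_proba = (face_clf.predict_proba(face_feats) +
             body_clf.predict_proba(body_feats)) / 2.0
fused_pred = face_clf.classes_[np.argmax(avg_proba, axis=1)]

# Feature-level fusion: concatenate the modalities before training.
early_fused_clf = SVC().fit(np.hstack([face_feats, body_feats]), labels)

print(fused_pred[:10])
```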