Affective Facial Expression Processing via Simulation: A Probabilistic Model
Understanding the mental states of other people is an important skill for
intelligent agents and robots operating in social environments. However,
the mental processes involved in `mind-reading' are complex. One explanation of
these processes is Simulation Theory, which is supported by a large body of
neuropsychological research. Yet the best computational model or theory to use
for simulation-style emotion detection is far from settled.
In this work, we use Simulation Theory and neuroscience findings on
Mirror-Neuron Systems as the basis for a novel computational model, as a way to
handle affective facial expressions. The model is based on a probabilistic
mapping of observations from multiple identities onto a single fixed identity
(`internal transcoding of external stimuli'), and then onto a latent space
(`phenomenological response'). Together with the proposed architecture we
present some promising preliminary results.
Comment: Annual International Conference on Biologically Inspired Cognitive
Architectures - BICA 201
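The abstract above describes a two-stage probabilistic mapping: observations of many identities are first transcoded onto a single fixed internal identity, then projected into a latent space. A minimal sketch of that pipeline, assuming (purely for illustration) that both stages are linear maps over feature vectors, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: facial-expression feature vectors from several
# observed identities are first mapped ("internal transcoding of external
# stimuli") onto one fixed internal identity, then projected into a
# low-dimensional latent space ("phenomenological response").

DIM, LATENT = 16, 3

# One linear transcoding map per observed identity (an assumption made
# here; the paper's model is probabilistic, not a fixed linear map).
transcoders = {i: rng.normal(size=(DIM, DIM)) / np.sqrt(DIM) for i in range(3)}

def transcode(identity, features):
    """Map an external observation onto the fixed internal identity."""
    return transcoders[identity] @ features

# Latent projection, here just a fixed random basis for illustration.
latent_basis = rng.normal(size=(LATENT, DIM)) / np.sqrt(DIM)

def respond(identity, features):
    """Full pipeline: external stimulus -> internal identity -> latent response."""
    internal = transcode(identity, features)
    return latent_basis @ internal

z = respond(1, rng.normal(size=DIM))
print(z.shape)  # latent response is LATENT-dimensional
```

The point of the sketch is the structure, not the maps themselves: all identities funnel through one internal representation before the latent response is computed.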
Development of an Autonomous Visual Perception System for Robots Using Object-Based Visual Attention
Robot in the mirror: toward an embodied computational model of mirror self-recognition
Self-recognition or self-awareness is a capacity typically attributed only to
humans and a few other species. The definitions of these concepts vary, and
little is known about the mechanisms behind them. However, there is a Turing
test-like benchmark: mirror self-recognition, which consists of covertly
putting a mark on the face of the tested subject, placing her in front of a
mirror, and observing her reactions. In this work, first, we provide a
mechanistic decomposition, or process model, of the components required to pass
this test. Based on these, we provide suggestions for empirical research. In
particular, in our view, the way infants or animals reach for the mark
should be studied in detail. Second, we develop a model to enable the humanoid
robot Nao to pass the test. The core of our technical contribution is learning
the appearance representation and visual novelty detection by means of learning
the generative model of the face with deep auto-encoders and exploiting the
prediction error. The mark is identified as a salient region on the face and
reaching action is triggered, relying on a previously learned mapping to arm
joint angles. The architecture is tested on two robots with completely
different faces.
Comment: To appear in KI - Künstliche Intelligenz - German Journal of
Artificial Intelligence - Springe
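The core mechanism in the abstract above, detecting the mark as the region with the largest prediction error of a generative face model, can be sketched without deep-learning dependencies. The sketch below uses PCA as a linear stand-in for the paper's deep auto-encoder (an assumption made to keep the example self-contained); the logic is the same: reconstruct the face from a model trained on mark-free faces, and flag pixels the model cannot explain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generative face model learned from mark-free faces.
# PCA here stands in for the paper's deep auto-encoder.
H = W = 8
faces = rng.normal(size=(200, H * W))           # synthetic "face" images
mean = faces.mean(axis=0)
_, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
basis = vt[:10]                                  # top-10 principal components

def reconstruction_error(img):
    """Per-pixel squared prediction error of the face model."""
    centered = img - mean
    recon = basis.T @ (basis @ centered)         # project into face subspace
    return (centered - recon) ** 2

# A test face with a bright "mark" added at one pixel.
test = faces[0].copy()
test[27] += 25.0
err = reconstruction_error(test).reshape(H, W)
mark = np.unravel_index(err.argmax(), err.shape)
print(mark)  # location of the most salient (least predictable) region
```

The mark is salient precisely because it was absent from the training faces, so the learned model cannot reconstruct it; a reaching action would then be triggered at that location via the previously learned visuomotor mapping.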
Robot task planning and explanation in open and uncertain worlds
A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.
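The two ideas in the abstract above, a layered knowledge base and beliefs tagged by how the robot came to hold them, can be sketched as a small data structure. All names below are illustrative, not the paper's API; in particular, the failure-explanation rule shown (re-examine the assumptions) is only the simplest instance of the idea.

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    fact: str
    source: str  # "observed" (epistemic effect) or "assumed" (assumption)

@dataclass
class KnowledgeBase:
    # The three layers; upper layers may revise the ones below.
    instance: list = field(default_factory=list)     # bottom: instance knowledge
    commonsense: list = field(default_factory=list)  # middle: commonsense knowledge
    diagnostic: list = field(default_factory=list)   # top: diagnostic knowledge

    def observe(self, fact):
        """Epistemic effect: 'I believe X because I saw it.'"""
        self.instance.append(Belief(fact, "observed"))

    def assume(self, fact):
        """Assumption: 'I'll assume X to be true.'"""
        self.instance.append(Belief(fact, "assumed"))

    def explain_failure(self):
        """On task failure, the assumptions are the candidate explanations."""
        return [b.fact for b in self.instance if b.source == "assumed"]

kb = KnowledgeBase()
kb.observe("cup on table_1")   # grounded in perception
kb.assume("door_2 is open")    # taken on faith by the planner
print(kb.explain_failure())    # the assumption is what gets re-examined
```

Separating observed from assumed beliefs is what makes failure explanation tractable: when a plan fails, the planner knows exactly which facts it never verified.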
Multimodal Computational Attention for Scene Understanding
Robotic systems have limited computational capacities. Hence, computational attention models are important for focusing on specific stimuli and allowing for complex cognitive processing. For this purpose, we developed auditory and visual attention models that enable robotic platforms to efficiently explore and analyze natural scenes. To allow for attention guidance in human-robot interaction, we use machine learning to integrate the influence of verbal and non-verbal social signals into our models.
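One common way to realize the multimodal attention described above is to fuse per-modality saliency maps into a single master map, with fusion weights that social signals can shift. The sketch below assumes this saliency-map formulation and a simple convex fusion rule; both the names and the rule are illustrative, not the paper's method.

```python
import numpy as np

def fuse(visual, auditory, w_aud=0.5):
    """Convex fusion of normalized visual and auditory saliency maps.

    w_aud is the auditory weight; a social signal (e.g., a verbal cue
    pointing to a sound source) could raise it at runtime.
    """
    visual = visual / visual.sum()      # normalize each map to sum to 1
    auditory = auditory / auditory.sum()
    return (1 - w_aud) * visual + w_aud * auditory

# Toy 2x2 scene: the visual peak and the auditory peak disagree.
vis = np.array([[0.1, 0.9], [0.1, 0.1]])  # visually salient at (0, 1)
aud = np.array([[0.1, 0.1], [0.1, 0.9]])  # auditorily salient at (1, 1)

# With a low auditory weight, attention follows the visual peak...
print(np.unravel_index(fuse(vis, aud, 0.2).argmax(), (2, 2)))
# ...and a cue that raises the auditory weight redirects it to the sound.
print(np.unravel_index(fuse(vis, aud, 0.8).argmax(), (2, 2)))
```

The design point is that modulating a single scalar weight lets a verbal or non-verbal social signal redirect the robot's focus without recomputing either modality's saliency map.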
Machine Analysis of Facial Expressions
No abstract
Affective Computing
This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, in the second section (Chapters 8 to 11) we present research on perception and generation of emotional expressions using full-body motions. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. In the last section of the book (Chapters 17 to 22) we present applications related to affective computing.