Interactive Robot Learning of Gestures, Language and Affordances
A growing field in robotics and Artificial Intelligence (AI) research is
human-robot collaboration, whose goal is to enable effective teamwork between
humans and robots. However, in many situations human teams are still superior
to human-robot teams, primarily because human teams can easily agree on a
common goal with language, and the individual members observe each other
effectively, leveraging their shared motor repertoire and sensorimotor
resources. This paper shows that for cognitive robots it is possible, and
indeed fruitful, to combine knowledge acquired from interacting with elements
of the environment (affordance exploration) with the probabilistic observation
of another agent's actions.
We propose a model that unites (i) learning robot affordances and word
descriptions with (ii) statistical recognition of human gestures with vision
sensors. We discuss theoretical motivations, possible implementations, and we
show initial results which highlight that, after having acquired knowledge of
its surrounding environment, a humanoid robot can generalize this knowledge to
the case when it observes another agent (human partner) performing the same
motor actions previously executed during training.
Comment: code available at https://github.com/gsaponaro/glu-gesture
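The combination described above can be sketched as a simple probabilistic marginalization: a gesture recognizer yields a posterior over which action the human is performing, and the robot's self-learned affordance model predicts the effect of each action on an object. The numbers, action/effect labels, and the flat tabular form of the affordance model below are illustrative assumptions, not taken from the paper, which uses a richer Bayesian network.

```python
import numpy as np

# Hypothetical affordance model learned from the robot's own motor
# exploration: P(effect | action) for one object (illustrative numbers).
actions = ["grasp", "tap", "touch"]
effects = ["moves", "stays"]
p_effect_given_action = np.array([
    [0.2, 0.8],   # grasp: object mostly stays in the hand's frame
    [0.9, 0.1],   # tap:   object mostly moves
    [0.3, 0.7],   # touch
])

# Assumed posterior over the observed human action, e.g. from a
# gesture recognizer running on vision data.
p_action_given_gesture = np.array([0.1, 0.8, 0.1])

# Marginalize out the action:
# P(effect | gesture) = sum_a P(effect | a) * P(a | gesture)
p_effect = p_action_given_gesture @ p_effect_given_action
print(dict(zip(effects, np.round(p_effect, 3))))
```

With these made-up numbers the robot, having mostly recognized a tap, predicts the object will most likely move, which is the sense in which self-acquired affordance knowledge generalizes to observed human actions.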
Exploiting the robot kinematic redundancy for emotion conveyance to humans as a lower priority task
Current approaches do not allow robots to execute a task and simultaneously convey emotions to users through their body motions. This paper explores the capabilities of the Jacobian null space of a humanoid robot to convey emotions. A task-priority formulation has been implemented on a Pepper robot which allows the specification of a primary task (waving gesture, transportation of an object, etc.) and exploits the kinematic redundancy of the robot to convey emotions to humans as a lower-priority task. The emotions, defined by Mehrabian as points in the pleasure-arousal-dominance space, generate intermediate motion features (jerkiness, activity and gaze) that carry the emotional information. A map from these features to the joints of the robot is presented. A user study has been conducted in which emotional motions were shown to 30 participants. The results show that happiness and sadness are very well conveyed to the user, calm is moderately well conveyed, and fear is not well conveyed. An analysis of the dependencies between the motion features and the emotions perceived by the participants shows that activity correlates positively with arousal, jerkiness is not perceived by the user, and gaze conveys dominance when activity is low. The results indicate a strong influence of the most energetic motions of the emotional task and point out new directions for further research. Overall, the results show that the null-space approach can be regarded as a promising means to convey emotions as a lower-priority task.
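The task-priority idea is the classical redundancy-resolution scheme: the primary task velocity is achieved through the Jacobian pseudoinverse, and any secondary joint velocity (here, the emotion-conveying motion) is projected into the null space of the primary task so it cannot disturb it. A minimal sketch, with toy dimensions and random values standing in for a real robot model:

```python
import numpy as np

def task_priority_step(J, dx_primary, dq_secondary):
    """One differential-kinematics step: realize the primary task via the
    pseudoinverse, and project the secondary (e.g. emotion-conveying)
    joint velocity into the primary task's null space."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector of J
    return J_pinv @ dx_primary + N @ dq_secondary

# Toy example: a 2-DoF task on a 4-DoF arm leaves redundancy to exploit.
rng = np.random.default_rng(0)
J = rng.standard_normal((2, 4))           # stand-in for the task Jacobian
dx = np.array([0.1, -0.05])               # desired primary task velocity
dq = task_priority_step(J, dx, rng.standard_normal(4))

# The secondary motion does not perturb the primary task: J @ dq == dx
print(np.allclose(J @ dq, dx))
```

The key property is that `J @ N` vanishes (for a full-row-rank Jacobian), so whatever the secondary velocity encodes, the end-effector still tracks the primary task exactly; a full implementation would add joint limits and singularity handling.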
Synthesis of variable dancing styles based on a compact spatiotemporal representation of dance
Dance, as a complex expressive form of motion, is able to convey emotion, meaning and social idiosyncrasies that open channels for non-verbal communication, and promotes rich cross-modal interactions with music and the environment. As such, realistic dancing characters may incorporate cross-modal information and variability of the dance forms through compact representations that describe the movement structure in terms of its spatial and temporal organization. In this paper, we propose a novel method for synthesizing beat-synchronous dancing motions based on a compact topological model of dance styles, previously captured with a motion capture system. The model is based on Topological Gesture Analysis (TGA), which conveys a discrete three-dimensional point-cloud representation of the dance by describing the spatiotemporal variability of its gestural trajectories as uniform spherical distributions, according to classes of the musical meter. The methodology for synthesizing the modeled dance maps the topological representations, constrained by definable metrical and spatial parameters, back into complete dance instances whose variability is controlled by stochastic processes that consider both the TGA distributions and the kinematic constraints of the body morphology. In order to assess the relevance and flexibility of each parameter in feasibly reproducing the style of the captured dance, we correlated both captured and synthesized trajectories of samba dancing sequences in relation to the model's level of compression, and report on a subjective evaluation over a set of six tests. The achieved results validate our approach, suggesting that a periodic dancing style, and its musical synchrony, can be feasibly reproduced from a suitably parametrized discrete spatiotemporal representation of the gestural motion trajectories, with a notable degree of compression.
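The synthesis pipeline can be caricatured as: per metric class, sample a keyframe from that class's spatial distribution, then interpolate the keyframes into a beat-synchronous trajectory. The sketch below is an assumption-laden simplification: isotropic Gaussians stand in for the TGA spherical distributions, the means and spread are invented, and kinematic constraints are omitted.

```python
import numpy as np

# Hypothetical compact model: for each class of the musical meter (here,
# the four beats of a 4/4 bar), a mean 3-D hand position plus an isotropic
# spread standing in for the TGA spherical distribution (invented values).
beat_means = np.array([[ 0.3,  0.0, 1.0],
                       [ 0.1,  0.2, 1.2],
                       [-0.2,  0.1, 1.1],
                       [ 0.0, -0.2, 0.9]])
beat_spread = 0.03  # metres; larger spread -> more stylistic variability

def synthesize_bar(rng, frames_per_beat=12):
    """Sample one keyframe per metric class, then linearly interpolate
    into a beat-synchronous trajectory for one bar."""
    keys = beat_means + rng.normal(0.0, beat_spread, beat_means.shape)
    keys = np.vstack([keys, keys[:1]])        # wrap around to close the bar
    t = np.linspace(0, len(beat_means), len(beat_means) * frames_per_beat)
    return np.stack([np.interp(t, np.arange(len(keys)), keys[:, d])
                     for d in range(3)], axis=1)

rng = np.random.default_rng(1)
traj = synthesize_bar(rng)
print(traj.shape)  # one bar of 3-D positions: (48, 3)
```

Each call with a fresh random state yields a different but stylistically consistent bar, which is the essence of controlling variability through a stochastic process over a compact per-beat representation.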
Using Gestures to Resolve Lexical Ambiguity in Storytelling with Humanoid Robots
Gestures that co-occur with speech are a fundamental component of communication. Prior research with children suggests that gestures may help them to resolve certain forms of lexical ambiguity, including homophones. To test this idea in the context of human-robot interaction, the effects of iconic and deictic gestures on the understanding of homophones were assessed in an experiment where a humanoid robot told a short story containing pairs of homophones to small groups of young participants, accompanied by either expressive gestures or no gestures. Both groups of subjects completed a pretest and post-test to measure their ability to discriminate between pairs of homophones, and we calculated aggregated precision. The results show that the use of iconic and deictic gestures aids the general understanding of homophones, providing additional evidence for the importance of gesture in the development of children's language and communication skills.
Study of the Importance of Adequacy to Robot Verbal and Nonverbal Communication in Human-Robot Interaction
The Robadom project aims at creating a homecare robot that helps and assists
people in their daily life, either by doing tasks for the human or by managing
their daily organization. A robot can take on this kind of role only if it is
accepted by humans. Before considering the robot's appearance, we decided to
evaluate the importance of the relation between verbal and nonverbal
communication during a human-robot interaction, in order to determine the
situations in which the robot is accepted. We conducted two experiments to
study this acceptance. The first experiment studied the importance of the
robot's nonverbal behavior being consistent with its verbal behavior. The
second experiment studied the capability of a robot to provide a correct
human-robot interaction.
Comment: presented at the 43rd International Symposium on Robotics (ISR 2012), Taipei, Taiwan (2012).
Social Situatedness: Vygotsky and Beyond
The concept of 'social situatedness', i.e. the idea that the development of individual intelligence requires a social (and cultural) embedding, has recently received much attention in cognitive science and artificial intelligence research. The work of Lev Vygotsky, who put forward this view as early as the 1920s, has influenced the discussion to some degree, but still remains far from well known. This paper therefore aims to give an overview of his cognitive development theory and discuss its relation to more recent work in primatology and socially situated artificial intelligence, in particular humanoid robotics.