714 research outputs found
Timing and correction of stepping movements with a virtual reality avatar
Research into the ability to coordinate one's movements with external cues has focused on the use of simple rhythmic auditory and visual stimuli, or on interpersonal coordination with another person. Coordinating movements with a virtual avatar has not been explored in the context of responses to temporal cues. To determine whether cueing of movements using a virtual avatar is effective, people's ability to accurately coordinate with the stimuli needs to be investigated. Here we focus on temporal cues, as we know from timing studies that visual cues can be difficult to follow in the timing context.
Real stepping movements were mapped onto an avatar using motion capture data. Healthy participants were then motion captured whilst stepping in time with the avatar's movements, as viewed through a virtual reality headset. The timing of one of the avatar's step cycles was accelerated or decelerated by 15% to create a temporal perturbation, which participants would need to correct for in order to remain in time. Step onset times of participants relative to the corresponding step onsets of the avatar were used to measure the timing errors (asynchronies) between them. Participants completed either a visual-only condition or an auditory-visual condition with footstep sounds included, at two stepping tempos (Fast: 400 ms interval; Slow: 800 ms interval).
Participants' asynchronies exhibited slow drift in the Visual-Only condition, but became stable in the Auditory-Visual condition. Moreover, we observed a clear corrective response to the phase perturbation in both the fast and slow tempo auditory-visual conditions.
We conclude that an avatar's movements can be used to influence a person's own motion, but should include relevant auditory cues congruent with the movement to ensure a suitable level of entrainment is achieved. This approach has applications in physiotherapy, where virtual avatars present an opportunity to provide guidance that assists patients in adhering to prescribed exercises.
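The asynchrony measure described in this abstract (participant step onset minus the corresponding avatar step onset) is straightforward to compute. The sketch below uses illustrative numbers, not the study's actual data: an avatar stepping at the fast 400 ms interval has its fourth cycle decelerated by 15% (0.46 s), so a participant who keeps the old tempo lands early (negative asynchrony) and then corrects.

```python
import numpy as np

def asynchronies(participant_onsets, avatar_onsets):
    """Signed timing error of each participant step onset relative to
    the corresponding avatar step onset (negative = participant early)."""
    p = np.asarray(participant_onsets, dtype=float)
    a = np.asarray(avatar_onsets, dtype=float)
    return p - a

# Hypothetical onsets in seconds: the 4th avatar cycle is lengthened by 15%
avatar = np.array([0.00, 0.40, 0.80, 1.20, 1.66, 2.06])
participant = np.array([0.02, 0.41, 0.83, 1.21, 1.60, 2.04])

asyn = asynchronies(participant, avatar)
```

After the perturbed cycle, the asynchrony jumps negative (the participant steps before the delayed avatar step) and shrinks on the next step, which is the corrective response the abstract reports.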
A Posture Sequence Learning System for an Anthropomorphic Robotic Hand
The paper presents a cognitive architecture for posture learning of an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user, and to integrate the perceptions into a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge from different conceptual spaces, and to perform complex interaction with the human operator.
Developing a robot-guided interactive Simon game for physical and cognitive training
Enveloping cognitive or physical rehabilitation in a game greatly increases patients' commitment to their treatment. Especially with children, keeping them motivated is very time-consuming work, so therapists are demanding tools to help them with this task. NAOTherapist is a generic robotic architecture that uses Automated Planning techniques to autonomously drive non-contact upper-limb rehabilitation sessions for children with a humanoid NAO robot. Our aim is to develop more robotic games for this platform to enrich its variability and possibilities of interaction. The goal of this work is to present our first attempt to develop a different, more complex game that reuses the previous architecture. We contribute the design description of a novel robotic Simon game that employs upper-limb poses instead of colors and could qualify as both cognitive and physical training. Statistics from evaluation tests with 14 adults and 56 children are presented, and the outcomes are analyzed in terms of human-robot interaction (HRI) quality. The results demonstrate the application-domain generalization capabilities of the NAOTherapist architecture and provide insight for further analysis of the therapeutic benefits of the newly developed Simon game.
This work is partially funded by grants TIN2012-38079-C03-02 and TIN2015-65686-C5-1-R of the Spanish Ministerio de Economía y Competitividad. We also want to thank the Joan Miró school of Leganés for their assistance with the evaluations, the teachers and the management team for their support, and especially all the children who kindly participated in the evaluation and enjoyed playing with our robots.
Computational Methods for Cognitive and Cooperative Robotics
In recent decades, design methods in control engineering have made substantial progress in the areas of robotics and computer animation. Nowadays these methods incorporate the newest developments in machine learning and artificial intelligence. But the problems of flexible and online-adaptive combination of motor behaviors remain challenging for human-like animation and for humanoid robotics. In this context, biologically motivated methods for the analysis and re-synthesis of human motor programs provide new insights into, and models for, anticipatory motion synthesis.
This thesis presents the author's achievements in the areas of cognitive and developmental robotics, cooperative and humanoid robotics, and intelligent and machine learning methods in computer graphics. The first part of the thesis, in the chapter "Goal-directed Imitation for Robots", considers imitation learning in cognitive and developmental robotics. The work presented here details the author's progress in the development of hierarchical motion recognition and planning inspired by recent discoveries of the functions of mirror-neuron cortical circuits in primates. The overall architecture is capable of "learning for imitation" and "learning by imitation". The complete system includes a low-level, real-time-capable path planning subsystem for obstacle avoidance during arm reaching. The learning-based path planning subsystem is universal for all types of anthropomorphic robot arms and is capable of knowledge transfer at the level of individual motor acts.
Next, the problems of learning and synthesis of motor synergies, the spatial and spatio-temporal combinations of motor features in sequential multi-action behavior, and the problems of task-related action transitions are considered in the second part of the thesis, "Kinematic Motion Synthesis for Computer Graphics and Robotics". In this part, a new approach to modeling complex full-body human actions by mixtures of time-shift-invariant motor primitives is presented. The online-capable full-body motion generation architecture, based on dynamic movement primitives driving the time-shift-invariant motor synergies, was implemented as an online-reactive adaptive motion synthesis for computer graphics and robotics applications.
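Dynamic movement primitives, which this part of the thesis builds on, have a standard second-order formulation: a goal-directed spring-damper system modulated by a learned forcing term. As a minimal sketch of that formulation, not the thesis's actual implementation, a single-DOF discrete DMP with the forcing term omitted reduces to a critically damped system that converges to its goal:

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, dt=0.001, T=1.0,
                alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
    """Euler-integrate a single-DOF discrete DMP.

    tau * z' = alpha_z * (beta_z * (g - y) - z) + f(x)
    tau * y' = z
    tau * x' = -alpha_x * x   (canonical system, phase variable x)

    The learned forcing term f(x) is set to zero in this sketch, so the
    system is a critically damped spring pulling y from y0 to the goal g.
    """
    y, z, x = float(y0), 0.0, 1.0
    n = int(T / dt)
    traj = np.empty(n)
    for i in range(n):
        f = 0.0  # a trained DMP would evaluate basis functions of x here
        z += dt * (alpha_z * (beta_z * (g - y) - z) + f) / tau
        y += dt * z / tau
        x += dt * (-alpha_x * x) / tau
        traj[i] = y
    return traj

traj = dmp_rollout(y0=0.0, g=1.0)
```

With beta_z = alpha_z / 4 the spring is critically damped, so the rollout reaches the goal without overshoot; a learned forcing term (decaying with the phase x) shapes the transient into an arbitrary demonstrated trajectory while preserving this guaranteed convergence, which is what makes DMPs attractive for online-reactive synthesis.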
The last chapter of the thesis, entitled "Contraction Theory and Self-organized Scenarios in Computer Graphics and Robotics", is dedicated to optimal control strategies in multi-agent scenarios of large crowds of agents expressing highly nonlinear behaviors. This last part presents new mathematical tools for stability analysis and synthesis of multi-agent
cooperative scenarios.
Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures
BACKGROUND: The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might utilize different neural processes than those used for reading the emotions of human agents.
METHODOLOGY: Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial
expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while
participants were instructed to attend either to the emotion or to the motion depicted.
PRINCIPAL FINDINGS: Increased responses to robot compared to human stimuli in the occipital and posterior temporal
cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in
cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of
emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is
reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to
the emotion significantly increased response to robot, but not human facial expressions in the anterior part of the left
inferior frontal gyrus, a neural marker of motor resonance.
CONCLUSIONS: Motor resonance towards a humanoid robot's, but not a human's, display of facial emotion is increased when attention is directed towards judging emotions.
SIGNIFICANCE: Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the
perception of human actions
The shaping of social perception by stimulus and knowledge cues to human animacy
Although robots are becoming an ever-growing presence in society, we do not hold the same expectations for robots as we do for humans, nor do we treat them the same. As such, the ability to recognize cues to human animacy is fundamental for guiding social interactions. We review literature that demonstrates cortical networks associated with person perception, action observation and mentalizing are sensitive to human animacy information. In addition, we show that most prior research has explored stimulus properties of artificial agents (humanness of appearance or motion), with less investigation into knowledge cues (whether an agent is believed to have human or artificial origins). Therefore, currently little is known about the relationship between stimulus and knowledge cues to human animacy in terms of cognitive and brain mechanisms. Using fMRI, an elaborate belief manipulation, and human and robot avatars, we found that knowledge cues to human animacy modulate engagement of person perception and mentalizing networks, while stimulus cues to human animacy had less impact on social brain networks. These findings demonstrate that self-other similarities are not only grounded in physical features but are also shaped by prior knowledge. More broadly, as artificial agents fulfil increasingly social roles, a challenge for roboticists will be to manage the impact of pre-conceived beliefs while optimizing human-like design.
- …