Bayesian perception of touch for control of robot emotion
In this paper, we present a Bayesian approach to the perception of touch and the control of robot emotion. Touch is an important sensing modality for the development of social robots, and it is used in this work as a stimulus during human-robot interaction. A Bayesian framework is proposed for the perception of various types of touch. This method, together with a sequential analysis approach, allows the robot to accumulate evidence from its interaction with humans to achieve accurate touch perception for adaptable control of robot emotions. Facial expressions are used to represent the emotions of the iCub humanoid. Emotions in the robotic platform, based on facial expressions, are handled by a control architecture that works with the output of the touch perception process. We validate the accuracy of our system with simulated and real robot touch experiments. Results show that our method is suitable and accurate for the perception of touch to control robot emotions, which is essential for the development of sociable robots.
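The evidence-accumulation scheme described above can be sketched as a sequential Bayesian update: at each tactile sample, class-conditional likelihoods reweight the posterior until one touch class exceeds a decision threshold. This is only a minimal illustration; the touch classes, the scalar pressure feature, and the Gaussian likelihood parameters below are invented assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical touch classes with Gaussian class-conditional likelihoods
# over a single pressure feature; all parameters are illustrative only.
CLASSES = ["pat", "stroke", "poke"]
MEANS = np.array([0.3, 0.1, 0.8])
STDS = np.array([0.10, 0.05, 0.10])

def gaussian_pdf(x, mu, sigma):
    """Evaluate a Gaussian density elementwise for each class."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def sequential_touch_perception(observations, threshold=0.95):
    """Accumulate evidence over tactile samples until one class's
    posterior belief exceeds the decision threshold (sequential analysis)."""
    posterior = np.full(len(CLASSES), 1.0 / len(CLASSES))  # uniform prior
    for t, x in enumerate(observations, start=1):
        posterior = posterior * gaussian_pdf(x, MEANS, STDS)  # Bayes update
        posterior /= posterior.sum()                          # normalise
        if posterior.max() >= threshold:
            return CLASSES[int(posterior.argmax())], t, posterior
    return CLASSES[int(posterior.argmax())], len(observations), posterior

# High-pressure samples should accumulate evidence for "poke".
label, steps, belief = sequential_touch_perception([0.78, 0.82, 0.80])
```

In a full system, the decided touch class would then drive the emotion-control architecture (e.g., selecting a facial expression), and the threshold trades decision speed against perception accuracy.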
Sensorimotor representation learning for an "active self" in robots: A model survey
Safe human-robot interactions require robots to be able to learn how to
behave appropriately in spaces populated by people
and thus to cope with the challenges posed by our dynamic and unstructured
environment, rather than being provided a rigid set of rules for operations. In
humans, these capabilities are thought to be related to our ability to perceive
our body in space, sensing the location of our limbs during movement, being
aware of other objects and agents, and controlling our body parts to interact
with them intentionally. Toward the next generation of robots with bio-inspired
capacities, in this paper, we first review the developmental processes of
underlying mechanisms of these abilities: The sensory representations of body
schema, peripersonal space, and the active self in humans. Second, we provide a
survey of robotics models of these sensory representations and robotics models
of the self; and we compare these models with the human counterparts. Finally,
we analyse what is missing from these robotics models and propose a theoretical
computational framework, which aims to allow the emergence of the sense of self
in artificial agents by developing sensory representations through
self-exploration.
Funding: Deutsche Forschungsgemeinschaft (Projekt DEAL).
Coaching Imagery to Athletes with Aphantasia
We administered the Plymouth Sensory Imagery Questionnaire (Psi-Q), which tests multi-sensory imagery, to athletes (n=329) from 9 different sports to locate poor/aphantasic imagers (baseline scores <4.2/10), with the aim of subsequently enhancing imagery ability. The low-imagery sample (n=27) was randomly split into two groups who received the intervention, Functional Imagery Training (FIT), either immediately or delayed by one month, at which point the delayed group was tested again on the Psi-Q. All participants were tested after FIT delivery and six months post-intervention. The delayed group showed no significant change between baseline and the start of FIT delivery, but both groups' imagery scores improved significantly (p=0.001) after the intervention, and the improvement was maintained six months post-intervention. This indicates that imagery can be trained and improvements maintained in poor imagers, including those who identify as having aphantasia (although one participant did not improve on visual scores). Follow-up interviews (n=22) on sporting application revealed that the majority now use imagery daily on process goals. Recommendations are given for ways to assess and train imagery in an applied sport setting.
Memory and mental time travel in humans and social robots.
From neuroscience, brain imaging and the psychology of memory, we are beginning to assemble an integrated theory of the brain subsystems and pathways that allow the compression, storage and reconstruction of memories for past events and their use in contextualizing the present and reasoning about the future: mental time travel (MTT). Using computational models, embedded in humanoid robots, we are seeking to test the sufficiency of this theoretical account and to evaluate the usefulness of brain-inspired memory systems for social robots. In this contribution, we describe the use of machine learning techniques, specifically Gaussian process latent variable models, to build a multimodal memory system for the iCub humanoid robot, and summarize results of the deployment of this system for human-robot interaction. We also outline the further steps required to create a more complete robotic implementation of human-like autobiographical memory and MTT. We propose that generative memory models, such as those that form the core of our robot memory system, can provide a solution to the symbol grounding problem in embodied artificial intelligence. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Funding. The preparation of this chapter was supported by funding from the EU Seventh Framework Programme as part of the projects Experimental Functional Android Assistant (EFAA, FP7-ICT-270490) and What You Say Is What You Did (WYSIWYD, FP7-ICT-612139), and by the EU H2020 Programme as part of the Human Brain Project (HBP-SGA1, 720270; HBP-SGA2, 785907).
Acknowledgements. The authors are grateful to Paul Verschure, Peter Dominey, Giorgio Metta, Yiannis Demiris and the other members of the WYSIWYD and EFAA consortia; to members of the HBP EPISENSE group; and to our colleagues at the University of Sheffield who have helped us to develop memory systems for the iCub, particularly Luke Boorman, Harry Jackson and Matthew Evans. The Sheffield iCub was purchased with the support of the UK Engineering and Physical Sciences Research Council (EPSRC).
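As a toy illustration of the kind of multimodal memory described above, the sketch below stores joint visual and audio feature vectors and performs cross-modal recall through a shared low-dimensional latent space. It uses a linear PCA embedding as a simple stand-in for the Gaussian process latent variable model named in the abstract, and all data, dimensions and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 synthetic episodes, each a joint memory of 4 visual + 3 audio features;
# the audio features are correlated with vision so the modalities share structure.
vision = rng.normal(size=(20, 4))
audio = 0.5 * vision[:, :3] + rng.normal(scale=0.05, size=(20, 3))
memories = np.hstack([vision, audio])

# Fit a linear latent embedding (PCA via SVD) over the joint memories,
# as a simple stand-in for a GP-LVM.
mean = memories.mean(axis=0)
_, _, vt = np.linalg.svd(memories - mean, full_matrices=False)
basis = vt[:2]                                  # 2-D shared latent space
latents = (memories - mean) @ basis.T           # latent point per episode

def recall_from_vision(v_cue):
    """Cross-modal recall: embed a visual cue (with the missing audio part
    filled by the memory mean) and return the stored episode whose latent
    point is nearest to the cue's."""
    query = np.concatenate([v_cue, mean[4:]])
    q_latent = (query - mean) @ basis.T
    idx = int(np.argmin(np.linalg.norm(latents - q_latent, axis=1)))
    return memories[idx]

recalled = recall_from_vision(vision[7])
```

A generative model such as a GP-LVM additionally provides a probabilistic mapping from latent points back to observations, which is what allows reconstruction (rather than lookup) of past episodes; the nearest-neighbour retrieval here is only the simplest possible analogue.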
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.