Goal-Directed Behavior under Variational Predictive Coding: Dynamic Organization of Visual Attention and Working Memory
Mental simulation is a critical cognitive function for goal-directed behavior
because it is essential for assessing actions and their consequences. When a
self-generated or externally specified goal is given, a sequence of actions
that is most likely to attain that goal is selected among other candidates via
mental simulation. Therefore, better mental simulation leads to better
goal-directed action planning. However, developing a mental simulation model is
challenging because it requires knowledge of self and the environment. The
current paper studies how adequate goal-directed action plans of robots can be
mentally generated by dynamically organizing top-down visual attention and
visual working memory. For this purpose, we propose a neural network model
based on variational Bayes predictive coding, where goal-directed action
planning is formulated by Bayesian inference of latent intentional space. Our
experimental results showed that cognitively meaningful competencies, such as
autonomous top-down attention to the robot end effector (its hand) as well as
dynamic organization of occlusion-free visual working memory, emerged.
Furthermore, our analysis of comparative experiments indicated that the
introduction of visual working memory and of the inference mechanism based on
variational Bayes predictive coding significantly improves performance in
planning adequate goal-directed actions.
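The planning scheme described above, inferring a latent intention by minimizing prediction error toward a given goal, can be sketched in a deliberately minimal form. Everything below (the linear `predict` stand-in for the trained generative network, the `prior_weight` regularizer, the dimensions) is a hypothetical illustration of the general idea, not the paper's actual model:

```python
import numpy as np

# Toy forward model: maps a latent "intention" z to a predicted goal-state
# observation. In the paper this is a trained recurrent network; here a fixed
# random linear map stands in purely for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))          # observation dim 4, latent dim 2

def predict(z):
    return W @ z                     # stand-in for the learned generative model

goal = W @ np.array([1.0, -0.5])     # a reachable goal observation

# Plan by inference: gradient-descend the latent intention z to minimize a
# free-energy-like objective = prediction error + a weak prior on z.
z = np.zeros(2)
lr, prior_weight = 0.05, 0.01
for _ in range(2000):
    err = predict(z) - goal                  # prediction error at the goal
    grad = W.T @ err + prior_weight * z      # gradient of the objective w.r.t. z
    z -= lr * grad

print(np.round(predict(z) - goal, 3))        # residual prediction error, near 0
```

The inferred `z` then plays the role of the intention from which a concrete action sequence would be unrolled; the variational treatment in the paper additionally maintains uncertainty over `z` rather than a point estimate.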
Visuomotor Associative Learning under the Predictive Coding Framework: a Neuro-robotics Experiment
This study aims to introduce our approach to building a cognitive agent based on predictive coding of visual and proprioceptive signals. We assume that a robot can develop cognitive skills by learning sensorimotor experience end-to-end in a hierarchical neural network model. The results from the neuro-robotics experiment illustrated the role of visuomotor learning in achieving cognitive behaviors and highlighted the importance of prediction error minimization, supporting the predictive coding account of the mirror neuron system.
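The visuomotor associative learning described here, pairing proprioceptive and visual signals and adapting by prediction-error minimization, can be illustrated with a deliberately minimal linear stand-in. The data, dimensions, and single-layer map below are all invented for illustration and are not the hierarchical network used in the experiment:

```python
import numpy as np

# Toy visuomotor association: learn to predict a visual feature from
# proprioception (joint angles) by descending the prediction error.
rng = np.random.default_rng(1)
joints = rng.uniform(-1, 1, size=(200, 3))   # proprioceptive inputs
true_map = rng.normal(size=(3, 2))
vision = joints @ true_map                    # paired visual features

W = np.zeros((3, 2))                          # model weights, initially naive
lr = 0.5
for _ in range(300):
    pred = joints @ W                         # predicted visual feature
    err = pred - vision                       # prediction error
    W -= lr * joints.T @ err / len(joints)    # gradient step on squared error

print(np.abs(joints @ W - vision).max())      # residual error, near zero
```

After training, the same weights let the model "predict" the visual consequence of a motor state it has not executed, which is the associative property the mirror-neuron interpretation rests on.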
Generating Goal-directed Visuomotor Plans with Supervised Learning using a Predictive Coding Deep Visuomotor Recurrent Neural Network
The ability to plan and visualize object manipulation in advance is vital for both humans and robots to smoothly reach a desired goal state. In this work, we demonstrate how our predictive coding based deep visuomotor recurrent neural network (PDVMRNN) can generate plans for a robot to manipulate objects based on a visual goal. A Tokyo Robotics Torobo Arm robot and a basic USB camera were used to record visuo-proprioceptive sequences of object manipulation. Although limitations in resolution resulted in lower success rates when plans were executed with the robot, our model is able to generate long predictions from novel start and goal states based on the learned patterns.
Sensorimotor Representation Learning for an “Active Self” in Robots: A Model Survey
Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operations. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper, we first review the developmental processes of underlying mechanisms of these abilities: The sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and robotics models of the self; and we compare these models with the human counterparts. Finally, we analyze what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration.
Funding: Deutsche Forschungsgemeinschaft (http://dx.doi.org/10.13039/501100001659); Projekt DEAL. Peer Reviewed.
Sensorimotor representation learning for an "active self" in robots: A model survey
Safe human-robot interactions require robots to be able to learn how to
behave appropriately in spaces populated by people
and thus to cope with the challenges posed by our dynamic and unstructured
environment, rather than being provided a rigid set of rules for operations. In
humans, these capabilities are thought to be related to our ability to perceive
our body in space, sensing the location of our limbs during movement, being
aware of other objects and agents, and controlling our body parts to interact
with them intentionally. Toward the next generation of robots with bio-inspired
capacities, in this paper, we first review the developmental processes of
underlying mechanisms of these abilities: The sensory representations of body
schema, peripersonal space, and the active self in humans. Second, we provide a
survey of robotics models of these sensory representations and robotics models
of the self; and we compare these models with the human counterparts. Finally,
we analyse what is missing from these robotics models and propose a theoretical
computational framework, which aims to allow the emergence of the sense of self
in artificial agents by developing sensory representations through
self-exploration.
- …