Guest editorial: sensorimotor contingencies for cognitive robotics
The sensorimotor approach to cognition states that bringing semantics to the world of a robot requires making the robot learn the relations between the actions it performs and the changes it experiences in its sensed data as a result of those actions. These relations are called sensorimotor contingencies (SMCs). This special issue presents a variety of recent developments in SMCs, with a particular focus on cognitive robotics applications.
Identification of Invariant Sensorimotor Structures as a Prerequisite for the Discovery of Objects
Perceiving the surrounding environment in terms of objects is useful for any
general purpose intelligent agent. In this paper, we investigate a fundamental
mechanism making object perception possible, namely the identification of
spatio-temporally invariant structures in the sensorimotor experience of an
agent. We take inspiration from the Sensorimotor Contingencies Theory to define
a computational model of this mechanism through a sensorimotor, unsupervised
and predictive approach. Our model is based on processing the unsupervised
interaction of an artificial agent with its environment. We show how
spatio-temporally invariant structures in the environment induce regularities
in the sensorimotor experience of an agent, and how this agent, while building
a predictive model of its sensorimotor experience, can capture them as densely
connected subgraphs in a graph of sensory states connected by motor commands.
Our approach is focused on elementary mechanisms, and is illustrated with a set
of simple experiments in which an agent interacts with an environment. We show
how the agent can build an internal model of moving but spatio-temporally
invariant structures by performing a Spectral Clustering of the graph modeling
its overall sensorimotor experiences. We systematically examine properties of
the model, shedding light more globally on the specificities of the paradigm
with respect to methods based on the supervised processing of collections of
static images.
Comment: 24 pages, 10 figures, published in Frontiers in Robotics and AI.
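The graph construction and clustering described in this abstract can be illustrated with a small sketch. This is our own toy example, not the authors' code: the block-structured adjacency matrix, the two hard-coded "structures", and all names are assumptions made for illustration.

```python
# Minimal sketch: spectral clustering of a graph of sensory states linked by
# motor transitions, assuming two invariant "structures" that each induce a
# densely connected block of states (hypothetical data, not the paper's).
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n = 12  # sensory states; states 0-5 belong to structure A, 6-11 to structure B

# Adjacency: transitions observed during unsupervised interaction.
# Dense within each invariant structure, sparse between them.
adj = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j and (i < 6) == (j < 6):
            adj[i, j] = rng.uniform(0.5, 1.0)   # within-structure transition
        elif i != j:
            adj[i, j] = rng.uniform(0.0, 0.05)  # rare cross-structure transition
adj = (adj + adj.T) / 2  # symmetrize for spectral clustering

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(adj)
print(labels)  # states 0-5 share one label, 6-11 the other
```

The densely connected subgraphs recovered by the clustering play the role of the spatio-temporally invariant structures in the abstract.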
Adaptive robot body learning and estimation through predictive coding
The predictive functions that permit humans to infer their body state by
sensorimotor integration are critical to perform safe interaction in complex
environments. These functions are adaptive and robust to non-linear actuators
and noisy sensory information. This paper introduces a computational perceptual
model based on predictive processing that enables any multisensory robot to
learn, infer and update its body configuration when using arbitrary sensors
with Gaussian additive noise. The proposed method integrates different sources
of information (tactile, visual and proprioceptive) to drive the robot belief
to its current body configuration. The motivation is to enable robots with the
embodied perception needed for self-calibration and safe physical human-robot
interaction.
We formulate body learning as obtaining the forward model that encodes the
sensor values depending on the body variables, and we solve it by Gaussian
process regression. We model body estimation as minimizing the discrepancy
between the robot body configuration belief and the observed posterior. We
minimize the variational free energy using the sensory prediction errors
(sensed vs expected).
In order to evaluate the model we test it on a real multisensory robotic arm.
We show how the contributions of different sensor modalities, included as
additive errors, refine the body estimation, and how the system adapts itself
to provide the most plausible solution even when strong visuo-tactile sensory
perturbations are injected. We further analyse the reliability of the
model when different sensor modalities are disabled. This provides grounded
evidence about the correctness of the perceptual model and shows how the robot
estimates and adjusts its body configuration just by means of sensory
information.
Comment: Accepted for the IEEE International Conference on Intelligent Robots and Systems (IROS 2018).
Sensorimotor Representation Learning for an “Active Self” in Robots: A Model Survey
Safe human-robot interaction requires robots to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of operating rules. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper we first review the developmental processes underlying these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we survey robotics models of these sensory representations and of the self, and compare them with their human counterparts. Finally, we analyze what is missing from these robotics models and propose a theoretical computational framework that aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration.
Deutsche Forschungsgemeinschaft http://dx.doi.org/10.13039/501100001659; Projekt DEAL
Balancing Exploration and Exploitation: A Neurally Inspired Mechanism to Learn Sensorimotor Contingencies
The learning of sensorimotor contingencies is essential for the development of early cognition. Here, we investigate how this process takes place at the neural level. We propose a theoretical concept for learning sensorimotor contingencies based on motor babbling with a robotic arm and dynamic neural fields. The robot learns to perform sequences of motor commands in order to perceive visual activation from a baby mobile toy. First, the robot explores the different sensorimotor outcomes; then it autonomously decides whether to utilize the experience already gathered. Moreover, we introduce a neural mechanism, inspired by recent neuroscience research, that supports the switch between exploration and exploitation. The complete model relies on dynamic field theory, which consists of a set of interconnected dynamical systems. Over time, the robot shifts toward exploiting previously learned sensorimotor contingencies, selecting actions that induce high visual activation.
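The exploration/exploitation switch can be caricatured with a single dynamical node that gates between babbling and exploitation. This is a hypothetical sketch in the spirit of the abstract, not the authors' dynamic-field model: the reward table, time constant, and threshold are all invented for illustration.

```python
# Hypothetical sketch: a single dynamical node u accumulates evidence of
# successful contingencies; once u crosses a threshold the agent switches
# from random motor babbling to exploiting the best-known action.
import random

random.seed(0)
true_reward = {a: 0.1 for a in range(5)}
true_reward[3] = 1.0                 # action 3 strongly activates the mobile

estimates = {a: 0.0 for a in range(5)}
counts = {a: 0 for a in range(5)}
u, tau, threshold = 0.0, 10.0, 0.8   # node activation, time constant, gate

mode_log = []
for t in range(200):
    exploiting = u > threshold
    mode_log.append(exploiting)
    if exploiting:
        a = max(estimates, key=estimates.get)   # exploit best-known action
    else:
        a = random.randrange(5)                  # motor babbling
    r = true_reward[a]
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]  # running-mean estimate
    # Node dynamics: relax toward the best current reward estimate (0..1).
    u += (-u + max(estimates.values())) / tau

print(mode_log[-1], max(estimates, key=estimates.get))
```

In the paper the gate is implemented with dynamic neural fields rather than a scalar relaxation, but the qualitative behavior (babble first, then commit) is the same.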
Drifting perceptual patterns suggest prediction errors fusion rather than hypothesis selection: replicating the rubber-hand illusion on a robot
Humans can experience fake body parts as their own through simple synchronous
visuo-tactile stimulation. This body illusion is accompanied by a drift in the
perception of the real limb towards the fake limb, suggesting an update of body
estimation resulting from stimulation. This work compares body limb drifting
patterns of human participants, in a rubber hand illusion experiment, with the
end-effector estimation displacement of a multisensory robotic arm enabled with
predictive processing perception. Results show similar drifting patterns in
both human and robot experiments, and they also suggest that the perceptual
drift is due to prediction error fusion, rather than hypothesis selection. We
present body inference through prediction error minimization as a single
process that unites predictive coding and causal inference and that is
responsible for the effects on perception when we are subjected to intermodal
sensory perturbations.
Comment: Proceedings of the 2018 IEEE International Conference on Development and Learning and Epigenetic Robotics.
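The contrast between the two accounts can be made concrete with a one-line computation. This is our illustration, with made-up positions and precisions: precision-weighted fusion of the proprioceptive and visual cues yields a graded, intermediate estimate (a partial drift), whereas committing to a single hypothesis yields all-or-nothing behavior.

```python
# Illustrative numbers only: precision-weighted fusion of prediction errors
# vs. winner-take-all hypothesis selection for the perceived hand position.
prop_pos, prop_prec = 0.0, 1.0    # proprioception: real hand at 0, precision 1
vis_pos, vis_prec = 15.0, 0.5     # vision: rubber hand 15 cm away, precision 0.5

# Fusion: the estimate minimizing both precision-weighted prediction errors
# is the precision-weighted mean -> a *partial* drift toward the fake hand.
fused = (prop_prec * prop_pos + vis_prec * vis_pos) / (prop_prec + vis_prec)

# Hypothesis selection: commit to the single most precise cue -> either no
# drift at all or a full jump, never the graded drift seen in participants.
selected = prop_pos if prop_prec >= vis_prec else vis_pos

print(fused, selected)  # fused lies strictly between the two cues
```

The graded drift measured in both humans and the robot matches the fused estimate, which is the abstract's argument for fusion over selection.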
Mapping Information Flow in Sensorimotor Networks
Biological organisms continuously select and sample information used by their neural structures for perception and action, and for creating coherent cognitive states guiding their autonomous behavior. Information processing, however, is not solely an internal function of the nervous system. Here we show, instead, how sensorimotor interaction and body morphology can induce statistical regularities and information structure in sensory inputs and within the neural control architecture, and how the flow of information between sensors, neural units, and effectors is actively shaped by the interaction with the environment. We analyze sensory and motor data collected from real and simulated robots and reveal the presence of information structure and directed information flow induced by dynamically coupled sensorimotor activity, including effects of motor outputs on sensory inputs. We find that information structure and information flow in sensorimotor networks (a) are spatially and temporally specific, (b) can be affected by learning, and (c) can be affected by changes in body morphology. Our results suggest a fundamental link between physical embeddedness and information, highlighting the effects of embodied interactions on internal (neural) information processing and illuminating the role of various system components in the generation of behavior.
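Directed information flow of the kind analyzed here can be estimated, in the simplest discrete case, as time-lagged mutual information. The following sketch uses a synthetic motor/sensor pair (our assumption, not the authors' data): the sensor echoes the previous motor command with noise, so the motor-to-sensor lagged mutual information is large while the reverse direction is near zero.

```python
# Sketch on synthetic data: detect directed motor-to-sensor information flow
# as time-lagged mutual information between a binary motor stream and the
# sensor stream it drives one step later.
import math, random

random.seed(2)
T = 20000
motor = [random.randrange(2) for _ in range(T)]
# Sensor at t echoes the motor command at t-1, flipped 10% of the time.
sensor = [0] + [m if random.random() > 0.1 else 1 - m for m in motor[:-1]]

def mutual_info(xs, ys):
    """Plug-in estimate of I(X;Y) in bits for paired discrete sequences."""
    n = len(xs)
    pxy, px, py = {}, {}, {}
    for x, y in zip(xs, ys):
        pxy[(x, y)] = pxy.get((x, y), 0) + 1
        px[x] = px.get(x, 0) + 1
        py[y] = py.get(y, 0) + 1
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

forward = mutual_info(motor[:-1], sensor[1:])   # motor -> future sensor
backward = mutual_info(sensor[:-1], motor[1:])  # sensor -> future motor
print(forward, backward)  # forward >> backward: the flow is directed
```

The asymmetry between the lagged directions is the signature of directed flow; the paper's analyses use richer measures on real robot data, but the principle is the same.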
End-to-End Pixel-Based Deep Active Inference for Body Perception and Action
We present a pixel-based deep active inference algorithm (PixelAI) inspired
by human body perception and action. Our algorithm combines the free-energy
principle from neuroscience, rooted in variational inference, with deep
convolutional decoders to scale the algorithm to directly deal with raw visual
input and provide online adaptive inference. Our approach is validated by
studying body perception and action in a simulated and a real Nao robot.
Results show that our approach allows the robot to perform 1) dynamical body
estimation of its arm using only monocular camera images and 2) autonomous
reaching to "imagined" arm poses in the visual space. This suggests that robot
and human body perception and action can be efficiently solved by viewing both
as an active inference problem guided by ongoing sensory input.
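Stripped of the convolutional decoder, the perceptual core of such an active-inference scheme is gradient descent of the belief on the visual prediction error. The sketch below is a linear stand-in we made up (a random matrix replaces the learned decoder); it is not PixelAI itself.

```python
# Sketch of active-inference style perception: gradient descent on the visual
# prediction error, with a known linear "decoder" g(mu) = W @ mu standing in
# for the deep convolutional decoder (all values hypothetical).
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(16, 2))           # stand-in decoder: 2 joints -> 16 "pixels"
true_body = np.array([0.3, -0.5])
s = W @ true_body                      # observed image (noiseless here)

mu = np.zeros(2)                       # belief over the body state
sigma2, eta = 1.0, 0.02                # sensory variance, integration step
for _ in range(1000):
    err = s - W @ mu                   # pixel-wise prediction error
    mu = mu + eta * W.T @ err / sigma2 # free-energy gradient step (Gaussian case)

print(np.abs(mu - true_body).max())    # belief converges to the true state
```

Action in the full scheme follows the same gradient logic: motor commands are chosen to make the sensory input match the prediction for an "imagined" pose, rather than updating the belief.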
On dynamical systems for sensorimotor contingencies. A first approach from control engineering
According to the sensorimotor approach, cognition is constituted by regularities in the perceptual experiences of an active and situated agent. This perspective rejects traditional inner representational models, stressing instead patterns of sensorimotor dependencies. These relations are called sensorimotor contingencies (SMCs). Many research areas and accounts build on or relate to this idea. In particular, four distinct kinds of SMCs have previously been introduced, for environment, habitat, coordination, and strategy, using dynamical models from a psychological perspective. In this paper we analyze SMCs as dynamical systems, for the first time, from a modern control engineering perspective. We provide equations and block diagrams translating the psychological proposal into control engineering terms. We also analyze the toy example originally proposed in the psychological domain from the modern control engineering point of view, and we propose a first approach to this toy example from the control engineering domain.
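In the control engineering spirit of this paper, the agent-environment loop can be written as a small coupled dynamical system. The following toy is our own, not the paper's example: it closes a proportional controller around a first-order environment, and the invariant relation between sensation and motor command along the trajectory plays the role of a sensorimotor contingency.

```python
# Toy agent-environment loop in block-diagram form: environment x' = f(x, m),
# sensation s = h(x), controller m = k(s). The invariant relation the closed
# loop settles into stands in for a sensorimotor contingency.
import math

x, dt = 2.0, 0.01          # environment state, Euler step
trace = []
for _ in range(3000):
    s = x                   # sensor: agent reads the environment state
    m = -0.5 * s            # controller: proportional motor command
    x += dt * (-x + m)      # environment dynamics driven by the motor output
    trace.append((s, m))

# Closed loop x' = -1.5 x: the state contracts to the fixed point x = 0,
# and along the way s and m stay locked in the regularity m = -0.5 s.
print(abs(x))
```

Writing the loop this way makes the control-engineering translation explicit: the "contingency" is a property of the closed loop, not of either block alone.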