Adaptive robot body learning and estimation through predictive coding
The predictive functions that permit humans to infer their body state by
sensorimotor integration are critical for safe interaction in complex
environments. These functions are adaptive and robust to non-linear actuators
and noisy sensory information. This paper introduces a computational perceptual
model based on predictive processing that enables any multisensory robot to
learn, infer and update its body configuration when using arbitrary sensors
with Gaussian additive noise. The proposed method integrates different sources
of information (tactile, visual and proprioceptive) to drive the robot belief
to its current body configuration. The motivation is to enable robots with the
embodied perception needed for self-calibration and safe physical human-robot
interaction.
We formulate body learning as obtaining the forward model that encodes the
sensor values depending on the body variables, and we solve it by Gaussian
process regression. We model body estimation as minimizing the discrepancy
between the robot body configuration belief and the observed posterior. We
minimize the variational free energy using the sensory prediction errors
(sensed vs expected).
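The estimation step above can be sketched in a few lines. The forward model g, the variances and the learning rate below are illustrative placeholders (the paper learns the forward model with Gaussian process regression); the update rule is plain gradient descent on the free energy, i.e. precision-weighted prediction errors pushed through the forward-model Jacobian:

```python
import numpy as np

def g(mu):
    """Toy forward model: predicted visual reading for body state mu."""
    return np.sin(mu)

def dg(mu):
    """Jacobian of the toy forward model."""
    return np.cos(mu)

def estimate_body(s_vis, s_prop, mu0=0.0, var_vis=0.1, var_prop=0.1,
                  lr=0.05, steps=500):
    """Gradient descent on the free energy under Gaussian additive noise."""
    mu = mu0
    for _ in range(steps):
        e_vis = s_vis - g(mu)       # visual prediction error
        e_prop = s_prop - mu        # proprioceptive prediction error
        # dF/dmu: precision-weighted errors through the model Jacobian
        dF = -(e_vis / var_vis) * dg(mu) - e_prop / var_prop
        mu -= lr * dF
    return mu

# True body state 0.6 rad; both senses report it consistently,
# so the belief should converge there.
mu_hat = estimate_body(s_vis=np.sin(0.6), s_prop=0.6)
```

With consistent observations the belief settles on the true configuration; injecting a discrepancy between the two cues (as in the paper's perturbation experiments) would instead drive the belief to a precision-weighted compromise.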
In order to evaluate the model we test it on a real multisensory robotic arm.
We show how the contributions of different sensor modalities, included as
additive errors, improve the refinement of the body estimation, and how the
system adapts itself to provide the most plausible solution even when strong
visuo-tactile sensory perturbations are injected. We further analyse the
reliability of the
model when different sensor modalities are disabled. This provides grounded
evidence about the correctness of the perceptual model and shows how the robot
estimates and adjusts its body configuration just by means of sensory
information.
Comment: Accepted for IEEE International Conference on Intelligent Robots and Systems (IROS 2018).
Drifting perceptual patterns suggest prediction errors fusion rather than hypothesis selection: replicating the rubber-hand illusion on a robot
Humans can experience fake body parts as theirs just by simple visuo-tactile
synchronous stimulation. This body-illusion is accompanied by a drift in the
perception of the real limb towards the fake limb, suggesting an update of body
estimation resulting from stimulation. This work compares body limb drifting
patterns of human participants, in a rubber hand illusion experiment, with the
end-effector estimation displacement of a multisensory robotic arm enabled with
predictive processing perception. Results show similar drifting patterns in
both human and robot experiments, and they also suggest that the perceptual
drift is due to prediction error fusion, rather than hypothesis selection. We
present body inference through prediction error minimization as one single
process that unites predictive coding and causal inference, and that is
responsible for the effects in perception when we are subjected to intermodal
sensory perturbations.
Comment: Proceedings of the 2018 IEEE International Conference on Development and Learning and Epigenetic Robotics.
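The contrast between the two accounts can be made concrete with a toy cue-combination sketch (all numbers and names here are illustrative, not taken from the experiments): fusion yields a partial, precision-weighted drift between the proprioceptive and visual cues, whereas hypothesis selection commits entirely to one of them.

```python
import numpy as np

def fuse(cues, variances):
    """Precision-weighted fusion of Gaussian cues (Bayesian cue combination)."""
    precisions = 1.0 / np.asarray(variances)
    return float(np.sum(precisions * np.asarray(cues)) / np.sum(precisions))

def select(cues, variances):
    """Hypothesis selection: commit entirely to the most precise cue."""
    return float(cues[int(np.argmin(variances))])

# Proprioception locates the hand at 0.0; vision (the rubber hand) at 0.15,
# with vision assumed more precise here.
proprio, vision = 0.0, 0.15
fused = fuse([proprio, vision], [0.04, 0.01])       # partial drift toward vision
selected = select([proprio, vision], [0.04, 0.01])  # full jump to vision
```

The graded, intermediate estimate produced by fusion is the signature the drifting patterns point to; selection would predict an all-or-nothing displacement.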
Active Inference for Integrated State-Estimation, Control, and Learning
This work presents an approach for control, state-estimation and learning
model (hyper)parameters for robotic manipulators. It is based on the active
inference framework, prominent in computational neuroscience as a theory of the
brain, where behaviour arises from minimizing variational free-energy. The
robotic manipulator shows adaptive and robust behaviour compared to
state-of-the-art methods. Additionally, we show the exact relationship to
classic methods such as PID control. Finally, we show that by learning a
temporal parameter and model variances, our approach can deal with unmodelled
dynamics, damps oscillations, and is robust against disturbances and poor
initial parameters. The approach is validated on the 'Franka Emika Panda' 7 DoF
manipulator.
Comment: 7 pages, 6 figures, accepted for presentation at the International Conference on Robotics and Automation (ICRA) 202
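A minimal 1-DoF sketch can illustrate the PID connection, under the assumption of a velocity-controlled joint and a noiseless position sensor (this is a simplification of mine, not the paper's Panda controller): the action integrates the generalized prediction errors, and expanding that integral recovers a proportional plus integral (PI) control law.

```python
def simulate(x_d=1.0, dt=0.001, steps=20000, k_u=50.0):
    """Active-inference-style regulation of a velocity-controlled joint x' = u.

    The action u descends the free-energy gradient, i.e. it integrates the
    position and velocity prediction errors. Expanding
    u(t) = -k ∫(x - x_d) dt - k (x - x(0)) shows the PI structure.
    """
    x, u = 0.0, 0.0
    for _ in range(steps):
        v = u                       # plant: velocity-controlled, so x' = u
        e0 = x - x_d                # position prediction error
        e1 = v                      # velocity error (prior: at rest at goal)
        u -= dt * k_u * (e0 + e1)   # action = integral of generalized errors
        x += dt * v                 # integrate the plant forward
    return x

x_final = simulate()
```

The closed loop is the damped system x'' + k x' + k (x - x_d) = 0, so the joint settles at the goal without steady-state error, which is the classic benefit of the integral term.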
A brief review of neural networks based learning and control and their applications for robots
As an imitation of biological nervous systems, neural networks (NNs), which are characterized by powerful learning ability, have been employed in a wide range of applications, such as control of complex nonlinear systems, optimization, system identification and pattern recognition. This article gives a brief review of state-of-the-art NNs for complex nonlinear systems. Recent progress in NNs, in both theoretical developments and practical applications, is investigated and surveyed. Specifically, NN-based robot learning and control applications are further reviewed, including NN-based robot manipulator control, NN-based human-robot interaction, and NN-based behavior recognition and generation.
ART Neural Networks: Distributed Coding and ARTMAP Applications
ART (Adaptive Resonance Theory) neural networks for fast, stable learning and prediction have been applied in a variety of areas. Applications include airplane design and manufacturing, automatic target recognition, financial forecasting, machine tool monitoring, digital circuit design, chemical analysis, and robot vision. Supervised ART architectures, called ARTMAP systems, feature internal control mechanisms that create stable recognition categories of optimal size by maximizing code compression while minimizing predictive error in an on-line setting. Special-purpose requirements of various application domains have led to a number of ARTMAP variants, including fuzzy ARTMAP, ART-EMAP, Gaussian ARTMAP, and distributed ARTMAP. ARTMAP has been used for a variety of applications, including computer-assisted medical diagnosis. Medical databases present many of the challenges found in general information management settings, where speed, efficiency, ease of use, and accuracy are at a premium. A direct goal of improved computer-assisted medicine is to help deliver quality emergency care in situations that may be less than ideal. Working with these problems has stimulated a number of ART architecture developments, including ARTMAP-IC [1]. This paper describes a recent collaborative effort that, using a new cardiac care database for system development, has brought together medical statisticians and clinicians at the New England Medical Center with researchers developing expert systems and neural networks, in order to create a hybrid method for medical diagnosis. The paper also considers new neural network architectures, including distributed ART (dART), a real-time model of parallel distributed pattern learning that permits fast as well as slow adaptation, without catastrophic forgetting.
Local synaptic computations in the dART model quantitatively match the paradoxical phenomenon of Markram-Tsodyks [2] redistribution of synaptic efficacy, as a consequence of global system hypotheses.
Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)
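The category-matching step that ARTMAP systems build on can be sketched with the standard fuzzy ART choice and vigilance equations (a minimal rendition under assumed parameters, not the ARTMAP-IC system described above):

```python
import numpy as np

def fuzzy_and(a, b):
    """Fuzzy AND: component-wise minimum."""
    return np.minimum(a, b)

def choose_category(I, W, rho=0.7, alpha=0.001):
    """Return the index of the best category passing the vigilance test.

    I is a complement-coded input; W holds one weight vector per category.
    Choice function: T_j = |I ^ w_j| / (alpha + |w_j|).
    Vigilance criterion: |I ^ w_j| / |I| >= rho.
    """
    T = [fuzzy_and(I, w).sum() / (alpha + w.sum()) for w in W]
    for j in np.argsort(T)[::-1]:                     # best-match first
        match = fuzzy_and(I, W[j]).sum() / I.sum()    # vigilance test
        if match >= rho:
            return int(j)
    return -1  # no category matches: a new one would be created

# Complement coding: input [a1, a2] stored as [a1, a2, 1-a1, 1-a2].
I = np.array([0.8, 0.1, 0.2, 0.9])
W = np.array([[0.9, 0.1, 0.1, 0.9],    # category close to I
              [0.1, 0.9, 0.9, 0.1]])   # category far from I
```

Raising the vigilance parameter rho narrows the categories, which is the mechanism behind the trade-off between code compression and predictive error mentioned above.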
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available