Localization of Non-Linearly Modeled Autonomous Mobile Robots Using Out-of-Sequence Measurements
This paper presents a state-of-the-art review of estimation algorithms that deal with Out-of-Sequence (OOS) measurements in non-linearly modeled systems. The review includes a critical analysis of the properties of each algorithm, taking into account its applicability to autonomous mobile robot navigation based on fusing the delayed and OOS measurements provided by multiple sensors. In addition, the paper presents a representative example of the use of one of the most computationally efficient approaches in the localization module of the control software of a real robot (which has non-linear dynamics and both linear and non-linear sensors) and compares its performance against other approaches. Simulation results obtained with the selected OOS algorithm show the computational requirements that each sensor of the robot imposes on it. Real-world experiments show how the inclusion of the selected OOS algorithm in the control software lets the robot navigate successfully in spite of receiving many OOS measurements. Finally, the comparison highlights that the selected OOS algorithm is not only among the best-performing ones in the comparison but also has the lowest computational and memory cost.
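The abstract does not specify which OOS algorithm is selected, so the sketch below only illustrates the problem setting with the simplest correct baseline: a linear Kalman filter that keeps an initial checkpoint and reprocesses its measurement buffer in time order whenever a delayed measurement arrives. All class and variable names are illustrative, and unit time steps are assumed; this buffer-and-reprocess strategy is precisely the high-memory, high-computation approach that the efficient OOS algorithms surveyed in the paper aim to avoid.

```python
# Hedged sketch: buffer-and-reprocess baseline for out-of-sequence (OOS)
# measurements in a linear Kalman filter. Not the paper's algorithm; names
# and the unit-time-step assumption are illustrative only.
import numpy as np

class OOSKalmanFilter:
    def __init__(self, F, H, Q, R, x0, P0):
        self.F, self.H, self.Q, self.R = F, H, Q, R
        self.start = (0.0, x0, P0)   # initial checkpoint; everything is re-run from here
        self.measurements = []       # (time, z) pairs, kept sorted by time

    def _predict(self, x, P, steps=1):
        # Propagate the state and covariance 'steps' unit time steps forward.
        for _ in range(steps):
            x = self.F @ x
            P = self.F @ P @ self.F.T + self.Q
        return x, P

    def _update(self, x, P, z):
        # Standard Kalman measurement update.
        S = self.H @ P @ self.H.T + self.R      # innovation covariance
        K = P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (z - self.H @ x)
        P = (np.eye(len(x)) - K @ self.H) @ P
        return x, P

    def add_measurement(self, t, z):
        # Insert the (possibly delayed) measurement into the buffer and
        # reprocess the whole sequence in time order from the checkpoint.
        self.measurements.append((t, np.asarray(z)))
        self.measurements.sort(key=lambda m: m[0])
        t_prev, x, P = self.start
        for tm, zm in self.measurements:
            x, P = self._predict(x, P, steps=int(round(tm - t_prev)))
            x, P = self._update(x, P, zm)
            t_prev = tm
        return x, P
```

Reprocessing costs O(n) in the buffer length for every delayed measurement, which is why constant-overhead OOS techniques such as the one selected in the paper are attractive for real-time localization.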
Adaptive robot body learning and estimation through predictive coding
The predictive functions that permit humans to infer their body state through sensorimotor integration are critical for safe interaction in complex environments. These functions are adaptive and robust to non-linear actuators and noisy sensory information. This paper introduces a computational perceptual model based on predictive processing that enables any multisensory robot to learn, infer, and update its body configuration when using arbitrary sensors with additive Gaussian noise. The proposed method integrates different sources of information (tactile, visual, and proprioceptive) to drive the robot's belief towards its current body configuration. The motivation is to enable robots with the embodied perception needed for self-calibration and safe physical human-robot interaction.
We formulate body learning as obtaining the forward model that encodes the sensor values as a function of the body variables, and we solve it by Gaussian process regression. We model body estimation as minimizing the discrepancy between the robot's body configuration belief and the observed posterior. We minimize the variational free energy using the sensory prediction errors (sensed vs. expected values).
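As a rough illustration of the two steps just described, the sketch below reduces them to one dimension: body learning becomes GP regression of a single sensor channel against a single body variable, and body estimation becomes gradient descent on the precision-weighted sensory prediction error, a minimal stand-in for free-energy minimization. The function names, RBF kernel, and all parameters are assumptions for illustration, not the paper's implementation.

```python
# Hedged 1-D sketch: GP forward model (body learning) + gradient descent on
# the sensory prediction error (body estimation). Illustrative only.
import numpy as np

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel between two 1-D sample sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_fit(X, Y, noise=1e-2):
    # "Body learning": GP regression of sensor value y on body variable x.
    # Returns the weight vector alpha used for posterior-mean prediction.
    K = rbf(X, X) + noise * np.eye(len(X))
    return np.linalg.solve(K, Y)

def gp_predict(x, X, alpha):
    # GP posterior mean g(x), i.e. the learned forward model.
    return rbf(np.atleast_1d(x), X) @ alpha

def estimate_body(y_obs, X, alpha, x0=0.0, lr=0.05, sigma2=0.5, steps=100):
    # "Body estimation": update the belief x so that the predicted sensor
    # value g(x) matches the observation, descending the prediction error
    # e = y_obs - g(x) weighted by the sensory precision 1/sigma2.
    x = x0
    for _ in range(steps):
        g = gp_predict(x, X, alpha)[0]
        # Numerical derivative dg/dx (an analytic GP gradient would do).
        dg = (gp_predict(x + 1e-4, X, alpha)[0] - g) / 1e-4
        x += lr * (y_obs - g) / sigma2 * dg   # free-energy-style gradient step
    return x

# Toy usage: learn a stand-in forward model y = sin(2x), then recover the
# body variable x = 0.6 from the sensor reading alone.
X = np.linspace(-1, 1, 30)
alpha = gp_fit(X, np.sin(2 * X))
print(estimate_body(y_obs=np.sin(1.2), X=X, alpha=alpha, x0=0.0))
```

The paper's model fuses several sensor modalities, each contributing its own precision-weighted prediction error to the belief update; the sketch keeps a single channel so the gradient step stays readable.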
In order to evaluate the model, we test it on a real multisensory robotic arm. We show how the contributions of different sensor modalities, included as additive errors, improve the refinement of the body estimation, and how the system adapts itself to provide the most plausible solution even when strong visuo-tactile sensory perturbations are injected. We further analyse the reliability of the model when different sensor modalities are disabled. This provides grounded evidence about the correctness of the perceptual model and shows how the robot estimates and adjusts its body configuration purely by means of sensory information.

Comment: Accepted for the IEEE International Conference on Intelligent Robots and Systems (IROS 2018).