Introspective Perception for Mobile Robots
Perception algorithms that provide estimates of their uncertainty are crucial
to the development of autonomous robots that can operate in challenging and
uncontrolled environments. Such algorithms enable risk-aware robots that,
when planning, reason about the probability of successfully completing a
task. Some perception algorithms do come
with models of their uncertainty; however, these models are often developed
with assumptions, such as perfect data associations, that do not hold in the
real world. Hence, the resulting estimated uncertainty is only a weak lower
bound. To tackle this problem, we present introspective perception, a novel
approach for predicting accurate estimates of the uncertainty of perception algorithms
deployed on mobile robots. By exploiting sensing redundancy and consistency
constraints naturally present in the data collected by a mobile robot,
introspective perception learns, in an autonomously supervised manner, an
empirical model of the error distribution of perception algorithms in the
deployment environment. In this paper, we present the general theory of
introspective perception and demonstrate successful implementations for two
different perception tasks. We provide empirical results on challenging
real-robot data for introspective stereo depth estimation and introspective
visual simultaneous localization and mapping (SLAM), and show that both learn
to predict their uncertainty with high accuracy and leverage this information
to significantly reduce state estimation errors for an autonomous mobile robot.
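As a rough illustration of the autonomously supervised learning this abstract describes, the sketch below uses the disagreement between two redundant depth estimates of the same scene points as an empirical error label, and fits a simple histogram model of error versus a context feature. This is not the paper's implementation: the feature, the binning scheme, and all names here are hypothetical.

```python
import numpy as np

def consistency_labels(depth_a, depth_b):
    # Two redundant depth estimates of the same scene points: their
    # disagreement is an autonomously supervised proxy for perception
    # error, requiring no ground-truth depth.
    return np.abs(depth_a - depth_b)

class EmpiricalErrorModel:
    """Histogram regressor: mean observed error per feature bin.

    A stand-in for whatever learned model maps context features
    (e.g., local image texture) to expected perception error.
    """
    def __init__(self, n_bins=10, feature_range=(0.0, 1.0)):
        self.edges = np.linspace(feature_range[0], feature_range[1], n_bins + 1)
        self.sums = np.zeros(n_bins)
        self.counts = np.zeros(n_bins)

    def _bin(self, features):
        return np.clip(np.digitize(features, self.edges) - 1, 0, len(self.sums) - 1)

    def update(self, features, errors):
        idx = self._bin(features)
        np.add.at(self.sums, idx, errors)
        np.add.at(self.counts, idx, 1)

    def predict(self, features):
        # Empty bins predict zero error; a real system would fall back to a prior.
        means = self.sums / np.maximum(self.counts, 1)
        return means[self._bin(features)]
```

Run over logged deployment data, such a model turns naturally occurring sensing redundancy into per-feature uncertainty estimates that a risk-aware planner can consume.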
Towards autonomous sensor and actuator model induction on a mobile robot
This article presents a novel methodology for a robot to autonomously induce models of its actions and sensors, called ASAMI (Autonomous Sensor and Actuator Model Induction). While previous approaches to model learning rely on an independent source of training data, we show how a robot can induce action and sensor models without any well-calibrated feedback. Specifically, the only inputs to the ASAMI learning process are the data the robot would naturally have access to: its raw sensations and knowledge of its own action selections. From the perspective of developmental robotics, our robot's goal is to obtain self-consistent internal models, rather than to perform any externally defined tasks. Furthermore, the target function of each model-learning process comes from within the system, namely the most current version of another internal system model. Concretely realizing this model-learning methodology presents a number of challenges, and we introduce a broad class of settings in which solutions to these challenges are presented. ASAMI is fully implemented and tested, and empirical results validate our approach in a robotic testbed domain using a Sony Aibo ERS-7 robot.
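The mutual-bootstrapping idea, each model trained against the most current version of the other, can be sketched in a toy 1-D setting. Everything below (the simulated dynamics, the linear model forms, the function names) is a hypothetical illustration of the general scheme, not the paper's ASAMI implementation:

```python
import numpy as np

def simulate_logs(n=200, dt=0.1, seed=0):
    # Hypothetical world (unknown to the robot): true velocity = 2 * command,
    # raw sensor reading = 0.5 * true position. Only commands and raw
    # readings are logged, mirroring ASAMI's inputs.
    rng = np.random.default_rng(seed)
    actions = rng.uniform(-1.0, 1.0, n)
    positions = np.cumsum(2.0 * actions * dt)
    raw = 0.5 * positions
    return actions, raw, dt

def asami_step(actions, raw, dt, action_model):
    # (1) Fit the sensor model against positions integrated from the
    #     current action model: the action model supplies the target.
    pos_target = np.cumsum(np.polyval(action_model, actions) * dt)
    sensor_model = np.polyfit(raw, pos_target, 1)
    # (2) Fit the action model against velocities differentiated from the
    #     current sensor model's position estimates: roles reversed.
    vel_target = np.diff(np.polyval(sensor_model, raw), prepend=0.0) / dt
    action_model = np.polyfit(actions, vel_target, 1)
    return action_model, sensor_model

actions, raw, dt = simulate_logs()
action_model = np.array([1.0, 0.0])  # crude prior: velocity == command
for _ in range(3):
    action_model, sensor_model = asami_step(actions, raw, dt, action_model)
# The two models end up mutually consistent (up to an arbitrary scale),
# with no externally calibrated feedback used anywhere.
```

Note the scale ambiguity: with no external calibration, the pair of models is only determined up to a common scale factor, which is exactly the "self-consistent internal models" goal the abstract states.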