Perception algorithms that provide estimates of their uncertainty are crucial
to the development of autonomous robots that can operate in challenging and
uncontrolled environments. Such perception algorithms enable risk-aware robots
that reason about the probability of successfully
completing a task when planning. There exist perception algorithms that come
with models of their uncertainty; however, these models are often derived under
assumptions, such as perfect data association, that do not hold in the real
world. Hence, the resulting uncertainty estimates are only weak lower bounds on
the true error. To tackle this problem, we present introspective perception, a
novel approach for
predicting accurate estimates of the uncertainty of perception algorithms
deployed on mobile robots. By exploiting sensing redundancy and consistency
constraints naturally present in the data collected by a mobile robot,
introspective perception learns an empirical model of the error distribution of
perception algorithms in the deployment environment in an autonomously
supervised manner. In this paper, we present the general theory of
introspective perception and demonstrate successful implementations for two
different perception tasks. We provide empirical results on challenging
real-robot data for introspective stereo depth estimation and introspective
visual simultaneous localization and mapping, and show that both learn to
predict their uncertainty with high accuracy and leverage this information to
significantly reduce state estimation errors for an autonomous mobile robot.
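To make the core idea concrete, the following is a minimal sketch of autonomously supervised error-model learning as described above. It assumes each perception output (e.g., a stereo depth estimate) comes with a feature vector describing its input conditions, and that a consistency check against redundant observations yields an empirical error label; the names `IntrospectionModel`, `fit`, and `predict_uncertainty`, as well as the use of a random forest regressor, are illustrative assumptions rather than the paper's implementation.

```python
# Sketch only: names and the choice of regressor are assumptions, not the
# paper's implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor


class IntrospectionModel:
    """Learns to predict perception error magnitude from input features."""

    def __init__(self):
        self.regressor = RandomForestRegressor(n_estimators=100)

    def fit(self, features: np.ndarray, consistency_errors: np.ndarray):
        # features:           (N, D) descriptors of each perception output
        # consistency_errors: (N,)   error labels obtained autonomously, e.g.,
        #                            disagreement between redundant depth
        #                            estimates of the same scene point
        self.regressor.fit(features, consistency_errors)
        return self

    def predict_uncertainty(self, features: np.ndarray) -> np.ndarray:
        # The predicted error magnitude serves as a per-output uncertainty
        # estimate that a risk-aware planner can consume.
        return self.regressor.predict(features)


# Illustrative usage with placeholder data (no real sensor logs):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))                      # stand-in features
    y = np.abs(X[:, 0]) + 0.1 * rng.normal(size=500)   # stand-in error labels
    model = IntrospectionModel().fit(X, y)
    print(model.predict_uncertainty(X[:5]))
```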