Agents trained in simulation may make errors in the real world due to
mismatches between training and execution environments. These mistakes can be
dangerous and difficult to discover because the agent cannot predict them a
priori. We propose using oracle feedback to learn a predictive model of these
blind spots to reduce costly errors in real-world applications. We focus on
blind spots in reinforcement learning (RL) that occur due to incomplete state
representation: the agent lacks the features needed to represent the true state
of the world and thus cannot distinguish among numerous distinct states.
We formalize the problem of discovering blind spots in RL as a noisy supervised
learning problem with class imbalance. We learn models to predict blind spots
in unseen regions of the state space by combining techniques for label
aggregation, calibration, and supervised learning. The models take into
consideration noise emerging from different forms of oracle feedback, including
demonstrations and corrections. We evaluate our approach on two domains and
show that it achieves higher predictive performance than baseline methods, and
that the learned model can be used to selectively query an oracle at execution
time to prevent errors. We also empirically analyze the biases of various
feedback types and how they influence the discovery of blind spots.

To appear at AAMAS 201
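The pipeline the abstract describes — aggregating noisy oracle labels into soft blind-spot estimates, then using the learned estimates to decide when to query the oracle at execution time — can be sketched minimally. This is an illustrative sketch, not the paper's method: the majority-vote aggregation, the fixed threshold, and the names `aggregate_labels` and `should_query_oracle` are all assumptions for illustration.

```python
# Hypothetical sketch: aggregate noisy blind-spot labels per state,
# then query the oracle only when the estimated risk is high.
from collections import defaultdict

def aggregate_labels(feedback):
    """Majority-style aggregation of noisy labels (1 = blind spot).

    feedback: iterable of (state, label) pairs from oracle feedback,
    e.g. demonstrations or corrections. Returns a dict mapping each
    state to the fraction of positive votes, a soft label in [0, 1].
    """
    votes = defaultdict(list)
    for state, label in feedback:
        votes[state].append(label)
    return {s: sum(v) / len(v) for s, v in votes.items()}

def should_query_oracle(soft_labels, state, threshold=0.5):
    """Query the oracle when the estimated blind-spot probability
    reaches the threshold; treat unseen states as risky (prob 1.0).
    A real model would generalize to unseen states instead."""
    return soft_labels.get(state, 1.0) >= threshold

# Toy usage: "s1" gets 2/3 positive votes, "s2" gets none.
feedback = [("s1", 1), ("s1", 1), ("s1", 0), ("s2", 0), ("s2", 0)]
soft = aggregate_labels(feedback)
print(should_query_oracle(soft, "s1"))  # high estimated risk -> query
print(should_query_oracle(soft, "s2"))  # low estimated risk -> act
```

In the paper's setting, the lookup table would be replaced by a supervised model trained on the aggregated, calibrated labels so that blind-spot risk can be predicted in unseen regions of the state space.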