Discovering Blind Spots in Reinforcement Learning
Agents trained in simulation may make errors in the real world due to
mismatches between training and execution environments. These mistakes can be
dangerous and difficult to discover because the agent cannot predict them a
priori. We propose using oracle feedback to learn a predictive model of these
blind spots to reduce costly errors in real-world applications. We focus on
blind spots in reinforcement learning (RL) that occur due to incomplete state
representation: The agent does not have the appropriate features to represent
the true state of the world and thus cannot distinguish among numerous states.
We formalize the problem of discovering blind spots in RL as a noisy supervised
learning problem with class imbalance. We learn models to predict blind spots
in unseen regions of the state space by combining techniques for label
aggregation, calibration, and supervised learning. The models take into
consideration noise emerging from different forms of oracle feedback, including
demonstrations and corrections. We evaluate our approach on two domains and
show that it achieves higher predictive performance than baseline methods, and
that the learned model can be used to selectively query an oracle at execution
time to prevent errors. We also empirically analyze the biases of various
feedback types and how they influence the discovery of blind spots.
Comment: To appear at AAMAS 201
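The pipeline the abstract describes, aggregating noisy oracle labels, training a class-weighted predictor, and selectively querying the oracle at execution time, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 2-D toy state space, the majority-vote aggregation, the flip-noise rate, and all function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): states are 2-D feature vectors, and a
# state is a true "blind spot" when x0 + x1 > 1.5 (a rare event, so the
# labels are class-imbalanced).
X = rng.uniform(0.0, 1.0, size=(500, 2))
true_blind = (X.sum(axis=1) > 1.5).astype(int)

# Noisy oracle feedback: three independent labels per state, each flipped
# with 20% probability, aggregated by majority vote (label aggregation).
noisy = np.array([np.where(rng.random(500) < 0.2, 1 - true_blind, true_blind)
                  for _ in range(3)])
y = (noisy.sum(axis=0) >= 2).astype(int)

# Class-weighted logistic regression to handle the imbalance: each class
# is reweighted inversely to its frequency.
w_pos = len(y) / (2.0 * max(y.sum(), 1))
w_neg = len(y) / (2.0 * max(len(y) - y.sum(), 1))
weights = np.where(y == 1, w_pos, w_neg)

Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
theta = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ theta))
    grad = Xb.T @ (weights * (p - y)) / len(y)
    theta -= 0.5 * grad

def blind_spot_prob(state):
    """Predicted probability that `state` is a blind spot."""
    z = np.append(state, 1.0) @ theta
    return 1.0 / (1.0 + np.exp(-z))

def should_query_oracle(state, threshold=0.5):
    """Selective querying at execution time: defer to the oracle when the
    predicted blind-spot probability exceeds the threshold."""
    return blind_spot_prob(state) > threshold
```

Calibrating the predicted probabilities (a step the abstract mentions) would sit between training and the thresholding decision; it is omitted here to keep the sketch short.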
A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning
Current deep learning research is dominated by benchmark evaluation. A method
is regarded as favorable if it empirically performs well on the dedicated test
set. This mentality is seamlessly reflected in the resurfacing area of
continual learning, where consecutively arriving sets of benchmark data are
investigated. The core challenge is framed as protecting previously acquired
representations from being catastrophically forgotten due to the iterative
parameter updates. However, individual methods are still compared in
isolation from real-world applications and typically judged by
monitoring accumulated test set performance. The closed world assumption
remains predominant. It is assumed that during deployment a model is guaranteed
to encounter data that stems from the same distribution as used for training.
This poses a massive challenge as neural networks are well known to provide
overconfident false predictions on unknown instances and break down in the face
of corrupted data. In this work we argue that notable lessons from open set
recognition, the identification of statistically deviating data outside of the
observed dataset, and the adjacent field of active learning, where data is
incrementally queried such that the expected performance gain is maximized, are
frequently overlooked in the deep learning era. Based on these forgotten
lessons, we propose a consolidated view to bridge continual learning, active
learning and open set recognition in deep neural networks. Our results show
that this not only benefits each individual paradigm, but highlights the
natural synergies in a common framework. We empirically demonstrate
improvements in alleviating catastrophic forgetting, querying data in active
learning, and selecting task orders, while exhibiting robust open-world
behavior where previously proposed methods fail.
Comment: 32 pages
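A core ingredient the abstract draws from open set recognition is flagging inputs that deviate from the training distribution instead of forcing a closed-world class prediction. A common minimal baseline for this, shown here as an illustration rather than the paper's method, is thresholding the maximum softmax probability: confident predictions keep their class, while low-confidence inputs are rejected as "unknown". The function names and the threshold value are assumptions for the example.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def predict_open_set(logits, threshold=0.7):
    """Return the predicted class index, or -1 ("unknown") when the
    maximum softmax probability falls below `threshold`."""
    p = softmax(np.asarray(logits, dtype=float))
    return int(p.argmax()) if p.max() >= threshold else -1
```

A rejected input (return value -1) is exactly the kind of statistically deviating example that an active-learning loop would then prioritize for labeling, which is the bridge between the two paradigms that the abstract argues for.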
Efficient High-Dimensional Inference in the Multiple Measurement Vector Problem
In this work, a Bayesian approximate message passing algorithm is proposed
for solving the multiple measurement vector (MMV) problem in compressive
sensing, in which a collection of sparse signal vectors that share a common
support are recovered from undersampled noisy measurements. The algorithm,
AMP-MMV, is capable of exploiting temporal correlations in the amplitudes of
non-zero coefficients, and provides soft estimates of the signal vectors as
well as the underlying support. Central to the proposed approach is an
extension of recently developed approximate message passing techniques to the
amplitude-correlated MMV setting. Aided by these techniques, AMP-MMV offers a
computational complexity that is linear in all problem dimensions. In order to
allow for automatic parameter tuning, an expectation-maximization algorithm
that complements AMP-MMV is described. Finally, a detailed numerical study
demonstrates the power of the proposed approach and its particular suitability
for application to high-dimensional problems.
Comment: 28 pages, 9 figures
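The approximate message passing machinery underlying AMP-MMV can be illustrated in its simplest single-measurement-vector form: iterate a denoising step on a pseudo-data vector and a residual update with an Onsager correction term, at per-iteration cost linear in the problem dimensions. This sketch uses a soft-thresholding denoiser and a residual-based threshold rule; it is not the AMP-MMV algorithm itself (no temporal correlation across measurement vectors, no EM parameter tuning), and the constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 200, 100, 10                    # signal length, measurements, sparsity
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # unit-norm columns on average
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
y = A @ x_true                            # noiseless measurements for simplicity

def soft(v, t):
    """Soft-thresholding denoiser eta(v; t)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
z = y.copy()
for _ in range(100):
    tau = 2.0 * np.linalg.norm(z) / np.sqrt(m)     # threshold from residual energy
    r = x + A.T @ z                                # pseudo-data vector
    x = soft(r, tau)                               # denoising step
    onsager = z * (np.count_nonzero(x) / m)        # Onsager correction term
    z = y - A @ x + onsager                        # corrected residual
```

The Onsager term is what distinguishes AMP from plain iterative thresholding: it decorrelates the residual from past estimates, which is what makes the per-iteration effective noise approximately Gaussian and the cheap scalar denoiser accurate.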