19 research outputs found
Learning Multimodal Transition Dynamics for Model-Based Reinforcement Learning
In this paper we study how to learn stochastic, multimodal transition
dynamics in reinforcement learning (RL) tasks. We focus on evaluating
transition function estimation, while we defer planning over this model to
future work. Stochasticity is a fundamental property of many task environments.
However, discriminative function approximators have difficulty estimating
multimodal stochasticity. In contrast, deep generative models do capture
complex high-dimensional outcome distributions. First we discuss why, amongst
such models, conditional variational inference (VI) is theoretically most
appealing for model-based RL. Subsequently, we compare different VI models on
their ability to learn complex stochasticity on simulated functions, as well as
on a typical RL gridworld with multimodal dynamics. Results show that VI
successfully predicts multimodal outcomes, while robustly collapsing to a
single mode on the deterministic parts of the dynamics. In summary, we show a robust
method to learn multimodal transitions using function approximation, which is a
key preliminary for model-based RL in stochastic domains.
Comment: Scaling Up Reinforcement Learning (SURL) Workshop @ European Machine Learning Conference (ECML)
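As one way to make the conditional-VI idea above concrete, here is a minimal conditional VAE for a transition model p(s' | s, a), sketched in PyTorch under our own assumptions; the module layout, dimensions, and names are illustrative and not taken from the paper's code.

```python
import torch
import torch.nn as nn

class CVAETransitionModel(nn.Module):
    """Conditional VAE for p(s' | s, a): a latent z captures which mode of the
    next-state distribution a transition came from."""
    def __init__(self, state_dim, action_dim, latent_dim=4, hidden=64):
        super().__init__()
        cond = state_dim + action_dim
        self.encoder = nn.Sequential(                     # q(z | s, a, s')
            nn.Linear(cond + state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim))            # mean and log-variance
        self.decoder = nn.Sequential(                     # p(s' | s, a, z)
            nn.Linear(cond + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))
        self.latent_dim = latent_dim

    def elbo(self, s, a, s_next):
        mu, logvar = self.encoder(torch.cat([s, a, s_next], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.decoder(torch.cat([s, a, z], -1))
        rec = ((recon - s_next) ** 2).sum(-1)                 # Gaussian likelihood
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1)
        return -(rec + kl).mean()                             # maximize this

    def sample(self, s, a):
        z = torch.randn(s.shape[0], self.latent_dim)          # z ~ p(z) = N(0, I)
        return self.decoder(torch.cat([s, a, z], -1))
```

Training maximizes the ELBO on observed transitions; at prediction time z is drawn from its prior, so distinct modes of s' correspond to distinct regions of the latent space.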
Bayesian Semisupervised Learning with Deep Generative Models
Neural network based generative models with discriminative components are a
powerful approach for semi-supervised learning. However, these techniques a)
cannot account for model uncertainty in the estimation of the model's
discriminative component and b) lack flexibility to capture complex stochastic
patterns in the label generation process. To avoid these problems, we first
propose to use a discriminative component with stochastic inputs for increased
noise flexibility. We show how an efficient Gibbs sampling procedure can
marginalize the stochastic inputs when inferring missing labels in this model.
Following this, we extend the discriminative component to be fully Bayesian and
produce estimates of uncertainty in its parameter values. This opens the door
to semi-supervised Bayesian active learning.
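A toy sketch of the kind of sampling procedure the abstract describes, under our own simplifying assumptions: labels follow y ~ Cat(softmax(f(x, z))) with a scalar stochastic input z ~ N(0, sigma^2). Since the conditional of z given a label has no closed form, we substitute a Metropolis-within-Gibbs step for it; f is a hypothetical stand-in for the trained discriminative component.

```python
import numpy as np

def gibbs_impute_labels(f, X_unlab, n_classes, iters=100, sigma=1.0):
    """Alternate sampling of stochastic inputs z_i and missing labels y_i.
    f(x, z) is a hypothetical trained predictor returning class logits."""
    n = len(X_unlab)
    z = np.random.randn(n) * sigma
    y = np.random.randint(n_classes, size=n)

    def log_joint(x, zv, yi):
        logits = f(x, zv)
        logits = logits - logits.max()
        log_lik = logits[yi] - np.log(np.exp(logits).sum())  # log p(y | x, z)
        return log_lik - zv ** 2 / (2.0 * sigma ** 2)        # + log prior on z

    for _ in range(iters):
        for i, x in enumerate(X_unlab):
            # z_i | (x_i, y_i): no closed form, so take one Metropolis step
            z_prop = z[i] + 0.5 * np.random.randn()
            if np.log(np.random.rand()) < log_joint(x, z_prop, y[i]) - log_joint(x, z[i], y[i]):
                z[i] = z_prop
            # y_i | (x_i, z_i): exact categorical conditional
            logits = f(x, z[i])
            p = np.exp(logits - logits.max())
            y[i] = np.random.choice(n_classes, p=p / p.sum())
    return y, z
```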
Uncertainty Decomposition in Bayesian Neural Networks with Latent Variables
Bayesian neural networks (BNNs) with latent variables are probabilistic
models which can automatically identify complex stochastic patterns in the
data. We describe and study a decomposition of these models' predictive
uncertainty into its epistemic and aleatoric components. First, we show how
such a decomposition arises naturally in a Bayesian active learning scenario by
following an information theoretic approach. Second, we use a similar
decomposition to develop a novel risk-sensitive objective for safe
reinforcement learning (RL). This objective minimizes the effect of model bias
in environments whose stochastic dynamics are described by BNNs with latent
variables. Our experiments illustrate the usefulness of the resulting
decomposition in active learning and safe RL settings.
Comment: This article is superseded by arXiv:1710.0728
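The decomposition the abstract refers to is, in its standard information-theoretic form (our rendering, with y the output, x the input, theta the model parameters, and D the training data):

```latex
\mathcal{H}\!\left[ y \mid x, \mathcal{D} \right]
\;=\;
\underbrace{\mathcal{I}\!\left( y ; \theta \mid x, \mathcal{D} \right)}_{\text{epistemic}}
\;+\;
\underbrace{\mathbb{E}_{\theta \sim p(\theta \mid \mathcal{D})}
\Big[ \mathcal{H}\!\left[ y \mid x, \theta \right] \Big]}_{\text{aleatoric}}
```

The mutual-information term vanishes as the posterior over parameters concentrates with more data, while the expected-entropy term captures noise that no amount of data can remove.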
The Potential of the Return Distribution for Exploration in RL
This paper studies the potential of the return distribution for exploration
in deterministic reinforcement learning (RL) environments. We compare network
losses and propagation mechanisms for Gaussian, Categorical, and Gaussian
mixture distributions. Combined with exploration policies that leverage this
return distribution, we solve, for example, a randomized Chain task of length
100, which had not previously been reported when learning with neural networks.
Comment: Published at the Exploration in Reinforcement Learning Workshop at the 35th International Conference on Machine Learning, Stockholm, Sweden
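The paper compares several distributional parameterizations; as one concrete instance of propagating a return distribution through the Bellman equation, here is a minimal categorical (C51-style) projection step in NumPy, a sketch under our own assumptions rather than the paper's code.

```python
import numpy as np

def project_categorical(p_next, r, gamma, z, v_min, v_max):
    """Project the Bellman target (atoms r + gamma * z_j with mass p_next[j])
    back onto the fixed support z, splitting mass between neighboring atoms."""
    n_atoms = len(z)
    dz = (v_max - v_min) / (n_atoms - 1)
    m = np.zeros(n_atoms)
    for p_j, z_j in zip(p_next, z):
        tz = np.clip(r + gamma * z_j, v_min, v_max)  # shifted and scaled atom
        b = (tz - v_min) / dz                        # fractional index on support
        l, u = int(np.floor(b)), int(np.ceil(b))
        if l == u:                                   # atom landed exactly on a bin
            m[l] += p_j
        else:
            m[l] += p_j * (u - b)
            m[u] += p_j * (b - l)
    return m

# Example: 51 atoms on [-10, 10].
# z = np.linspace(-10.0, 10.0, 51)
# target = project_categorical(p_next, r=1.0, gamma=0.99, z=z, v_min=-10.0, v_max=10.0)
```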
Unsupervised Video Object Segmentation for Deep Reinforcement Learning
We present a new technique for deep reinforcement learning that automatically
detects moving objects and uses the relevant information for action selection.
The detection of moving objects is done in an unsupervised way by exploiting
structure from motion. Instead of directly learning a policy from raw images,
the agent first learns to detect and segment moving objects by exploiting flow
information in video sequences. The learned representation is then used to
focus the policy of the agent on the moving objects. Over time, the agent
identifies which objects are critical for decision making and gradually builds
a policy based on relevant moving objects. This approach, which we call
Motion-Oriented REinforcement Learning (MOREL), is demonstrated on a suite of
Atari games where the ability to detect moving objects reduces the amount of
interaction needed with the environment to obtain a good policy. Furthermore,
the resulting policy is more interpretable than policies that directly map
images to actions or values with a black box neural network. We can gain
insight into the policy by inspecting the segmentation and motion of each
object detected by the agent. This allows practitioners to confirm whether a
policy is making decisions based on sensible information.
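The learned segmentation itself is beyond a short sketch, but the underlying structure-from-motion signal can be illustrated with classical dense optical flow; this stand-in uses OpenCV's Farneback flow and a magnitude threshold, and is not the MOREL architecture.

```python
import cv2
import numpy as np

def motion_mask(prev_gray, next_gray, thresh=1.0):
    """Binary mask of moving pixels from dense optical flow between two
    consecutive grayscale frames (uint8 arrays of equal shape)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=-1)       # per-pixel motion magnitude
    return (magnitude > thresh).astype(np.float32)

# The mask can then gate what the policy attends to, e.g.:
# masked_obs = frame * motion_mask(prev_frame, frame)[..., None]
```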
Variational Inference for Data-Efficient Model Learning in POMDPs
Partially observable Markov decision processes (POMDPs) are a powerful
abstraction for tasks that require decision making under uncertainty, and
capture a wide range of real-world tasks. Today, effective planning approaches
exist that generate strong strategies given a black-box model of a POMDP
task. Yet, an open question is how to acquire accurate models for complex
domains. In this paper we propose DELIP, an approach to model learning for
POMDPs that utilizes amortized structured variational inference. We empirically
show that our model leads to effective control strategies when coupled with
state-of-the-art planners. Intuitively, model-based approaches should be
particularly beneficial in environments with changing reward structures, or
where rewards are initially unknown. Our experiments confirm that DELIP is
particularly effective in this setting.
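A minimal sketch of amortized structured VI for trajectories, in the spirit of (but not reproducing) DELIP: a recurrent inference network reads the observation-action history and proposes a per-step latent state that must reconstruct observations and rewards. Assumes PyTorch; all names and shapes are illustrative.

```python
import torch
import torch.nn as nn

class SequentialVAE(nn.Module):
    """Per-step latent state z_t inferred from the observation-action history;
    z_t must reconstruct the observation and reward at each step."""
    def __init__(self, obs_dim, act_dim, z_dim=8, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hidden, batch_first=True)
        self.enc = nn.Linear(hidden, 2 * z_dim)   # q(z_t | h_t): mean, log-variance
        self.dec_obs = nn.Linear(z_dim, obs_dim)  # p(o_t | z_t)
        self.dec_rew = nn.Linear(z_dim, 1)        # p(r_t | z_t)

    def elbo(self, obs, act, rew):
        # obs: (B, T, obs_dim), act: (B, T, act_dim), rew: (B, T)
        h, _ = self.rnn(torch.cat([obs, act], dim=-1))      # amortized inference
        mu, logvar = self.enc(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        rec = ((self.dec_obs(z) - obs) ** 2).sum(-1) \
            + (self.dec_rew(z).squeeze(-1) - rew) ** 2      # Gaussian likelihoods
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1)
        return -(rec + kl).sum(dim=-1).mean()               # ELBO over the trajectory
```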
ADAPT: Zero-Shot Adaptive Policy Transfer for Stochastic Dynamical Systems
Model-free policy learning has enabled robust performance of complex tasks
with relatively simple algorithms. However, this simplicity comes at the cost
of requiring an oracle and arguably very poor sample complexity, which renders
such methods unsuitable for physical systems. Variants of model-based methods
address this problem through the use of simulators; however, this gives rise to
the problem of transferring the policy from the simulated to the physical system. Model
mismatch due to systematic parameter shift and unmodelled dynamics error may
cause sub-optimal or unsafe behavior upon direct transfer. We introduce the
Adaptive Policy Transfer for Stochastic Dynamics (ADAPT) algorithm that
achieves provably safe and robust, dynamically-feasible zero-shot transfer of
RL-policies to new domains with dynamics error. ADAPT combines the strengths of
offline policy learning in a black-box source simulator with online tube-based
MPC to attenuate bounded model mismatch between the source and target dynamics.
ADAPT allows online transfer of policy, trained solely in a simulation offline,
to a family of unknown targets without fine-tuning. We also formally show that
(i) ADAPT guarantees state and control safety through state-action tubes under
the assumption of Lipschitz continuity of the divergence in dynamics and, (ii)
ADAPT results in a bounded loss of reward accumulation relative to a policy
trained and evaluated in the source environment. We evaluate ADAPT on two
continuous, non-holonomic simulated dynamical systems with four different
disturbance models, and find that ADAPT performs 50% to 300% better in mean
reward accrual than direct policy transfer.
Comment: International Symposium on Robotics Research (ISRR), 201
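Real tube-based MPC solves a constrained optimal control problem online; the toy sketch below only conveys the structure of the online correction, picking an action near the simulator policy's nominal action that keeps the nominal model's one-step prediction close to the nominal trajectory. All names (f_nominal, x_nom, u_nom) are our hypothetical stand-ins.

```python
import numpy as np

def adapt_like_step(x, x_nom, u_nom, f_nominal, n_cand=256, u_bound=0.5):
    """Pick a correction around the nominal (simulator-trained) action that keeps
    the nominal model's one-step prediction close to the nominal trajectory.
    A toy stand-in for the constrained tube-MPC subproblem."""
    best_u = u_nom
    best_cost = np.linalg.norm(f_nominal(x, u_nom) - x_nom)
    for _ in range(n_cand):
        u = u_nom + np.random.uniform(-u_bound, u_bound, size=np.shape(u_nom))
        cost = np.linalg.norm(f_nominal(x, u) - x_nom)  # one-step tracking error
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u
```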
Multimodal Probabilistic Model-Based Planning for Human-Robot Interaction
This paper presents a method for constructing human-robot interaction
policies in settings where multimodality, i.e., the possibility of multiple
highly distinct futures, plays a critical role in decision making. We are
motivated in this work by the example of traffic weaving, e.g., at highway
on-ramps/off-ramps, where entering and exiting cars must swap lanes in a short
distance, a challenging negotiation even for experienced drivers due to the
inherent multimodal uncertainty of who will pass whom. Our approach is to learn
multimodal probability distributions over future human actions from a dataset
of human-human exemplars and perform real-time robot policy construction in the
resulting environment model through massively parallel sampling of human
responses to candidate robot action sequences. Direct learning of these
distributions is made possible by recent advances in the theory of conditional
variational autoencoders (CVAEs), whereby we learn action distributions
conditioned simultaneously on the present interaction history and on candidate
future robot actions, in order to account for response dynamics.
We demonstrate the efficacy of this approach with a human-in-the-loop
simulation of a traffic weaving scenario.
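The policy-construction step reduces to scoring candidate robot action sequences against sampled human futures. A schematic sketch, where sample_human is a hypothetical stand-in for decoding the trained CVAE and cost is a task-specific cost function:

```python
import numpy as np

def pick_robot_plan(sample_human, cost, history, robot_candidates, n_samples=1024):
    """For each candidate robot action sequence, draw many human futures from the
    learned conditional model and average a task cost; return the best candidate.
    sample_human and cost are hypothetical stand-ins."""
    scores = []
    for robot_seq in robot_candidates:
        human_futures = sample_human(history, robot_seq, n_samples)  # (n_samples, ...)
        scores.append(np.mean([cost(robot_seq, h) for h in human_futures]))
    return robot_candidates[int(np.argmin(scores))]
```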
Robust and Efficient Transfer Learning with Hidden-Parameter Markov Decision Processes
We introduce a new formulation of the Hidden Parameter Markov Decision
Process (HiP-MDP), a framework for modeling families of related tasks using
low-dimensional latent embeddings. Our new framework correctly models the joint
uncertainty in the latent parameters and the state space. We also replace the
original Gaussian Process-based model with a Bayesian Neural Network, enabling
more scalable inference. Thus, we expand the scope of the HiP-MDP to
applications with higher dimensions and more complex dynamics.
Comment: To appear at NIPS 2017, selected for an oral presentation. 17 pages (incl. references and appendix). Example code can be found at http://github.com/dtak/hip-mdp-publi
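A minimal sketch of the modeling idea, with our own simplifications: a dynamics network conditioned on a learned low-dimensional task embedding w_b, using MC-dropout as a cheap stand-in for the paper's Bayesian neural network. PyTorch; names and shapes are illustrative.

```python
import torch
import torch.nn as nn

class HiPDynamics(nn.Module):
    """Dynamics model s' = g(s, a, w_b) conditioned on a low-dimensional latent
    task embedding w_b, with MC-dropout standing in for full BNN inference."""
    def __init__(self, state_dim, action_dim, n_tasks, w_dim=5, hidden=128):
        super().__init__()
        self.task_embed = nn.Embedding(n_tasks, w_dim)   # latent parameters w_b
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + w_dim, hidden), nn.ReLU(),
            nn.Dropout(0.1),                             # keep active for MC samples
            nn.Linear(hidden, state_dim))

    def forward(self, s, a, task_id):
        w = self.task_embed(task_id)                     # (batch, w_dim)
        return self.net(torch.cat([s, a, w], dim=-1))    # predicted next state
```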
Efficient exploration with Double Uncertain Value Networks
This paper studies directed exploration for reinforcement learning agents by
tracking uncertainty about the value of each available action. We identify two
sources of uncertainty that are relevant for exploration. The first originates
from limited data (parametric uncertainty), while the second originates from
the distribution of the returns (return uncertainty). We identify methods to
learn these distributions with deep neural networks, where we estimate
parametric uncertainty with Bayesian drop-out, while return uncertainty is
propagated through the Bellman equation as a Gaussian distribution. We then
show that both can be jointly estimated in one network, which we call the
Double Uncertain Value Network. The policy is directly derived from the learned
distributions based on Thompson sampling. Experimental results show that both
types of uncertainty may vastly improve learning in domains with a strong
exploration challenge.
Comment: Deep Reinforcement Learning Symposium @ Conference on Neural Information Processing Systems (NIPS) 201
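A sketch of how the two uncertainties can live in one network, under our own assumptions rather than the paper's exact architecture: MC-dropout supplies the parametric draw and a Gaussian output head supplies the return distribution; Thompson sampling then acts greedily on a single joint sample.

```python
import torch
import torch.nn as nn

class DoubleUncertainValueNet(nn.Module):
    """One network, two uncertainties: dropout (kept stochastic via .train())
    gives a parametric draw; a Gaussian head gives the return distribution."""
    def __init__(self, obs_dim, n_actions, hidden=128, p_drop=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Dropout(p_drop))
        self.mean = nn.Linear(hidden, n_actions)      # per-action return mean
        self.log_var = nn.Linear(hidden, n_actions)   # per-action return log-variance

    def thompson_action(self, obs):
        h = self.body(obs)                            # one dropout mask = one draw
        mu = self.mean(h)
        sigma = (0.5 * self.log_var(h)).exp()
        sample = mu + sigma * torch.randn_like(mu)    # sample from return distribution
        return sample.argmax(dim=-1)

# net = DoubleUncertainValueNet(obs_dim=4, n_actions=2)
# net.train()  # keep dropout stochastic so each call is one Thompson sample
# a = net.thompson_action(torch.randn(1, 4))
```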