Near-Optimal BRL using Optimistic Local Transitions
Model-based Bayesian Reinforcement Learning (BRL) allows a sound
formalization of the problem of acting optimally while facing an unknown
environment, i.e., resolving the exploration-exploitation dilemma. However,
algorithms explicitly addressing BRL suffer from such a combinatorial explosion
that a large body of work relies on heuristic algorithms. This paper introduces
BOLT, a simple and (almost) deterministic heuristic algorithm for BRL which is
optimistic about the transition function. We analyze BOLT's sample complexity,
and show that under certain parameters, the algorithm is near-optimal in the
Bayesian sense with high probability. Then, experimental results highlight the
key differences of this method compared to previous work.
Comment: ICML 2012
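For intuition, here is a minimal tabular sketch of the local-optimism idea,
assuming a flat MDP with Dirichlet posterior counts over transitions. BOLT
itself plans in the belief MDP of the Bayes-adaptive formulation, and all
names below are illustrative, not the authors' code. For each state-action
pair, the planner evaluates the model obtained by adding eta artificial
observations of the most favorable successor:

    import numpy as np

    def bolt_value_iteration(counts, rewards, gamma=0.95, eta=1.0,
                             n_iters=200):
        """Tabular sketch of BOLT-style local optimism.

        counts  : (S, A, S) Dirichlet posterior counts (prior included)
        rewards : (S, A) expected immediate rewards
        """
        S, A, _ = counts.shape
        V = np.zeros(S)
        for _ in range(n_iters):
            Q = np.zeros((S, A))
            for s in range(S):
                for a in range(A):
                    n = counts[s, a].astype(float)
                    total = n.sum() + eta
                    # Place eta artificial observations on each candidate
                    # successor sigma in turn and keep the most favorable
                    # resulting model: this is the (local) optimism.
                    best = -np.inf
                    for sigma in range(S):
                        p = n.copy()
                        p[sigma] += eta
                        best = max(best, (p / total) @ V)
                    Q[s, a] = rewards[s, a] + gamma * best
            V = Q.max(axis=1)
        return Q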
PAC-Bayesian Analysis of the Exploration-Exploitation Trade-off
We develop a coherent framework for simultaneous analysis of the
exploration-exploitation and model-order-selection trade-offs. We improve over
our preceding results on the same subject (Seldin et al., 2011) by combining
PAC-Bayesian analysis with a Bernstein-type inequality for martingales. Such a
combination is also of independent interest for studies of multiple
simultaneously evolving martingales.
Comment: On-line Trading of Exploration and Exploitation 2 - ICML-2011
workshop. http://explo.cs.ucl.ac.uk/workshop
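For reference, one standard Bernstein-type inequality for martingales is
Freedman's inequality; the precise statement combined with the PAC-Bayesian
analysis in the paper may differ in its constants. For a martingale
M_n = \sum_{i \le n} X_i with increments X_i \le b and predictable variance
V_n = \sum_{i \le n} \mathbb{E}[X_i^2 \mid \mathcal{F}_{i-1}],
\[
  \Pr\bigl[\exists n:\ M_n \ge \lambda \ \text{and}\ V_n \le v\bigr]
  \;\le\; \exp\!\Bigl(-\frac{\lambda^2}{2\,(v + b\lambda/3)}\Bigr).
\]
Unlike Hoeffding-Azuma, the bound scales with the realized variance v rather
than with worst-case increment bounds, which is what makes it attractive for
exploration-exploitation analysis.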
PAC-Bayesian Analysis of Martingales and Multiarmed Bandits
We present two alternative ways to apply PAC-Bayesian analysis to sequences
of dependent random variables. The first is based on a new lemma that makes it
possible to bound expectations of convex functions of certain dependent random
variables by expectations of the same functions of independent Bernoulli random
variables. This lemma provides an alternative to the Hoeffding-Azuma inequality
for bounding the concentration of martingale values. Our second approach is
based on integrating the Hoeffding-Azuma inequality with PAC-Bayesian analysis.
We also introduce a way to apply PAC-Bayesian analysis in situations of limited
feedback. We combine the new tools to derive PAC-Bayesian generalization and
regret bounds for the multiarmed bandit problem. Although our regret bound is
not yet as tight as state-of-the-art regret bounds based on other
well-established techniques, our results significantly expand the range of
potential applications of PAC-Bayesian analysis and introduce a new analysis
tool to reinforcement learning and many other fields, where martingales and
limited feedback are encountered.
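For reference, the Hoeffding-Azuma inequality mentioned above states that for
any martingale (M_i) with bounded increments |M_i - M_{i-1}| \le c_i,
\[
  \Pr\bigl[M_n - M_0 \ge t\bigr]
  \;\le\; \exp\!\Bigl(-\frac{t^2}{2\sum_{i=1}^{n} c_i^2}\Bigr).
\]
The paper's first approach replaces this tool with a comparison to independent
Bernoulli variables; its second approach plugs this inequality directly into
the PAC-Bayesian machinery.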
Bayesian Reinforcement Learning via Deep, Sparse Sampling
We address the problem of Bayesian reinforcement learning using efficient
model-based online planning. We propose an optimism-free Bayes-adaptive
algorithm to induce deeper and sparser exploration with a theoretical bound on
its performance relative to the Bayes optimal policy, with a lower
computational complexity. The main novelty is the use of a candidate policy
generator that produces long-term options in the planning tree (over beliefs),
which allows us to build much sparser and deeper trees. Experimental results
on different environments show that in comparison to the state-of-the-art, our
algorithm is both computationally more efficient, and obtains significantly
higher reward in discrete environments.
Comment: Published in AISTATS 2020
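As a point of reference, the generic sparse-sampling recursion (Kearns,
Mansour & Ng, 1999) that such planners build on looks like the Python sketch
below. The simulator interface `sim` and all names are hypothetical, and the
paper's algorithm additionally plans over beliefs and injects long-term
options from the candidate policy generator rather than single actions:

    def sparse_sampling_value(sim, state, actions, depth, width, gamma=0.95):
        """Generic sparse-sampling estimate of the value of `state`.

        sim(state, action) -> (next_state, reward) is a generative model.
        Samples `width` successors per action and recurses to `depth`,
        so the tree size is independent of the size of the state space.
        """
        if depth == 0:
            return 0.0
        best = float("-inf")
        for a in actions:
            total = 0.0
            for _ in range(width):
                nxt, r = sim(state, a)
                total += r + gamma * sparse_sampling_value(
                    sim, nxt, actions, depth - 1, width, gamma)
            best = max(best, total / width)
        return best

Replacing primitive actions with multi-step options, as the paper does, lets
the same sample budget reach much greater effective depth.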
PAC-Bayesian Soft Actor-Critic Learning
Actor-critic algorithms address the dual goals of reinforcement learning
(RL), policy evaluation and improvement, via two separate function
approximators. The practicality of this approach comes at the expense of
training instability, caused mainly by the destructive effect of the
approximation errors of the critic on the actor. We tackle this bottleneck by
employing an existing Probably Approximately Correct (PAC) Bayesian bound for
the first time as the critic training objective of the Soft Actor-Critic (SAC)
algorithm. We further demonstrate that online learning performance improves
significantly when a stochastic actor explores multiple futures by
critic-guided random search. We observe that the resulting algorithm compares
favorably to the state of the art on multiple classical control and locomotion
tasks in terms of both sample efficiency and regret minimization.
Comment: 16 pages, 2 figures
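For orientation, critic objectives of this kind build on PAC-Bayesian
generalization bounds of the McAllester type; one common form (the bound
actually employed in the paper may differ) states that with probability at
least 1-\delta over an i.i.d. sample of size n, simultaneously for all
posteriors \rho over hypotheses h and any prior \pi fixed in advance,
\[
  \mathbb{E}_{h\sim\rho}[L(h)]
  \;\le\; \mathbb{E}_{h\sim\rho}[\hat{L}_n(h)]
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln(2\sqrt{n}/\delta)}{2n}}.
\]
Used as a training objective, the empirical term drives the fit of the critic
while the KL term regularizes the posterior over critic parameters toward the
prior, which is what counteracts the destructive effect of critic
approximation errors on the actor.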