Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes
While designing the state space of an MDP, it is common to include states
that are transient or not reachable by any policy (e.g., in mountain car, the
product space of speed and position contains configurations that are not
physically reachable). This leads to defining weakly-communicating or
multi-chain MDPs. In this paper, we introduce TUCRL, the first algorithm able
to perform efficient exploration-exploitation in any finite Markov Decision
Process (MDP) without requiring any form of prior knowledge. In particular, for
any MDP with $S^C$ communicating states, $A$ actions and $\Gamma^C \leq S^C$
possible communicating next states, we derive a
$\widetilde{O}(D^C \sqrt{\Gamma^C S^C A T})$ regret bound, where $D^C$ is the diameter
(i.e., the longest shortest path) of the communicating part of the MDP. This is
in contrast with optimistic algorithms (e.g., UCRL, Optimistic PSRL) that
suffer linear regret in weakly-communicating MDPs, as well as posterior
sampling or regularised algorithms (e.g., REGAL), which require prior knowledge
on the bias span of the optimal policy to bias the exploration to achieve
sub-linear regret. We also prove that in weakly-communicating MDPs, no
algorithm can ever achieve a logarithmic growth of the regret without first
suffering a linear regret for a number of steps that is exponential in the
parameters of the MDP. Finally, we report numerical simulations supporting our
theoretical findings and showing how TUCRL overcomes the limitations of the
state-of-the-art.
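For intuition on the diameter term $D^C$ in the bound above: in a deterministic MDP it is literally the longest shortest path between states of the communicating class, with transient states (like the physically unreachable mountain-car configurations) excluded. Below is a minimal sketch under that deterministic assumption; the helper name and toy chain are illustrative, not from the paper, and the general stochastic definition of $D^C$ involves expected hitting times rather than BFS distances.

```python
from collections import deque

def diameter_of_communicating_part(succ, start):
    """Sketch: for a *deterministic* MDP given as succ[s] = next states
    over all actions, D^C reduces to the longest shortest path between
    states of the communicating class containing `start`. (Stochastic
    MDPs need expected hitting times under the best policy instead.)"""
    def reach(adj, s0):
        seen, q = {s0}, deque([s0])
        while q:
            s = q.popleft()
            for t in adj[s]:
                if t not in seen:
                    seen.add(t)
                    q.append(t)
        return seen

    fwd = reach(succ, start)                  # reachable from start
    rev = {s: [] for s in fwd}                # reversed edges within fwd
    for s in fwd:
        for t in succ[s]:
            if t in fwd:
                rev[t].append(s)
    comm = reach(rev, start) & fwd            # mutually reachable states

    best = 0                                  # BFS from each state in comm
    for s in comm:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in succ[u]:
                if v in comm and v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

# Toy chain 0 <-> 1 <-> 2 with a transient sink 3: D^C = 2, state 3 ignored
succ = {0: [1], 1: [0, 2], 2: [1, 3], 3: [3]}
print(diameter_of_communicating_part(succ, 0))  # -> 2
```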
Chasing Ghosts: Competing with Stateful Policies
We consider sequential decision making in a setting where regret is measured
with respect to a set of stateful reference policies, and feedback is limited
to observing the rewards of the actions performed (the so called "bandit"
setting). If either the reference policies are stateless rather than stateful,
or the feedback includes the rewards of all actions (the so called "expert"
setting), previous work shows that the optimal regret grows like
$\Theta(\sqrt{T})$ in terms of the number of decision rounds $T$.
The difficulty in our setting is that the decision maker unavoidably loses
track of the internal states of the reference policies, and thus cannot
reliably attribute rewards observed in a certain round to any of the reference
policies. In fact, in this setting it is impossible for the algorithm to
estimate which policy gives the highest (or even approximately highest) total
reward. Nevertheless, we design an algorithm that achieves expected regret that
is sublinear in $T$, of the form $O(T / \log^{1/4} T)$. Our algorithm is based
on a certain local repetition lemma that may be of independent interest. We
also show that no algorithm can guarantee expected regret better than
$\Omega(T / \log^{3/2} T)$.
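For context on the $\Theta(\sqrt{T})$ baseline that the paper contrasts against: with stateless reference policies the problem collapses to a standard adversarial multi-armed bandit, where EXP3 achieves that rate. Here is a minimal EXP3 sketch as a point of reference, not part of this paper; the toy reward table, seed, and gamma value are illustrative assumptions.

```python
import math
import random

def exp3(rewards, gamma=0.1):
    """EXP3 (Auer et al.) for adversarial bandits: over T rounds it pulls
    one of K arms, sees only that arm's reward, and competes with the
    best fixed arm -- the stateless baseline mentioned above."""
    T, K = len(rewards), len(rewards[0])
    w = [1.0] * K
    earned = 0.0
    for t in range(T):
        s = sum(w)
        p = [(1 - gamma) * wi / s + gamma / K for wi in w]  # forced exploration
        k = random.choices(range(K), weights=p)[0]
        x = rewards[t][k]            # bandit feedback: only arm k is observed
        earned += x
        w[k] *= math.exp(gamma * x / (p[k] * K))  # importance-weighted update
        m = max(w)
        w = [wi / m for wi in w]     # rescale to avoid float overflow
    return earned

# Toy run: arm 1 dominates, so regret against the best fixed arm should be
# o(T) -- on the order of sqrt(T) -- matching the stateless-policy rate.
random.seed(0)
T = 10_000
rewards = [[0.4, 0.6] for _ in range(T)]
best = max(sum(r[k] for r in rewards) for k in range(2))
print("regret:", best - exp3(rewards))
```

With stateful reference policies, by contrast, the round-by-round attribution that powers the importance-weighted update above is exactly what breaks down, which is why the paper's achievable rate degrades from $\sqrt{T}$ to $T / \log^{1/4} T$.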