Regret Bounds for Reinforcement Learning via Markov Chain Concentration
We give a simple optimistic algorithm for which it is easy to derive regret bounds of Õ(√(t_mix · S · A · T)) after T steps in uniformly ergodic Markov decision processes with S states, A actions, and mixing time parameter t_mix. These bounds are the first regret bounds in the general, non-episodic setting with an optimal dependence on all given parameters. They could be improved only by using an alternative mixing time parameter.
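The mixing time parameter above can be illustrated numerically. The following is a minimal sketch (not from the paper): for a made-up 3-state ergodic chain, it takes the mixing time to be the smallest t at which the total variation distance between P^t(s, ·) and the stationary distribution is at most 1/4 for every start state s.

```python
import numpy as np

# A made-up 3-state ergodic transition matrix (illustration only).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# normalized to sum to 1.
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.argmin(np.abs(w - 1))])
mu = mu / mu.sum()

def mixing_time(P, mu, eps=0.25, t_max=1000):
    """Smallest t with max_s TV(P^t(s, .), mu) <= eps."""
    Pt = np.eye(P.shape[0])
    for t in range(1, t_max + 1):
        Pt = Pt @ P  # Pt now holds P^t
        # Worst-case total variation distance over start states.
        tv = 0.5 * np.abs(Pt - mu).sum(axis=1).max()
        if tv <= eps:
            return t
    return None

print(mixing_time(P, mu))
```

The threshold 1/4 is the conventional choice in the mixing-time literature; the paper's own parameter may be defined differently.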
Linear dependence of stationary distributions in ergodic Markov decision processes
In ergodic MDPs we consider stationary distributions of policies that coincide in all but n states, in each of which one of two possible actions is chosen. We give conditions and formulas for linear dependence of the stationary distributions of n + 2 such policies, and show some results about combinations and mixtures of policies.
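The objects studied here can be made concrete with a small numerical sketch (not from the paper): a made-up 3-state ergodic MDP in which policies coincide everywhere except in states 0 and 1 (so n = 2), where each policy picks one of two actions. The code computes the stationary distribution of the Markov chain induced by each of the four resulting policies; the paper's conditions for linear dependence are not checked here.

```python
import numpy as np

S = 3
# P[s][a] = transition distribution from state s under action a
# (all probabilities made up for illustration).
P = {
    0: {0: [0.6, 0.2, 0.2], 1: [0.1, 0.7, 0.2]},
    1: {0: [0.3, 0.4, 0.3], 1: [0.5, 0.1, 0.4]},
    2: {0: [0.2, 0.3, 0.5]},  # only one action in state 2
}

def stationary(policy):
    """Stationary distribution of the chain induced by `policy`,
    a dict mapping state -> chosen action."""
    Ppi = np.array([P[s][policy[s]] for s in range(S)])
    w, v = np.linalg.eig(Ppi.T)
    mu = np.real(v[:, np.argmin(np.abs(w - 1))])
    return mu / mu.sum()

# The four policies that differ only in states 0 and 1.
policies = [{0: a0, 1: a1, 2: 0} for a0 in (0, 1) for a1 in (0, 1)]
mus = np.array([stationary(pi) for pi in policies])
print(np.round(mus, 3))
```

Each row of `mus` is one policy's stationary distribution; since every induced chain here has strictly positive transition probabilities, each chain is ergodic and the stationary distribution is unique.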