    Learning to Control in Metric Space with Optimal Regret

    We study online reinforcement learning for finite-horizon deterministic control systems with {\it arbitrary} state and action spaces. Suppose that the transition dynamics and reward function are unknown, but the state and action spaces are endowed with a metric that characterizes the proximity between different states and actions. We provide a surprisingly simple upper-confidence reinforcement learning algorithm that uses a function approximation oracle to estimate optimistic Q functions from experiences. We show that the regret of the algorithm after $K$ episodes is $O(HL(KH)^{\frac{d-1}{d}})$, where $L$ is a smoothness parameter and $d$ is the doubling dimension of the state-action space with respect to the given metric. We also establish a near-matching regret lower bound. The proposed method can be adapted to more structured transition systems, including the finite-state case and the case where value functions are linear combinations of features, where it also achieves the optimal regret.
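    A minimal sketch of the general idea behind such metric-based optimism (not the paper's actual algorithm): past experiences plus an assumed smoothness constant $L$ give an upper-confidence bound on the Q-value at any query state-action pair. The function `optimistic_q`, the Euclidean metric, and the toy data below are illustrative assumptions.

```python
import numpy as np

def optimistic_q(query_sa, experiences, L, q_max):
    """Illustrative upper-confidence Q estimate at query_sa = (state, action).

    experiences: list of ((state, action), target) pairs observed so far.
    L: assumed smoothness (Lipschitz) constant of the optimal Q-function.
    q_max: trivial upper bound on Q (e.g. horizon H when rewards lie in [0, 1]).
    """
    if not experiences:
        return q_max  # nothing observed yet: fall back to the trivial bound
    # Each past sample (sa, t) implies Q(query) <= t + L * dist(query, sa);
    # take the tightest such bound, capped at the trivial bound.
    bounds = [t + L * np.linalg.norm(np.asarray(query_sa) - np.asarray(sa))
              for sa, t in experiences]
    return min(q_max, min(bounds))

# Tiny usage example on a 2-D state-action space.
experiences = [((0.0, 0.0), 1.0), ((1.0, 1.0), 0.5)]
print(optimistic_q((0.5, 0.5), experiences, L=1.0, q_max=5.0))
```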

    Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model

    In this paper we consider the problem of computing an $\epsilon$-optimal policy of a discounted Markov Decision Process (DMDP) provided we can only access its transition function through a generative sampling model that, given any state-action pair, samples from the transition function in $O(1)$ time. Given such a DMDP with states $S$, actions $A$, discount factor $\gamma \in (0,1)$, and rewards in range $[0, 1]$, we provide an algorithm which computes an $\epsilon$-optimal policy with probability $1 - \delta$ where \emph{both} the time spent and the number of samples taken are upper bounded by $O\left[\frac{|S||A|}{(1-\gamma)^3 \epsilon^2} \log \left(\frac{|S||A|}{(1-\gamma)\delta \epsilon} \right) \log\left(\frac{1}{(1-\gamma)\epsilon}\right)\right]$. For fixed values of $\epsilon$, this improves upon the previous best known bounds by a factor of $(1 - \gamma)^{-1}$ and matches the sample complexity lower bounds proved in Azar et al. (2013) up to logarithmic factors. We also extend our method to computing $\epsilon$-optimal policies for finite-horizon MDPs with a generative model and provide a nearly matching sample complexity lower bound. Comment: 31 pages. Accepted to NeurIPS, 2018
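    To make the generative-model setting concrete, here is a small sketch of sample-based value iteration: for any state-action pair we can draw next states in $O(1)$ time and use those draws to form a Monte Carlo estimate of the Bellman backup. This is only an illustration of the access model, not the paper's (variance-reduced) algorithm; the toy MDP and the names `sample_next_state` and `sampled_backup` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # toy dynamics
R = rng.uniform(size=(n_states, n_actions))                       # rewards in [0, 1]

def sample_next_state(s, a):
    """Generative model: one O(1) draw from the transition distribution."""
    return rng.choice(n_states, p=P[s, a])

def sampled_backup(v, s, a, m):
    """Monte Carlo estimate of r(s, a) + gamma * E[v(s')] from m samples."""
    next_states = [sample_next_state(s, a) for _ in range(m)]
    return R[s, a] + gamma * np.mean(v[next_states])

# Approximate value iteration driven entirely by generative-model samples.
v = np.zeros(n_states)
for _ in range(50):
    q = np.array([[sampled_backup(v, s, a, m=100)
                   for a in range(n_actions)] for s in range(n_states)])
    v = q.max(axis=1)

policy = q.argmax(axis=1)  # greedy policy w.r.t. the final sampled Q estimates
print(policy)
```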