
    A Theory of Regularized Markov Decision Processes

    Many recent successful (deep) reinforcement learning algorithms make use of regularization, generally based on entropy or Kullback-Leibler divergence. We propose a general theory of regularized Markov Decision Processes that generalizes these approaches in two directions: we consider a larger class of regularizers, and we consider the general modified policy iteration approach, encompassing both policy iteration and value iteration. The core building blocks of this theory are a notion of regularized Bellman operator and the Legendre-Fenchel transform, a classical tool of convex optimization. This approach allows for error propagation analyses of general algorithmic schemes of which (possibly variants of) classical algorithms such as Trust Region Policy Optimization, Soft Q-learning, Stochastic Actor Critic or Dynamic Policy Programming are special cases. This also draws connections to proximal convex optimization, especially to Mirror Descent. Comment: ICML 2019.
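
    As a minimal illustration of the regularized Bellman operator described in this abstract: with negative Shannon entropy as the regularizer, its Legendre-Fenchel transform is a scaled log-sum-exp, so the hard max in the Bellman optimality operator becomes a softmax. The sketch below (plain NumPy; the array shapes and the temperature `tau` are illustrative, not taken from the paper) applies one such "soft" backup.

```python
import numpy as np

def soft_bellman_backup(V, r, P, gamma=0.99, tau=0.1):
    """One application of an entropy-regularized ("soft") Bellman operator.

    V : (S,) state values, r : (S, A) rewards, P : (S, A, S) transitions.
    With negative Shannon entropy as the regularizer, the Legendre-Fenchel
    transform is tau * log-sum-exp(Q / tau), which replaces the hard max;
    as tau -> 0 the usual Bellman optimality operator is recovered.
    """
    Q = r + gamma * np.einsum("sax,x->sa", P, V)       # one-step look-ahead action values
    m = Q.max(axis=1, keepdims=True)                   # shift for numerical stability
    V_new = (m + tau * np.log(np.sum(np.exp((Q - m) / tau),
                                     axis=1, keepdims=True)))[:, 0]
    pi = np.exp((Q - V_new[:, None]) / tau)            # induced softmax (greedy) policy
    return V_new, pi
```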

    Optimal transport over a linear dynamical system

    We consider the problem of steering an initial probability density for the state vector of a linear system to a final one, in finite time, using minimum energy control. In the case where the dynamics correspond to an integrator (ẋ(t) = u(t)) this amounts to a Monge-Kantorovich Optimal Mass Transport (OMT) problem. In general, we show that the problem can again be reduced to solving an OMT problem and that it has a unique solution. In parallel, we study the optimal steering of the state-density of a linear stochastic system with white noise disturbance; this is known to correspond to a Schroedinger bridge. As the white noise intensity tends to zero, the flow of densities converges to that of the deterministic dynamics and can serve as a way to compute the solution of its deterministic counterpart. The solution can be expressed in closed-form for Gaussian initial and final state densities in both cases.
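
    For the integrator case mentioned in the abstract, the steering problem reduces to classical quadratic-cost OMT, and for Gaussian marginals both the optimal map and the transport cost are available in closed form. A hedged sketch of those standard Gaussian formulas (not the paper's general linear-dynamics construction; function names are illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_omt(m0, S0, m1, S1):
    """Closed-form quadratic-cost OMT between N(m0, S0) and N(m1, S1).

    Returns the linear Monge map T(x) = m1 + A (x - m0) and the squared
    2-Wasserstein distance.  (sqrtm can return a tiny imaginary part for
    numerical reasons, so only the real part is kept.)
    """
    S0h = np.real(sqrtm(S0))
    S0h_inv = np.linalg.inv(S0h)
    cross = np.real(sqrtm(S0h @ S1 @ S0h))
    A = S0h_inv @ cross @ S0h_inv
    w2_sq = float(np.sum((m0 - m1) ** 2) + np.trace(S0 + S1 - 2.0 * cross))
    T = lambda x: m1 + A @ (x - m0)
    return T, w2_sq
```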

    Convergence of policy gradient for entropy regularized MDPs with neural network approximation in the mean-field regime

    We study the global convergence of policy gradient for infinite-horizon, continuous state and action space, entropy-regularized Markov decision processes (MDPs). We consider a softmax policy with (one-hidden layer) neural network approximation in a mean-field regime. Additional entropic regularization in the associated mean-field probability measure is added, and the corresponding gradient flow is studied in the 2-Wasserstein metric. We show that the objective function is increasing along the gradient flow. Further, we prove that if the regularization in terms of the mean-field measure is sufficient, the gradient flow converges exponentially fast to the unique stationary solution, which is the unique maximizer of the regularized MDP objective. Lastly, we study the sensitivity of the value function along the gradient flow with respect to regularization parameters and the initial condition. Our results rely on a careful analysis of a non-linear Fokker-Planck-Kolmogorov equation and extend the pioneering works of Mei et al. (2020) and Agarwal et al. (2020), which quantify the global convergence rate of policy gradient for entropy-regularized MDPs in the tabular setting.
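
    The paper itself works in a mean-field neural-network regime, but the objective it regularizes is easiest to see in the tabular softmax setting of Mei et al. (2020) that it extends. Below is a minimal, purely illustrative sketch of gradient ascent on the entropy-regularized objective in the simplest (single-state) case, whose unique maximizer is the softmax of r/tau; the step size, temperature, and iteration count are assumptions, not values from the paper.

```python
import numpy as np

def entropy_reg_policy_gradient(r, tau=0.1, lr=0.1, iters=10_000):
    """Gradient ascent on J(theta) = <pi, r> + tau * H(pi), pi = softmax(theta),
    in the single-state (bandit) case.  The unique maximizer is
    pi*(a) proportional to exp(r(a) / tau), and the iterates approach it."""
    theta = np.zeros_like(r, dtype=float)
    for _ in range(iters):
        pi = np.exp(theta - theta.max())
        pi /= pi.sum()
        f = r - tau * np.log(pi)       # per-action regularized payoff
        J = pi @ f                     # current regularized objective value
        theta += lr * pi * (f - J)     # exact gradient of J w.r.t. theta
    return pi, J

pi_opt, J_opt = entropy_reg_policy_gradient(np.array([1.0, 0.5, 0.0]))
# pi_opt is numerically close to softmax(r / tau), the regularized optimum
```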

    Entropic and displacement interpolation: a computational approach using the Hilbert metric

    Monge-Kantorovich optimal mass transport (OMT) provides a blueprint for geometries in the space of positive densities -- it quantifies the cost of transporting a mass distribution into another. In particular, it provides natural options for interpolation of distributions (displacement interpolation) and for modeling flows. As such it has been the cornerstone of recent developments in physics, probability theory, image processing, time-series analysis, and several other fields. In spite of extensive work and theoretical developments, the computation of OMT for large scale problems has remained a challenging task. An alternative framework for interpolating distributions, rooted in statistical mechanics and large deviations, is that of Schroedinger bridges (entropic interpolation). This may be seen as a stochastic regularization of OMT and can be cast as the stochastic control problem of steering the probability density of the state-vector of a dynamical system between two marginals. In this approach, however, the actual computation of flows has hardly received any attention. In recent work on Schroedinger bridges for Markov chains and quantum evolutions, we noted that the solution can be efficiently obtained from the fixed point of a map which is contractive in the Hilbert metric. Thus, the purpose of this paper is to show that a similar approach can be taken in the context of diffusion processes, which i) leads to a new proof of a classical result on Schroedinger bridges and ii) provides an efficient computational scheme for both Schroedinger bridges and OMT. We illustrate this new computational approach by obtaining interpolation of densities in representative examples such as interpolation of images. Comment: 20 pages, 7 figures.
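
    In the discrete-marginal analogue of this setting, the contractive fixed-point map mentioned in the abstract reduces to the familiar alternating rescaling (Sinkhorn/Fortet iteration), which is a contraction in Hilbert's projective metric. A minimal sketch assuming two discrete marginals and a cost matrix `C` (the paper treats diffusion processes; this is only the discrete counterpart, with illustrative names and parameters):

```python
import numpy as np

def schroedinger_bridge_discrete(mu0, mu1, C, eps=0.05, iters=500):
    """Entropic interpolation between two discrete marginals mu0, mu1.

    K = exp(-C / eps) plays the role of the (unnormalized) transition kernel;
    the alternating rescalings below are a fixed-point iteration that is
    contractive in Hilbert's projective metric, hence converges linearly.
    """
    K = np.exp(-C / eps)
    a = np.ones_like(mu0)
    for _ in range(iters):
        b = mu1 / (K.T @ a)     # rescale to match the terminal marginal
        a = mu0 / (K @ b)       # rescale to match the initial marginal
    coupling = a[:, None] * K * b[None, :]
    return coupling             # rows sum to mu0, columns (approximately) to mu1
```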