On Connections between Constrained Optimization and Reinforcement Learning
Dynamic Programming (DP) provides standard algorithms to solve Markov
Decision Processes. However, these algorithms generally do not optimize a
scalar objective function. In this paper, we draw connections between DP and
(constrained) convex optimization. Specifically, we show clear links in the
algorithmic structure between three DP schemes and optimization algorithms. We
link Conservative Policy Iteration to Frank-Wolfe, Mirror-Descent Modified
Policy Iteration to Mirror Descent, and Politex (Policy Iteration Using Expert
Prediction) to Dual Averaging. These abstract DP schemes are representative of
a number of (deep) Reinforcement Learning (RL) algorithms. By highlighting
these connections (most of which have been noticed earlier, but in a scattered
way), we would like to encourage further studies linking RL and convex
optimization, which could lead to the design of new, more efficient, and
better-understood RL algorithms.
Comment: Optimization Foundations of Reinforcement Learning Workshop at NeurIPS 2019
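To make the first of these links concrete, here is a minimal side-by-side sketch of the two updates, with notation assumed for illustration rather than taken verbatim from the paper: Conservative Policy Iteration mixes the current policy with a greedy policy, and that greedy policy plays exactly the role of Frank-Wolfe's linear-minimization oracle.

```latex
% Conservative Policy Iteration: mix in the greedy policy w.r.t. q_{\pi_k}
\pi_{k+1} = (1-\alpha_k)\,\pi_k + \alpha_k\,\mathcal{G}(\pi_k),
\qquad \mathcal{G}(\pi_k) \in \arg\max_{\pi}\ \langle q_{\pi_k}, \pi \rangle
% Frank-Wolfe: move toward the linear-minimization oracle over the set C
x_{k+1} = (1-\gamma_k)\,x_k + \gamma_k\,s_k,
\qquad s_k \in \arg\min_{s \in \mathcal{C}}\ \langle \nabla f(x_k), s \rangle
```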
Convergence of adaptive algorithms for weakly convex constrained optimization
We analyze the adaptive first-order algorithm AMSGrad for solving a
constrained stochastic optimization problem with a weakly convex objective. We
prove a rate of convergence for the norm of the gradient of the Moreau
envelope, which is the standard stationarity measure for this class of
problems; the rate matches the known rates that adaptive algorithms enjoy in
the specific case of unconstrained smooth stochastic optimization.
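For reference, the Moreau envelope mentioned above is the standard smoothing device for weakly convex problems; a sketch of the definition follows, with the constraint set folded into f via its indicator function in the constrained case (a standard device, assumed here):

```latex
% Moreau envelope of f with proximal parameter \lambda > 0
f_\lambda(x) = \min_{y}\ \Big\{ f(y) + \tfrac{1}{2\lambda}\,\lVert y - x \rVert^2 \Big\}
% For \rho-weakly convex f and \lambda < 1/\rho, f_\lambda is differentiable,
% and \lVert \nabla f_\lambda(x) \rVert measures near-stationarity of x.
```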
Our analysis works with a mini-batch size of 1, constant first- and
second-order moment parameters, and possibly unbounded optimization domains.
Finally, we illustrate applications and extensions of our results to specific
problems and algorithms.
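To fix ideas, here is a minimal sketch of projected AMSGrad mirroring the setting above (mini-batch size 1, constant moment parameters). The objective, constraint set, and hyperparameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def projected_amsgrad(grad, project, x0, steps=1000, lr=1e-2,
                      beta1=0.9, beta2=0.999, eps=1e-8):
    """Projected AMSGrad with constant moment parameters.

    grad(x)    -- stochastic gradient oracle (mini-batch size 1)
    project(x) -- Euclidean projection onto the constraint set
    """
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)       # first-moment estimate
    v = np.zeros_like(x)       # second-moment estimate
    v_hat = np.zeros_like(x)   # running max of v (the AMSGrad correction)
    for _ in range(steps):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        v_hat = np.maximum(v_hat, v)
        x = project(x - lr * m / (np.sqrt(v_hat) + eps))
    return x

# Illustrative use: a noisy quadratic over the box [-1, 1]^5 (assumed example).
rng = np.random.default_rng(0)
grad = lambda x: 2.0 * x + 0.1 * rng.standard_normal(x.shape)
project = lambda x: np.clip(x, -1.0, 1.0)
x_final = projected_amsgrad(grad, project, x0=np.ones(5))
```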
Model-Free Design of Stochastic LQR Controller from Reinforcement Learning and Primal-Dual Optimization Perspective
To further understand the underlying mechanisms of various reinforcement
learning (RL) algorithms, and to better leverage optimization theory for
further progress in RL, many researchers have begun to revisit the
linear-quadratic regulator (LQR) problem, whose setting is simple yet captures
the key characteristics of RL. Motivated by this, this work concerns the
model-free design of a stochastic LQR controller for linear systems subject to
Gaussian noise, from the perspectives of both RL and primal-dual optimization.
From the RL perspective, we first develop a new model-free off-policy policy
iteration (MF-OPPI) algorithm, in which sampled data are reused across policy
updates, partially alleviating RL's appetite for data. We then provide a
rigorous convergence analysis by showing that the involved iterations are
equivalent to those of the classical policy iteration (PI) algorithm. From the
optimization perspective, we first
reformulate the stochastic LQR problem at hand as a constrained non-convex
optimization problem, which is shown to have strong duality. Then, to solve
this non-convex optimization problem, we propose a model-based primal-dual
(MB-PD) algorithm based on the properties of the resulting Karush-Kuhn-Tucker
(KKT) conditions. We also give a model-free implementation of the MB-PD
algorithm by solving a transformed dual feasibility condition. More
importantly, we show that the dual and primal update steps in the MB-PD
algorithm can be interpreted as the policy evaluation and policy improvement
steps in the PI algorithm, respectively. Finally, we provide a simulation
example demonstrating the performance of the proposed algorithms.
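Since the convergence argument above reduces MF-OPPI to classical policy iteration, a minimal model-based sketch of that PI baseline for deterministic discrete-time LQR may help fix ideas. The model-free and stochastic aspects of the paper are deliberately omitted, and the toy system below is an assumption, not the paper's simulation example.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def lqr_policy_iteration(A, B, Q, R, K, iters=50):
    """Classical policy iteration (Hewer's algorithm) for u = -K x.

    Policy evaluation: solve  P = (A - BK)^T P (A - BK) + Q + K^T R K.
    Policy improvement: K <- (R + B^T P B)^{-1} B^T P A.
    The initial gain K must be stabilizing.
    """
    for _ in range(iters):
        Ak = A - B @ K
        P = solve_discrete_lyapunov(Ak.T, Q + K.T @ R @ K)   # evaluation
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # improvement
    return K, P

# Assumed toy double-integrator-like system with an initial stabilizing gain.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
K0 = np.array([[1.0, 1.0]])
K, P = lqr_policy_iteration(A, B, Q, R, K0)
```

The correspondence the paper draws is visible here: the Lyapunov solve is the policy-evaluation step (interpreted as the dual update in MB-PD), and the gain update is the policy-improvement step (the primal update).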