Actor-Critic Reinforcement Learning for Control with Stability Guarantee
Reinforcement Learning (RL) and its integration with deep learning have
achieved impressive performance in various robotic control tasks, ranging from
motion planning and navigation to end-to-end visual manipulation. However,
stability is not guaranteed in model-free RL by solely using data. From a
control-theoretic perspective, stability is the most important property for any
control system, since it is closely related to safety, robustness, and
reliability of robotic systems. In this paper, we propose an actor-critic RL
framework for control which can guarantee closed-loop stability by employing
Lyapunov's classic method from control theory. First, a data-based
stability theorem is proposed for stochastic nonlinear systems modeled as a
Markov decision process. We then show that the stability condition can be
exploited as the critic in actor-critic RL to learn a controller/policy.
Finally, the effectiveness of our approach is evaluated on several well-known
3-dimensional robot control tasks and a synthetic biology gene network tracking
task across three popular physics simulation platforms. As an empirical
evaluation of the advantage of stability, we show that the learned policies
enable the systems to recover, to a certain extent, to the equilibrium or to
way-points when perturbed by uncertainties such as system parameter
variations and external disturbances.
Comment: IEEE RA-L + IROS 202
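The data-based stability condition in this abstract can be pictured with a toy numerical check. The sketch below is a minimal illustration, assuming a hypothetical one-dimensional stable linear system and a quadratic Lyapunov candidate; the decrease condition E[L(s') - L(s)] <= -alpha E[L(s)] is one common form such conditions take, not necessarily the paper's exact theorem:

```python
import numpy as np

# Hedged sketch: empirically check a data-based Lyapunov decrease
# condition of the form E[L(s') - L(s)] <= -alpha * E[L(s)], the kind
# of condition a Lyapunov critic can enforce during actor-critic
# training. The toy system and the candidate L are illustrative.

rng = np.random.default_rng(0)

def step(s):
    # Toy closed-loop dynamics: a stable stochastic linear system.
    return 0.9 * s + 0.01 * rng.standard_normal(s.shape)

def lyapunov(s):
    # Candidate Lyapunov function (the critic's output in the full method).
    return s ** 2

# Sample transitions from the closed-loop system.
s = rng.standard_normal(10_000)
s_next = step(s)

alpha = 0.05  # required average decrease rate
decrease = np.mean(lyapunov(s_next) - lyapunov(s))
threshold = -alpha * np.mean(lyapunov(s))

print(decrease <= threshold)  # the empirical decrease condition holds
```

In the full method the critic is learned rather than fixed, and the actor is updated so that the sampled transitions keep satisfying this decrease condition.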
The Importance of Clipping in Neurocontrol by Direct Gradient Descent on the Cost-to-Go Function and in Adaptive Dynamic Programming
In adaptive dynamic programming, neurocontrol and reinforcement learning, the
objective is for an agent to learn to choose actions so as to minimise a total
cost function. In this paper we show that when discretized time is used to
model the motion of the agent, it can be very important to do "clipping" on the
motion of the agent in the final time step of the trajectory. By clipping we
mean that the final time step of the trajectory is to be truncated such that
the agent stops exactly at the first terminal state reached, and no distance
further. We demonstrate that when clipping is omitted, learning performance
can fail to reach the optimum, and that when clipping is done properly,
learning performance can improve significantly.
The clipping problem we describe affects algorithms which use explicit
derivatives of the model functions of the environment to calculate a learning
gradient. These include Backpropagation Through Time for Control, and methods
based on Dual Heuristic Dynamic Programming. However, the clipping problem
does not significantly affect methods based on Heuristic Dynamic Programming,
Temporal Differences, or Policy Gradient Learning algorithms. Similarly, the
clipping problem does not affect fixed-length finite-horizon problems.
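The clipping operation described above can be sketched in a few lines. The one-dimensional constant-velocity rollout below is an illustrative assumption, not the paper's model, and `rollout` and its parameters are hypothetical names:

```python
# Hedged sketch of "clipping" the final discretized time step: the last
# step is shortened so the trajectory ends exactly at the first terminal
# state reached, rather than overshooting it.

def rollout(x0, v, dt, boundary, clip=True):
    """Integrate x' = v until x >= boundary; return (final_x, elapsed_time)."""
    x, t = x0, 0.0
    while x < boundary:
        x_next = x + v * dt
        if x_next >= boundary and clip:
            # Truncate the final step: advance only the fraction of dt
            # needed to land exactly on the terminal state.
            frac = (boundary - x) / (v * dt)
            t += frac * dt
            x = boundary
        else:
            t += dt
            x = x_next
    return x, t

print(rollout(0.0, 1.0, 0.3, 1.0, clip=True)[0] == 1.0)   # stops on the boundary
print(rollout(0.0, 1.0, 0.3, 1.0, clip=False)[0] > 1.0)   # overshoots it
```

The point of the paper is that gradient-based methods which differentiate through the model must also differentiate through this truncation; omitting it biases the learning gradient near the terminal state.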
Dynamically optimal treatment allocation using Reinforcement Learning
Devising guidance on how to assign individuals to treatment is an important
goal in empirical research. In practice, individuals often arrive sequentially,
and the planner faces various constraints, such as a limited budget or
capacity, borrowing constraints, or the need to place people in a queue. For
instance, a
governmental body may receive a budget outlay at the beginning of a year, and
it may need to decide how best to allocate resources within the year to
individuals who arrive sequentially. In this and other examples involving
inter-temporal trade-offs, previous work on devising optimal policy rules in a
static context is either not applicable, or sub-optimal. Here we show how one
can use offline observational data to estimate an optimal policy rule that
maximizes expected welfare in this dynamic context. We allow the class of
policy rules to be restricted for legal, ethical or incentive compatibility
reasons. The problem is equivalent to one of optimal control under a
constrained policy class, and we exploit recent developments in Reinforcement
Learning (RL) to propose an algorithm that solves it. The algorithm is easy
to implement, with speedups achieved by running multiple RL agents in
parallel processes. We also characterize the statistical regret from using our
estimated policy rule by casting the evolution of the value function under each
policy in a Partial Differential Equation (PDE) form and using the theory of
viscosity solutions to PDEs. We find that in most examples the policy regret
decays at the same rate as in the static case.
Comment: 67 page
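One way to picture the dynamic allocation problem is as an MDP whose state tracks the remaining budget, so that treating someone today changes what can be offered tomorrow. The tabular Q-learning sketch below uses invented individual types, welfare gains, and problem sizes; it is a toy illustration of the budget-in-the-state idea, not the paper's estimator, which works from offline observational data under a constrained policy class:

```python
import numpy as np

# Hedged sketch: sequential treatment allocation under a fixed budget,
# cast as an MDP whose state is (time, remaining budget, arrival type),
# solved with tabular Q-learning. All quantities are illustrative.

rng = np.random.default_rng(1)

HORIZON, BUDGET = 20, 5
TYPES = 2                      # individual types with different gains
GAIN = np.array([0.2, 1.0])    # expected welfare gain from treating each type

# Q[t, budget, type, action]; action 1 = treat, 0 = do not treat.
Q = np.zeros((HORIZON + 1, BUDGET + 1, TYPES, 2))

alpha, eps = 0.1, 0.2
for episode in range(20_000):
    b = BUDGET
    for t in range(HORIZON):
        typ = int(rng.integers(TYPES))      # individuals arrive sequentially
        if b > 0 and rng.random() > eps:
            a = int(np.argmax(Q[t, b, typ]))
        else:
            a = int(rng.integers(2)) if b > 0 else 0
        r = GAIN[typ] if a == 1 else 0.0
        b_next = b - a
        # One-step Q-learning update; the next state marginalizes the
        # unknown next arrival by averaging over types.
        future = Q[t + 1, b_next].max(axis=1).mean()
        Q[t, b, typ, a] += alpha * (r + future - Q[t, b, typ, a])
        b = b_next

# The learned rule should value spending scarce budget on high-gain
# arrivals more than on low-gain ones.
print(Q[0, BUDGET, 1, 1] > Q[0, BUDGET, 0, 1])
```

This captures the inter-temporal trade-off the abstract describes: a static rule that treats anyone with positive gain exhausts the budget on low-gain arrivals, whereas the dynamic rule learns the option value of saving budget for later.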