Actor-Critic Reinforcement Learning for Control with Stability Guarantee
Reinforcement Learning (RL) and its integration with deep learning have
achieved impressive performance in various robotic control tasks, ranging from
motion planning and navigation to end-to-end visual manipulation. However,
model-free RL trained on data alone does not guarantee stability. From a
control-theoretic perspective, stability is the most important property for any
control system, since it is closely related to safety, robustness, and
reliability of robotic systems. In this paper, we propose an actor-critic RL
framework for control which can guarantee closed-loop stability by employing
the classic Lyapunov method from control theory. First, a data-based
stability theorem is proposed for stochastic nonlinear systems modeled as
Markov decision processes. We then show that the stability condition can be
exploited as the critic in actor-critic RL to learn a controller/policy.
Finally, the effectiveness of our approach is evaluated on several well-known
3-dimensional robot control tasks and a synthetic biology gene network tracking
task in three different popular physics simulation platforms. As an empirical
evaluation of the advantage of stability, we show that the learned policies
enable the systems to recover to the equilibrium or way-points, to a certain
extent, when perturbed by uncertainties such as system parametric variations
and external disturbances.
Comment: IEEE RA-L + IROS 202
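The core idea above, a critic built from a Lyapunov decrease condition checked on sampled transitions, can be illustrated with a minimal sketch. This is not the paper's algorithm: the system, the candidate function L, the cost c, and the margin alpha are all illustrative choices for a toy stable linear system.

```python
import numpy as np

# Hypothetical sketch: empirically checking a data-based Lyapunov decrease
# condition E[L(s')] - L(s) <= -alpha * c(s) on sampled transitions of a
# toy stable stochastic linear system s' = A s + noise. All names here
# (L, c, alpha) are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])                # spectral radius < 1, so stable

def L(s):
    return float(s @ s)                   # candidate Lyapunov function ||s||^2

def c(s):
    return float(s @ s)                   # cost term in the decrease condition

alpha = 0.05
violations = 0
for _ in range(1000):
    s = rng.normal(size=2)
    # empirical expectation of L at the next state over noise realizations
    next_L = np.mean([L(A @ s + 0.01 * rng.normal(size=2)) for _ in range(20)])
    if next_L - L(s) > -alpha * c(s):
        violations += 1

print(f"decrease condition violated on {violations}/1000 sampled states")
```

In an actor-critic setting, the same sample-based decrease condition would be turned into a loss that penalizes violations, steering the policy toward closed-loop stability.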
Automating Vehicles by Deep Reinforcement Learning using Task Separation with Hill Climbing
Within the context of autonomous driving a model-based reinforcement learning
algorithm is proposed for the design of neural network-parameterized
controllers. Classical model-based control methods, which include sampling- and
lattice-based algorithms and model predictive control, suffer from the
trade-off between model complexity and computational burden required for the
online solution of expensive optimization or search problems within each short
sampling interval. To circumvent this trade-off, a two-step procedure is
motivated: first, a controller is learned during offline training on an
arbitrarily complicated mathematical system model; then, the trained
controller is evaluated online as a fast feedforward map. The contribution of
this paper is the
proposition of a simple gradient-free and model-based algorithm for deep
reinforcement learning using task separation with hill climbing (TSHC). In
particular, (i) simultaneous training on separate deterministic tasks with the
purpose of encoding many motion primitives in a neural network, and (ii) the
employment of maximally sparse rewards in combination with virtual velocity
constraints (VVCs) in setpoint proximity are advocated.
Comment: 10 pages, 6 figures, 1 table
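The gradient-free hill-climbing loop over separate deterministic tasks can be sketched as follows. This is a toy illustration in the spirit of TSHC, not the paper's method: the one-dimensional setpoint tasks, the linear "controller", and the perturbation scale are all invented for the example.

```python
import numpy as np

# Minimal gradient-free hill-climbing sketch: perturb controller parameters,
# evaluate the summed return over several separate deterministic tasks, and
# keep a perturbation only if it improves. Tasks and rewards are illustrative.

rng = np.random.default_rng(1)

def rollout(theta, target, steps=30):
    """Deterministic 1-D task: drive the state toward a setpoint `target`."""
    s, ret = 0.0, 0.0
    for _ in range(steps):
        a = theta[0] * (target - s) + theta[1]   # tiny linear 'controller'
        s += 0.1 * a
        ret -= (target - s) ** 2                 # quadratic tracking cost
    return ret

tasks = [-1.0, 0.5, 2.0]                         # separate deterministic tasks
theta = np.zeros(2)
best = sum(rollout(theta, t) for t in tasks)     # initial summed return

for _ in range(200):
    cand = theta + 0.1 * rng.normal(size=2)      # random parameter perturbation
    score = sum(rollout(cand, t) for t in tasks)
    if score > best:                             # hill climbing: keep improvements
        theta, best = cand, score

print(f"best summed return: {best:.3f}")
```

Training on all tasks simultaneously, as in the sum above, is what encodes multiple motion primitives into a single parameter vector.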
Lyapunov-Barrier Characterization of Robust Reach-Avoid-Stay Specifications for Hybrid Systems
Stability, reachability, and safety are crucial properties of dynamical
systems. While verification and control synthesis of reach-avoid-stay
objectives can be effectively handled by abstraction-based formal methods, such
approaches can be computationally expensive due to the use of state-space
discretization. In contrast, Lyapunov methods qualitatively characterize
stability and safety properties without any state-space discretization. Recent
work on converse Lyapunov-barrier theorems also demonstrates approximate
completeness for verifying reach-avoid-stay specifications of systems modelled
by nonlinear differential equations. In this paper, based on the topology of
hybrid arcs, we extend the Lyapunov-barrier characterization to more general
hybrid systems described by differential and difference inclusions. We show
that Lyapunov-barrier functions are not only sufficient to guarantee
reach-avoid-stay specifications for well-posed hybrid systems, but also
necessary for arbitrarily slightly perturbed systems under mild conditions.
Numerical examples are provided to illustrate the main results.
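The role of a Lyapunov-barrier function can be illustrated with a sample-based check on a toy continuous system. This is not the paper's construction (which covers hybrid inclusions): here V, the flow, and the avoid set are invented, and the decrease of V along trajectories keeps sublevel sets invariant, which simultaneously certifies avoidance and convergence.

```python
import numpy as np

# Illustrative sketch: for the flow x' = -x, the candidate Lyapunov-barrier
# function V(x) = ||x||^2 decreases along trajectories, so any initial state
# in a sublevel set below the barrier level (the minimum of V on the avoid
# ball) never enters the avoid set and converges to the origin.

rng = np.random.default_rng(2)

def V(x):
    return float(x @ x)                   # candidate Lyapunov-barrier function

avoid_center = np.array([2.0, 2.0])
avoid_radius = 0.5
# Barrier level: minimum of V over the avoid ball (distance to center minus radius).
level = (np.linalg.norm(avoid_center) - avoid_radius) ** 2

def simulate(x0, dt=0.01, steps=2000):
    x, hit_avoid = x0.copy(), False
    for _ in range(steps):
        x += dt * (-x)                    # Euler step of the flow x' = -x
        if np.linalg.norm(x - avoid_center) <= avoid_radius:
            hit_avoid = True
    return x, hit_avoid

ok = True
for _ in range(50):
    x0 = rng.normal(size=2)
    if V(x0) >= level:                    # only sample the certified sublevel set
        continue
    xT, hit = simulate(x0)
    ok = ok and (not hit) and np.linalg.norm(xT) < 0.1

print("reach-avoid-stay certified on samples:", ok)
```

The converse direction discussed in the abstract is the harder claim: that for (slightly perturbed) systems satisfying such specifications, a function playing this dual Lyapunov/barrier role must exist.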