A Probabilistic Framework for Imitating Human Race Driver Behavior
Understanding and modeling human driver behavior is crucial for advanced
vehicle development. However, unique driving styles, inconsistent behavior, and
complex decision processes render it a challenging task, and existing
approaches often lack variability or robustness. To address this problem, we
propose Probabilistic Modeling of Driver behavior (ProMoD), a modular framework
that splits the task of driver behavior modeling into multiple modules. A
global target trajectory distribution is learned with Probabilistic Movement
Primitives, clothoids are utilized for local path generation, and the
corresponding choice of actions is performed by a neural network. Experiments
in a simulated car racing setting show considerable advantages in imitation
accuracy and robustness compared to other imitation learning algorithms. The
modular architecture of the proposed framework facilitates future extensions,
such as driving-line adaptation and the sequencing of multiple movement
primitives.
Actor-Critic Reinforcement Learning for Control with Stability Guarantee
Reinforcement Learning (RL) and its integration with deep learning have
achieved impressive performance in various robotic control tasks, ranging from
motion planning and navigation to end-to-end visual manipulation. However,
stability is not guaranteed in model-free RL by solely using data. From a
control-theoretic perspective, stability is the most important property for any
control system, since it is closely related to safety, robustness, and
reliability of robotic systems. In this paper, we propose an actor-critic RL
framework for control which can guarantee closed-loop stability by employing
the classical Lyapunov method from control theory. First, a data-based
stability theorem is proposed for stochastic nonlinear systems modeled by
Markov decision processes. Then we show that the stability condition can be
exploited as the critic in actor-critic RL to learn a controller/policy.
Finally, the effectiveness of our approach is evaluated on several well-known
3-dimensional robot control tasks and a synthetic biology gene network tracking
task in three different popular physics simulation platforms. To empirically
evaluate the advantage of stability, we show that the learned policies enable
the systems to recover to the equilibrium or way-points, to a certain extent,
when perturbed by uncertainties such as system parameter variations and
external disturbances.
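The abstract's central idea, a learned Lyapunov candidate used as the critic so that its expected value decreases along closed-loop trajectories, can be sketched on a toy one-dimensional stochastic system as below. The network architectures, the penalty weight `lam`, the decrease margin `alpha`, and the toy dynamics are assumptions; the paper's data-based stability theorem and exact losses are not reproduced here.

```python
# Hedged sketch of a Lyapunov-constrained actor-critic update on a toy system.
# The decrease condition E[L(s') - L(s)] <= -alpha * c(s) is enforced softly.
import torch
import torch.nn as nn

torch.manual_seed(0)

actor = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
lyapunov = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1),
                         nn.Softplus())  # keep the candidate non-negative
opt = torch.optim.Adam(list(actor.parameters()) + list(lyapunov.parameters()),
                       lr=1e-3)

def step(s, a):
    """Toy stochastic dynamics: drift toward the origin plus control and noise."""
    return 0.9 * s + 0.1 * a + 0.01 * torch.randn_like(s)

lam, alpha = 10.0, 0.1
for it in range(2000):
    s = 2.0 * torch.rand(256, 1) - 1.0           # sampled states
    a = actor(s)
    s_next = step(s, a)

    cost = s.pow(2)                               # stage cost c(s) = ||s||^2
    L, L_next = lyapunov(s), lyapunov(s_next)

    # Critic surrogate: tie the Lyapunov candidate to the stage cost.
    critic_loss = (L - cost).pow(2).mean()
    # Stability condition as the actor signal: penalize expected increase of L.
    decrease_violation = torch.relu(L_next - L + alpha * cost).mean()
    actor_loss = cost.mean() + lam * decrease_violation

    opt.zero_grad()
    (critic_loss + actor_loss).backward()
    opt.step()

# Rough check that the closed loop pulls a perturbed state back toward zero.
print("mean |state| after one controlled step from s=1:",
      step(torch.ones(1000, 1), actor(torch.ones(1000, 1))).abs().mean().item())
```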