Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder
In this paper, we present a hierarchical path planning framework called SG-RL
(subgoal graphs-reinforcement learning), to plan rational paths for agents
maneuvering in continuous and uncertain environments. By "rational", we mean (1) efficient path planning that eliminates first-move lags, and (2) collision-free, smooth paths that satisfy the agents' kinematic constraints. SG-RL works in a
two-level manner. At the first level, SG-RL uses a geometric path-planning
method, i.e., Simple Subgoal Graphs (SSG), to efficiently find optimal abstract
paths, also called subgoal sequences. At the second level, SG-RL uses an RL
method, i.e., Least-Squares Policy Iteration (LSPI), to learn near-optimal
motion-planning policies which can generate kinematically feasible and
collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSG mitigates the sparse-reward and local-minimum problems that hinder RL agents, so LSPI can generate paths in complex environments. The second advantage is that, when the environment changes slightly (e.g., when unexpected obstacles appear), SG-RL does not need to reconstruct the subgoal graph or replan the subgoal sequence with SSG, because LSPI's generalization ability lets it handle such changes. Simulation experiments in representative scenarios demonstrate that SG-RL works well on large-scale maps, achieving lower action-switching frequencies and shorter path lengths than existing methods, and that it copes with small changes in the environment. We further demonstrate that the design of the reward function and the types of training environments are important factors for learning feasible policies.
Comment: 20 pages
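The second level rests on Least-Squares Policy Iteration. As a concrete illustration, here is a minimal Python sketch of LSPI's core loop, assuming a linear Q-function over a user-supplied feature map and a batch of (s, a, r, s') samples; the function names, interfaces, and the small ridge term are illustrative assumptions, not the paper's code.

```python
import numpy as np

def lstdq(samples, phi, policy, gamma=0.95, reg=1e-6):
    """One LSTDQ step: fit weights w of a linear Q-function
    Q(s, a) = phi(s, a) . w from (s, a, r, s') samples, evaluating
    the given policy. reg is a numerical safeguard (assumption)."""
    k = len(phi(*samples[0][:2]))
    A = reg * np.eye(k)              # ridge term keeps A invertible
    b = np.zeros(k)
    for s, a, r, s_next in samples:
        f = np.asarray(phi(s, a))
        f_next = np.asarray(phi(s_next, policy(s_next)))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)

def lspi(samples, phi, actions, gamma=0.95, n_iter=20, tol=1e-4):
    """Least-Squares Policy Iteration: alternate LSTDQ evaluation
    with greedy policy improvement until the weights converge."""
    w = np.zeros(len(phi(*samples[0][:2])))
    for _ in range(n_iter):
        greedy = lambda s, w=w: max(
            actions, key=lambda a: float(np.dot(phi(s, a), w)))
        w_new = lstdq(samples, phi, greedy, gamma)
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```

Because the fit is a batch least-squares solve rather than a stochastic gradient step, each iteration reuses the whole sample set, which is one reason LSPI is comparatively sample-efficient between subgoals.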
Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks
Autonomous robots need to interact with unknown, unstructured and changing
environments, constantly facing novel challenges. Continuous online adaptation for lifelong learning is therefore crucial, as are sample-efficient mechanisms for adapting to changes in the environment, the constraints, the task, or the robot itself. In this work, we propose a novel framework for
probabilistic online motion planning with online adaptation based on a
bio-inspired stochastic recurrent neural network. By using learning signals that mimic the intrinsic-motivation signal of cognitive dissonance, together with a mental-replay strategy that intensifies experiences, the stochastic recurrent network learns from few physical interactions and adapts to novel environments within seconds. We evaluate our online planning and adaptation framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is demonstrated by sample-efficiently learning unknown workspace constraints from few physical interactions while following given waypoints.
Comment: accepted in Neural Networks
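To illustrate the adaptation mechanism, the following is a minimal Python sketch that substitutes a toy linear forward model for the paper's stochastic recurrent network: each physical interaction yields a surprise-like intrinsic signal (prediction error, mimicking cognitive dissonance), and buffered experiences are "mentally replayed" with the update gain scaled by that signal. All class and function names here are illustrative assumptions.

```python
import random
import numpy as np

class OnlineForwardModel:
    """Toy stand-in for the paper's stochastic recurrent network:
    a linear forward model f(s, a) ~ s' trained online."""
    def __init__(self, state_dim, action_dim, lr=0.05):
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def predict(self, s, a):
        return self.W @ np.concatenate([s, a])

    def update(self, s, a, s_next, gain=1.0):
        x = np.concatenate([s, a])
        err = s_next - self.W @ x
        self.W += gain * self.lr * np.outer(err, x)  # step scaled by gain

def adapt_online(model, transitions, n_replays=10):
    """For each real interaction, compute a surprise-like intrinsic
    signal (prediction error) and mentally replay buffered experiences
    with the update gain scaled by that signal, so that a handful of
    physical samples drives many learning updates."""
    buffer = []
    for s, a, s_next in transitions:
        surprise = float(np.linalg.norm(s_next - model.predict(s, a)))
        buffer.append((s, a, s_next, surprise))
        for s_i, a_i, sn_i, surp_i in random.choices(buffer, k=n_replays):
            model.update(s_i, a_i, sn_i, gain=surp_i)

# Example: adapt to ten random transitions of a 3-D system.
rng = np.random.default_rng(0)
model = OnlineForwardModel(state_dim=3, action_dim=2)
data = [(rng.normal(size=3), rng.normal(size=2), rng.normal(size=3))
        for _ in range(10)]
adapt_online(model, data)
```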
Learning with Training Wheels: Speeding up Training with a Simple Controller for Deep Reinforcement Learning
Deep Reinforcement Learning (DRL) has been applied successfully to many
robotic applications. However, the large number of trials needed for training
is a key issue. Most existing techniques for improving training efficiency (e.g., imitation learning) target general tasks rather than exploiting the specific context of robot applications. We
propose a novel framework, Assisted Reinforcement Learning, where a classical
controller (e.g. a PID controller) is used as an alternative, switchable policy
to speed up training of DRL for local planning and navigation problems. The
core idea is that the simple control law allows the robot to rapidly learn
sensible primitives, like driving in a straight line, instead of random
exploration. As the actor network becomes more advanced, it can then take over
to perform more complex actions, like obstacle avoidance. Eventually, the
simple controller can be discarded entirely. We show that this technique not only trains faster but is also less sensitive to the structure of the DRL network, and that it consistently outperforms a standard Deep Deterministic Policy Gradient network. We demonstrate the results in both simulation and real-world experiments.
Comment: Published in ICRA 2018. The code is now available at https://github.com/xie9187/AsDDP
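The switchable-policy idea can be sketched in a few lines of Python. One simple way to realize the switch, assumed here purely for illustration, is a controller-usage probability annealed toward zero over training; the paper's actual switching rule and interfaces may differ, and the P-controller, environment, and actor below are toy stand-ins.

```python
import random

def p_controller(state, goal=0.0, kp=0.5):
    # Classical control law standing in for the paper's PID example:
    # drive a 1-D point toward the goal.
    return kp * (goal - state)

def assisted_rollout(actor, controller, step, state, p_simple, horizon=50):
    # With probability p_simple the classical controller acts,
    # otherwise the DRL actor does. Both kinds of transitions land in
    # the same batch, so the actor can learn sensible primitives
    # (e.g., driving straight) from the controller's behaviour.
    transitions = []
    for _ in range(horizon):
        if random.random() < p_simple:
            action = controller(state)   # assisted step
        else:
            action = actor(state)        # learned-policy step
        next_state = step(state, action)
        reward = -abs(next_state)        # toy reward: distance to goal
        transitions.append((state, action, reward, next_state))
        state = next_state
    return transitions

# Toy usage: a 1-D point mass, an untrained "actor" acting randomly,
# and assistance annealed away so the controller is eventually discarded.
step = lambda s, a: s + a
actor = lambda s: random.uniform(-1.0, 1.0)
for episode in range(5):
    p_simple = max(0.0, 1.0 - 0.25 * episode)
    batch = assisted_rollout(actor, p_controller, step, state=3.0,
                             p_simple=p_simple)
```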