Deep imitation learning for 3D navigation tasks
Deep learning techniques have shown success in learning from raw
high-dimensional data in various applications. While deep reinforcement learning has recently gained popularity as a method to train intelligent agents, the use of deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient way to teach intelligent agents from a set of demonstrations. However, generalizing to situations not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: Deep Q-Networks (DQN) and Asynchronous Advantage Actor-Critic (A3C). The proposed method, like the reinforcement learning methods, employs deep convolutional neural networks and learns directly from raw visual input. Methods for combining learning from demonstrations and learning from experience are also investigated, aiming to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on four navigation tasks in a 3D simulated environment. Navigation tasks are representative of many real applications: they require demonstrations of long trajectories to reach the target and provide the agent only delayed (usually terminal) rewards. The experiments show that the proposed method successfully learns navigation tasks from raw visual input, whereas the learning-from-experience methods fail to learn an effective policy. Moreover, active learning significantly improves the performance of the initially learned policy using only a small number of active samples.
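The core of the approach is a supervised (behavioral-cloning) stage over image/action demonstration pairs, later refined with active queries. As a rough, hedged sketch of that first stage, here is a minimal PyTorch example; the architecture, the 84x84 RGB input size, and the four-way discrete action set are illustrative assumptions rather than details from the paper, and the active-learning refinement would then query the demonstrator on states this policy visits.

```python
# Minimal behavioral-cloning sketch (assumed architecture and input size;
# the paper's exact network and active-learning loop are not given here).
import torch
import torch.nn as nn

class ConvPolicy(nn.Module):
    """Small CNN mapping raw 84x84 RGB frames to discrete navigation actions."""
    def __init__(self, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def bc_update(policy, optimizer, frames, expert_actions):
    """One supervised step: match the demonstrator's actions on recorded frames."""
    loss = nn.functional.cross_entropy(policy(frames), expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```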
Actor-Critic Reinforcement Learning for Control with Stability Guarantee
Reinforcement Learning (RL) and its integration with deep learning have
achieved impressive performance in various robotic control tasks, ranging from
motion planning and navigation to end-to-end visual manipulation. However,
stability is not guaranteed in model-free RL by solely using data. From a
control-theoretic perspective, stability is the most important property for any
control system, since it is closely related to safety, robustness, and
reliability of robotic systems. In this paper, we propose an actor-critic RL
framework for control which can guarantee closed-loop stability by employing
the classic Lyapunov method from control theory. First, a data-based stability theorem is proposed for stochastic nonlinear systems modeled as Markov decision processes. We then show that the stability condition can be exploited as the critic in actor-critic RL to learn a controller/policy.
Finally, the effectiveness of our approach is evaluated on several well-known 3-dimensional robot control tasks and a synthetic-biology gene-network tracking task across three popular physics simulation platforms. As an empirical evaluation of the advantage of stability, we show that the learned policies enable the systems to recover, to a certain extent, to the equilibrium or way-points when perturbed by uncertainties such as system parametric variations and external disturbances.
Comment: IEEE RA-L + IROS 2020
Learning with Training Wheels: Speeding up Training with a Simple Controller for Deep Reinforcement Learning
Deep Reinforcement Learning (DRL) has been applied successfully to many
robotic applications. However, the large number of trials needed for training
is a key issue. Most existing techniques for improving training efficiency (e.g., imitation) target general tasks rather than being tailored to robot applications, which offer specific context that can be exploited. We
propose a novel framework, Assisted Reinforcement Learning, where a classical
controller (e.g. a PID controller) is used as an alternative, switchable policy
to speed up training of DRL for local planning and navigation problems. The
core idea is that the simple control law allows the robot to rapidly learn
sensible primitives, like driving in a straight line, instead of random
exploration. As the actor network becomes more advanced, it can then take over
to perform more complex actions, like obstacle avoidance. Eventually, the
simple controller can be discarded entirely. We show that not only does this technique train faster, it is also less sensitive to the structure of the DRL
network and consistently outperforms a standard Deep Deterministic Policy
Gradient network. We demonstrate the results in both simulation and real-world
experiments.
Comment: Published in ICRA 2018. The code is now available at
https://github.com/xie9187/AsDDP
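The abstract describes a switchable policy in which a classical controller bootstraps the learner and is later discarded. As a rough, hedged sketch of one plausible arbitration rule, here is a minimal example where the critic scores both candidate actions and the higher-valued one is executed; the PD control law, the gains, and the specific switching rule are illustrative assumptions, and the released code may differ in detail.

```python
# Hedged sketch of controller-assisted action selection (switching rule,
# controller form, and gains are assumptions, not the released code).
import numpy as np

def pd_action(error, error_rate, kp=1.0, kd=0.1):
    """Classical PD steering law toward the local goal (toy stand-in)."""
    return float(np.clip(kp * error + kd * error_rate, -1.0, 1.0))

def select_action(critic_q, actor, obs, error, error_rate):
    """Let the critic arbitrate between the learned actor and the simple
    controller; early in training the controller tends to win."""
    a_actor = actor(obs)
    a_pd = pd_action(error, error_rate)
    if critic_q(obs, a_pd) > critic_q(obs, a_actor):
        return a_pd      # fall back on the "training wheels"
    return a_actor       # the actor network has taken over
```

Under this rule, the handover is emergent: as the actor improves, its critic value overtakes the controller's on more and more states, so no hand-tuned schedule is needed to retire the simple controller.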