22 research outputs found

    Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning

    Obstacle avoidance is a fundamental requirement for autonomous robots that operate in, and interact with, the real world. When perception is limited to monocular vision, avoiding collisions becomes significantly more challenging due to the lack of 3D information. Conventional path planners for obstacle avoidance require tuning a number of parameters and cannot directly benefit from large datasets and continuous use. In this paper, a dueling-architecture-based deep double-Q network (D3QN) is proposed for obstacle avoidance using only monocular RGB vision. Based on the dueling and double-Q mechanisms, D3QN can efficiently learn to avoid obstacles in a simulator even with very noisy depth information predicted from RGB images. Extensive experiments show that D3QN achieves a twofold acceleration in learning compared with a standard deep Q network, and that models trained solely in virtual environments can be transferred directly to real robots, generalizing well to various new environments with previously unseen dynamic objects.
    Comment: Accepted by the RSS 2017 workshop New Frontiers for Deep Learning in Robotics
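    The abstract rests on two mechanisms, the dueling architecture and the double-Q target. Below is a minimal sketch of both in PyTorch; the names (DuelingQNet, double_q_target), layer sizes, and the linear encoder are illustrative assumptions, not details from the paper.

    import torch
    import torch.nn as nn

    class DuelingQNet(nn.Module):
        """Dueling architecture: a shared encoder feeding separate value and advantage streams."""
        def __init__(self, obs_dim, n_actions):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
            self.value = nn.Linear(128, 1)              # state value V(s)
            self.advantage = nn.Linear(128, n_actions)  # advantages A(s, a)

        def forward(self, obs):
            h = self.encoder(obs)
            v, a = self.value(h), self.advantage(h)
            # Subtracting the mean advantage keeps V and A identifiable.
            return v + a - a.mean(dim=1, keepdim=True)

    def double_q_target(online_net, target_net, reward, next_obs, done, gamma=0.99):
        # Double-Q: the online network selects the next action, the target network
        # evaluates it, reducing the overestimation bias of a plain DQN target.
        with torch.no_grad():
            best_action = online_net(next_obs).argmax(dim=1, keepdim=True)
            next_q = target_net(next_obs).gather(1, best_action).squeeze(1)
            return reward + gamma * (1.0 - done) * next_q

    For the monocular setting the paper describes, the linear encoder would presumably be replaced by a convolutional stack over RGB frames or the predicted depth maps.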

    Learning with Training Wheels: Speeding up Training with a Simple Controller for Deep Reinforcement Learning

    Deep Reinforcement Learning (DRL) has been applied successfully to many robotic applications. However, the large number of trials needed for training is a key issue. Most existing techniques developed to improve training efficiency (e.g. imitation learning) target general tasks rather than being tailored to robot applications, which have a specific context they can benefit from. We propose a novel framework, Assisted Reinforcement Learning, in which a classical controller (e.g. a PID controller) is used as an alternative, switchable policy to speed up DRL training for local planning and navigation problems. The core idea is that the simple control law allows the robot to rapidly learn sensible primitives, like driving in a straight line, instead of relying on random exploration. As the actor network becomes more capable, it can take over to perform more complex actions, like obstacle avoidance. Eventually, the simple controller can be discarded entirely. We show that this technique not only trains faster, it is also less sensitive to the structure of the DRL network and consistently outperforms a standard Deep Deterministic Policy Gradient network. We demonstrate the results in both simulation and real-world experiments.
    Comment: Published in ICRA 2018. The code is available at https://github.com/xie9187/AsDDP
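    A minimal sketch of the switchable-policy idea follows, assuming a simple linear annealing schedule for the handover; the switching rule, the names (AssistedPolicy, handover_steps), and the decay constant are illustrative, not taken from the paper.

    import random

    class AssistedPolicy:
        """Switchable policy: defer to a classical controller early in training,
        then hand control over to the learned actor as it matures."""
        def __init__(self, classical_controller, actor, handover_steps=50_000):
            self.classical = classical_controller    # e.g. a PID-based local planner
            self.actor = actor                       # the DRL actor network
            self.handover_steps = handover_steps
            self.step = 0

        def act(self, obs):
            self.step += 1
            # Probability of using the classical controller decays linearly to zero,
            # after which the simple controller is effectively discarded.
            p_classical = max(0.0, 1.0 - self.step / self.handover_steps)
            if random.random() < p_classical:
                return self.classical(obs)   # sensible primitives, e.g. drive straight
            return self.actor(obs)           # learned policy handles harder behavior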

    An implementation of vision based deep reinforcement learning for humanoid robot locomotion

    Deep reinforcement learning (DRL) is a promising approach for controlling humanoid robot locomotion. However, readings from sensors such as an IMU, gyroscope, and GPS alone are not sufficient for robots to learn locomotion skills. In this article, we aim to show the success of vision-based DRL. We propose, for the first time, a new vision-based deep reinforcement learning algorithm for the locomotion of the Robotis-op2 humanoid robot. In the experimental setup, we construct the locomotion of the humanoid robot in a specific environment in the Webots software. We use Double Dueling Q Networks (D3QN) and Deep Q Networks (DQN), which are reinforcement learning algorithms. We present the performance of the vision-based DRL algorithms in a locomotion experiment. The experimental results show that D3QN outperforms DQN in terms of stable locomotion and training speed, and that vision-based DRL algorithms can be successfully applied to other complex environments and applications.
    Funding: TÜBİTAK and NVIDIA
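    As a rough illustration of how the DQN baseline in such a comparison is trained, here is a minimal epsilon-greedy loop in PyTorch; the env interface (reset/step returning camera frames), the quadratic TD loss, and all hyperparameters are assumptions standing in for the Webots setup, not code from the article.

    import random
    import torch

    def train_dqn(env, q_net, optimizer, n_actions, episodes=500, epsilon=0.1, gamma=0.99):
        # Hypothetical env: reset() returns a camera frame as a (C, H, W) float
        # tensor; step(action) returns (next_frame, reward, done).
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                if random.random() < epsilon:
                    action = random.randrange(n_actions)                   # explore
                else:
                    action = q_net(obs.unsqueeze(0)).argmax(dim=1).item()  # exploit
                next_obs, reward, done = env.step(action)
                with torch.no_grad():
                    # One-step TD target; D3QN would add dueling streams and a
                    # double-Q target on top of this plain DQN update.
                    next_max = 0.0 if done else q_net(next_obs.unsqueeze(0)).max().item()
                    target = reward + gamma * next_max
                pred = q_net(obs.unsqueeze(0))[0, action]
                loss = (pred - target) ** 2
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                obs = next_obs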
