
    Deep Reinforcement Learning for Complete Coverage Path Planning in Unknown Environments

    Mobile robots must operate autonomously, often in unknown and unstructured environments. To achieve this objective, a robot must be able to correctly perceive its environment, plan its path, and move around safely, without human supervision. Navigation from an initial position to a target location has been a challenging problem in robotics. This work examines the particular navigation task requiring complete coverage planning in outdoor environments. A motion planner based on Deep Reinforcement Learning is proposed, in which a Deep Q-network is trained to learn a control policy approximating the optimal strategy, using a dynamic map of the environment. In addition to this path planning algorithm, a computer vision system is presented that captures the images of a stereo camera embedded on the robot, detects obstacles, and updates the workspace map. Simulation results show that the algorithm generalizes well to different types of environments. After multiple training sequences, the Reinforcement Learning agent enables the virtual mobile robot to cover the whole space with a coverage rate of over 80% on average, starting from varying initial positions, while avoiding obstacles by relying on local sensory information. The experiments also demonstrate that the DQN agent performs the coverage better than a human.
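    The Deep Q-network training described above can be illustrated by its Bellman target computation (a minimal NumPy sketch; the function name `dqn_targets`, the batch shapes, and gamma = 0.99 are illustrative assumptions, not the paper's code):

    ```python
    import numpy as np

    def dqn_targets(rewards, next_q_values, dones, gamma=0.99):
        """Compute DQN regression targets r + gamma * max_a' Q(s', a').

        rewards:       (B,) per-transition rewards
        next_q_values: (B, n_actions) target-network Q-values for s'
        dones:         (B,) bool; terminal transitions are not bootstrapped
        """
        max_next = next_q_values.max(axis=1)
        return rewards + gamma * max_next * (~dones)
    ```

    The Q-network is then trained to minimize the squared error between its predicted Q-values and these targets over mini-batches of replayed transitions.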

    Lifelong Federated Reinforcement Learning: A Learning Architecture for Navigation in Cloud Robotic Systems

    This paper is motivated by the problem of how to make robots fuse and transfer their experience so that they can effectively use prior knowledge and quickly adapt to new environments. To address the problem, we present a learning architecture for navigation in cloud robotic systems: Lifelong Federated Reinforcement Learning (LFRL). In this work, we propose a knowledge fusion algorithm for upgrading a shared model deployed on the cloud. Then, effective transfer learning methods in LFRL are introduced. LFRL is consistent with human cognitive science and fits well in cloud robotic systems. Experiments show that LFRL greatly improves the efficiency of reinforcement learning for robot navigation. The cloud robotic system deployment also shows that LFRL is capable of fusing prior knowledge. In addition, we release a cloud robotic navigation-learning website based on LFRL.
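    The abstract does not specify LFRL's fusion algorithm; a minimal federated-averaging sketch conveys the general idea of upgrading a shared cloud model from per-robot policies (the function name `fuse_models` and the experience-weighted averaging scheme are assumptions for illustration):

    ```python
    import numpy as np

    def fuse_models(local_weights, sample_counts):
        """Fuse per-robot policy weights into one shared cloud model.

        local_weights: list of dicts mapping parameter name -> np.ndarray
        sample_counts: list of ints, each robot's amount of experience;
                       robots with more experience get more weight.
        """
        total = sum(sample_counts)
        fused = {}
        for key in local_weights[0]:
            fused[key] = sum(w[key] * n
                             for w, n in zip(local_weights, sample_counts)) / total
        return fused
    ```

    Each robot would then download the fused model as its starting point before adapting to its own environment, which is the transfer step the abstract refers to.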

    Socially Aware Motion Planning with Deep Reinforcement Learning

    For robotic vehicles to navigate safely and efficiently in pedestrian-rich environments, it is important to model subtle human behaviors and navigation rules (e.g., passing on the right). However, while instinctive to humans, socially compliant navigation is still difficult to quantify due to the stochasticity in people's behaviors. Existing works mostly focus on using feature-matching techniques to describe and imitate human paths, but often do not generalize well, since the feature values can vary from person to person, and even from run to run. This work notes that while it is challenging to directly specify the details of what to do (precise mechanisms of human navigation), it is straightforward to specify what not to do (violations of social norms). Specifically, using deep reinforcement learning, this work develops a time-efficient navigation policy that respects common social norms. The proposed method is shown to enable fully autonomous navigation of a robotic vehicle moving at human walking speed in an environment with many pedestrians.
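    Specifying "what not to do" amounts to penalizing norm violations in the reward function. A hypothetical reward-shaping sketch (the function name, event flags, and penalty magnitudes are all illustrative assumptions, not the paper's actual reward):

    ```python
    def social_reward(reached_goal, collided, passed_on_left, step_penalty=0.01):
        """Hypothetical reward shaping for socially aware navigation:
        reward reaching the goal, penalize collisions and social-norm
        violations such as passing a pedestrian on the left."""
        r = -step_penalty      # small time penalty encourages efficient paths
        if reached_goal:
            r += 1.0           # sparse success reward
        if collided:
            r -= 0.25          # safety penalty
        if passed_on_left:
            r -= 0.1           # violation of the right-hand passing norm
        return r
    ```

    Because the penalties only mark violations rather than prescribe exact trajectories, the learned policy is free to discover its own norm-compliant behavior.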