Local Planners with Deep Reinforcement Learning for Indoor Autonomous Navigation

Abstract

Autonomous indoor navigation requires an elaborate and accurate algorithmic stack able to guide robots through cluttered, unstructured, and dynamic environments. Global and local path planning, mapping, localization, and decision making are only some of the layers under heavy research by the scientific community to meet the requirements of fully functional autonomous navigation. In recent years, Deep Reinforcement Learning (DRL) has proven to be a competitive short-range guidance solution for power-efficient, low-computational-cost point-to-point local planners. One of the main strengths of this approach is the possibility of training a DRL agent in a simulated environment that encapsulates robot dynamics and task constraints, and then deploying its learned point-to-point navigation policy in a real setting. However, although DRL readily integrates complex mechanical dynamics and multimodal signals into a single model, the effect of different sensor data on navigation performance has not yet been investigated. In this paper, we compare two DRL navigation solutions that leverage LiDAR and depth camera information, respectively. The agents are trained in the same simulated environment and tested on a common benchmark to highlight the strengths and criticalities of each technique.
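To make the sensor comparison concrete, the following is a minimal sketch (not the paper's actual architecture, which the abstract does not specify) of how the two observation modalities might feed point-to-point DRL policies: a fully connected network over a 1D LiDAR scan versus a small CNN encoder over a depth image, both conditioned on a relative-goal vector and emitting normalized velocity commands. All layer sizes, the 360-beam scan, and the 64x80 depth resolution are illustrative assumptions.

import torch
import torch.nn as nn

class LidarPolicy(nn.Module):
    """Hypothetical policy over a 1D LiDAR scan plus a relative goal (distance, heading)."""
    def __init__(self, n_beams: int = 360, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_beams + 2, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_actions), nn.Tanh(),  # (linear, angular) velocity in [-1, 1]
        )

    def forward(self, scan: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # Concatenate range readings with the goal vector and map to actions.
        return self.net(torch.cat([scan, goal], dim=-1))

class DepthPolicy(nn.Module):
    """Hypothetical policy over a depth image plus the same relative-goal vector."""
    def __init__(self, n_actions: int = 2):
        super().__init__()
        # Small CNN encoder for single-channel 64x80 depth frames (assumed size).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat = self.encoder(torch.zeros(1, 1, 64, 80)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(feat + 2, 128), nn.ReLU(),
            nn.Linear(128, n_actions), nn.Tanh(),
        )

    def forward(self, depth: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # Encode the depth frame, fuse with the goal vector, and map to actions.
        return self.head(torch.cat([self.encoder(depth), goal], dim=-1))

Either network can serve as the actor in a standard actor-critic DRL setup; the point of the sketch is that only the observation encoder changes between the two sensor configurations, so performance differences can be attributed to the input modality.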
