
    Mobile robot obstacle detection and avoidance with NAV-YOLO

    Intelligent robotics is gaining significance in Maintenance, Repair, and Overhaul (MRO) hangar operations, where mobile robots navigate complex and dynamic environments for aircraft visual inspection. Aircraft hangars are usually busy and changing, with objects of varying shapes and sizes presenting obstacles and harsh conditions that can lead to collisions and safety hazards. This makes obstacle detection and avoidance critical for safe and efficient robot navigation. Conventional methods suffer from computational issues, while learning-based approaches are limited in detection accuracy. This paper proposes a vision-based navigation model that integrates a pre-trained YOLOv5 object detection model into a Robot Operating System (ROS) navigation stack to optimise obstacle detection and avoidance in a complex environment. The approach is validated and evaluated in ROS-Gazebo simulation and on a TurtleBot3 Waffle Pi robot platform. The results showed that the robot can detect and avoid obstacles without colliding while navigating through different checkpoints to the target location.
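    The abstract above describes feeding object detections into a navigation stack. A minimal sketch of that idea, with hypothetical function and parameter names (the paper's actual NAV-YOLO pipeline is not shown here), is to map detector bounding boxes to a simple steering decision:

    ```python
    # Illustrative sketch only: turn YOLO-style bounding boxes into a steering
    # hint. Box format (x_min, y_min, x_max, y_max) in pixels; the threshold
    # and function names are placeholders, not the paper's implementation.

    def steer_from_detections(boxes, image_width=640, danger_area=20000):
        """Return 'left', 'right', or 'straight' based on the largest near box."""
        area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
        threats = [b for b in boxes if area(b) >= danger_area]
        if not threats:
            return "straight"
        # The largest box is assumed closest; steer away from its centre.
        x0, _, x1, _ = max(threats, key=area)
        centre = (x0 + x1) / 2
        return "right" if centre < image_width / 2 else "left"

    print(steer_from_detections([(100, 100, 300, 300)]))  # obstacle on the left -> right
    ```

    In a ROS setting, a decision like this would typically be published as a velocity command or fed to the local planner as a costmap update rather than used directly.
    
    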

    Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning

    Obstacle avoidance is a fundamental requirement for autonomous robots which operate in, and interact with, the real world. When perception is limited to monocular vision, avoiding collision becomes significantly more challenging due to the lack of 3D information. Conventional path planners for obstacle avoidance require tuning a number of parameters and cannot directly benefit from large datasets and continuous use. In this paper, a dueling-architecture-based deep double-Q network (D3QN) is proposed for obstacle avoidance, using only monocular RGB vision. Based on the dueling and double-Q mechanisms, D3QN can efficiently learn how to avoid obstacles in a simulator even with very noisy depth information predicted from RGB images. Extensive experiments show that D3QN enables twofold acceleration of learning compared with a normal deep Q network, and that models trained solely in virtual environments can be directly transferred to real robots, generalizing well to various new environments with previously unseen dynamic objects. Comment: Accepted by the RSS 2017 workshop New Frontiers for Deep Learning in Robotics.
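    The two mechanisms the abstract names can be written down compactly. A plain-Python sketch (a real implementation would use deep networks; these tiny functions only illustrate the arithmetic) of the dueling aggregation Q(s,a) = V(s) + A(s,a) - mean(A) and the double-Q target, where the online network selects the action and the target network evaluates it:

    ```python
    # Illustrative sketch of the two ideas behind D3QN; not the paper's code.

    def dueling_q(value, advantages):
        """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
        mean_adv = sum(advantages) / len(advantages)
        return [value + a - mean_adv for a in advantages]

    def double_q_target(reward, gamma, q_online_next, q_target_next, done=False):
        """Double-Q target: the online net picks the action, the target net
        evaluates it, which reduces the overestimation of plain Q-learning."""
        if done:
            return reward
        best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
        return reward + gamma * q_target_next[best]

    q = dueling_q(1.0, [0.5, -0.5, 0.0])                # -> [1.5, 0.5, 1.0]
    t = double_q_target(1.0, 0.9, q, [2.0, 0.0, 1.0])   # argmax of q is 0 -> 1.0 + 0.9 * 2.0
    ```

    Subtracting the mean advantage makes the V/A decomposition identifiable; without it, a constant could be shifted freely between V and A.
    
    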

    Role Playing Learning for Socially Concomitant Mobile Robot Navigation

    In this paper, we present the Role Playing Learning (RPL) scheme for a mobile robot to navigate socially with its human companion in populated environments. Neural networks (NN) are constructed to parameterize a stochastic policy that directly maps sensory data collected by the robot to its velocity outputs while respecting a set of social norms. An efficient simulative learning environment is built with maps and pedestrian trajectories collected from a number of real-world crowd data sets. In each learning iteration, a robot equipped with the NN policy is created virtually in the learning environment to play the role of an accompanying pedestrian and navigate towards a goal in a socially concomitant manner. We therefore call this process Role Playing Learning, which is formulated under a reinforcement learning (RL) framework. The NN policy is optimized end-to-end using Trust Region Policy Optimization (TRPO), with consideration of the imperfection of the robot's sensor measurements. Simulative and experimental results are provided to demonstrate the efficacy and superiority of our method.
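    The stochastic policy described above can be sketched as a Gaussian head: features map to mean velocities and actions are sampled around them, with the log-density entering TRPO's surrogate ratio. All weights and names below are placeholders for illustration, not the paper's network:

    ```python
    import math
    import random

    # Illustrative Gaussian policy head of the kind TRPO optimizes.

    def gaussian_log_prob(x, mean, sigma):
        """Log-density of N(mean, sigma^2); used in TRPO's likelihood ratio."""
        return (-((x - mean) ** 2) / (2 * sigma ** 2)
                - math.log(sigma) - 0.5 * math.log(2 * math.pi))

    def sample_velocity(features, w_lin, w_ang, sigma=0.1, rng=None):
        """Sample (linear, angular) velocity around linear means of the features."""
        rng = rng or random.Random(0)
        mean_lin = sum(w * f for w, f in zip(w_lin, features))
        mean_ang = sum(w * f for w, f in zip(w_ang, features))
        return rng.gauss(mean_lin, sigma), rng.gauss(mean_ang, sigma)

    # TRPO's surrogate objective weights each action by the probability ratio
    # exp(new_logp - old_logp) between the updated and the old policy.
    ratio = math.exp(gaussian_log_prob(0.2, 0.25, 0.1)
                     - gaussian_log_prob(0.2, 0.2, 0.1))
    ```

    TRPO then maximizes this ratio-weighted advantage subject to a KL-divergence trust-region constraint, which keeps each policy update small.
    
    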