6 research outputs found

    Layering Laser Rangefinder Points onto Images for Obstacle Avoidance by a Neural Network

    No full text
    Obstacle avoidance is essential to autonomous robot navigation, but maneuvering around an obstacle causes the system to deviate from its normal path. Oftentimes, these deviations take the robot into new regions that lack the path's usual or meaningful features. This is problematic for vision-based steering controllers, including convolutional neural networks (CNNs), which depend on patterns being present in camera images. The absence of a path fails to provide consistent, noticeable patterns for the neural network, which usually leads to erroneous steering commands. In this paper, we mitigate this problem by superimposing points from a two-dimensional (2D) scanning laser rangefinder (LRF) onto camera images using the Open Source Computer Vision (OpenCV) library. The visually encoded LRF data provides the CNN with a new pattern to recognize, aiding in the avoidance of obstacles and the rediscovery of its path. In contrast, existing approaches to robot navigation do not use a single CNN to perform both line following and obstacle avoidance. Using our approach, we were able to train a CNN to follow a lined path and avoid obstacles with a reliability rate of nearly 60% on a complex course and over 80% on a simpler course.
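
    The abstract does not include implementation details, but the overlay it describes can be illustrated with a short sketch: project each planar LRF return into the camera image with a pinhole model and draw it with OpenCV. The function name, calibration matrices, and scan format below are illustrative assumptions, not taken from the paper.

# Minimal sketch (not the authors' code) of overlaying 2D LRF points on a camera image.
# The intrinsics K, the LRF-to-camera extrinsics (R, t), and the scan arrays are assumed inputs.
import numpy as np
import cv2

def overlay_lrf_points(image, ranges, angles, K, R, t, color=(0, 255, 0)):
    """Project planar LRF returns into the image and draw them as filled circles.

    ranges, angles : 1-D arrays describing the 2D scan (meters, radians)
    K              : 3x3 camera intrinsic matrix
    R, t           : rotation (3x3) and translation (3,) from the LRF frame to the camera frame
    """
    # Scan points lie in the LRF's horizontal plane (z = 0 in the LRF frame).
    pts_lrf = np.stack([ranges * np.cos(angles),
                        ranges * np.sin(angles),
                        np.zeros_like(ranges)], axis=1)          # (N, 3)

    # Transform into the camera frame and keep points in front of the camera.
    pts_cam = pts_lrf @ R.T + t
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection to pixel coordinates.
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]

    h, w = image.shape[:2]
    for u, v in uv:
        if 0 <= u < w and 0 <= v < h:
            # Visually encode the range data as a new pattern in the image.
            cv2.circle(image, (int(u), int(v)), 3, color, -1)
    return image

    Drawn this way, the LRF returns appear as a consistent visual pattern in the camera frame, which is the kind of cue the abstract says the CNN learns to recognize when the lined path is out of view.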

    Autonomous Navigation via a Deep Q Network with One-Hot Image Encoding

    No full text
    Common autonomous driving techniques employ various combinations of convolutional and deep neural networks to safely and efficiently navigate unique road and traffic conditions. This paper investigates the feasibility of employing a reinforcement learning (RL) model for autonomous navigation using a low-dimensional input. While many navigation applications generate each individual state from a frame's raw pixel information, we use a deep Q network (DQN) with reduced input dimensionality to train a mobile robot to continuously remain within a lane around an elliptical track. We accomplish this with a one-hot encoding scheme that assigns a binary variable to each element of a square array, where each value indicates whether the input frame detects the presence of a lane boundary in that element. Our ultimate goal was to determine the minimum number of training samples required to consistently train the robot to complete one cycle around the track, from multiple starting positions and directions, without crossing a lane boundary. We found that by intelligently balancing exploration and exploitation of its environment, as well as the rewards for staying in the lane, the robot was able to achieve its goal with a small number of samples.
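
    As a rough illustration of the encoding described above (not the paper's code), a camera frame can be reduced to a small binary grid: divide the frame into a square array of cells and set a cell to 1 when a lane-boundary detector fires inside it. The grid size, brightness threshold, and pixel-count cutoff below are assumed values for the sketch.

# Minimal sketch of a binary square-array state encoding for a low-dimensional DQN input.
# The lane-boundary "detector" here is a simple brightness threshold; the real detector is not specified in the abstract.
import numpy as np
import cv2

def encode_frame(frame_bgr, grid_size=8, bright_thresh=200, min_pixels=20):
    """Reduce a raw BGR camera frame to a flattened grid_size x grid_size binary state."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Crude lane-boundary detector: bright pixels (e.g., white tape on a dark floor).
    mask = (gray > bright_thresh).astype(np.uint8)

    h, w = mask.shape
    cell_h, cell_w = h // grid_size, w // grid_size
    state = np.zeros((grid_size, grid_size), dtype=np.float32)
    for i in range(grid_size):
        for j in range(grid_size):
            cell = mask[i * cell_h:(i + 1) * cell_h, j * cell_w:(j + 1) * cell_w]
            # The cell's binary variable is set when enough boundary pixels land in it.
            if cell.sum() >= min_pixels:
                state[i, j] = 1.0
    # Flatten to a low-dimensional vector that can feed a small DQN.
    return state.reshape(-1)

    Under the assumed 8x8 grid, each state is a 64-element binary vector rather than a full pixel frame, which is the dimensionality reduction the abstract emphasizes.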