Leveraging Deep Learning Based Object Detection for Localising Autonomous Personal Mobility Devices in Sparse Maps

Abstract

© 2019 IEEE. This paper presents a low-cost, resource-efficient localisation approach for autonomous driving in GPS-denied environments. One of the most challenging aspects of traditional landmark-based localisation in the context of autonomous driving is the need to detect landmarks accurately and frequently. We leverage the state-of-the-art deep learning framework YOLO (You Only Look Once) to carry out this important perceptual task using data obtained from monocular cameras. Bearing-only information extracted from the YOLO detections is fused with vehicle odometry using an Extended Kalman Filter (EKF) to generate an estimate of the location of the autonomous vehicle, together with its associated uncertainty. This approach enables real-time, sub-metre localisation accuracy using only a sparse map of an outdoor urban environment. The broader motivation of this research is to improve the safety and reliability of Personal Mobility Devices (PMDs) through autonomous technology; thus, all the ideas presented here are demonstrated on an instrumented mobility scooter platform.
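
The pipeline summarised above (YOLO detections reduced to bearing measurements, fused with odometry in an EKF against a sparse landmark map) can be illustrated with a small sketch. The snippet below is not the authors' implementation: the planar state layout [x, y, theta], the unicycle odometry model, and the noise matrices Q and R are illustrative assumptions only.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_predict(x, P, v, omega, dt, Q):
    """Propagate the pose [x, y, theta] with a simple unicycle odometry model."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       wrap_angle(th + omega * dt)])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_bearing_update(x, P, z, landmark, R):
    """Fuse one bearing-only observation z of a landmark at a known map position."""
    lx, ly = landmark
    dx, dy = lx - x[0], ly - x[1]
    q = dx * dx + dy * dy
    z_hat = wrap_angle(np.arctan2(dy, dx) - x[2])   # predicted bearing
    H = np.array([[dy / q, -dx / q, -1.0]])         # measurement Jacobian
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x_new = x + (K * wrap_angle(z - z_hat)).ravel()
    x_new[2] = wrap_angle(x_new[2])
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

In such a setup, the bearing z would plausibly be derived from the horizontal image coordinate of a YOLO bounding-box centre through the monocular camera's intrinsic calibration, with R chosen to reflect detection and calibration noise; the specific conversion used in the paper is not stated in the abstract.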
