Implementation of Vision and Lidar Sensor Fusion Using Kalman Filter Algorithm

Abstract

The self-driving car is the next milestone of the automotive industry. To achieve the level of autonomy expected of such a vehicle, it must be equipped with an assortment of sensors that help it perceive its three-dimensional environment, which in turn leads to better decision-making and control. Each sensor has different strengths and weaknesses, so the sensors complement one another when combined. This is achieved through a technique called sensor fusion, in which data from multiple sensors are combined to improve the meaning and accuracy of the overall information. In real-time implementations, uncertainty in the factors affecting the vehicle's motion can cause the estimated parameters to overshoot; to avoid this, an estimation filter is used to predict and update the fused values. This project focuses on the fusion of lidar and vision (camera) sensor data, followed by estimation with a Kalman filter, using measurements from an online data set. The results show how an estimation filter can significantly improve the accuracy of tracking an obstacle's path.
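To make the predict/update cycle described above concrete, the following is a minimal sketch of a constant-velocity Kalman filter that fuses lidar and camera position measurements sequentially. It is illustrative only: the state model, sample interval, and all noise covariances (Q, R_lidar, R_camera) are assumptions for demonstration, not the paper's actual parameters or data set.

```python
import numpy as np

# State x = [px, py, vx, vy]; both lidar and camera are assumed to
# report 2-D position measurements z = [px, py].

dt = 0.1                                  # assumed sample interval (s)
F = np.array([[1, 0, dt, 0],              # state-transition model
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],               # measurement model (position only)
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                      # assumed process-noise covariance
R_lidar = 0.05 * np.eye(2)                # assumed lidar noise covariance
R_camera = 0.20 * np.eye(2)               # assumed camera noise covariance

x = np.zeros(4)                           # initial state estimate
P = np.eye(4)                             # initial estimate covariance

def predict(x, P):
    """Propagate the state and covariance one time step ahead."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, R):
    """Correct the prediction with a measurement z of covariance R."""
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Fusion: predict once per time step, then update sequentially with
# each sensor's measurement, weighted by its noise covariance.
x, P = predict(x, P)
x, P = update(x, P, np.array([1.02, 0.48]), R_lidar)   # hypothetical lidar fix
x, P = update(x, P, np.array([0.95, 0.55]), R_camera)  # hypothetical camera fix
print(x[:2])                              # fused position estimate
```

In this sequential-update scheme, the more accurate sensor (here assumed to be the lidar, via its smaller covariance) pulls the estimate more strongly, which is how the filter lets complementary sensors compensate for each other's weaknesses.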
