    Stereo-Camera–LiDAR Calibration for Autonomous Driving

    Perception is one of the key factors in successful self-driving. Recent studies of perception systems show that 3D range scanners combined with stereo camera vision are the most widely used sensors in autonomous vehicles. To enable accurate perception, the sensors must be calibrated before their data can be fused. Calibration minimizes measurement errors caused by the non-idealities of individual sensors and by the transformations between different sensor frames. This thesis presents camera-LiDAR calibration, synchronisation, and data fusion techniques. It can be argued that the quality of the data matters more to the calibration than the choice of optimization algorithm; one challenge addressed in this thesis is therefore accurate data collection with different calibration targets and validation of the results with different optimization algorithms. We estimated the effect of the vehicle windshield on camera calibration and show that the error it introduces can be reduced by using distortion models more complex than the standard model. Synchronisation is required to ensure that the sensors provide measurements at the same time; the sensor data used in this thesis was synchronised with an external trigger signal from a GNSS receiver. The camera-LiDAR extrinsic calibration was performed using synchronised 3D-2D (LiDAR points and camera pixels) and 3D-3D (LiDAR points and stereo camera) point correspondences. The comparison demonstrates that 3D-2D point correspondences give the best estimate of the camera-LiDAR extrinsic parameters. A comparison between camera-based and LiDAR 3D reconstruction is also presented. Because the sensors have different viewpoints, some data points are occluded, so we propose a camera-LiDAR occlusion handling algorithm to remove them. The quality of the calibration is demonstrated visually by fusing and aligning the LiDAR point cloud and the image.
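    As a minimal sketch of what extrinsic calibration from 3D-2D point correspondences can look like, the snippet below solves a Perspective-n-Point problem with OpenCV. It is not the thesis's pipeline: the point coordinates, camera intrinsics, and the zeroed distortion vector are illustrative placeholders, and a windshield-aware model would replace the standard distortion coefficients.

    # Sketch: estimate camera-LiDAR extrinsics from matched 3D-2D correspondences.
    # All numeric values below are placeholders, not data from the thesis.
    import numpy as np
    import cv2

    # 3D calibration-target corners measured in the LiDAR frame (metres).
    lidar_pts = np.array([[5.0, 1.2, 0.3],
                          [5.0, 0.2, 0.3],
                          [5.0, 0.2, -0.7],
                          [5.0, 1.2, -0.7]], dtype=np.float64)

    # Corresponding detections of the same corners in the camera image (pixels).
    image_pts = np.array([[612.4, 310.2],
                          [788.9, 312.8],
                          [790.1, 489.5],
                          [610.7, 487.1]], dtype=np.float64)

    # Intrinsics from a prior camera calibration (placeholder values).
    K = np.array([[1200.0, 0.0, 640.0],
                  [0.0, 1200.0, 360.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # a more complex model would account for the windshield

    # Solve PnP: rotation and translation mapping LiDAR-frame points into the camera frame.
    ok, rvec, tvec = cv2.solvePnP(lidar_pts, image_pts, K, dist,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)
    print("R =\n", R, "\nt =", tvec.ravel())

    In practice many target poses and correspondences would be collected and the result validated with different optimizers, as the abstract emphasises; the data quality matters more than the particular solver.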

    Accurate Single Image Multi-Modal Camera Pose Estimation

    A well-known problem in photogrammetry and computer vision is the precise and robust determination of camera poses with respect to a given 3D model. In this work we propose a novel multi-modal method for single-image camera pose estimation with respect to 3D models that carry intensity information (e.g., LiDAR data with reflectance). We use a direct point-based rendering approach to generate synthetic 2D views from the 3D datasets in order to bridge the dimensionality gap. The proposed method then establishes 2D/2D point and local region correspondences based on a novel self-similarity distance measure. Correct correspondences are robustly identified with a Generalized Hough Transform that searches for small regions whose local self-similarities share a similar geometric relationship. After backprojection of the generated features into 3D, a standard Perspective-n-Point problem is solved to yield an initial camera pose, which is then accurately refined using an intensity-based 2D/3D registration approach. An evaluation on Vis/IR 2D and airborne and terrestrial 3D datasets shows that the proposed method is applicable to a wide range of sensor types. In addition, the approach outperforms standard global multi-modal 2D/3D registration approaches based on Mutual Information in both robustness and speed. Potential applications are widespread and include multispectral texturing of 3D models, SLAM, sensor data fusion, multi-spectral camera calibration, and super-resolution.
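    The dimensionality-bridging step (rendering a synthetic 2D view from an intensity-attributed point set) can be sketched as a simple pinhole projection with a per-pixel z-buffer. This is an assumption-laden illustration, not the paper's renderer; the function name, parameters, and the nearest-point splatting are all hypothetical.

    # Sketch: direct point-based rendering of a synthetic intensity view.
    # Pinhole projection plus z-buffer; parameters are illustrative only.
    import numpy as np

    def render_point_view(points_xyz, intensities, K, R, t, width, height):
        """Splat intensity-attributed 3D points into an image seen from pose (R, t)."""
        cam = (R @ points_xyz.T + t.reshape(3, 1)).T       # world -> camera frame
        in_front = cam[:, 2] > 0.1                          # keep points ahead of the camera
        cam, vals = cam[in_front], intensities[in_front]
        proj = (K @ cam.T).T
        uv = proj[:, :2] / proj[:, 2:3]                     # perspective divide
        image = np.zeros((height, width), dtype=np.float32)
        depth = np.full((height, width), np.inf, dtype=np.float32)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
        for ui, vi, zi, val in zip(u[ok], v[ok], cam[ok, 2], vals[ok]):
            if zi < depth[vi, ui]:                          # keep the closest point per pixel
                depth[vi, ui] = zi
                image[vi, ui] = val
        return image

    The synthetic view produced this way is what the 2D/2D self-similarity matching would operate on before the features are backprojected into 3D for the PnP stage.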

    Online Targetless End-to-End Camera-LIDAR Self-calibration

    In this paper we propose an end-to-end, automatic, online camera-LIDAR calibration approach for self-driving vehicle navigation. The main idea is to connect the image domain and the 3D space by generating point clouds from camera data while driving, using a structure-from-motion (SfM) pipeline, and using them as the basis for registration. As a core step of the algorithm we introduce an object-level alignment that transforms the generated and captured point clouds into a common coordinate system. Finally, we calculate the correspondences between the 2D image domain and the 3D LIDAR point clouds to produce the registration. We evaluated the method in a variety of real-life traffic scenarios.
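    A minimal sketch of registering an SfM-generated cloud to a captured LiDAR cloud is shown below, using Open3D's point-to-point ICP with scale estimation (SfM clouds have arbitrary scale). This stands in for, and does not reproduce, the paper's object-level alignment; the function name, voxel size, and distance threshold are assumptions.

    # Sketch: align an SfM point cloud to a LiDAR point cloud with scaled ICP.
    # Thresholds and the identity initialization are illustrative assumptions.
    import numpy as np
    import open3d as o3d

    def align_clouds(sfm_xyz, lidar_xyz, voxel=0.2, max_dist=1.0):
        """Return a 4x4 transform mapping the SfM cloud onto the LiDAR cloud."""
        src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(sfm_xyz))
        dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(lidar_xyz))
        src = src.voxel_down_sample(voxel)                  # downsample for speed
        dst = dst.voxel_down_sample(voxel)
        result = o3d.pipelines.registration.registration_icp(
            src, dst, max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint(
                with_scaling=True))                         # SfM scale is unknown
        return result.transformation

    Once the two clouds share a coordinate system, 2D-3D correspondences between image pixels and LiDAR points follow from the known camera poses of the SfM reconstruction.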