5 research outputs found

    Accurate extrinsic calibration between monocular camera and sparse 3D Lidar points without markers

    It is of practical interest to automatically calibrate the multiple sensors in autonomous vehicles. In this paper, we address the case in which a low-resolution Lidar is used and present a practical approach to extrinsic calibration between a monocular camera and a Lidar that provides only sparse 3D measurements. We formulate the problem as directly minimizing a feature error evaluated between frames, in the manner of image warping. To overcome the difficulties of the resulting optimization problem, we propose to use a distance transform together with a projection error model to obtain the key approximated edge points to which the loss function is sensitive. Finally, the loss minimization is solved by an efficient random selection algorithm. Experimental results on the KITTI dataset show that our proposed method achieves competitive results, with a particular improvement in translation estimation. This work is supported by the National Natural Science Foundation of China under Grant No. 61375050, Grant No. 91220301 and Grant No. 61420106007, and funded in part by Australian Research Council Grants DP120103896, LP100100588 and DE140100180, the ARC Centre of Excellence for Robotic Vision (CE140100016) and NICTA (Data61). The first author is funded by the China Scholarship Council (CSC) as a joint PhD student between NUDT and ANU.
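    As a rough illustration of the kind of pipeline this abstract describes (not the authors' implementation), the sketch below projects sparse Lidar edge points into the camera image, scores a candidate extrinsic transform against an edge distance transform, and refines it by random perturbation. The Canny thresholds, perturbation scales, and all function names are assumptions for illustration only.

```python
import numpy as np
import cv2

def edge_distance_transform(gray):
    """Distance transform of the image's edge map: each pixel stores the
    distance to the nearest Canny edge, so a projected Lidar edge point
    incurs a small cost when it lands near an image edge.
    `gray` is an 8-bit grayscale image."""
    edges = cv2.Canny(gray, 50, 150)
    # Zero at edge pixels, non-zero elsewhere, as distanceTransform expects.
    return cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

def calibration_cost(points_lidar, R, t, K, dist_map):
    """Mean edge distance at the projections of Lidar edge points.

    points_lidar : (N, 3) 3D edge points in the Lidar frame
    R, t         : candidate Lidar-to-camera rotation (3x3) and translation (3,)
    K            : 3x3 camera intrinsic matrix
    dist_map     : edge distance transform of the camera image
    """
    p_cam = points_lidar @ R.T + t           # transform into the camera frame
    p_cam = p_cam[p_cam[:, 2] > 0.1]         # keep points in front of the camera
    if len(p_cam) == 0:
        return np.inf
    uv = p_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]              # perspective division
    h, w = dist_map.shape
    u = np.clip(uv[:, 0], 0, w - 1).astype(int)
    v = np.clip(uv[:, 1], 0, h - 1).astype(int)
    return dist_map[v, u].mean()

def refine_extrinsics(points_lidar, R0, t0, K, dist_map, iters=2000, seed=0):
    """Random-search refinement: perturb the current extrinsics and keep a
    perturbation whenever it lowers the edge-alignment cost."""
    rng = np.random.default_rng(seed)
    best_R, best_t = R0.copy(), t0.copy()
    best_cost = calibration_cost(points_lidar, best_R, best_t, K, dist_map)
    for _ in range(iters):
        dr = rng.normal(scale=0.002, size=3)   # small random rotation (rad)
        dt = rng.normal(scale=0.01, size=3)    # small random translation (m)
        R = cv2.Rodrigues(dr)[0] @ best_R
        t = best_t + dt
        cost = calibration_cost(points_lidar, R, t, K, dist_map)
        if cost < best_cost:
            best_R, best_t, best_cost = R, t, cost
    return best_R, best_t, best_cost
```

    In practice one would initialise `R0, t0` from a coarse hand measurement or a previous calibration and run the refinement over several frames; the paper's actual edge-point extraction and selection strategy are more involved than this toy loop.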

    Choosing a time and place for calibration of lidar-camera systems

    We propose a calibration method that automatically estimates the extrinsic calibration of a sensor pose-graph from natural scenes. The sensor pose-graph represents a system of sensors comprising lidars and cameras, without requiring sensor co-visibility constraints. The method addresses the fact that each scene contributes differently to the calibration problem by introducing a diligent scene selection scheme. The algorithm searches over all scenes to extract a subset of exemplars whose joint optimisation yields progressively better calibration estimates. This non-parametric method requires no knowledge of the physical world and continuously finds scenes that better constrain the optimisation parameters. We explain the theory, implement the method, and provide detailed performance analyses through experiments on real-world data.
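    One plausible reading of the exemplar-selection idea is a greedy loop that repeatedly adds the scene whose inclusion most reduces the joint-calibration residual. The sketch below is an assumption about what such a loop could look like, not the paper's algorithm; `joint_calibrate` is a hypothetical placeholder for the pose-graph optimisation, which is not shown.

```python
def select_exemplars(scenes, joint_calibrate, max_exemplars=10, tol=1e-4):
    """Greedy scene selection for pose-graph calibration (illustrative sketch).

    scenes          : candidate scenes (e.g. synchronised lidar/camera captures)
    joint_calibrate : hypothetical callable mapping a list of scenes to
                      (extrinsics, residual); it stands in for the joint
                      optimisation described in the abstract.
    Returns the chosen exemplar subset and the final extrinsic estimate.
    """
    remaining = list(scenes)
    selected = []
    best_residual = float("inf")
    extrinsics = None
    while remaining and len(selected) < max_exemplars:
        # Try each remaining scene and keep the one that helps the most.
        trials = [(joint_calibrate(selected + [s]), s) for s in remaining]
        (ext, res), scene = min(trials, key=lambda item: item[0][1])
        if best_residual - res < tol:
            break  # no remaining scene constrains the parameters further
        selected.append(scene)
        remaining.remove(scene)
        best_residual, extrinsics = res, ext
    return selected, extrinsics
```

    The stopping test reflects the abstract's claim that scenes contribute unequally: once no candidate scene improves the residual beyond a small tolerance, further scenes add computation without better constraining the extrinsics.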