Automatic Calibration of Dual-LiDARs Using Two Poles Stickered with Retro-Reflective Tape
Multi-LiDAR systems are widely deployed in modern autonomous vehicles to
provide a broad view of the environment. The rapid development of
5G wireless technologies has brought a breakthrough for current cellular
vehicle-to-everything (C-V2X) applications. Therefore, a novel localization and
perception system has been proposed in which multiple LiDARs are mounted
around a city to serve autonomous vehicles. However, the existing calibration
methods require specific hard-to-move markers, ego-motion, or good initial
values given by users. In this paper, we present a novel approach that enables
automatic multi-LiDAR calibration using two poles stickered with
retro-reflective tape. This method does not depend on prior environmental
information, initial values of the extrinsic parameters, or movable platforms
like a car. We analyze the LiDAR-pole model, verify the feasibility of the
algorithm through simulation data, and present a simple method to measure the
calibration errors w.r.t. the ground truth. Experimental results demonstrate
that our approach achieves greater flexibility and higher accuracy than the
state-of-the-art approach.
Comment: 6 pages, 7 figures, 2019 IEEE Conference on Imaging Systems and
Techniques (IST)
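To make the pole-based idea concrete, here is a minimal sketch of how retro-reflective pole observations can drive a dual-LiDAR extrinsic estimate: bright returns are isolated by an intensity threshold, reduced to pole centers, and corresponding centers from the two LiDARs are aligned with a Kabsch/SVD fit. This is an illustrative reconstruction under assumed names, thresholds, and clustering, not the authors' actual algorithm.

```python
import numpy as np

def extract_pole_centers(points, intensities, thresh=0.8, n_poles=2):
    """Keep high-intensity (retro-reflective) returns and split them into
    n_poles naive groups along the axis of largest spread; a real pipeline
    would use a proper clustering step (e.g. DBSCAN)."""
    refl = points[intensities > thresh]
    axis = int(np.argmax(np.ptp(refl, axis=0)))
    refl = refl[np.argsort(refl[:, axis])]
    return np.array([g.mean(axis=0) for g in np.array_split(refl, n_poles)])

def fit_rigid_transform(src, dst):
    """Kabsch/SVD fit of R, t such that dst ~= R @ src + t.
    Needs at least three non-collinear correspondences, so pole centers
    should be accumulated over several pole placements before calling this."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

With pole centers collected from both LiDARs over a few pole placements, fit_rigid_transform(centers_a, centers_b) returns an estimate of the extrinsic transform mapping LiDAR A's frame into LiDAR B's, with no initial guess required.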
Improvements to Target-Based 3D LiDAR to Camera Calibration
The homogeneous transformation between a LiDAR and monocular camera is
required for sensor fusion tasks, such as SLAM. While determining such a
transformation is not considered glamorous in any sense of the word, it is
nonetheless crucial for many modern autonomous systems. Indeed, an error of a
few degrees in rotation or a few percent in translation can lead to 20 cm
translation errors at a distance of 5 m when overlaying a LiDAR image on a
camera image. The biggest impediments to determining the transformation
accurately are the relative sparsity of LiDAR point clouds and systematic
errors in their distance measurements. This paper proposes (1) the use of
targets of known dimension and geometry to ameliorate target pose estimation
in the face of the quantization and systematic errors inherent in a LiDAR image of a
target, and (2) a fitting method for the LiDAR to monocular camera
transformation that fundamentally assumes the camera image data is the most
accurate information in one's possession.
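The rotation half of the sensitivity claim quoted above is easy to verify: a rotation error of angle theta displaces a point at range r by roughly r * sin(theta) once the point cloud is overlaid on the camera image. A minimal numeric check (the specific angles and the 5 m range are assumptions for illustration, not values taken from the paper):

```python
import numpy as np

r = 5.0  # range to the reprojected point, in metres (assumed)
for theta_deg in (1.0, 2.0, 2.3):
    offset_cm = 100 * r * np.sin(np.radians(theta_deg))
    print(f"{theta_deg:3.1f} deg rotation error -> {offset_cm:4.1f} cm offset at {r:.0f} m")
# A rotation error of roughly 2.3 deg already produces about 20 cm of
# offset at 5 m, consistent with the abstract's figure.
```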