250 research outputs found

    Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard

    This paper presents a novel method for fully automatic and convenient extrinsic calibration of a 3D LiDAR and a panoramic camera using a normally printed chessboard. The proposed method is based on 3D corner estimation of the chessboard from the sparse point cloud generated by a single frame scan of the LiDAR. To estimate the corners, we formulate a full-scale model of the chessboard and fit it to the segmented 3D points of the chessboard. The model is fitted by optimizing a cost function under constraints of the correlation between the laser reflectance intensity and the color of the chessboard's patterns. Powell's method is introduced to resolve the discontinuity problem in the optimization. The corners of the fitted model are taken as the 3D corners of the chessboard. Once the corners of the chessboard in the 3D point cloud are estimated, the extrinsic calibration of the two sensors is converted to a 3D-2D matching problem. The corresponding 3D-2D points are used to calculate the absolute pose of the two sensors with the Unified Perspective-n-Point (UPnP) algorithm. The calculated parameters are then regarded as initial values and refined using the Levenberg-Marquardt method. The performance of the proposed corner detection method from the 3D point cloud is evaluated using simulations. The results of experiments, conducted on a Velodyne HDL-32e LiDAR and a Ladybug3 camera under the proposed re-projection error metric, qualitatively and quantitatively demonstrate the accuracy and stability of the final extrinsic calibration parameters.
    Comment: 20 pages, submitted to the journal Remote Sensing
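The final refinement stage above, minimizing re-projection error starting from the UPnP pose estimate with the Levenberg-Marquardt method, can be sketched with SciPy. A standard pinhole projection and synthetic correspondences stand in for the paper's panoramic camera model and detected chessboard corners; the data and parameter values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    # Axis-angle vector -> rotation matrix (Rodrigues' formula)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def residuals(params, pts3d, pts2d, fx, fy, cx, cy):
    # params = [rx, ry, rz, tx, ty, tz]: LiDAR-to-camera extrinsics
    R = rodrigues(params[:3])
    cam = pts3d @ R.T + params[3:]          # points in the camera frame
    proj = np.column_stack((fx * cam[:, 0] / cam[:, 2] + cx,
                            fy * cam[:, 1] / cam[:, 2] + cy))
    return (proj - pts2d).ravel()           # per-point re-projection error

# Synthetic experiment: recover a known pose from a perturbed initial guess
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 5.0])
true_params = np.array([0.1, -0.05, 0.02, 0.3, -0.1, 0.2])
fx = fy = 800.0
cx = cy = 320.0
R = rodrigues(true_params[:3])
cam = pts3d @ R.T + true_params[3:]
pts2d = np.column_stack((fx * cam[:, 0] / cam[:, 2] + cx,
                         fy * cam[:, 1] / cam[:, 2] + cy))

init = true_params + 0.05                   # a coarse estimate stands in for UPnP
sol = least_squares(residuals, init, method="lm",
                    args=(pts3d, pts2d, fx, fy, cx, cy))
print(np.allclose(sol.x, true_params, atol=1e-4))
```

On noiseless synthetic correspondences the solver recovers the ground-truth extrinsics; in practice the residuals would be evaluated under the panoramic projection model and real detected corners.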

    Accurate Calibration of Multi-LiDAR-Multi-Camera Systems

    As autonomous driving attracts increasing attention these days, the algorithms and sensors used for machine perception have become popular research topics as well. This paper investigates the extrinsic calibration of two frequently applied sensors: the camera and Light Detection and Ranging (LiDAR). The calibration can be done with the help of ordinary boxes. It contains an iterative refinement step, which is proven to converge to the box in the LiDAR point cloud, and it can be used to calibrate systems containing multiple LiDARs and cameras. For that purpose, a bundle adjustment-like minimization is also presented. The accuracy of the method is evaluated on both synthetic and real-world data, outperforming state-of-the-art techniques. The method is general in the sense that it is independent of both the LiDAR and camera type, and only the intrinsic camera parameters have to be known. Finally, a method for determining the 2D bounding box of the car chassis from LiDAR point clouds is also presented, in order to determine the car body border with respect to the calibrated sensors.
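The abstract does not detail how the 2D chassis bounding box is computed. As a generic illustration only, a minimum-area rotated rectangle around ground-projected LiDAR points can be found with a brute-force orientation sweep (a simple stand-in for rotating-calipers methods; the chassis dimensions below are synthetic assumptions):

```python
import numpy as np

def min_area_rect(points, n_angles=180):
    """Brute-force minimum-area rotated bounding box of 2D points.

    Sweeps candidate orientations, takes the axis-aligned box of the
    de-rotated points, and keeps the orientation with the smallest area.
    """
    best = None
    for theta in np.linspace(0.0, np.pi / 2, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        rot = points @ np.array([[c, -s], [s, c]])  # rotate points by -theta
        area = np.ptp(rot[:, 0]) * np.ptp(rot[:, 1])
        if best is None or area < best[0]:
            best = (area, theta)
    return best  # (area, angle that aligns the box with the axes)

# Hypothetical chassis footprint: a 4.5 m x 1.8 m rectangle rotated 30 degrees
corners = np.array([[0.0, 0.0], [4.5, 0.0], [4.5, 1.8], [0.0, 1.8]])
a = np.pi / 6
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
area, angle = min_area_rect(corners @ R.T)
print(round(area, 3))  # recovers the 4.5 * 1.8 = 8.1 m^2 footprint
```

With 180 sweep steps the angular resolution is 0.5 degrees; a convex-hull rotating-calipers implementation would find the exact optimum in O(n log n).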

    UAV-LiCAM SYSTEM DEVELOPMENT: CALIBRATION AND GEO-REFERENCING

    In the last decade, applications of unmanned aerial vehicles (UAVs) as remote-sensing platforms have been investigated extensively for fine-scale mapping, modeling and monitoring of the environment. In recent years, the integration of 3D laser scanners and cameras onboard UAVs has also received considerable attention, as these two sensors provide complementary spatial/spectral information about the environment. Since lidar performs range and bearing measurements in its body frame, precise GNSS/INS data are required to directly geo-reference the lidar measurements in an object-fixed coordinate system. However, such data come at the price of tactical-grade inertial navigation sensors paired with dual-frequency RTK-GNSS receivers, which also necessitate access to a base station and proper post-processing software. Therefore, such UAV systems equipped with lidar and camera (UAV-LiCam systems) are too expensive to be accessible to a wide range of users, and new solutions must be developed to eliminate the need for costly navigation sensors. In this paper, a two-fold solution is proposed based on an in-house developed, low-cost system: 1) a multi-sensor self-calibration approach for calibrating the LiCam system based on planar and cylindrical multi-directional features; 2) an integrated sensor orientation method for geo-referencing based on unscented particle filtering, which compensates for time-variant IMU errors and eliminates the need for GNSS measurements.
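Planar features like those used in the self-calibration step above are commonly handled with least-squares plane fits, whose point-to-plane distances then serve as calibration residuals. A minimal SVD-based sketch, using a synthetic noisy wall patch as an assumed stand-in for real lidar returns (not the paper's actual formulation):

```python
import numpy as np

def fit_plane(points):
    # Least-squares plane through a point cloud: the centroid plus the
    # singular vector of smallest singular value (the normal direction).
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def point_to_plane(points, centroid, normal):
    # Signed point-to-plane distances, usable as calibration residuals
    return (points - centroid) @ normal

# Synthetic noisy samples of the plane z = 0.1 x - 0.2 y + 3
rng = np.random.default_rng(1)
xy = rng.uniform(-2.0, 2.0, (200, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 3.0 + rng.normal(0.0, 0.005, 200)
pts = np.column_stack((xy, z))

c, n = fit_plane(pts)
res = point_to_plane(pts, c, n)
print(np.abs(res).mean())  # on the order of the 5 mm simulated noise
```

In a self-calibration adjustment, residuals like `res` would be expressed as functions of the unknown boresight/lever-arm parameters and minimized jointly over all observed planar and cylindrical features.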

    External multi-modal imaging sensor calibration for sensor fusion: A review

    Multi-modal data fusion has gained popularity due to its diverse applications, leading to an increased demand for external sensor calibration. Despite several proven calibration solutions, none fully satisfies all the evaluation criteria, including accuracy, automation, and robustness. This review therefore aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Target-based calibration and targetless calibration, the two types of feature-based calibration, are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, the review summarizes the crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of providing valuable insights to guide future research. Future research should focus primarily on the capability of online targetless calibration and on systematic multi-modal sensor calibration.
    Ministerio de Ciencia, Innovación y Universidades | Ref. PID2019-108816RB-I0