331 research outputs found
Automatic Extrinsic Calibration of Vision and Lidar by Maximizing Mutual Information
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/112212/1/rob21542.pd
Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard
This paper presents a novel method for the fully automatic and convenient
extrinsic calibration of a 3D LiDAR and a panoramic camera using an ordinary
printed chessboard. The proposed method is based on estimating the
chessboard's 3D corners from the sparse point cloud generated by a single
frame scan of the LiDAR. To estimate the corners, we formulate a full-scale
model of the chessboard and fit it to the segmented 3D points of the
chessboard. The model is fitted by optimizing a cost function that encodes
the correlation between the laser reflectance intensity and the color of the
chessboard's patterns; Powell's method is introduced to resolve the
discontinuity of this cost function during optimization. The corners of the
fitted model are then taken as the 3D
corners of the chessboard. Once the corners of the chessboard in the 3D point
cloud are estimated, the extrinsic calibration of the two sensors is converted
to a 3D-2D matching problem. The corresponding 3D-2D points are used to
calculate the absolute pose of the two sensors with Unified Perspective-n-Point
(UPnP). Further, the calculated parameters are regarded as initial values and
are refined using the Levenberg-Marquardt method. The performance of the
proposed corner detection method from the 3D point cloud is evaluated using
simulations. The results of experiments conducted with a Velodyne HDL-32e
LiDAR and a Ladybug3 camera, evaluated under the proposed re-projection error
metric, qualitatively and quantitatively demonstrate the accuracy and
stability of the final extrinsic calibration parameters.
Comment: 20 pages, submitted to the journal Remote Sensing
Continuous Online Extrinsic Calibration of Fisheye Camera and LiDAR
Automated driving systems use multi-modal sensor suites to ensure the
reliable, redundant and robust perception of the operating domain, for example
camera and LiDAR. An accurate extrinsic calibration is required to fuse the
camera and LiDAR data into the common spatial reference frame needed by
high-level perception functions. Over the life of the vehicle, the extrinsic
calibration can change due to physical disturbances, introducing errors into
the high-level perception functions. There is therefore a need for
continuous online extrinsic calibration algorithms which can automatically
update the value of the camera-LiDAR calibration during the life of the vehicle
using only sensor data.
We propose using mutual information between the camera image's depth
estimate, provided by commonly available monocular depth estimation networks,
and the LiDAR point cloud's geometric distance as an optimization metric for
extrinsic calibration. Our method requires no calibration target, no ground
truth training data and no expensive offline optimization. We demonstrate our
algorithm's accuracy, precision, speed and self-diagnosis capability on the
KITTI-360 dataset.
Comment: 4 pages
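A minimal sketch of such a mutual-information objective, assuming a pinhole projection with intrinsics K (the paper targets fisheye cameras, so treat this as a simplification; all function names are illustrative):

```python
# Illustrative sketch (not the authors' code): score a candidate extrinsic
# by the mutual information between monocular depth estimates and projected
# LiDAR ranges, using a simple joint-histogram MI estimator.
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI between two equally sized 1-D samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

def calibration_score(points, extrinsic, K, mono_depth):
    """Project LiDAR points with a candidate 4x4 extrinsic, compare depths."""
    cam = (extrinsic[:3, :3] @ points.T + extrinsic[:3, 3:4]).T
    cam = cam[cam[:, 2] > 0.1]                 # keep points in front of camera
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)  # pinhole projection (assumed)
    h, w = mono_depth.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    lidar_d = cam[ok, 2]
    mono_d = mono_depth[uv[ok, 1], uv[ok, 0]]
    return mutual_information(lidar_d, mono_d)  # maximize over extrinsics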
Targetless Camera-LiDAR Calibration in Unstructured Environments
Camera-LiDAR sensor fusion plays an important role in autonomous navigation research, and the automatic calibration of these sensors remains a significant challenge in mobile robotics. In this article, we present a novel calibration method that accurately estimates the six-degree-of-freedom (6-DOF) rigid-body transformation (i.e., the extrinsic parameters) between the camera and the LiDAR sensor. The method consists of a novel co-registration approach that uses local edge features in arbitrary environments to obtain 3D-to-2D errors between the camera and LiDAR data. Once we have these 3D-to-2D errors, we estimate the relative transform, i.e., the extrinsic parameters, that minimizes them. To find the best transform solution, we use the perspective-three-point (P3P) algorithm. To refine the final calibration, we use a Kalman filter, which gives the system high stability against noise disturbances. The presented method requires neither an artificial target nor a structured environment, and is therefore target-less. Furthermore, it does not require a dense point cloud, which holds the advantage of not needing scan accumulation. To test our approach, we use the well-known KITTI dataset, taking the calibration provided by the dataset as the ground truth. In this way, we obtain accuracy results and demonstrate the robustness of the system against very noisy observations.
This work was supported by the Regional Valencian Community Government and the European Regional Development Fund (ERDF) through grants ACIF/2019/088 and AICO/2019/020.
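For intuition, a rough sketch of edge-based targetless scoring under stated assumptions (not the authors' implementation): LiDAR depth-discontinuity points are projected into the image and scored against a distance transform of a Canny edge map; the paper's P3P solve and Kalman filtering would then operate on poses that minimize such an error. The 0.5 m jump threshold and all names are illustrative.

```python
# Rough sketch: score how well LiDAR depth-discontinuity (edge) points
# project onto image edges. `image` is assumed to be a grayscale uint8
# array; a pinhole model with intrinsics K is assumed for projection.
import cv2
import numpy as np

def lidar_edge_points(scan_rings):
    """Keep points with a large range jump to their ring neighbor (edges)."""
    edges = []
    for ring in scan_rings:                     # each ring: (N, 3) array
        r = np.linalg.norm(ring, axis=1)
        jump = np.abs(np.diff(r)) > 0.5         # assumed 0.5 m threshold
        edges.append(ring[1:][jump])
    return np.vstack(edges)

def edge_alignment_error(pts, extrinsic, K, image):
    edge_map = cv2.Canny(image, 100, 200)
    # Distance (in pixels) from every pixel to the nearest image edge.
    dist = cv2.distanceTransform(255 - edge_map, cv2.DIST_L2, 3)
    cam = (extrinsic[:3, :3] @ pts.T + extrinsic[:3, 3:4]).T
    cam = cam[cam[:, 2] > 0.1]
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    h, w = dist.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return dist[uv[ok, 1], uv[ok, 0]].mean()    # minimize over extrinsics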
- …