Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard
This paper presents a novel method for fully automatic and convenient
extrinsic calibration of a 3D LiDAR and a panoramic camera with a normally
printed chessboard. The proposed method is based on the 3D corner estimation of
the chessboard from the sparse point cloud generated by one frame scan of the
LiDAR. To estimate the corners, we formulate a full-scale model of the
chessboard and fit it to the segmented 3D points of the chessboard. The model
is fitted by optimizing a cost function under constraints of the correlation
between the laser's reflectance intensity and the color of the chessboard's
patterns. Powell's method is introduced to resolve the discontinuity problem
in optimization. The corners of the fitted model are considered as the 3D
corners of the chessboard. Once the corners of the chessboard in the 3D point
cloud are estimated, the extrinsic calibration of the two sensors is converted
to a 3D-2D matching problem. The corresponding 3D-2D points are used to
calculate the absolute pose of the two sensors with Unified Perspective-n-Point
(UPnP). Further, the calculated parameters are regarded as initial values and
are refined using the Levenberg-Marquardt method. The performance of the
proposed corner detection method from the 3D point cloud is evaluated using
simulations. The results of experiments, conducted on a Velodyne HDL-32e LiDAR
and a Ladybug3 camera under the proposed re-projection error metric,
qualitatively and quantitatively demonstrate the accuracy and stability of the
final extrinsic calibration parameters.
Comment: 20 pages, submitted to the journal Remote Sensing
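The two-stage scheme above (an absolute-pose solve followed by Levenberg-Marquardt refinement of the reprojection error) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a simple pinhole intrinsic matrix, encodes the pose as an axis-angle rotation plus a translation, uses a finite-difference Jacobian, and replaces the UPnP initialization with an externally supplied initial guess.

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle 3-vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(params, pts3d, K):
    """Pinhole projection of LiDAR-frame points with pose params = (rvec, tvec)."""
    R, t = rodrigues(params[:3]), params[3:]
    cam = pts3d @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, pts3d, pts2d, K):
    """Stacked 2D reprojection residuals over all 3D-2D correspondences."""
    return (project(params, pts3d, K) - pts2d).ravel()

def refine_pose(params, pts3d, pts2d, K, iters=20, lam=1e-3):
    """Levenberg-Marquardt-style refinement with a numeric Jacobian."""
    for _ in range(iters):
        r = residuals(params, pts3d, pts2d, K)
        J = np.empty((r.size, 6))
        for j in range(6):
            d = np.zeros(6)
            d[j] = 1e-6
            J[:, j] = (residuals(params + d, pts3d, pts2d, K) - r) / 1e-6
        # damped normal equations: (J^T J + lam I) step = -J^T r
        params = params + np.linalg.solve(J.T @ J + lam * np.eye(6), -J.T @ r)
    return params
```

In the paper the starting pose comes from UPnP rather than a hand-supplied guess; `refine_pose` then drives the reprojection error of the estimated 3D chessboard corners toward zero.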
Accurate Calibration of Multi-LiDAR-Multi-Camera Systems
As autonomous driving attracts increasing attention, the algorithms and sensors used for machine perception have become popular research topics as well. This paper investigates the extrinsic calibration of two frequently applied sensors: the camera and Light Detection and Ranging (LiDAR). The calibration can be done with the help of ordinary boxes. The method contains an iterative refinement step, which is proven to converge to the box in the LiDAR point cloud, and can be used for calibrating systems containing multiple LiDARs and cameras. For that purpose, a bundle adjustment-like minimization is also presented. The accuracy of the method is evaluated on both synthetic and real-world data, outperforming state-of-the-art techniques. The method is general in the sense that it is independent of both the LiDAR and camera type, and only the intrinsic camera parameters have to be known. Finally, a method for determining the 2D bounding box of the car chassis from LiDAR point clouds is also presented in order to determine the car body border with respect to the calibrated sensors.
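The iterative, box-based refinement idea can be illustrated with a deliberately simplified sketch (a hypothetical helper, not the paper's algorithm): a translation-only fit of an axis-aligned box of known size, alternating between assigning each LiDAR point to its nearest box face and re-estimating the box center from those assignments. The paper's method handles full poses and proves convergence of its refinement.

```python
import numpy as np

def refine_box_center(points, half, c0, iters=15):
    """Alternate (1) assigning each point to its nearest face of an
    axis-aligned box with half-extents `half`, and (2) re-estimating the
    box center so the assigned faces pass through their points."""
    c = np.asarray(c0, dtype=float).copy()
    for _ in range(iters):
        shift = np.zeros(3)
        count = np.zeros(3)
        for p in points - c:  # point expressed in the current box frame
            # nearest of the 6 faces: axis a, side s, plane x_a = s * half_a
            _, a, s = min((abs(p[a] - s * half[a]), a, s)
                          for a in range(3) for s in (-1.0, 1.0))
            shift[a] += p[a] - s * half[a]
            count[a] += 1
        c += shift / np.maximum(count, 1.0)  # per-axis least-squares update
    return c
```

With noise-free points sampled on the box surface, the update recovers the true center once the face assignments stabilize, mirroring the convergence behaviour claimed for the full method.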
3D Scanning System for Automatic High-Resolution Plant Phenotyping
Thin leaves, fine stems, self-occlusion, non-rigid and slowly changing
structures make plants difficult for three-dimensional (3D) scanning and
reconstruction -- two critical steps in automated visual phenotyping. Many
current solutions such as laser scanning, structured light, and multiview
stereo can struggle to acquire usable 3D models because of limitations in
scanning resolution and calibration accuracy. In response, we have developed a
fast, low-cost, 3D scanning platform to image plants on a rotating stage with
two tilting DSLR cameras centred on the plant. The platform uses new methods of camera
calibration and background removal to achieve high-accuracy 3D reconstruction.
We assessed the system's accuracy using a 3D visual hull reconstruction
algorithm applied to 2 plastic models of dicotyledonous plants, 2 sorghum
plants and 2 wheat plants across different sets of tilt angles. Scan times
ranged from 3 minutes (to capture 72 images using 2 tilt angles), to 30 minutes
(to capture 360 images using 10 tilt angles). The leaf lengths, widths, areas
and perimeters of the plastic models were measured manually and compared to
measurements from the scanning system: results were within 3-4% of each other.
The 3D reconstructions obtained with the scanning system show excellent
geometric agreement with all six plant specimens, even plants with thin leaves
and fine stems.
Comment: 8 pages, DICTA 201
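The visual hull idea used for the accuracy assessment can be shown with a toy voxel-carving sketch, under strong simplifying assumptions that the paper's system does not make: orthographic views along the three coordinate axes and silhouettes sampled on the same grid as the voxel volume. A voxel survives only if it projects inside the silhouette in every view.

```python
import numpy as np

def visual_hull(silhouettes, n=64):
    """Carve an n^3 voxel grid spanning [-1, 1]^3 with three orthographic
    boolean silhouettes (n x n images) taken along the x, y and z axes."""
    lin = np.linspace(-1.0, 1.0, n)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    # map a coordinate in [-1, 1] to its pixel index in a silhouette
    idx = lambda v: np.clip(np.rint((v + 1.0) / 2.0 * (n - 1)).astype(int), 0, n - 1)
    keep = np.ones((n, n, n), dtype=bool)
    keep &= silhouettes[2][idx(X), idx(Y)]  # view along z sees (x, y)
    keep &= silhouettes[1][idx(X), idx(Z)]  # view along y sees (x, z)
    keep &= silhouettes[0][idx(Y), idx(Z)]  # view along x sees (y, z)
    return keep
```

The hull is always a superset of the true shape, which is why calibration accuracy and the number of viewing angles matter for thin leaves and fine stems.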
Calibration of ATLASCAR2 sensors by global optimization (Calibração de sensores do ATLASCAR2 por otimização global)
In autonomous vehicles, it is often necessary to install a large number of
sensors on board. The extrinsic calibration of these multi-sensor systems is
therefore a problem of high relevance for the development of algorithms for
autonomous driving or driver assistance. This work proposes a tool to
automatically calibrate multiple cameras simultaneously. The process uses
ArUco markers, which allow establishing a graph from which the geometric
transformations between the various cameras and a global reference frame are
extracted. Initially, the markers are detected in the images using an OpenCV
tool. Subsequently, a graph is built in which the nodes are cameras or
markers and the edges are the transformations between them. An initial
estimate of the extrinsic parameters of all cameras is then calculated from
the marker detections and the paths obtained from the graph. Finally, the
parameters are optimized by minimizing the reprojection error. To
demonstrate the process, several datasets were created to validate the
obtained results.
Mestrado em Engenharia Mecânica (Master's in Mechanical Engineering)
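The graph step above, composing transforms along camera-marker paths to obtain the initial extrinsic estimates, can be sketched as follows. This is a hypothetical helper, not the thesis code: edges hold 4x4 homogeneous transforms (e.g. from ArUco pose detections) and a breadth-first search chains them between any two nodes.

```python
import numpy as np
from collections import deque

def chain_pose(edges, src, dst):
    """Return the 4x4 transform taking dst's frame into src's frame by
    composing edge transforms along a BFS path.  `edges` maps (a, b) -> T_ab,
    where T_ab takes coordinates expressed in b's frame into a's frame."""
    graph = {}
    for (a, b), T in edges.items():
        graph.setdefault(a, []).append((b, T))
        graph.setdefault(b, []).append((a, np.linalg.inv(T)))  # reverse edge
    seen = {src: np.eye(4)}  # seen[n] = T_src_n
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return seen[node]
        for nxt, T in graph.get(node, []):
            if nxt not in seen:
                seen[nxt] = seen[node] @ T
                queue.append(nxt)
    raise ValueError("no path between %s and %s" % (src, dst))
```

Two cameras that never share a view can still be related through a chain of markers; the chained estimates then seed the reprojection-error optimization.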
Calibration of RGB camera with velodyne LiDAR
Calibration of a LiDAR sensor with an RGB camera finds use in many application fields, from
enhancing image classification to environment perception and mapping. This paper presents a
pipeline for estimating the mutual pose and orientation of these sensors using a coarse-to-fine
approach. Previously published methods use multiple views of a known chessboard marker for
computing the calibration parameters, or they are limited to calibrating sensors with only a
small mutual displacement. Our approach presents a novel 3D marker for coarse calibration which
can be robustly detected in both the camera image and the LiDAR scan. It also requires only a
single pair of camera-LiDAR frames for estimating a large sensor displacement. A subsequent
refinement step then searches for a more accurate calibration in a small subspace of the
calibration parameters. The paper also presents a novel way of evaluating calibration precision
using projection error.
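A projection-error metric of this general kind can be sketched as follows (a generic formulation for illustration, not necessarily the paper's exact metric): project LiDAR points into the image with the estimated extrinsics and intrinsics, then average the pixel distance to the corresponding 2D features.

```python
import numpy as np

def projection_error(pts3d, pts2d, Rt, K):
    """Mean pixel distance between measured 2D features `pts2d` and LiDAR
    points `pts3d` projected with extrinsics Rt = [R | t] and intrinsics K."""
    cam = pts3d @ Rt[:, :3].T + Rt[:, 3]   # LiDAR frame -> camera frame
    proj = cam @ K.T
    proj = proj[:, :2] / proj[:, 2:3]      # perspective division
    return float(np.linalg.norm(proj - pts2d, axis=1).mean())
```

A lower value on held-out correspondences indicates a more precise calibration.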
External multi-modal imaging sensor calibration for sensor fusion: A review
Multi-modal data fusion has gained popularity due to its diverse applications, leading to an increased demand for external sensor calibration. Although several calibration solutions have proven effective, none fully satisfies all the evaluation criteria, including accuracy, automation, and robustness. This review therefore aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Target-based calibration and targetless calibration, the two types of feature-based calibration, are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, this review identifies crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of providing valuable insights to guide future research. Future research should focus primarily on the capability of online targetless calibration and systematic multi-modal sensor calibration.
Ministerio de Ciencia, Innovación y Universidades | Ref. PID2019-108816RB-I0