Calibration Wizard: A Guidance System for Camera Calibration Based on Modelling Geometric and Corner Uncertainty
It is well known that the accuracy of a calibration depends strongly on the
choice of camera poses from which images of a calibration object are acquired.
We present a system -- Calibration Wizard -- that interactively guides a user
towards taking optimal calibration images. For each new image to be taken, the
system computes, from all previously acquired images, the pose that leads to
the globally maximum reduction of expected uncertainty on intrinsic parameters
and then guides the user towards that pose. We also show how to incorporate
uncertainty in corner point position in a novel, principled manner for both
calibration and computation of the next best pose. Synthetic and real-world
experiments are performed to demonstrate the effectiveness of Calibration
Wizard.
Comment: Oral presentation at ICCV 201
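The next-best-pose idea above can be illustrated with a toy sketch: pick the candidate pose whose measurements most reduce the expected uncertainty of the intrinsics, scored here by the A-optimality-style criterion trace((JᵀJ)⁻¹). The Jacobians below are random stand-ins, not the paper's actual calibration model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_intrinsics = 9  # e.g. focal lengths, principal point, distortion coefficients
# Stacked Jacobian rows (w.r.t. the intrinsics) from images acquired so far.
J_acquired = rng.normal(size=(40, n_intrinsics))
# Hypothetical Jacobian blocks that each candidate pose would contribute.
candidates = [rng.normal(size=(20, n_intrinsics)) for _ in range(5)]

def expected_uncertainty(J):
    """Sum of intrinsic-parameter variances: trace of the covariance (J^T J)^-1."""
    return np.trace(np.linalg.inv(J.T @ J))

# Choose the pose that minimizes uncertainty after adding its measurements.
scores = [expected_uncertainty(np.vstack([J_acquired, C])) for C in candidates]
best_pose = int(np.argmin(scores))
```

Because adding measurement rows can only grow the information matrix JᵀJ, every candidate lowers the uncertainty; the criterion ranks how much.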
A mask-based approach for the geometric calibration of thermal-infrared cameras
Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp and is comparatively inaccurate and difficult to execute. Additionally, software toolkits provided for calibration either are unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast and not requiring a flood lamp is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm which utilizes the maximally stable extremal region detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that using the proposed mask achieves a mean reprojection error up to 78% lower than that using a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera multiple-modality setups. Source code and binaries for the developed software are provided on the project website.
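The evaluation metric quoted above, mean reprojection error, is straightforward to compute: average the pixel distance between where the calibration model reprojects each pattern point and where the detector found it. The point arrays below are hypothetical, not the paper's data.

```python
import numpy as np

def mean_reprojection_error(projected, detected):
    """Mean Euclidean distance (in pixels) between reprojected calibration
    points and the points detected in the image (both N x 2 arrays)."""
    return float(np.mean(np.linalg.norm(projected - detected, axis=1)))

# Hypothetical example: every detection is off by 0.3 px horizontally.
detected = np.array([[100.0, 100.0], [200.0, 100.0], [150.0, 180.0]])
projected = detected + np.array([0.3, 0.0])
err = mean_reprojection_error(projected, detected)  # 0.3 px
```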
3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection
Cameras are a crucial exteroceptive sensor for self-driving cars as they are
low-cost and small, provide appearance information about the environment, and
work in various weather conditions. They can be used for multiple purposes such
as visual navigation and obstacle detection. We can use a surround multi-camera
system to cover the full 360-degree field-of-view around the car. In this way,
we avoid blind spots which can otherwise lead to accidents. To minimize the
number of cameras needed for surround perception, we utilize fisheye cameras.
Consequently, standard vision pipelines for 3D mapping, visual localization,
obstacle detection, etc. need to be adapted to take full advantage of the
availability of multiple cameras rather than treat each camera individually. In
addition, processing of fisheye images has to be supported. In this paper, we
describe the camera calibration and subsequent processing pipeline for
multi-fisheye-camera systems developed as part of the V-Charge project. This
project seeks to enable automated valet parking for self-driving cars. Our
pipeline is able to precisely calibrate multi-camera systems, build sparse 3D
maps for visual navigation, visually localize the car with respect to these
maps, generate accurate dense maps, as well as detect obstacles based on
real-time depth map extraction.
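Why fisheye lenses minimize the camera count can be seen from their projection geometry. Below is a sketch of the generic equidistant fisheye model r = f·θ (an illustrative choice, not necessarily the exact model used in the V-Charge pipeline): unlike the pinhole model r = f·tan(θ), the image radius stays finite as rays approach 90° off-axis, so a single lens can cover a very wide field of view.

```python
import numpy as np

def project_equidistant(X, f, cx, cy):
    """Project a 3D point with the equidistant fisheye model r = f * theta,
    where theta is the angle between the ray and the optical axis."""
    x, y, z = X
    theta = np.arctan2(np.hypot(x, y), z)   # angle off the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # finite even at theta = 90 deg
    return (cx + r * np.cos(phi), cy + r * np.sin(phi))

# A point on the optical axis lands on the principal point ...
u0, v0 = project_equidistant((0.0, 0.0, 1.0), f=300.0, cx=640.0, cy=480.0)
# ... and a point 90 degrees off-axis still maps to a finite radius,
# where the pinhole model would send it to infinity.
u1, v1 = project_equidistant((1.0, 0.0, 0.0), f=300.0, cx=640.0, cy=480.0)
```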
3D Reconstruction with Low Resolution, Small Baseline and High Radial Distortion Stereo Images
In this paper we analyze and compare approaches for 3D reconstruction from
low-resolution (250×250) stereo images with high radial distortion, which are
acquired with a small baseline (approximately 1 mm). These images are acquired
with the NanEye Stereo system manufactured by CMOSIS/AWAIBA. These stereo
cameras also have small apertures, which means that high levels of illumination
are required. The goal was to develop an approach yielding accurate
reconstructions, with a low computational cost, i.e., avoiding non-linear
numerical optimization algorithms. In particular we focused on the analysis and
comparison of radial distortion models. To perform the analysis and comparison,
we defined a baseline method based on available software and methods, such as
the Bouguet toolbox [2] or the Computer Vision Toolbox from MATLAB. The
approaches tested were based on the use of the polynomial model of radial
distortion, and on the application of the division model. The issue of the
center of distortion was also addressed within the framework of the application
of the division model. We concluded that the division model with a single
radial distortion parameter has limitations.
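The two model families compared above can be sketched side by side on hypothetical normalized image coordinates (the coefficients are illustrative, not calibrated values). The division model's appeal for low-cost pipelines is that undistortion is a closed-form division, whereas inverting the polynomial model generally needs an iterative solve.

```python
import numpy as np

def distort_polynomial(x_u, k1, k2):
    """Polynomial (Brown) model, undistorted -> distorted:
    x_d = x_u * (1 + k1*r^2 + k2*r^4), with r the undistorted radius."""
    r2 = np.sum(x_u**2, axis=1, keepdims=True)
    return x_u * (1.0 + k1 * r2 + k2 * r2**2)

def undistort_division(x_d, lam):
    """Division model (single parameter), distorted -> undistorted:
    x_u = x_d / (1 + lam*r_d^2), with r_d the distorted radius.
    No iterative solve is needed, unlike the polynomial model."""
    r2 = np.sum(x_d**2, axis=1, keepdims=True)
    return x_d / (1.0 + lam * r2)

pts = np.array([[0.1, 0.0], [0.3, 0.4], [-0.5, 0.2]])
barrel = distort_polynomial(pts, k1=-0.2, k2=0.0)   # barrel: points pulled inward
recovered = undistort_division(barrel, lam=-0.2)    # approximate inverse for small radii
```

For small radii the single-parameter division model approximately inverts a one-coefficient polynomial distortion, but the residual grows with r⁴, which is one face of the limitation the paper reports.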
Autocalibration with the Minimum Number of Cameras with Known Pixel Shape
In 3D reconstruction, the recovery of the calibration parameters of the
cameras is paramount since it provides metric information about the observed
scene, e.g., measures of angles and ratios of distances. Autocalibration
enables the estimation of the camera parameters without using a calibration
device, but by enforcing simple constraints on the camera parameters. In the
absence of information about the internal camera parameters such as the focal
length and the principal point, the knowledge of the camera pixel shape is
usually the only available constraint. Given a projective reconstruction of a
rigid scene, we address the problem of the autocalibration of a minimal set of
cameras with known pixel shape and otherwise arbitrarily varying intrinsic and
extrinsic parameters. We propose an algorithm that only requires 5 cameras (the
theoretical minimum), thus halving the number of cameras required by previous
algorithms based on the same constraint. To this purpose, we introduce as our
basic geometric tool the six-line conic variety (SLCV), consisting of the set
of planes intersecting six given lines of 3D space in points of a conic. We
show that the set of solutions of the Euclidean upgrading problem for three
cameras with known pixel shape can be parameterized in a computationally
efficient way. This parameterization is then used to solve autocalibration from
five or more cameras, reducing the three-dimensional search space to a
two-dimensional one. We provide experiments with real images showing the good
performance of the technique.
Comment: 19 pages, 14 figures, 7 tables, J. Math. Imaging Vi
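What "known pixel shape" buys can be made concrete without the SLCV machinery: for square pixels, the intrinsic matrix K has zero skew and unit aspect ratio, and once a camera is Euclidean those entries can be read off from an RQ decomposition of its leading 3×3 block. The sketch below uses a hypothetical camera, not the paper's algorithm.

```python
import numpy as np

def rq(M):
    """RQ decomposition of a 3x3 matrix via numpy's QR on a flipped copy,
    with signs fixed so the triangular factor has a positive diagonal."""
    q, r = np.linalg.qr(np.flipud(M).T)
    K = np.fliplr(np.flipud(r.T))   # upper triangular
    Q = np.flipud(q.T)              # orthogonal
    D = np.diag(np.sign(np.diag(K)))  # D is its own inverse
    return K @ D, D @ Q

# Hypothetical Euclidean camera with square pixels: skew 0, aspect ratio 1.
K_true = np.array([[800.0,   0.0, 320.0],
                   [  0.0, 800.0, 240.0],
                   [  0.0,   0.0,   1.0]])
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
P = K_true @ np.hstack([R_true, [[0.1], [0.2], [2.0]]])

K, R = rq(P[:, :3])
K = K / K[2, 2]
# "Known pixel shape" means these two entries are known a priori,
# which is the constraint the autocalibration above exploits:
skew = K[0, 1]                  # 0 for rectangular pixels
aspect = K[0, 0] / K[1, 1]      # 1 for square pixels
```

A successful Euclidean upgrade is exactly a change of projective frame after which every camera factors this way with the expected skew and aspect ratio.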