Efficient generic calibration method for general cameras with single centre of projection
Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed with the goal of generality and it is therefore sub-optimal for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, hyperboloidal catadioptric). This paper proposes novel improvements to the standard generic calibration method for central cameras that reduce its complexity and improve its accuracy and robustness. Improvements are achieved by taking advantage of the geometric constraints resulting from a single centre of projection. Input data for the algorithm is acquired using active grids, the performance of which is characterised. A new linear estimation stage for the generic algorithm is proposed incorporating classical pinhole calibration techniques, and it is shown to be significantly more accurate than the linear estimation stage of the standard method. A linear method for pose estimation is also proposed and evaluated against the existing polynomial method. Distortion correction and motion reconstruction experiments are conducted with real data for a hyperboloidal catadioptric sensor for both the standard and proposed methods. Results show the accuracy and robustness of the proposed method to be superior to those of the standard method.
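The central constraint the paper exploits can be illustrated with a small sketch (an illustration of the geometry, not the paper's algorithm): in a generic ray-based calibration each pixel is assigned a 3D ray, and for a central camera all rays must pass through a single point, which linear least squares can recover.

```python
import numpy as np

def nearest_point_to_rays(origins, dirs):
    """Least-squares 3D point closest to a bundle of rays.

    Ray i is the line origins[i] + t * dirs[i]. Minimising the sum of
    squared point-to-ray distances gives the linear system
    sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) o_i.
    """
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Synthetic central camera: every pixel ray passes through the centre c.
rng = np.random.default_rng(0)
c = np.array([0.1, -0.2, 0.05])
targets = rng.normal(size=(20, 3)) + np.array([0.0, 0.0, 5.0])
dirs = targets - c
origins = c + 2.0 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

centre = nearest_point_to_rays(origins, dirs)  # recovers c
```

A real calibration would estimate such rays from active-grid correspondences; here the rays are synthetic, so the recovered centre matches `c` up to numerical precision.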
Non-parametric Models of Distortion in Imaging Systems
Traditional radial lens distortion models are based on the physical construction of lenses. However, manufacturing defects and physical shock often cause the actual observed distortion to be different from what can be modeled by the physically motivated models.
In this work, we initially propose a Gaussian process radial distortion model as an alternative to the physically motivated models. The non-parametric nature of this model helps implicitly select the right model complexity, whereas for traditional distortion models one must perform explicit model selection to decide the right parametric complexity.
Next, we forego the radial distortion assumption and present a completely non-parametric, mathematically motivated distortion model based on locally-weighted homographies. The separation from an underlying physical model allows this model to capture arbitrary sources of distortion. We then apply this fully non-parametric distortion model to a zoom lens, where the distortion complexity can vary across zoom levels and the lens exhibits noticeable non-radial distortion.
Through our experiments and evaluation, we show that the proposed models are as accurate as the traditional parametric models at characterizing radial distortion while flexibly capturing non-radial distortion if present in the imaging system.
PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120690/1/rpradeep_1.pd
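The Gaussian-process idea can be sketched in a few lines (a toy stand-in for the thesis model, with illustrative kernel hyper-parameters): regress radial displacement against radius with an RBF kernel, so that complexity is controlled by the kernel rather than by a hand-picked polynomial order.

```python
import numpy as np

def gp_fit_predict(r_train, d_train, r_test, length=0.3, sigma_f=1.0, sigma_n=1e-3):
    """GP regression with an RBF kernel: predict radial displacement at r_test."""
    def k(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(r_train, r_train) + sigma_n**2 * np.eye(len(r_train))
    alpha = np.linalg.solve(K, d_train)      # posterior mean weights
    return k(r_test, r_train) @ alpha

# Synthetic "observed" distortion that a fixed-order polynomial may misfit:
r = np.linspace(0.0, 1.0, 50)
displacement = 0.1 * r**3 - 0.02 * r**5      # classic radial terms as ground truth
pred = gp_fit_predict(r, displacement, r)
```

No model order is selected anywhere; the same code fits a cubic, a quintic, or a non-polynomial displacement curve equally well.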
BabelCalib: A Universal Approach to Calibrating Central Cameras
Existing calibration methods occasionally fail for large field-of-view cameras due to the non-linearity of the underlying problem and the lack of good initial values for all parameters of the used camera model. This might occur because a simpler projection model is assumed in an initial step, or a poor initial guess for the internal parameters is pre-defined. A lot of the difficulties of general camera calibration lie in the use of a forward projection model. We side-step these challenges by first proposing a solver to calibrate the parameters in terms of a back-projection model and then regress the parameters for a target forward model. These steps are incorporated in a robust estimation framework to cope with outlying detections. Extensive experiments demonstrate that our approach is very reliable and returns the most accurate calibration parameters as measured on the downstream task of absolute pose estimation on test sets. The code is released at https://github.com/ylochman/babelcalib
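The back-projection-then-regression idea can be illustrated on radii alone (a simplified sketch, not BabelCalib itself): take a one-parameter division model as the back-projection model, then fit a forward polynomial model to it by plain linear least squares.

```python
import numpy as np

# Back-projection via the one-parameter division model:
# undistorted radius r_u = r_d / (1 + lam * r_d**2). The coefficient is illustrative.
lam = -0.05
r_d = np.linspace(0.01, 1.0, 200)
r_u = r_d / (1.0 + lam * r_d**2)

# Regress a forward polynomial model r_d ~= r_u * (1 + k1*r_u^2 + k2*r_u^4),
# i.e. recover forward-model coefficients from the calibrated back-projection.
A = np.column_stack([r_u**3, r_u**5])
k1, k2 = np.linalg.lstsq(A, r_d - r_u, rcond=None)[0]
r_d_fit = r_u * (1.0 + k1 * r_u**2 + k2 * r_u**4)
```

Because the regression target is generated by the already-calibrated back-projection model, this second step is a well-posed linear fit rather than a non-linear calibration from scratch.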
Towards dynamic camera calibration for constrained flexible mirror imaging
Flexible mirror imaging systems consisting of a perspective camera viewing a scene reflected in a flexible mirror can provide direct control over image field-of-view and resolution. However, calibration of such systems is difficult due to the vast range of possible mirror shapes and the flexible nature of the system. This paper proposes the fundamentals of a dynamic calibration approach for flexible mirror imaging systems by examining the constrained case of single-dimensional flexing. The calibration process consists of an initial primary calibration stage followed by in-service dynamic calibration. Dynamic calibration uses a linear approximation to initialise a non-linear minimisation step, the result of which is the estimate of the mirror surface shape. The method is easier to implement than existing calibration methods for flexible mirror imagers, requiring only two images of a calibration grid for each dynamic calibration update. Experimental results with both simulated and real data are presented that demonstrate the capabilities of the proposed approach.
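The two-stage pattern described above, a linear approximation used to initialise a non-linear minimisation, can be sketched on a toy one-dimensional curve fit (the measurement model below is hypothetical, not the paper's mirror model):

```python
import numpy as np

# Toy 1-D "mirror profile" z(x) = a*x^2 + b*x, observed through a mildly
# non-linear measurement y = z + 0.05*z^2 (a stand-in for projection
# non-linearity; purely illustrative).
a_true, b_true = 0.8, -0.3
x = np.linspace(-1.0, 1.0, 40)
z = a_true * x**2 + b_true * x
y = z + 0.05 * z**2

# Stage 1: linear approximation -- drop the non-linear term and solve the
# resulting linear least-squares problem for an initial (a, b).
A = np.column_stack([x**2, x])
a0, b0 = np.linalg.lstsq(A, y, rcond=None)[0]

# Stage 2: Gauss-Newton refinement of the full model, initialised from stage 1.
p = np.array([a0, b0])
for _ in range(20):
    zc = p[0] * x**2 + p[1] * x
    r = zc + 0.05 * zc**2 - y            # residuals of the full model
    dz = 1.0 + 0.1 * zc                  # d(residual)/d(zc)
    J = dz[:, None] * np.column_stack([x**2, x])
    p -= np.linalg.solve(J.T @ J, J.T @ r)
a_hat, b_hat = p
```

The linear estimate is biased (it ignores the non-linear term) but lands close enough that the non-linear refinement converges to the exact profile parameters.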
Infrastructure-based Multi-Camera Calibration using Radial Projections
Multi-camera systems are an important sensor platform for intelligent systems such as self-driving cars. Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually. However, extrinsic calibration of systems with little to no visual overlap between the cameras is a challenge. Given the camera intrinsics, infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion. In this paper, we propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach. Assuming that the distortion is mainly radial, we introduce a two-stage approach. We first estimate the camera-rig extrinsics up to a single unknown translation component per camera. Next, we solve for both the intrinsic parameters and the missing translation components. Extensive experiments on multiple indoor and outdoor scenes with multiple multi-camera systems show that our calibration method achieves high accuracy and robustness. In particular, our approach is more robust than the naive approach of first estimating intrinsic parameters and pose per camera before refining the extrinsic parameters of the system. The implementation is available at https://github.com/youkely/InfrasCal.
Comment: ECCV 202
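The radial assumption is what makes the two-stage split possible: a purely radial distortion rescales each image point along its line through the distortion centre, so the point's bearing about that centre is unchanged and constrains the extrinsics independently of the radial profile or focal length. A minimal check of that invariance (with illustrative coefficients):

```python
import numpy as np

def radial_distort(pts, center, k1=-0.2, k2=0.05):
    """Apply a purely radial distortion about `center` (coefficients illustrative)."""
    v = pts - center
    r2 = np.sum(v**2, axis=1, keepdims=True)
    return center + v * (1.0 + k1 * r2 + k2 * r2**2)

rng = np.random.default_rng(1)
center = np.array([0.02, -0.01])
pts = rng.uniform(-0.5, 0.5, size=(100, 2))
dist = radial_distort(pts, center)

# The bearing of each point about the distortion centre is unchanged:
v0 = pts - center
v1 = dist - center
ang_before = np.arctan2(v0[:, 1], v0[:, 0])
ang_after = np.arctan2(v1[:, 1], v1[:, 0])
```

This is why radial projections can pin down the rig extrinsics before any radial distortion or focal-length parameters are estimated.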
Neural Lens Modeling
Recent methods for 3D reconstruction and rendering increasingly benefit from end-to-end optimization of the entire image formation process. However, this approach is currently limited: effects of the optical hardware stack and in particular lenses are hard to model in a unified way. This limits the quality that can be achieved for camera calibration and the fidelity of the results of 3D reconstruction. In this paper, we propose NeuroLens, a neural lens model for distortion and vignetting that can be used for point projection and ray casting and can be optimized through both operations. This means that it can (optionally) be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction, e.g., while optimizing a radiance field. To evaluate the performance of our proposed model, we create a comprehensive dataset assembled from the Lensfun database with a multitude of lenses. Using this and other real-world datasets, we show that the quality of our proposed lens model outperforms standard packages as well as recent approaches while being much easier to use and extend. The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems.
Comment: To be presented at CVPR 2023. Project webpage: https://neural-lens.github.i
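The core idea, representing distortion as a learned field optimised by gradient descent, can be sketched with a tiny numpy MLP (a toy stand-in, not NeuroLens): fit the per-point offsets produced by a synthetic radial distortion in normalized coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: normalized image points and the offsets produced by a
# synthetic radial distortion (k1 is illustrative); distorted = pts + offsets.
pts = rng.uniform(-1.0, 1.0, size=(256, 2))
k1 = -0.1
r2 = np.sum(pts**2, axis=1, keepdims=True)
offsets = pts * k1 * r2

# One-hidden-layer MLP (2 -> 32 -> 2) fitted by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(2, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(3000):
    h = np.tanh(pts @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - offsets
    # Backprop of the mean-squared-error loss.
    g_pred = 2.0 * err / len(pts)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1.0 - h**2)      # tanh derivative
    gW1 = pts.T @ g_h; gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(pts @ W1 + b1) @ W2 + b2 - offsets)**2))
```

Because the whole map is differentiable, the same field could in principle be refined jointly with a downstream reconstruction objective, which is the property the paper builds on.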