Beyond Gröbner Bases: Basis Selection for Minimal Solvers
Many computer vision applications require robust estimation of the underlying
geometry, in terms of camera motion and 3D structure of the scene. These robust
methods often rely on running minimal solvers in a RANSAC framework. In this
paper we show how we can make polynomial solvers based on the action matrix
method faster, by careful selection of the monomial bases. These monomial bases
have traditionally been based on a Gröbner basis for the polynomial ideal.
Here we describe how we can enumerate all such bases in an efficient way. We
also show that going beyond Gröbner bases leads to more efficient solvers in
many cases. We present a novel basis sampling scheme that we evaluate on a
number of problems.
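The action matrix method mentioned above turns polynomial system solving into an eigenvalue problem: multiplication by a variable acts linearly on the quotient ring modulo the ideal, and the solutions appear as eigenvalues of that linear map. A minimal univariate sketch, where the action matrix is simply the companion matrix (the function name is illustrative, not from the paper):

```python
import numpy as np

# Univariate illustration of the action-matrix idea: multiplication by x
# in the quotient ring R[x]/(p) is a linear map whose matrix (here the
# companion matrix) has the roots of p as its eigenvalues.
def companion_roots(coeffs):
    """Roots of the monic polynomial x^n + c[0] x^(n-1) + ... + c[n-1]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)              # shift: x * x^k -> x^(k+1)
    C[:, -1] = -np.asarray(coeffs)[::-1]    # reduce x^n modulo p
    return np.linalg.eigvals(C)

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2)
roots = sorted(companion_roots([-3.0, 2.0]).real)
```

In the multivariate case the same construction is carried out modulo the ideal, and the choice of monomial basis for the quotient ring is exactly the degree of freedom the paper exploits.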
A clever elimination strategy for efficient minimal solvers
We present a new insight into the systematic generation of minimal solvers in
computer vision, which leads to smaller and faster solvers. Many minimal
problem formulations are coupled sets of linear and polynomial equations where
image measurements enter the linear equations only. We show that it is useful
to solve such systems by first eliminating all the unknowns that do not appear
in the linear equations and then extending solutions to the rest of unknowns.
This can be generalized to fully non-linear systems by linearization via
lifting. We demonstrate that this approach leads to more efficient solvers in
three problems of partially calibrated relative camera pose computation with
unknown focal length and/or radial distortion. Our approach also generates new
interesting constraints on the fundamental matrices of partially calibrated
cameras, which were not known before.
Comment: 13 pages, 7 figures
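The elimination strategy can be illustrated on a toy system. Below is a hedged sketch (the equations and the symbol f are invented for illustration, not taken from the paper) that eliminates the unknown absent from the linear equation via a resultant, solves the reduced system in the remaining unknowns, and then extends each solution:

```python
import sympy as sp

# Toy system in the spirit of the paper's strategy: the measurement (3)
# enters only the linear equation, and the extra unknown f appears only
# in the polynomial part. All equations here are invented.
x, y, f = sp.symbols('x y f')
linear = sp.Eq(x + y, 3)        # linear part (holds the measurement)
p1 = x*f - y                    # polynomial part, couples x, y to f
p2 = y*f - 4*x

# Step 1: eliminate f (it does not appear in the linear equation) via
# the resultant, leaving a constraint on the linear unknowns alone.
constraint = sp.resultant(p1, p2, f)        # y**2 - 4*x**2 up to sign
# Step 2: solve the reduced system in x, y.
sols = sp.solve([linear, sp.Eq(constraint, 0)], [x, y], dict=True)
# Step 3: extend each solution back to the eliminated unknown f.
full = [{**s, f: sp.solve(p1.subs(s), f)[0]} for s in sols]
```

The resultant plays the role of the "new constraints on the fundamental matrix" in the abstract: it is a relation purely among the unknowns of the linear part.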
On the Issue of Camera Calibration with Narrow Angular Field of View
This paper considers the issue of calibrating a
camera with narrow angular field of view using standard, perspective
methods in computer vision. In doing so, the significance
of perspective distortion both for camera calibration and for
pose estimation is revealed. Since cameras with a narrow angular field of
view make it difficult to obtain images that are rich in perspectivity,
the accuracy of the calibration results is, as expected, low.
To compensate for this loss, we propose an alternative method that
utilizes the pose readings of a robotic manipulator.
It facilitates accurate pose estimation by nonlinear optimization,
minimizing reprojection errors and errors in the manipulator
transformations at the same time. Accurate pose estimation in
turn enables accurate parametrization of a perspective camera.
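The joint optimization described above can be sketched as a single residual vector that stacks reprojection errors with weighted manipulator-transform errors. A toy Gauss-Newton implementation; the parameters (one focal length, one translation component) and the weight are purely illustrative, not the paper's actual parametrization:

```python
import numpy as np

# Joint objective: reprojection residuals plus residuals on the
# manipulator-reported pose, so the optimizer is anchored even when the
# image itself carries little perspective information.
def residuals(params, pts3d, pts2d, t_robot, w=10.0):
    f, tz = params                        # focal length, camera z-offset
    Z = pts3d[:, 2] + tz
    proj = f * pts3d[:, :2] / Z[:, None]  # pinhole projection
    r_img = (proj - pts2d).ravel()        # reprojection errors
    r_rob = w * np.array([tz - t_robot])  # manipulator transform error
    return np.concatenate([r_img, r_rob])

def gauss_newton(r, x0, args, iters=20, eps=1e-6):
    x = np.asarray(x0, float)
    for _ in range(iters):
        r0 = r(x, *args)
        # Forward-difference Jacobian, one column per parameter.
        J = np.column_stack([(r(x + eps * np.eye(len(x))[i], *args) - r0) / eps
                             for i in range(len(x))])
        x = x - np.linalg.lstsq(J, r0, rcond=None)[0]
    return x

# Synthetic data: true f = 800, tz = 0.5; the robot also reports tz = 0.5.
rng = np.random.default_rng(0)
P = rng.uniform([-1, -1, 4], [1, 1, 6], (20, 3))
obs = 800 * P[:, :2] / (P[:, 2] + 0.5)[:, None]
f_est, tz_est = gauss_newton(residuals, [700.0, 0.3], (P, obs, 0.5))
```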
Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes
In this paper we address the problem of multiple camera calibration in the
presence of a homogeneous scene, and without the possibility of employing
calibration object based methods. The proposed solution exploits salient
features present in a larger field of view, but instead of employing active
vision we replace the cameras with stereo rigs featuring a long focal analysis
camera, as well as a short focal registration camera. Thus, we are able to
propose an accurate solution which does not require intrinsic variation models
as in the case of zooming cameras. Moreover, the availability of the two views
simultaneously in each rig allows for pose re-estimation between rigs as often
as necessary. The algorithm has been successfully validated in an indoor
setting, as well as on a difficult scene featuring a highly dense pilgrim crowd
in Makkah.
Comment: 13 pages, 6 figures, submitted to Machine Vision and Applications
Efficient generic calibration method for general cameras with single centre of projection
Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed with the goal of generality and it is therefore sub-optimal for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, hyperboloidal catadioptric). This paper proposes novel improvements to the standard generic calibration method for central cameras that reduce its complexity, and improve its accuracy and robustness. Improvements are achieved by taking advantage of the geometric constraints resulting from a single centre of projection. Input data for the algorithm is acquired using active grids, the performance of which is characterised. A new linear estimation stage to the generic algorithm is proposed incorporating classical pinhole calibration techniques, and it is shown to be significantly more accurate than the linear estimation stage of the standard method. A linear method for pose estimation is also proposed and evaluated against the existing polynomial method. Distortion correction and motion reconstruction experiments are conducted with real data for a hyperboloidal catadioptric sensor for both the standard and proposed methods. Results show the accuracy and robustness of the proposed method to be superior to those of the standard method.
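A linear estimation stage in the classical pinhole spirit can be sketched with the textbook Direct Linear Transform (DLT): each 2D-3D correspondence contributes two equations that are linear in the twelve entries of the projection matrix. This is a generic sketch of such a linear stage, not the paper's exact algorithm:

```python
import numpy as np

# Textbook DLT: stack two linear equations per correspondence and take
# the SVD null vector as the 3x4 projection matrix (up to scale).
def dlt_projection(pts3d, pts2d):
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)     # defined up to scale

# Synthetic check: project with a known P, then recover it.
rng = np.random.default_rng(1)
P_true = np.hstack([np.diag([500.0, 500.0, 1.0]),
                    np.array([[320.0], [240.0], [1.0]])])
pts = rng.uniform([-1, -1, 3], [1, 1, 6], (12, 3))
homog = np.hstack([pts, np.ones((12, 1))])
proj = (P_true @ homog.T).T
uv = proj[:, :2] / proj[:, 2:]
P_est = dlt_projection(pts, uv)
```

For a central but non-pinhole camera the same machinery applies once pixels are replaced by the calibrated rays through the single centre of projection.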
Trust Your IMU: Consequences of Ignoring the IMU Drift
In this paper, we argue that modern pre-integration methods for inertial
measurement units (IMUs) are accurate enough to ignore the drift for short time
intervals. This allows us to consider a simplified camera model, which in turn
admits further intrinsic calibration. We develop the first-ever solver to
jointly solve the relative pose problem with unknown and equal focal length and
radial distortion profile while utilizing the IMU data. Furthermore, we show
significant speed-up compared to state-of-the-art algorithms, with small or
negligible loss in accuracy for partially calibrated setups. The proposed
algorithms are tested on both synthetic and real data, where the latter is
focused on navigation using unmanned aerial vehicles (UAVs). We evaluate the
proposed solvers on different commercially available low-cost UAVs, and
demonstrate that the novel assumption on IMU drift is feasible in real-life
applications. The extended intrinsic auto-calibration enables us to use
distorted input images, making tedious calibration processes obsolete
compared to current state-of-the-art methods.
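A key consequence of trusting the IMU rotation is that the epipolar constraint becomes linear in the translation. A hedged numpy sketch on synthetic data; this is not the authors' solver, which additionally estimates focal length and radial distortion:

```python
import numpy as np

# With the relative rotation R taken from IMU pre-integration (drift
# ignored over a short interval), the epipolar constraint
# x2^T [t]_x R x1 = 0 is linear in t: each correspondence gives
# ((R x1) x x2) . t = 0, so t spans the null space of a stacked matrix.
def translation_given_rotation(R, x1, x2):
    """Unit translation, up to sign, from bearing vectors and known R."""
    A = np.cross((R @ x1.T).T, x2)      # rows (R x1_i) x x2_i
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]                        # null vector of A

rng = np.random.default_rng(2)
t_true = np.array([0.3, -0.1, 0.05])
t_true /= np.linalg.norm(t_true)
th = 0.1                                 # small yaw, as an IMU might report
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
pts = rng.uniform([-1, -1, 4], [1, 1, 8], (15, 3))   # points, cam-1 frame
x1 = pts / np.linalg.norm(pts, axis=1, keepdims=True)
pts2 = (R @ pts.T).T + t_true                        # same points, cam-2 frame
x2 = pts2 / np.linalg.norm(pts2, axis=1, keepdims=True)
t_est = translation_given_rotation(R, x1, x2)
```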
Infrastructure-based Multi-Camera Calibration using Radial Projections
Multi-camera systems are an important sensor platform for intelligent systems
such as self-driving cars. Pattern-based calibration techniques can be used to
calibrate the intrinsics of the cameras individually. However, extrinsic
calibration of systems with little to no visual overlap between the cameras is
a challenge. Given the camera intrinsics, infrastructure-based calibration
techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM
or Structure-from-Motion. In this paper, we propose to fully calibrate a
multi-camera system from scratch using an infrastructure-based approach.
Assuming that the distortion is mainly radial, we introduce a two-stage
approach. We first estimate the camera-rig extrinsics up to a single unknown
translation component per camera. Next, we solve for both the intrinsic
parameters and the missing translation components. Extensive experiments on
multiple indoor and outdoor scenes with multiple multi-camera systems show that
our calibration method achieves high accuracy and robustness. In particular,
our approach is more robust than the naive approach of first estimating
intrinsic parameters and pose per camera before refining the extrinsic
parameters of the system. The implementation is available at
https://github.com/youkely/InfrasCal.
Comment: ECCV 2020
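The radial assumption can be made concrete: purely radial distortion displaces a pixel only along the line through the distortion centre, so directions from the centre are distortion-invariant, which is what allows extrinsics to be estimated before the intrinsics are known. A small illustrative check using the one-parameter division model (an assumption of this sketch, not necessarily the paper's model):

```python
import numpy as np

# Purely radial distortion: each pixel p moves along the ray from the
# distortion centre c, so the direction of (p - c) does not change.
def distort_division(p, c, lam):
    """Apply division-model radial distortion about centre c."""
    d = p - c
    r2 = np.sum(d**2, axis=1, keepdims=True)
    return c + d / (1.0 + lam * r2)

rng = np.random.default_rng(3)
c = np.array([320.0, 240.0])                  # assumed distortion centre
p = rng.uniform([0, 0], [640, 480], (10, 2))  # undistorted pixels
q = distort_division(p, c, -1e-6)             # distorted pixels

# Radial directions from the centre agree before and after distortion.
dir_p = (p - c) / np.linalg.norm(p - c, axis=1, keepdims=True)
dir_q = (q - c) / np.linalg.norm(q - c, axis=1, keepdims=True)
```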