3,257 research outputs found
Beyond Gröbner Bases: Basis Selection for Minimal Solvers
Many computer vision applications require robust estimation of the underlying
geometry, in terms of camera motion and 3D structure of the scene. These robust
methods often rely on running minimal solvers in a RANSAC framework. In this
paper we show how we can make polynomial solvers based on the action matrix
method faster, by careful selection of the monomial bases. These monomial bases
have traditionally been based on a Gröbner basis for the polynomial ideal.
Here we describe how we can enumerate all such bases in an efficient way. We
also show that going beyond Gröbner bases leads to more efficient solvers in
many cases. We present a novel basis sampling scheme that we evaluate on a
number of problems.
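The action matrix method named above turns polynomial root finding into a linear-algebra eigenvalue problem: the eigenvalues of the matrix representing multiplication by a variable in the quotient ring are the roots. Its simplest instance is the univariate companion matrix, which this short sketch (ours, not the paper's solver) illustrates:

```python
import numpy as np

def companion_roots(coeffs):
    """Roots of a monic polynomial via its companion matrix.

    The companion matrix is the simplest action matrix: it represents
    multiplication by x in the quotient ring, so its eigenvalues are
    the polynomial's roots.
    coeffs: [c0, c1, ..., c_{n-1}] for x^n + c_{n-1} x^{n-1} + ... + c0.
    """
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)      # sub-diagonal shift structure
    C[:, -1] = -np.asarray(coeffs)  # last column holds the coefficients
    return np.linalg.eigvals(C)

# x^2 - 3x + 2 = 0 has roots 1 and 2.
roots = np.sort(companion_roots([2.0, -3.0]).real)
```

Multivariate solvers follow the same pattern, but the choice of monomial basis for the quotient ring (the topic of the paper) determines the size and conditioning of the action matrix.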
A clever elimination strategy for efficient minimal solvers
We present a new insight into the systematic generation of minimal solvers in
computer vision, which leads to smaller and faster solvers. Many minimal
problem formulations are coupled sets of linear and polynomial equations where
image measurements enter the linear equations only. We show that it is useful
to solve such systems by first eliminating all the unknowns that do not appear
in the linear equations and then extending solutions to the rest of unknowns.
This can be generalized to fully non-linear systems by linearization via
lifting. We demonstrate that this approach leads to more efficient solvers in
three problems of partially calibrated relative camera pose computation with
unknown focal length and/or radial distortion. Our approach also generates new
interesting constraints on the fundamental matrices of partially calibrated
cameras, which were not known before.
Comment: 13 pages, 7 figures
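The elimination idea described above can be illustrated, on our own toy system rather than the paper's camera equations, with SymPy's Gröbner basis routine in lexicographic order, which eliminates the leading variables first:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Toy coupled system: x^2 + y^2 = 1 and x = y.
# A lex Groebner basis with x > y eliminates x: the last basis
# element is a univariate polynomial in y alone.
G = sp.groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
univariate = G.exprs[-1]          # depends on y only
y_sols = sp.solve(univariate, y)  # then extend via x = y
```

Minimal solvers follow this pattern at scale: eliminate the unknowns that do not appear in the linear equations, solve the smaller system, and extend each solution back to the full set of unknowns.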
Infrastructure-based Multi-Camera Calibration using Radial Projections
Multi-camera systems are an important sensor platform for intelligent systems
such as self-driving cars. Pattern-based calibration techniques can be used to
calibrate the intrinsics of the cameras individually. However, extrinsic
calibration of systems with little to no visual overlap between the cameras is
a challenge. Given the camera intrinsics, infrastructure-based calibration
techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM
or Structure-from-Motion. In this paper, we propose to fully calibrate a
multi-camera system from scratch using an infrastructure-based approach.
Assuming that the distortion is mainly radial, we introduce a two-stage
approach. We first estimate the camera-rig extrinsics up to a single unknown
translation component per camera. Next, we solve for both the intrinsic
parameters and the missing translation components. Extensive experiments on
multiple indoor and outdoor scenes with multiple multi-camera systems show that
our calibration method achieves high accuracy and robustness. In particular,
our approach is more robust than the naive approach of first estimating
intrinsic parameters and pose per camera before refining the extrinsic
parameters of the system. The implementation is available at
https://github.com/youkely/InfrasCal.
Comment: ECCV 202
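The "mainly radial" assumption yields a constraint that is independent of focal length and of the distortion profile: the observation, centered at the principal point, must be parallel to the (x, y) part of the point in the camera frame. A minimal sketch in our own notation (not the authors' implementation) also shows why one translation component stays unknown in the first stage: the forward component drops out of the constraint entirely.

```python
import numpy as np

def radial_alignment_residual(R, t, X, uv, pp):
    """Radial alignment constraint for a camera with purely radial
    distortion: the centered observation uv - pp must be parallel to
    the (x, y) part of the point in the camera frame, regardless of
    focal length or distortion profile. Illustrative sketch only."""
    Xc = R @ X + t                       # point in camera coordinates
    d = uv - pp                          # observation centered at principal point
    return d[0] * Xc[1] - d[1] * Xc[0]   # 2D cross product; 0 if aligned

# Synthetic check: any radial scaling of the ideal projection
# (i.e. any radial distortion) leaves the residual at zero.
R, t, pp = np.eye(3), np.array([0.1, -0.2, 1.0]), np.zeros(2)
X = np.array([0.3, 0.4, 2.0])
Xc = R @ X + t
uv = pp + 0.7 * (Xc[:2] / Xc[2])   # arbitrary radial scaling factor
res = radial_alignment_residual(R, t, X, uv, pp)
```

Note that the residual involves only Xc[0] and Xc[1], so the translation along the optical axis never appears, matching the paper's "up to a single unknown translation component per camera".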
MLPnP - A Real-Time Maximum Likelihood Solution to the Perspective-n-Point Problem
In this paper, a statistically optimal solution to the Perspective-n-Point
(PnP) problem is presented. Many solutions to the PnP problem are geometrically
optimal, but do not consider the uncertainties of the observations. In
addition, it would be desirable to have an internal estimation of the accuracy
of the estimated rotation and translation parameters of the camera pose. Thus,
we propose a novel maximum likelihood solution to the PnP problem that
incorporates image observation uncertainties while remaining real-time
capable. Further, the presented method is general, as it works with 3D
direction vectors instead of 2D image points and is thus able to cope with
arbitrary central camera models. This is achieved by projecting (and thus
reducing) the covariance matrices of the observations to the corresponding
vector tangent space.
Comment: Submitted to the ISPRS congress (2016) in Prague. Oral Presentation.
Published in ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-3,
131-13
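The covariance projection step can be sketched as follows, assuming a 3×3 bearing-vector covariance and using an SVD null-space basis for the tangent plane (a minimal illustration of the idea in our own notation, not the MLPnP implementation):

```python
import numpy as np

def project_cov_to_tangent(v, Sigma3):
    """Project a 3x3 covariance of a bearing vector onto the 2D
    tangent space of the unit sphere at v, reducing it to a 2x2
    covariance. Sketch only; basis construction via SVD null space."""
    v = v / np.linalg.norm(v)
    # Orthonormal basis of the tangent plane = null space of v^T.
    _, _, Vt = np.linalg.svd(v.reshape(1, 3))
    J = Vt[1:].T                  # 3x2; columns span the tangent plane
    return J.T @ Sigma3 @ J       # reduced 2x2 covariance

v = np.array([0.0, 0.0, 1.0])
Sigma = np.diag([1e-4, 2e-4, 5e-2])   # large uncertainty along the ray
Sigma_t = project_cov_to_tangent(v, Sigma)
```

The component of uncertainty along the ray direction itself (5e-2 above) is discarded, which is exactly why the tangent-space representation works for arbitrary central camera models.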
Trust Your IMU: Consequences of Ignoring the IMU Drift
In this paper, we argue that modern pre-integration methods for inertial
measurement units (IMUs) are accurate enough to ignore the drift for short time
intervals. This allows us to consider a simplified camera model, which in turn
admits further intrinsic calibration. We develop the first-ever solver to
jointly solve the relative pose problem with unknown and equal focal length and
radial distortion profile while utilizing the IMU data. Furthermore, we show
significant speed-up compared to state-of-the-art algorithms, with small or
negligible loss in accuracy for partially calibrated setups. The proposed
algorithms are tested on both synthetic and real data, where the latter is
focused on navigation using unmanned aerial vehicles (UAVs). We evaluate the
proposed solvers on different commercially available low-cost UAVs, and
demonstrate that the novel assumption on IMU drift is feasible in real-life
applications. The extended intrinsic auto-calibration enables us to use
distorted input images, making the tedious calibration processes required by
current state-of-the-art methods obsolete.
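The core simplification, a rotation known from IMU pre-integration with drift neglected, makes the epipolar constraint x2^T [t]_x R x1 = 0 linear in the translation. A minimal sketch (ours; the paper's solvers additionally estimate focal length and radial distortion):

```python
import numpy as np

def translation_from_known_rotation(R, x1, x2):
    """With rotation R known (e.g. from IMU pre-integration), the
    epipolar constraint x2^T [t]_x R x1 = 0 is linear in t.
    Each correspondence gives t . (R x1 x x2) = 0 (scalar triple
    product), so t spans the null space of the stacked rows.
    x1, x2: (n, 3) arrays of homogeneous points / bearing vectors."""
    rows = [np.cross(R @ a, b) for a, b in zip(x1, x2)]
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    t = Vt[-1]                       # null-space vector
    return t / np.linalg.norm(t)     # translation up to sign and scale
```

Two correspondences already fix t up to scale; extra points make the least-squares null space robust to noise, which is what enables the reported speed-ups over full relative-pose solvers.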
Calibrated and Partially Calibrated Semi-Generalized Homographies
In this paper, we propose the first minimal solutions for estimating the
semi-generalized homography given a perspective and a generalized camera. The
proposed solvers use five 2D-2D image point correspondences induced by a scene
plane. One of them assumes the perspective camera to be fully calibrated, while
the other solver estimates the unknown focal length together with the absolute
pose parameters. This setup is particularly important in structure-from-motion
and image-based localization pipelines, where a new camera is localized in each
step with respect to a set of known cameras and 2D-3D correspondences might not
be available. As a consequence of a clever parametrization and the elimination
ideal method, our approach only needs to solve a univariate polynomial of
degree five or three. The proposed solvers are stable and efficient as
demonstrated by a number of synthetic and real-world experiments.
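The back end of such solvers, extracting the real roots of a single univariate polynomial, can be sketched with NumPy (illustrative coefficients of our own choosing, not the paper's elimination ideal):

```python
import numpy as np

def real_roots(coeffs, tol=1e-8):
    """Real roots of a univariate polynomial (coefficients given
    highest degree first), as needed when a minimal solver reduces
    to a single quintic or cubic."""
    r = np.roots(coeffs)
    return np.sort(r.real[np.abs(r.imag) < tol])

# Degree-5 example: x(x + 1)(x - 2)(x^2 + 1)
# = x^5 - x^4 - x^3 - x^2 - 2x, real roots -1, 0, 2.
coeffs = [1.0, -1.0, -1.0, -1.0, -2.0, 0.0]
rr = real_roots(coeffs)
```

Each real root is then back-substituted to recover the remaining pose (and focal length) parameters, and complex roots are discarded as geometrically infeasible.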
Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes
In this paper we address the problem of multiple camera calibration in the
presence of a homogeneous scene, and without the possibility of employing
calibration object based methods. The proposed solution exploits salient
features present in a larger field of view, but instead of employing active
vision we replace the cameras with stereo rigs featuring a long focal analysis
camera, as well as a short focal registration camera. Thus, we are able to
propose an accurate solution which does not require intrinsic variation models
as in the case of zooming cameras. Moreover, the availability of the two views
simultaneously in each rig allows for pose re-estimation between rigs as often
as necessary. The algorithm has been successfully validated in an indoor
setting, as well as on a difficult scene featuring a highly dense pilgrim crowd
in Makkah.
Comment: 13 pages, 6 figures, submitted to Machine Vision and Application