Self-Calibration of Cameras with Euclidean Image Plane in Case of Two Views and Known Relative Rotation Angle
The internal calibration of a pinhole camera is given by five parameters that
are combined into an upper-triangular calibration matrix. If the
skew parameter is zero and the aspect ratio is equal to one, then the camera is
said to have a Euclidean image plane. In this paper, we propose a non-iterative
self-calibration algorithm for a camera with a Euclidean image plane in case the
remaining three internal parameters --- the focal length and the principal
point coordinates --- are fixed but unknown. The algorithm requires a set of point correspondences in two views and also the measured relative
rotation angle between the views. We show that the problem generically has six
solutions (including complex ones).
The algorithm has been implemented and tested both on synthetic data and on
a publicly available real dataset. The experiments demonstrate that the method
is correct, numerically stable, and robust.
Comment: 13 pages, 7 eps-figures
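For reference, the five-parameter upper-triangular calibration matrix described above, and the Euclidean-image-plane constraint that reduces it to three unknowns, can be sketched as follows (the numeric values are illustrative only):

```python
import numpy as np

def calibration_matrix(f, u0, v0, skew=0.0, aspect=1.0):
    """Upper-triangular pinhole calibration matrix with five parameters.

    The camera has a Euclidean image plane when skew == 0 and aspect == 1,
    leaving only the focal length f and principal point (u0, v0) unknown.
    """
    return np.array([
        [f,          skew,   u0],
        [0.0, aspect * f,    v0],
        [0.0,        0.0,   1.0],
    ])

# Euclidean image plane: three unknowns remain (f, u0, v0).
K = calibration_matrix(f=800.0, u0=320.0, v0=240.0)
```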
Trifocal Relative Pose from Lines at Points and its Efficient Solution
We present a new minimal problem for relative pose estimation mixing point
features with lines incident at points observed in three views and its
efficient homotopy continuation solver. We demonstrate the generality of the
approach by analyzing and solving an additional problem with mixed point and
line correspondences in three views. The minimal problems include
correspondences of (i) three points and one line and (ii) three points and two
lines through two of the points, which is reported and analyzed here for the
first time. These are difficult to solve, as they have 216 and, as shown here,
312 solutions, but they cover important practical situations where line and point
features appear together, e.g., in urban scenes or when observing curves. We
demonstrate that even such difficult problems can be solved robustly using a
suitable homotopy continuation technique and we provide an implementation
optimized for minimal problems that can be integrated into engineering
applications. Our simulated and real experiments demonstrate our solvers in the
camera geometry computation task in structure from motion. We show that the new
solvers allow for reconstructing challenging scenes where the standard two-view
initialization of structure from motion fails.
Comment: This material is based upon work supported by the National Science
Foundation under Grant No. DMS-1439786 while most authors were in residence
at Brown University's Institute for Computational and Experimental Research
in Mathematics (ICERM) in Providence, RI
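The homotopy continuation idea behind such solvers can be illustrated on a toy univariate example (not the paper's 216- or 312-solution systems): deform a start system g(x) = 0 with known roots into the target f(x) = 0 and track each root as the deformation parameter t moves from 0 to 1.

```python
def track_root(f, df, g, dg, x0, steps=100, newton_iters=5):
    """Track one root of H(x, t) = (1 - t) * g(x) + t * f(x) from t=0 to t=1.

    Simple predictor-corrector: advance t, then correct with a few Newton
    iterations on H(., t). A toy sketch of homotopy continuation, not the
    optimized solver described in the paper.
    """
    x = complex(x0)
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):  # Newton correction at fixed t
            H = (1 - t) * g(x) + t * f(x)
            dH = (1 - t) * dg(x) + t * df(x)
            x -= H / dH
    return x

# Target: f(x) = x^2 - 2. Start: g(x) = x^2 - 1, with known roots +/- 1.
f, df = lambda x: x**2 - 2, lambda x: 2 * x
g, dg = lambda x: x**2 - 1, lambda x: 2 * x
roots = [track_root(f, df, g, dg, s) for s in (1.0, -1.0)]
```

Each start root is tracked independently, which is why the technique parallelizes well even when a minimal problem has hundreds of solutions.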
Rectification from Radially-Distorted Scales
This paper introduces the first minimal solvers that jointly estimate lens
distortion and affine rectification from repetitions of rigidly transformed
coplanar local features. The proposed solvers incorporate lens distortion into
the camera model and extend accurate rectification to wide-angle images that
contain nearly any type of coplanar repeated content. We demonstrate a
principled approach to generating stable minimal solvers by the Gröbner basis
method, which is accomplished by sampling feasible monomial bases to maximize
numerical stability. Synthetic and real-image experiments confirm that the
solvers give accurate rectifications from noisy measurements when used in a
RANSAC-based estimator. The proposed solvers demonstrate superior robustness to
noise compared to the state-of-the-art. The solvers work on scenes without
straight lines and, in general, relax the strong assumptions on scene content
made by the state-of-the-art. Accurate rectifications of imagery taken with
lenses ranging from narrow-angle to near fish-eye demonstrate the wide
applicability of the proposed method. The method is fully automated, and the
code is publicly available at https://github.com/prittjam/repeats.
Comment: pre-print
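A common way to fold radial lens distortion into the camera model, as solvers of this kind often do, is the one-parameter division model (an assumption here; the abstract does not name the model used):

```python
def undistort_division(x, y, lam):
    """Undo radial distortion under the one-parameter division model.

    A distorted point (x, y), in coordinates centered at the distortion
    center, maps to the undistorted point (x, y) / (1 + lam * r2), where
    r2 = x*x + y*y. Negative lam models barrel distortion, typical of
    wide-angle and fish-eye lenses.
    """
    r2 = x * x + y * y
    s = 1.0 + lam * r2
    return x / s, y / s

# With lam = 0 the model reduces to the identity.
assert undistort_division(0.3, -0.4, 0.0) == (0.3, -0.4)
```

Jointly estimating lam with the rectification, rather than assuming lam = 0, is what lets such methods work on wide-angle imagery.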
Efficient 2D-3D Matching for Multi-Camera Visual Localization
Visual localization, i.e., determining the position and orientation of a
vehicle with respect to a map, is a key problem in autonomous driving. We
present a multi-camera visual inertial localization algorithm for large-scale
environments. To efficiently and effectively match features against a pre-built
global 3D map, we propose a prioritized feature matching scheme for
multi-camera systems. In contrast to existing works, designed for monocular
cameras, we (1) tailor the prioritization function to the multi-camera setup
and (2) run feature matching and pose estimation in parallel. This
significantly accelerates the matching and pose estimation stages and allows us
to dynamically adapt the matching efforts based on the surrounding environment.
In addition, we show how pose priors can be integrated into the localization
system to increase efficiency and robustness. Finally, we extend our algorithm
by fusing the absolute pose estimates with motion estimates from a multi-camera
visual inertial odometry pipeline (VIO). This results in a system that provides
reliable and drift-free pose estimation. Extensive experiments show that our
localization runs fast and robustly under varying conditions, and that our
extended algorithm enables reliable real-time pose estimation.
Comment: 7 pages, 5 figures
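The prioritized-matching idea can be sketched as follows; the priority function, matcher, and stopping threshold here are illustrative placeholders, not the paper's scheme (which also updates priorities online and runs pose estimation in parallel):

```python
import heapq

def prioritized_match(features, match_fn, priority_fn, enough=50):
    """Match query features against a pre-built 3D map in priority order.

    features    : iterable of query features
    match_fn    : feature -> matched 3D point id, or None (expensive step)
    priority_fn : feature -> score; higher means more likely to match
    Matching stops early once `enough` matches are found, so effort adapts
    to how quickly the current environment yields correspondences.
    """
    heap = [(-priority_fn(f), i, f) for i, f in enumerate(features)]
    heapq.heapify(heap)
    matches = []
    while heap and len(matches) < enough:
        _, i, f = heapq.heappop(heap)
        m = match_fn(f)
        if m is not None:
            matches.append((i, m))
    return matches

# Toy usage with a hypothetical matcher: even feature ids "match".
found = prioritized_match(range(10), lambda f: f if f % 2 == 0 else None,
                          lambda f: f, enough=3)
```

Processing high-priority features first means the early-termination budget is spent where matches are most likely, which is the source of the speedup over exhaustive matching.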