198 research outputs found
A Self-calibration Algorithm Based on a Unified Framework for Constraints on Multiple Views
In this paper, we propose a new self-calibration algorithm for upgrading projective space to Euclidean space. The proposed method combines the most commonly used metric constraints, including zero skew and unit aspect ratio, by formulating each constraint as a cost function within a unified framework. Additional constraints, e.g., constant principal points, can also be formulated in the same framework. The cost function is very flexible and can be composed of different constraints on different views. The upgrade process is then stated as a minimization problem, which may be solved by minimizing an upper bound of the cost function. The proposed method is non-iterative. Experimental results on synthetic and real data are presented to show the performance of the proposed method and the accuracy of the reconstructed scene. © 2012 The Author(s).
Is Dual Linear Self-Calibration Artificially Ambiguous?
This purely theoretical work investigates the problem of artificial singularities in camera self-calibration. Self-calibration allows one to upgrade a projective reconstruction to metric and has a concise and well-understood formulation based on the Dual Absolute Quadric (DAQ), a rank-3 quadric envelope satisfying (nonlinear) 'spectral constraints': it must be positive semidefinite of rank 3. The practical scenario we consider is that of square pixels, a known principal point, and a varying unknown focal length, for which generic Critical Motion Sequences (CMSs) have been thoroughly derived. The standard linear self-calibration algorithm uses the DAQ paradigm but ignores the spectral constraints. It thus has artificial CMSs, which have barely been studied so far. We propose an algebraic model of singularities based on the theory of confocal quadrics, which makes it easy to derive all types of CMSs. We first review the already known generic CMSs, for which any self-calibration algorithm fails. We then describe all CMSs for the standard linear self-calibration algorithm; among these are artificial CMSs caused by neglecting the above spectral constraints. We then show how to detect CMSs. When one is detected, it is actually possible to uniquely identify the correct self-calibration solution, based on a notion of signature of quadrics. The main conclusion of this paper is that a posteriori enforcing the spectral constraints in linear self-calibration is discriminant enough to resolve all artificial CMSs.
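As a concrete illustration of the paper's conclusion, the spectral constraints (positive semidefinite, rank 3) can be enforced a posteriori on a linearly estimated DAQ by an eigenvalue projection. A minimal numpy sketch of that step, not the authors' implementation:

```python
import numpy as np

def enforce_daq_spectral(Q):
    """Project a symmetric 4x4 matrix onto the set of PSD rank-3
    quadric envelopes, i.e. enforce the DAQ spectral constraints
    a posteriori on a linear estimate."""
    Q = 0.5 * (Q + Q.T)            # symmetrize against numerical noise
    w, V = np.linalg.eigh(Q)       # eigenvalues in ascending order
    if w.sum() < 0:                # the DAQ is only defined up to sign
        w = -w
    w = np.clip(w, 0.0, None)      # PSD: clamp negative eigenvalues
    order = np.argsort(w)
    w[order[:-3]] = 0.0            # rank 3: keep only the 3 largest
    return (V * w) @ V.T           # reassemble V diag(w) V^T
```

The projection is the nearest (Frobenius) PSD rank-3 matrix to the symmetrized input, which is the usual way such nonlinear constraints are imposed after a linear solve.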
POSE: Pseudo Object Space Error for Initialization-Free Bundle Adjustment
Bundle adjustment is a nonlinear refinement method for camera poses and 3D structure that requires sufficiently good initialization. In recent years, it was experimentally observed that useful minima can be reached even from arbitrary initialization for affine bundle adjustment problems (and fixed-rank matrix factorization instances in general). The key success factor lies in the use of the variable projection (VarPro) method, which is known to have a wide basin of convergence for such problems. In this paper, we propose the Pseudo Object Space Error (pOSE), an objective with cameras represented as a hybrid between the affine and projective models. This formulation allows us to obtain 3D reconstructions that are close to the true projective reconstructions while retaining a bilinear problem structure suitable for the VarPro method. Experimental results show that using pOSE has a high success rate in yielding faithful 3D reconstructions from random initializations, taking one step towards initialization-free structure from motion.
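The VarPro elimination this abstract credits for the wide basin of convergence can be illustrated on the plain fixed-rank factorization problem min ||M − UVᵀ||²: one factor is solved in closed form for every value of the other, and (by the classical Golub–Pereyra result) the gradient of the reduced objective equals the partial gradient at the eliminated optimum. A minimal first-order numpy sketch with backtracking, not the authors' pOSE solver:

```python
import numpy as np

def varpro_lowrank(M, rank, iters=300, seed=0):
    """Minimize ||M - U V^T||_F^2 by variable projection:
    V is eliminated in closed form for each U, and gradient steps
    with backtracking are taken on the reduced objective f(U)."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))          # arbitrary initialization

    def f(U):
        V = np.linalg.lstsq(U, M, rcond=None)[0].T   # V*(U), closed form
        return ((M - U @ V.T) ** 2).sum(), V

    f0, V = f(U)
    for _ in range(iters):
        G = -2.0 * (M - U @ V.T) @ V            # reduced gradient at U
        t = 1.0
        while True:                             # backtracking line search
            f1, V1 = f(U - t * G)
            if f1 <= f0 or t < 1e-12:
                break
            t *= 0.5
        U, V, f0 = U - t * G, V1, f1
    return U, V
```

Because V never appears as an optimization variable, the search runs over U alone, which is what gives VarPro its large basin of convergence on such bilinear problems.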
Learning Single-Image Depth from Videos using Quality Assessment Networks
Depth estimation from a single image in the wild remains a challenging problem. One main obstacle is the lack of high-quality training data for images in the wild. In this paper we propose a method to automatically generate such data through Structure-from-Motion (SfM) on Internet videos. The core of this method is a Quality Assessment Network that identifies high-quality reconstructions obtained from SfM. Using this method, we collect single-view depth training data from a large number of YouTube videos and construct a new dataset called YouTube3D. Experiments show that YouTube3D is useful in training depth estimation networks and advances the state of the art of single-view depth estimation in the wild.
Autocalibration of Cameras with Known Pixel Shape
We present new algorithms for the recovery of the Euclidean structure from a projective calibration of a set of cameras with known pixel shape but otherwise arbitrarily varying intrinsic and extrinsic parameters. The algorithms have a geometrical motivation based on the properties of the set of lines intersecting the absolute conic. The theoretical part of the paper establishes the relationship between the geometrical object corresponding to this set of lines and other equivalent objects, such as the absolute quadric. Finally, the satisfactory performance of the techniques is demonstrated with synthetic and real data.
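In absolute-quadric formulations such as this one, once the quadric is located the intrinsics follow from the dual image of the absolute conic, ω* ≃ K Kᵀ, by a triangular factorization. A minimal numpy sketch of that final extraction step (the function name is mine, and this is not the paper's line-based algorithm):

```python
import numpy as np

def K_from_diac(omega_star):
    """Recover the upper-triangular calibration matrix K from the
    dual image of the absolute conic, omega* ~ K K^T (up to scale).

    numpy's Cholesky returns a lower-triangular factor, so we factor
    the row/column-reversed matrix and flip the factor back."""
    W = np.asarray(omega_star, dtype=float)
    W = W / W[2, 2]                    # fix the projective scale
    P = np.eye(3)[::-1]                # reversal permutation, P P = I
    L = np.linalg.cholesky(P @ W @ P)  # P W P = L L^T
    K = P @ L @ P                      # upper-triangular, W = K K^T
    return K / K[2, 2]
```

The flip trick works because reversing the rows and columns of a lower-triangular matrix yields an upper-triangular one, and P W P = (P L P)(P L P)ᵀ.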
Robust Self-calibration of Focal Lengths from the Fundamental Matrix
The problem of self-calibration of two cameras from a given fundamental matrix is one of the basic problems in geometric computer vision. Under the assumption of known principal points and square pixels, the well-known Bougnoux formula offers a means to compute the two unknown focal lengths. However, in many practical situations, the formula yields inaccurate results due to commonly occurring singularities. Moreover, the estimates are sensitive to noise in the computed fundamental matrix and to the assumed positions of the principal points. In this paper, we therefore propose an efficient and robust iterative method to estimate the focal lengths along with the principal points of the cameras given a fundamental matrix and priors for the estimated camera parameters. In addition, we study a computationally efficient check of models generated within RANSAC that improves the accuracy of the estimated models while reducing the total computational time. Extensive experiments on real and synthetic data show that our iterative method brings significant improvements in terms of the accuracy of the estimated focal lengths over the Bougnoux formula and other state-of-the-art methods, even when relying on inaccurate priors.
Accelerated volumetric reconstruction from uncalibrated camera views
While both work with images, computer graphics and computer vision are inverse problems of one another. Computer graphics traditionally starts with input geometric models and produces image sequences; computer vision starts with input image sequences and produces geometric models. In the last few years, there has been a convergence of research to bridge the gap between the two fields.
This convergence has produced a new field called Image-Based Modeling and Rendering (IBMR). IBMR represents the effort of using geometric information recovered from real images to generate new images, with the hope that the synthesized ones appear photorealistic, while also reducing the time spent on model creation.
In this dissertation, the capturing, geometric, and photometric aspects of an IBMR system are studied. A versatile framework was developed that enables the reconstruction of scenes from images acquired with a handheld digital camera. The proposed system targets applications in areas such as computer gaming and virtual reality, from a low-cost perspective. In the spirit of IBMR, the human operator is allowed to provide the high-level information, while underlying algorithms are used to perform low-level computational work. Conforming to the latest architecture trends, we propose a streaming voxel carving method, allowing fast GPU-based processing on commodity hardware.
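The core test in voxel carving is simple: a voxel survives only if its projection falls inside the object silhouette in every calibrated view. A minimal CPU sketch of that silhouette test (function and parameter names are mine; this is not the streaming GPU implementation described above):

```python
import numpy as np

def carve(voxels, cameras, silhouettes):
    """Silhouette-based voxel carving.

    voxels:      (N, 3) array of voxel centers
    cameras:     list of 3x4 projection matrices
    silhouettes: list of boolean HxW masks, one per camera
    """
    N = voxels.shape[0]
    keep = np.ones(N, dtype=bool)
    Xh = np.hstack([voxels, np.ones((N, 1))])        # homogeneous coords
    for P, mask in zip(cameras, silhouettes):
        x = Xh @ P.T                                  # project: (N, 3)
        z = x[:, 2]
        front = z > 1e-9                              # in front of camera
        u = np.zeros(N, dtype=int)
        v = np.zeros(N, dtype=int)
        u[front] = np.round(x[front, 0] / z[front]).astype(int)
        v[front] = np.round(x[front, 1] / z[front]).astype(int)
        h, w = mask.shape
        inside = front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(N, dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit                                   # carve away misses
    return voxels[keep]
```

Because every voxel is tested independently against each view, the loop body maps naturally onto the streaming, per-voxel GPU processing the dissertation advocates.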
Camera calibration from surfaces of revolution
This paper addresses the problem of calibrating a pinhole camera from images of a surface of revolution. Camera calibration is the process of determining the intrinsic or internal parameters (i.e., aspect ratio, focal length, and principal point) of a camera, and it is important for both motion estimation and metric reconstruction of 3D models. In this paper, a novel and simple calibration technique is introduced, which is based on exploiting the symmetry of images of surfaces of revolution. Traditional techniques for camera calibration involve taking images of some precisely machined calibration pattern (such as a calibration grid). The use of surfaces of revolution, which are commonly found in daily life (e.g., bowls and vases), makes the process easier as a result of the reduced cost and increased accessibility of the calibration objects. In this paper, it is shown that two images of a surface of revolution will provide enough information for determining the aspect ratio, focal length, and principal point of a camera with fixed intrinsic parameters. The algorithms presented in this paper have been implemented and tested with both synthetic and real data. Experimental results show that the camera calibration method presented here is both practical and accurate.