4 research outputs found
Beyond Gröbner Bases: Basis Selection for Minimal Solvers
Many computer vision applications require robust estimation of the underlying geometry, in terms of camera motion and 3D structure of the scene. These robust methods often rely on running minimal solvers in a RANSAC framework. In this paper we show how polynomial solvers based on the action matrix method can be made faster by careful selection of the monomial bases. These monomial bases have traditionally been derived from a Gröbner basis for the polynomial ideal. Here we describe how to enumerate all such bases in an efficient way. We also show that going beyond Gröbner bases leads to more efficient solvers in many cases. We present a novel basis sampling scheme, which we evaluate on a number of problems.
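The action matrix method mentioned above can be illustrated in its simplest, univariate case: the action matrix of multiplication by x in C[x]/(p) is the companion matrix of p, and its eigenvalues are exactly the roots of p. The following NumPy sketch is a generic illustration of that principle, not the paper's solver code:

```python
import numpy as np

def action_matrix_roots(coeffs):
    """Find roots of a monic polynomial p(x) = x^n + c_{n-1} x^{n-1} + ... + c_0
    via the action (companion) matrix of multiplication by x in C[x]/(p).

    coeffs: [c_0, c_1, ..., c_{n-1}], with the monic leading term implied.
    """
    n = len(coeffs)
    M = np.zeros((n, n))
    M[1:, :-1] = np.eye(n - 1)      # x * x^k = x^{k+1} for k < n-1
    M[:, -1] = -np.asarray(coeffs)  # reduce x^n modulo p(x)
    # Eigenvalues of the action matrix are the roots of p.
    return np.linalg.eigvals(M)

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2), so coeffs = [c_0, c_1] = [2, -3]
roots = sorted(action_matrix_roots([2.0, -3.0]).real)
```

In the multivariate setting addressed by the paper, the monomial basis of the quotient ring plays the role of {1, x, ..., x^{n-1}} here, which is what makes the choice of basis matter for solver speed and stability.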
On the complexity of polynomial reduction
In this paper, we present a new algorithm for reducing a multivariate polynomial with respect to an autoreduced tuple of other polynomials. In a suitable sparse complexity model, it is shown that the execution time is essentially the same (up to a logarithmic factor) as the time needed to verify that the result is correct. This is a first step towards taking advantage of fast sparse polynomial arithmetic for the computation of Gröbner bases.
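The reduction studied here is the multivariate division algorithm: writing f as a combination of the divisor tuple plus a remainder no term of which is divisible by any divisor's leading term. SymPy exposes this as `reduced`; the example below is a generic illustration of that operation, not the paper's algorithm or its sparse complexity model:

```python
from sympy import symbols, reduced, expand

x, y = symbols('x y')

# Reduce f with respect to the tuple G = (x*y - 1, y**2 - 1) under lex order:
#   f = q1*(x*y - 1) + q2*(y**2 - 1) + r,
# where no term of r is divisible by the leading terms x*y or y**2.
f = x**2 * y + x * y**2 + y**2
G = [x * y - 1, y**2 - 1]

quotients, remainder = reduced(f, G, x, y, order='lex')

# The identity f = sum(q_i * g_i) + r holds by construction.
recombined = expand(sum(q * g for q, g in zip(quotients, G)) + remainder)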
Fast Reduction of Bivariate Polynomials with Respect to Sufficiently Regular Gröbner Bases
Computational Methods for Computer Vision : Minimal Solvers and Convex Relaxations
Robust fitting of geometric models is a core problem in computer vision. The most common approach is to use a hypothesize-and-test framework, such as RANSAC. In these frameworks the model is estimated from as few measurements as possible, which minimizes the risk of selecting corrupted measurements. These estimation problems are called minimal problems, and they can often be formulated as systems of polynomial equations. In this thesis we present new methods for building so-called minimal solvers or polynomial solvers, which are specialized code for solving such systems. On several minimal problems we improve on the state of the art with respect to both numerical stability and execution time.

Low-rank matrices occur naturally in many computer vision problems. The rank can serve as a measure of model complexity, and typically a low rank is desired. Optimization problems containing rank penalties or constraints are in general difficult. Recently, convex relaxations, such as the nuclear norm, have been used to make these problems tractable. In this thesis we present new convex relaxations for rank-based optimization which avoid drawbacks of previous approaches and provide tighter relaxations. We evaluate our methods on a number of real and synthetic datasets and show state-of-the-art results.
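The hypothesize-and-test loop described above can be sketched generically. The example below fits a 2D line with a two-point minimal sample; it is a textbook RANSAC illustration, not the thesis's solvers, and the function name and parameters are invented for this sketch:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Fit a 2D line with RANSAC: repeatedly hypothesize a line from a
    minimal sample (two points) and keep the hypothesis with most inliers."""
    rng = random.Random(seed)
    best_inliers, best_line = [], None
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        # Line through the two sampled points: a*x + b*y + c = 0.
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue  # degenerate sample (duplicate point)
        # Test: points within tol of the hypothesized line are inliers.
        inliers = [(px, py) for px, py in points
                   if abs(a * px + b * py + c) / norm < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
            best_line = (a / norm, b / norm, c / norm)
    return best_line, best_inliers

# Ten points on y = x plus two gross outliers; RANSAC ignores the outliers.
pts = [(float(t), float(t)) for t in range(10)] + [(0.0, 5.0), (9.0, 1.0)]
line, inliers = ransac_line(pts)
```

The minimal sample size (two points here) is what the thesis's minimal problems generalize: for camera geometry the hypothesis step becomes solving a system of polynomial equations rather than drawing a line through two points.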