Dynamic Estimation of Rigid Motion from Perspective Views via Recursive Identification of Exterior Differential Systems with Parameters on a Topological Manifold
We formulate the problem of estimating the motion of a rigid object viewed under perspective projection as the identification of a dynamic model in Exterior Differential form with parameters on a topological manifold.
We first describe a general method for the recursive identification of nonlinear implicit systems using prediction error criteria. The parameters are allowed to move slowly on some topological (not necessarily smooth) manifold. The basic recursion is solved in two different ways: one is based on a simple extension of the traditional Kalman Filter to nonlinear and implicit measurement constraints; the other may be regarded as a generalized "Gauss-Newton" iteration, akin to traditional Recursive Prediction Error Method techniques in linear identification. A derivation of the "Implicit Extended Kalman Filter" (IEKF) is reported in the appendix.
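An EKF with an implicit measurement constraint h(x, z) = 0 differs from the standard filter in that the measurement noise enters through the constraint Jacobian with respect to the measurement. The sketch below is a generic form of such an update (the line-constraint example and all variable names are illustrative assumptions, not taken from the paper's appendix):

```python
import numpy as np

def iekf_update(x, P, h_val, H, D, Rz):
    """One measurement update for an implicit constraint h(x, z) = 0.

    H  = dh/dx evaluated at the current estimate,
    D  = dh/dz (how measurement noise enters the constraint),
    Rz = covariance of the measurement z.
    The innovation covariance is S = H P H^T + D Rz D^T."""
    S = H @ P @ H.T + D @ Rz @ D.T
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x - K @ h_val                   # drive the constraint residual to zero
    P_new = (np.eye(len(x)) - K @ H) @ P    # covariance update
    return x_new, P_new
```

As a usage example, a static 2D point can be estimated from implicit line constraints h = n·x + c, where each measured line passes through the true point; repeated updates contract the estimate onto the intersection of the lines.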
The identification framework is then applied to the visual motion problem: the problem can indeed be characterized as the identification of an Exterior Differential System with parameters living on a C0 topological manifold, called the "essential manifold". We consider two alternative estimation paradigms. The first works in the local coordinates of the essential manifold: we estimate the state of a nonlinear implicit model on a linear space. The second performs a linear update on the (linear) embedding space followed by a projection onto the essential manifold. These schemes prove successful at the motion estimation task, as we show in experiments on real and noisy synthetic image sequences.
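The projection step of the second paradigm can be sketched with the standard SVD characterization of (normalized) essential matrices: two equal singular values and one zero. The numpy snippet below is a minimal sketch of that projection, assuming this standard formulation rather than any detail specific to the paper:

```python
import numpy as np

def project_to_essential(M):
    """Project an arbitrary 3x3 matrix onto the (normalized) essential
    manifold by forcing its singular values to the pattern (s, s, 0)."""
    U, sv, Vt = np.linalg.svd(M)
    s = (sv[0] + sv[1]) / 2.0           # average the two largest singular values
    return U @ np.diag([s, s, 0.0]) @ Vt
```

After a linear update in the 9-dimensional embedding space, applying this projection returns the estimate to the manifold before the next iteration.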
Let's Make Block Coordinate Descent Go Fast: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence
Block coordinate descent (BCD) methods are widely used for large-scale numerical optimization because of their cheap iteration costs, low memory requirements, amenability to parallelization, and ability to exploit problem structure. Three main algorithmic choices influence the performance of BCD methods: the block partitioning strategy, the block selection rule, and the block update rule. In this paper we explore all three of these building blocks and propose variations for each that can lead to significantly faster BCD methods. We (i) propose new greedy block-selection strategies that guarantee more progress per iteration than the Gauss-Southwell rule; (ii) explore practical issues like how to implement the new rules when using "variable" blocks; (iii) explore the use of message-passing to compute matrix or Newton updates efficiently on huge blocks for problems with a sparse dependency between variables; and (iv) consider optimal active manifold identification, which leads to bounds on the "active-set complexity" of BCD methods and to superlinear convergence for certain problems with sparse solutions (and in some cases finite termination at an optimal solution). We support all of our findings with numerical results for the classic machine learning problems of least squares, logistic regression, multi-class logistic regression, label propagation, and L1-regularization.
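The Gauss-Southwell baseline that the new selection rules are compared against is simple to state: at each iteration, update the coordinate (a block of size one) with the largest gradient magnitude. A toy numpy implementation for least squares, given purely as an illustration of that baseline rather than any method from the paper:

```python
import numpy as np

def gauss_southwell_cd(A, b, iters=500):
    """Greedy (Gauss-Southwell) coordinate descent for min ||Ax - b||^2:
    each step exactly minimizes along the coordinate whose partial
    derivative has the largest magnitude."""
    x = np.zeros(A.shape[1])
    L = (A ** 2).sum(axis=0)          # per-coordinate curvatures ||A_j||^2
    for _ in range(iters):
        g = A.T @ (A @ x - b)         # gradient of 0.5 * ||Ax - b||^2
        j = np.argmax(np.abs(g))      # greedy selection rule
        x[j] -= g[j] / L[j]           # exact line minimization along e_j
    return x
```

On a well-conditioned problem this converges linearly to the least squares solution; the paper's point is that richer block partitions, selection rules, and update rules can do substantially better.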
The Riemannian interpretation of Gauss-Newton and Scoring, with application to system identification
Projective Bundle Adjustment from Arbitrary Initialization Using the Variable Projection Method
Bundle adjustment is used in structure-from-motion pipelines as a final refinement stage and requires a sufficiently good initialization to reach a useful local minimum. Starting from an arbitrary initialization, the optimization almost always gets trapped in a poor minimum. In this work we aim at an initialization-free approach that returns global minima from a large proportion of purely random starting points. Our key inspiration lies in the success of the Variable Projection (VarPro) method for affine factorization problems, which has a close to 100% chance of reaching a global minimum from random initialization. We find empirically that this desirable behaviour does not directly carry over to the projective case, and we consequently design and evaluate strategies to overcome this limitation. In addition, by unifying the affine and projective camera settings, we obtain numerically better conditioned reformulations of the original bundle adjustment algorithms.
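The core VarPro idea the authors build on is to eliminate the linear parameters in closed form (by least squares) and minimize the reduced residual only over the nonlinear ones. The toy sketch below applies this to a separable exponential model; the model and the grid search over the nonlinear parameter are illustrative assumptions, not the paper's projective formulation:

```python
import numpy as np

def varpro_fit(t, y, alphas):
    """Variable projection for y ~ c1 * exp(-a*t) + c0: for each candidate
    nonlinear parameter a, the linear coefficients (c1, c0) are eliminated
    by least squares, and the reduced residual is minimized over `alphas`."""
    best = (np.inf, None, None)
    for a in alphas:
        Phi = np.column_stack([np.exp(-a * t), np.ones_like(t)])
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # eliminate linear params
        r = np.linalg.norm(Phi @ c - y)               # reduced residual
        if r < best[0]:
            best = (r, a, c)
    return best  # (residual, nonlinear parameter, linear coefficients)
```

Because the linear coefficients are always optimal given the nonlinear parameter, the search only has to navigate the reduced (lower-dimensional) landscape, which is the property behind VarPro's robustness to random initialization.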
Recursive Importance Sketching for Rank Constrained Least Squares: Algorithms and High-order Convergence
In this paper, we propose a new {\it \underline{R}ecursive} {\it
\underline{I}mportance} {\it \underline{S}ketching} algorithm for {\it
\underline{R}ank} constrained least squares {\it \underline{O}ptimization}
(RISRO). As its name suggests, the algorithm is based on a new sketching
framework, recursive importance sketching. Several existing algorithms in the
literature can be reinterpreted under the new sketching framework and RISRO
offers clear advantages over them. RISRO is easy to implement and
computationally efficient, where the core procedure in each iteration is only
solving a dimension reduced least squares problem. Different from numerous
existing algorithms with locally geometric convergence rate, we establish the
local quadratic-linear and quadratic rate of convergence for RISRO under some
mild conditions. In addition, we discover a deep connection of RISRO to
Riemannian manifold optimization on fixed rank matrices. The effectiveness of
RISRO is demonstrated in two applications in machine learning and statistics:
low-rank matrix trace regression and phase retrieval. Simulation studies
demonstrate the superior numerical performance of RISRO
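For context, the rank-constrained least squares problem itself can be attacked with a generic projected-gradient baseline: take a gradient step in the ambient space, then project back onto the rank-r set by SVD truncation. This sketch is only that baseline, not the RISRO algorithm (which replaces the geometric-rate iteration with its sketching scheme):

```python
import numpy as np

def svd_truncate(X, r):
    """Project X onto the set of matrices of rank at most r."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def rank_constrained_ls(A, y, shape, r, step, iters=3000):
    """Projected gradient descent for min ||A vec(X) - y||^2 s.t. rank(X) <= r.
    A generic baseline with a local geometric rate, not RISRO."""
    X = np.zeros(shape)
    for _ in range(iters):
        g = (A.T @ (A @ X.ravel() - y)).reshape(shape)  # gradient of 0.5*loss
        X = svd_truncate(X - step * g, r)               # retract to rank <= r
    return X
```

With exact measurements and enough of them, this baseline recovers a planted low-rank matrix, but only at a linear rate; the quadratic rate claimed for RISRO is what distinguishes it from such schemes.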