Learning Rank Reduced Interpolation with Principal Component Analysis
In computer vision, most iterative optimization algorithms, both sparse and dense, rely on a coarse but reliable dense initialization to bootstrap their optimization procedure. For example, dense optical flow algorithms gain considerably in speed and robustness when initialized well within the basin of convergence of the loss function used. The same holds for methods such as sparse feature tracking, where initial flow or depth information is needed for new features at arbitrary positions. It is therefore extremely important to have techniques at hand that can obtain, from only very few available measurements, a dense but still approximate sketch of a desired 2D structure (e.g. depth maps, optical flow, or disparity maps). The 2D map is regarded as a sample from a 2D random process. The method presented here exploits the complete information given by the principal component analysis (PCA) of that process: the principal basis and its prior distribution. The method is able to determine a dense reconstruction from sparse measurements. In situations with only very sparse measurements, the number of principal components typically has to be reduced further, which results in a loss of expressiveness of the basis. We overcome this problem by injecting prior knowledge through a maximum a posteriori (MAP) approach. We test our approach on the KITTI and Virtual KITTI datasets, focusing on the interpolation of depth maps for driving scenes. The results show good agreement with the ground truth and are clearly better than those of nearest-neighbor interpolation, which disregards statistical information.
Comment: Accepted at Intelligent Vehicles Symposium (IV), Los Angeles, USA, June 201
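Concretely, the reconstruction step the abstract describes reduces to a small linear solve: with a principal basis U and the PCA eigenvalues serving as a Gaussian prior on the coefficients, the MAP coefficients follow from regularized least squares over the observed pixels only. Below is a minimal sketch, assuming homoscedastic Gaussian measurement noise; the function name map_interpolate and arguments such as obs_idx and noise_var are illustrative, not taken from the paper.

```python
import numpy as np

def map_interpolate(y, obs_idx, mean, U, eigvals, noise_var=1e-2):
    """MAP reconstruction of a dense (flattened) 2D map from sparse samples.

    y        : (m,)   sparse measurements
    obs_idx  : (m,)   flat indices of the observed pixels
    mean     : (n,)   PCA mean of the training maps
    U        : (n, k) principal basis (columns = principal components)
    eigvals  : (k,)   PCA eigenvalues, used as the variances of the
                      Gaussian prior on the coefficients, c ~ N(0, diag(eigvals))
    """
    U_s = U[obs_idx]                # basis rows at the observed pixels
    r = y - mean[obs_idx]           # centred measurements
    # MAP objective: ||U_s c - r||^2 / noise_var + c^T diag(1/eigvals) c,
    # whose normal equations give the k x k system below.
    A = U_s.T @ U_s / noise_var + np.diag(1.0 / eigvals)
    c = np.linalg.solve(A, U_s.T @ r / noise_var)
    return mean + U @ c             # dense reconstruction
```

The system is only k x k (k retained components), so the solve stays cheap even for full-resolution maps, and the prior term diag(1/eigvals) keeps it well-posed when the measurements are very sparse, which is precisely the regime the paper targets.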
Manifold interpolation and model reduction
One approach to parametric and adaptive model reduction is via the interpolation of orthogonal bases, subspaces, or positive definite system matrices. In all these cases, the sampled inputs stem from matrix sets that feature a geometric structure and thus form so-called matrix manifolds. This work will be featured as a chapter in the upcoming Handbook on Model Order Reduction (P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W.H.A. Schilders, L.M. Silveira, eds., to appear with DE GRUYTER) and reviews the numerical treatment of the most important matrix manifolds that arise in the context of model reduction. Moreover, the principal approaches to data interpolation and Taylor-like extrapolation on matrix manifolds are outlined and complemented by algorithms in pseudo-code.
Comment: 37 pages, 4 figures, featured chapter of the upcoming "Handbook on Model Order Reduction"
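The general recipe reviewed there, lifting the sampled matrices to a flat (log or tangent) space, interpolating coordinate-wise with standard methods, and mapping back to the manifold, can be illustrated on the simplest of these sets: the symmetric positive definite (SPD) matrices. The following is a minimal log-Euclidean sketch under that assumption; the function name interp_spd and the piecewise-linear interpolant are illustrative choices, not taken from the chapter.

```python
import numpy as np
from scipy.linalg import expm, logm

def interp_spd(mats, params, p_new):
    """Interpolate SPD system matrices sampled at scalar parameters `params`
    (assumed sorted ascending) at the new parameter value `p_new`.

    Log-Euclidean scheme: (1) lift each sample to the matrix-log domain,
    where SPD matrices form a vector space; (2) interpolate entrywise
    (piecewise-linear here, any 1-D interpolant works); (3) map back with
    the matrix exponential, so the result is again SPD by construction.
    """
    logs = np.stack([logm(M) for M in mats])              # (n_samples, d, d)
    L = np.apply_along_axis(
        lambda v: np.interp(p_new, params, v), 0, logs)   # entrywise interpolation
    return expm(L)
```

Entrywise interpolation of the matrices themselves preserves definiteness only for convex (piecewise-linear) weights; higher-order or extrapolating schemes can leave the manifold, whereas the log-domain detour keeps the result SPD for any 1-D interpolant.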
Optimal projection of observations in a Bayesian setting
Optimal dimensionality reduction methods are proposed for the Bayesian inference of a Gaussian linear model with additive noise in the presence of overabundant data. Three optimal projections of the observations are proposed, based on information theory: the projection that minimizes the Kullback-Leibler divergence between the posterior distributions of the original and the projected models, the one that minimizes the expected Kullback-Leibler divergence between the same distributions, and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace, and the solution is therefore computed using Riemannian optimization algorithms on the Grassmann manifold. Regarding the maximization of the mutual information, it is shown that there exists an optimal subspace minimizing the entropy of the posterior distribution of the reduced model; that a basis of this subspace can be computed as the solution to a generalized eigenvalue problem; that an a priori error estimate on the mutual information is available for this particular solution; and that the dimensionality of the subspace required to exactly preserve the mutual information between the input and the output of the models is less than the number of parameters to be inferred. Numerical applications to linear and nonlinear models are used to assess the efficiency of the proposed approaches and to highlight their advantages over standard approaches based on the principal component analysis of the observations.
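For the Gaussian linear setting the abstract considers, the generalized-eigenvalue route to a mutual-information-preserving projection can be sketched in a few lines. The following is a minimal sketch under the assumed forms y = H x + eps with x ~ N(0, Sigma_x) and eps ~ N(0, Sigma_eps); the specific matrix pencil (Sigma_y, Sigma_eps) and the name optimal_projection are illustrative and not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import eigh

def optimal_projection(H, Sigma_x, Sigma_eps, r):
    """Rank-r projection P of the observations y = H x + eps.

    For jointly Gaussian (x, y), the mutual information of x and P^T y is
    I(x; P^T y) = 1/2 [logdet(P^T Sigma_y P) - logdet(P^T Sigma_eps P)],
    with Sigma_y = H Sigma_x H^T + Sigma_eps. Over all rank-r projections
    it is maximized by the top-r generalized eigenvectors of
    Sigma_y v = lam * Sigma_eps v.
    """
    Sigma_y = H @ Sigma_x @ H.T + Sigma_eps
    lam, V = eigh(Sigma_y, Sigma_eps)   # symmetric-definite generalized problem
    order = np.argsort(lam)[::-1]       # largest eigenvalues first
    return V[:, order[:r]]              # columns span the reduced observation space
```

The generalized eigenvalues satisfy lam_i >= 1, and directions with lam_i = 1 carry no information about x; this matches the abstract's claim that the dimension needed to preserve the mutual information exactly is bounded by the number of inferred parameters, since the rank of H Sigma_x H^T is at most that number.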