Learning gradients on manifolds
A common belief in high-dimensional data analysis is that data are
concentrated on a low-dimensional manifold. This motivates simultaneous
dimension reduction and regression on manifolds. We provide an algorithm for
learning gradients on manifolds for dimension reduction for high-dimensional
data with few observations. We obtain generalization error bounds for the
gradient estimates and show that the convergence rate depends on the intrinsic
dimension of the manifold and not on the dimension of the ambient space. We
illustrate the efficacy of this approach empirically on simulated and real data
and compare the method to other dimension reduction procedures.
Comment: Published at http://dx.doi.org/10.3150/09-BEJ206 in the Bernoulli
(http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
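The gradient-based dimension reduction this abstract describes can be sketched in a few lines: estimate the gradient of the regression function at each sample by a local linear fit, average the gradient outer products, and read off the leading eigenvectors as the estimated directions. This is only an illustrative stand-in, assuming k-nearest-neighbour local fits; the function name and parameters are hypothetical, and the paper's actual estimator is a regularized kernel method with manifold-adapted guarantees.

```python
import numpy as np

def gradient_outer_product(X, y, k=20):
    """Sketch of gradient-based dimension reduction: local linear
    fits estimate the gradient at each point; the top eigenvectors
    of the averaged gradient outer product span an estimated
    dimension-reduction subspace. (Hypothetical helper, not the
    paper's kernel-regularized estimator.)"""
    n, p = X.shape
    G = np.zeros((p, p))
    for i in range(n):
        # k nearest neighbours of x_i (excluding x_i itself)
        d = np.linalg.norm(X - X[i], axis=1)
        idx = np.argsort(d)[1:k + 1]
        A = X[idx] - X[i]
        b = y[idx] - y[i]
        # least-squares local linear fit: gradient estimate at x_i
        g = np.linalg.lstsq(A, b, rcond=None)[0]
        G += np.outer(g, g)
    G /= n
    # eigh returns ascending eigenvalues; reverse to put largest first
    w, V = np.linalg.eigh(G)
    return V[:, ::-1], w[::-1]
```

For a function varying along a single direction, the top eigenvalue dominates and its eigenvector recovers that direction.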
Spherical Regression: Learning Viewpoints, Surface Normals and 3D Rotations on n-Spheres
Many computer vision challenges require continuous outputs, but tend to be
solved by discrete classification. The reason is classification's natural
containment within a probability n-simplex, as defined by the popular softmax
activation function. Regular regression lacks such a closed geometry, leading
to unstable training and convergence to suboptimal local minima. Starting from
this insight we revisit regression in convolutional neural networks. We observe
that many continuous output problems in computer vision are naturally contained
in closed geometrical manifolds, like the Euler angles in viewpoint estimation
or the normals in surface normal estimation. A natural framework for posing
such continuous output problems is the n-sphere, a naturally closed geometric
manifold defined in the R^(n+1) space. By introducing a
spherical exponential mapping on n-spheres at the regression output, we
obtain well-behaved gradients, leading to stable training. We show how our
spherical regression can be utilized for several computer vision challenges,
specifically viewpoint estimation, surface normal estimation and 3D rotation
estimation. For all these problems our experiments demonstrate the benefit of
spherical regression. All paper resources are available at
https://github.com/leoshine/Spherical_Regression.
Comment: CVPR 2019 camera ready
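The core idea above, a closed output geometry with well-behaved gradients, can be sketched as an activation that exponentiates raw outputs and then L2-normalizes them onto the unit n-sphere. This is a simplified, hedged reading of the paper's spherical exponential mapping (the paper additionally handles signs separately); the function name is illustrative.

```python
import numpy as np

def spherical_exp(o):
    """Map raw network outputs onto the (positive orthant of the)
    unit n-sphere: exponentiate, then L2-normalize. A minimal
    sketch of a spherical output activation; not the paper's full
    construction, which also predicts signs."""
    e = np.exp(o - o.max())       # subtract max for numerical stability;
                                  # the common factor cancels on normalization
    return e / np.linalg.norm(e)  # unit L2 norm -> point on the n-sphere
```

Because the output always has unit norm, the regression target and prediction live on the same closed manifold, which is the property the abstract credits for stable training.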
Statistical Inference using the Morse-Smale Complex
The Morse-Smale complex of a function f decomposes the sample space into
cells where f is increasing or decreasing. When applied to nonparametric
density estimation and regression, it provides a way to represent, visualize,
and compare multivariate functions. In this paper, we present some statistical
results on estimating Morse-Smale complexes. This allows us to derive new
results for two existing methods: mode clustering and Morse-Smale regression.
We also develop two new methods based on the Morse-Smale complex: a
visualization technique for multivariate functions and a two-sample,
multivariate hypothesis test.
Comment: 45 pages, 13 figures. Accepted to Electronic Journal of Statistics
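Mode clustering, one of the two existing methods the abstract mentions, can be sketched with a mean-shift ascent of a kernel density estimate: each point flows uphill to a local mode, and points sharing a mode share a cluster (the cells of the density's Morse-Smale decomposition). The function name and bandwidth choices below are hypothetical illustrations, not the paper's estimator.

```python
import numpy as np

def mode_cluster(X, h=0.5, steps=30):
    """Mean-shift ascent of a Gaussian kernel density estimate:
    each point flows to a nearby density mode; points whose
    trajectories reach the same mode are assigned one cluster.
    Illustrative sketch of mode clustering, assuming well-separated
    modes."""
    Z = X.copy()
    for _ in range(steps):
        for i in range(len(Z)):
            # Gaussian kernel weights of the original sample at Z[i]
            w = np.exp(-np.sum((X - Z[i]) ** 2, axis=1) / (2 * h ** 2))
            Z[i] = w @ X / w.sum()   # shift Z[i] to the local weighted mean
    # group trajectories whose limits (approximately) coincide
    labels = -np.ones(len(Z), dtype=int)
    c = 0
    for i in range(len(Z)):
        if labels[i] < 0:
            labels[np.linalg.norm(Z - Z[i], axis=1) < 0.5] = c
            c += 1
    return labels
```

On two well-separated Gaussian blobs this recovers exactly two clusters, one per density mode.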
Fast, asymptotically efficient, recursive estimation in a Riemannian manifold
Stochastic optimisation in Riemannian manifolds, especially the Riemannian
stochastic gradient method, has attracted much recent attention. The present
work applies stochastic optimisation to the task of recursive estimation of a
statistical parameter which belongs to a Riemannian manifold. Roughly, this
task amounts to stochastic minimisation of a statistical divergence function.
The following problem is considered: how to obtain fast, asymptotically
efficient, recursive estimates, using a Riemannian stochastic optimisation
algorithm with decreasing step sizes? In solving this problem, several original
results are introduced. First, without any convexity assumptions on the
divergence function, it is proved that, with an adequate choice of step sizes,
the algorithm computes recursive estimates which achieve a fast non-asymptotic
rate of convergence. Second, the asymptotic normality of these recursive
estimates is proved, by employing a novel linearisation technique. Third, it is
proved that, when the Fisher information metric is used to guide the algorithm,
these recursive estimates achieve an optimal asymptotic rate of convergence, in
the sense that they become asymptotically efficient. These results, while
relatively familiar in the Euclidean context, are here formulated and proved
for the first time in the Riemannian context. In addition, they are
illustrated with a numerical application to the recursive estimation of
elliptically contoured distributions.
Comment: updated version of draft submitted for publication, currently under
review
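The recursive scheme the abstract describes, a Riemannian stochastic gradient step with decreasing step sizes a/n followed by a retraction back to the manifold, can be sketched on the simplest manifold, the unit sphere. This toy estimates a mean direction rather than minimizing the paper's statistical divergence, and the function name is a hypothetical illustration.

```python
import numpy as np

def sphere_rsgd(samples, a=1.0):
    """Recursive estimation on the unit sphere via Riemannian SGD:
    at step n, project the Euclidean gradient onto the tangent
    space at the current estimate, take a step of size a/n, and
    retract by renormalization. Toy sketch of the recursive scheme
    described above (here maximizing E<x, z>, i.e. estimating a
    mean direction)."""
    x = samples[0] / np.linalg.norm(samples[0])
    for n, z in enumerate(samples[1:], start=2):
        g = -z                         # Euclidean gradient of f(x) = -<x, z>
        g_t = g - (g @ x) * x          # project onto the tangent space at x
        x = x - (a / n) * g_t          # decreasing step size a/n
        x = x / np.linalg.norm(x)      # retraction back onto the sphere
    return x
```

The 1/n step-size schedule is what the abstract's non-asymptotic and asymptotic rate results are about; in the Euclidean special case this recursion reduces to a running average.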