On the optimality of shape and data representation in the spectral domain
A proof of the optimality of the eigenfunctions of the Laplace-Beltrami
operator (LBO) in representing smooth functions on surfaces is provided and
adapted to the field of applied shape and data analysis. It is based on the
Courant-Fischer min-max principle adapted to our case.

The theorem we present
supports the new trend in geometry processing of treating geometric structures
by using their projection onto the leading eigenfunctions of the decomposition
of the LBO. This result can be used to construct numerically efficient
algorithms that process shapes in the spectral domain. We review a couple of
applications as possible practical usage cases of the proposed optimality
criteria.
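
To make the optimality statement concrete: representing a smooth function by its
projection onto the first k LBO eigenfunctions is a simple truncation in the
spectral basis. The sketch below uses a toy chain-graph Laplacian in place of a
properly discretized (e.g., cotangent) LBO; `build_toy_laplacian` is an
illustrative helper, not code from the paper.

```python
# Spectral truncation sketch: project a function onto the k leading
# eigenfunctions of a (discretized) Laplacian. A chain-graph Laplacian
# stands in for the cotangent LBO of a real mesh.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def build_toy_laplacian(n=200):
    # Tridiagonal [-1, 2, -1] operator, a 1-D stand-in for a mesh LBO.
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    return sp.diags([off, main, off], [-1, 0, 1]).tocsc()

L = build_toy_laplacian()
n = L.shape[0]
k = 10  # number of leading (smoothest) eigenfunctions to keep
vals, vecs = eigsh(L, k=k, sigma=0, which="LM")  # smallest eigenvalues

t = np.linspace(0.0, np.pi, n)
f = np.sin(t) + 0.3 * np.sin(3 * t)    # a smooth function on the "shape"
coeffs = vecs.T @ f                    # spectral coefficients
f_hat = vecs @ coeffs                  # optimal k-term approximation
print("relative error:", np.linalg.norm(f - f_hat) / np.linalg.norm(f))
```
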
We refer to a scale invariant metric, which is also invariant to bending of
the manifold. This novel pseudo-metric allows constructing an LBO by
which a scale invariant eigenspace on the surface is defined. We demonstrate
the efficiency of an intermediate metric, defined as an interpolation between
the scale invariant and the regular one, in representing geometric structures
while capturing both coarse and fine details. Next, we review a numerical
acceleration technique for classical scaling, a member of a family of
flattening methods known as multidimensional scaling (MDS). There, the
optimality is exploited to efficiently approximate all geodesic distances
between pairs of points on a given surface, and thereby match and compare
between almost isometric surfaces.
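
Classical scaling itself is compact enough to state in a few lines. The sketch
below is the standard (unaccelerated) algorithm and assumes the full pairwise
geodesic distance matrix D is available; the acceleration reviewed here avoids
exactly that cost by approximating D from a sparse set of computed distances.

```python
# Classical scaling (classical MDS) sketch: embed n points in R^m from a
# full matrix D of pairwise (geodesic) distances via double centering.
import numpy as np

def classical_scaling(D, m=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:m]      # top-m eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy usage: flatten points sampled from a circle into the plane.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_scaling(D, m=2)
```
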
Finally, we revisit the classical principal component analysis (PCA)
definition by coupling its variational form with a
Dirichlet energy on the data manifold. By pairing the PCA with the LBO, we can
handle cases that go beyond the scope of the observation set handled by
regular PCA.
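
One way such a coupling can be realized, sketched here under our own
simplifying assumptions rather than as the paper's exact formulation, is a
generalized eigenproblem in which each principal direction u trades explained
variance u^T C u against its Dirichlet energy u^T L u.

```python
# Hedged sketch of Laplacian-regularized PCA: principal directions are
# functions on the data manifold, penalized by their Dirichlet energy.
# This is an illustrative formulation, not the paper's exact one.
import numpy as np
from scipy.linalg import eigh

def laplacian_regularized_pca(X, L, lam=0.1, k=2):
    # X: (n_samples, n_features), assumed centered.
    # L: (n_features, n_features) Laplacian encoding the manifold.
    C = (X.T @ X) / X.shape[0]
    # Maximize u^T C u / (u^T u + lam * u^T L u) via eigh(C, I + lam*L).
    vals, vecs = eigh(C, np.eye(L.shape[0]) + lam * L)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order]

# Toy usage with a chain-graph Laplacian over 20 features.
n = 20
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
X = np.random.randn(200, n)
X -= X.mean(axis=0)
U = laplacian_regularized_pca(X, L, lam=0.5, k=2)
```
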
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are central themes in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. Popular examples of
such priors include sparsity and group sparsity (to capture the
compressibility of natural signals and images), total variation and analysis
sparsity (to promote piecewise regularity), and low-rank (as a natural
extension of sparsity to matrix-valued data).
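
Each of these priors comes with a closed-form proximal operator, which is the
computational building block of the splitting scheme discussed at the end of
this abstract. The formulas below are the standard ones, shown for
concreteness rather than taken from the chapter.

```python
# Proximal operators of three low-complexity priors.
import numpy as np

def prox_l1(x, t):
    # Sparsity: soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_group_l2(x, t, groups):
    # Group sparsity: block soft-thresholding; `groups` is a list of index arrays.
    out = x.copy()
    for g in groups:
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= t else (1.0 - t / nrm) * x[g]
    return out

def prox_nuclear(X, t):
    # Low-rank: singular value thresholding.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * prox_l1(s, t)) @ Vt
```
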
Our aim is to provide a unified treatment of all these regularizations under a
single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop toward understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
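
For concreteness, here is the textbook forward-backward (ISTA) iteration for
the ℓ1-regularized least-squares instance; a standard sketch, not the
chapter's implementation.

```python
# Forward-backward splitting (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
import numpy as np

def forward_backward_l1(A, y, lam, n_iter=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = gradient Lipschitz constant
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                                  # forward (gradient) step
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox) step
    return x
```
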
DIMAL: Deep Isometric Manifold Learning Using Sparse Geodesic Sampling
This paper explores a fully unsupervised deep learning approach for computing
distance-preserving maps that generate low-dimensional embeddings for a certain
class of manifolds. We use the Siamese configuration to train a neural network
to solve the problem of least squares multidimensional scaling for generating
maps that approximately preserve geodesic distances. By training with only a
few landmarks, we show a significantly improved local and nonlocal
generalization of the isometric mapping as compared to analogous non-parametric
counterparts. Importantly, the combination of a deep-learning framework with a
multidimensional scaling objective enables a numerical analysis of network
architectures to aid in understanding their representation power. This provides
a geometric perspective on the generalizability of deep learning.
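
A hedged sketch of the Siamese idea: a single shared network maps input points
into the low-dimensional embedding, and the least-squares MDS stress is
evaluated on pairs of landmarks whose geodesic distances were precomputed. The
architecture and names below are illustrative, not the paper's exact
configuration.

```python
# Siamese network trained with a least-squares MDS objective: embedding
# distances should match precomputed geodesic distances between landmarks.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    def __init__(self, d_in, d_out=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, d_out),
        )

    def forward(self, x):
        return self.net(x)

def mds_stress(f, xi, xj, d_geo):
    # xi, xj: batches of paired landmark coordinates; d_geo: their geodesic distances.
    d_emb = torch.norm(f(xi) - f(xj), dim=1)
    return ((d_emb - d_geo) ** 2).mean()

# Usage: minimize mds_stress over landmark pairs with any optimizer,
# e.g. torch.optim.Adam(f.parameters(), lr=1e-3).
```
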