Extrinsic Methods for Coding and Dictionary Learning on Grassmann Manifolds
Sparsity-based representations have recently led to notable results in
various visual recognition tasks. In a separate line of research, Riemannian
manifolds have been shown useful for dealing with features and models that do
not lie in Euclidean spaces. With the aim of building a bridge between the two
realms, we address the problem of sparse coding and dictionary learning over
the space of linear subspaces, which form Riemannian structures known as
Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into
the space of symmetric matrices by an isometric mapping. This in turn enables
us to extend two sparse coding schemes to Grassmann manifolds. Furthermore, we
propose closed-form solutions for learning a Grassmann dictionary, atom by
atom. Lastly, to handle non-linearity in data, we extend the proposed Grassmann
sparse coding and dictionary learning algorithms through embedding into Hilbert
spaces.
Experiments on several classification tasks (gender recognition, gesture
classification, scene analysis, face recognition, action recognition and
dynamic texture classification) show that the proposed approaches achieve
considerable improvements in discrimination accuracy, in comparison to
state-of-the-art methods such as kernelized Affine Hull Method and
graph-embedding Grassmann discriminant analysis.
Comment: Appearing in the International Journal of Computer Vision.
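The isometric embedding the abstract refers to is commonly realized as the projection mapping, which represents a subspace by the symmetric matrix YY^T built from any orthonormal basis Y. A minimal NumPy sketch of that idea and of the (chordal) distance it induces — function names are illustrative, not taken from the paper:

```python
import numpy as np

def grassmann_embed(Y):
    """Map a point on the Grassmann manifold, given as an orthonormal
    basis Y (n x p), into the space of symmetric matrices via Y Y^T."""
    return Y @ Y.T

def chordal_distance(Y1, Y2):
    """Distance between two subspaces induced by the embedding
    (Frobenius norm of the difference of projection matrices)."""
    return np.linalg.norm(grassmann_embed(Y1) - grassmann_embed(Y2), "fro")

rng = np.random.default_rng(0)
# Two random 2-dimensional subspaces of R^5, as orthonormal bases.
Y1, _ = np.linalg.qr(rng.standard_normal((5, 2)))
Y2, _ = np.linalg.qr(rng.standard_normal((5, 2)))
d = chordal_distance(Y1, Y2)

# The embedding depends only on the subspace, not on the basis chosen:
# rotating Y1 by any orthogonal 2x2 matrix leaves Y1 Y1^T unchanged.
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))
assert np.allclose(grassmann_embed(Y1), grassmann_embed(Y1 @ Q))
```

This basis-invariance is what makes it meaningful to run Euclidean sparse-coding machinery on the embedded symmetric matrices.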
Manifold Learning in Medical Imaging
Manifold learning theory has seen a surge of interest in the modeling of large and extensive datasets in medical imaging, since manifold methods capture the essence of the data in a way that fundamentally outperforms linear methodologies, which are designed to describe data that is essentially flat. This issue is particularly relevant for medical imaging data, where linear techniques are frequently unsuitable for capturing variations in anatomical structures. In many cases, there is enough structure in the data (CT, MRI, ultrasound) that a lower-dimensional object, such as a manifold, can describe its degrees of freedom. Still, complex multivariate distributions tend to exhibit highly variable structural topologies that are impossible to capture with a single manifold learning algorithm. This chapter presents recent techniques developed in manifold theory for medical imaging analysis, enabling statistical organ shape modeling, image segmentation and registration based on the concept of navigating manifolds, classification, and disease prediction models based on discriminant manifolds. We present the theoretical basis of these works, with illustrative results on their applications to various organs and pathologies, including neurodegenerative diseases and spinal deformities.
Parametric Regression on the Grassmannian
We address the problem of fitting parametric curves on the Grassmann manifold
for the purpose of intrinsic parametric regression. As customary in the
literature, we start from the energy minimization formulation of linear
least-squares in Euclidean spaces and generalize this concept to general
nonflat Riemannian manifolds, following an optimal-control point of view. We
then specialize this idea to the Grassmann manifold and demonstrate that it
yields a simple, extensible and easy-to-implement solution to the parametric
regression problem. In fact, it allows us to extend the basic geodesic model to
(1) a time-warped variant and (2) cubic splines. We demonstrate the utility of
the proposed solution on different vision problems, such as shape regression as
a function of age, traffic-speed estimation and crowd-counting from
surveillance video clips. Most notably, these problems can be conveniently
solved within the same framework without any specifically-tailored steps along
the processing pipeline.
Comment: 14 pages, 11 figures.
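The basic geodesic model underlying this kind of regression can be evaluated in closed form on the Grassmannian via the standard SVD-based exponential map. A hedged sketch of that map (this is the classical Edelman-Arias-Smith formula, not the paper's own code; `grassmann_exp` is an illustrative name):

```python
import numpy as np

def grassmann_exp(Y0, H, t=1.0):
    """Point at time t along the Grassmann geodesic through the subspace
    spanned by the orthonormal basis Y0 (n x p), with tangent direction H
    satisfying Y0^T H = 0.  Uses the thin SVD H = U diag(s) V^T."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    # Y(t) = Y0 V cos(s t) V^T + U sin(s t) V^T  (cos/sin applied per column)
    return (Y0 @ Vt.T) * np.cos(s * t) @ Vt + (U * np.sin(s * t)) @ Vt

rng = np.random.default_rng(0)
Y0, _ = np.linalg.qr(rng.standard_normal((6, 2)))
A = rng.standard_normal((6, 2))
H = A - Y0 @ (Y0.T @ A)  # project A onto the tangent space at Y0

Y_half = grassmann_exp(Y0, H, t=0.5)
# The curve stays on the manifold: the basis remains orthonormal.
assert np.allclose(Y_half.T @ Y_half, np.eye(2))
```

Fitting the regression then amounts to optimizing the base point Y0 and tangent H so that such a curve best matches the time-indexed data.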
Numerical and Statistical Methods for the Analysis of Trajectories in a Riemannian Geometry Framework
This PhD thesis proposes new Riemannian geometry tools for the analysis of longitudinal observations of neuro-degenerative subjects. First, we propose a numerical scheme to compute the parallel transport along geodesics and prove its convergence. This scheme is efficient as long as the co-metric can be computed efficiently. Then, we tackle the issue of Riemannian manifold learning. We provide some minimal theoretical sanity checks to illustrate that the procedure of Riemannian metric estimation can be relevant. Finally, we propose to learn a Riemannian manifold so as to model subjects' progressions as geodesics on this manifold, which allows fast inference, extrapolation and classification of the subjects.
Image Registration and Predictive Modeling: Learning the Metric on the Space of Diffeomorphisms
We present a method for metric optimization in the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework, by treating the induced Riemannian metric on the space of diffeomorphisms as a kernel in a machine learning context. For simplicity, we choose kernel Fisher Linear Discriminant Analysis (KLDA) as the framework. Optimizing the kernel parameters in an Expectation-Maximization framework, we define model fidelity via the hinge loss of the decision function. The resulting algorithm optimizes the parameters of the LDDMM norm-inducing differential operator as a solution to a group-wise registration and classification problem. In practice, this may lead to a biology-aware registration, focusing its attention on the predictive task at hand, such as identifying the effects of disease. We first tested our algorithm on a synthetic dataset, showing that our parameter selection improves registration quality and classification accuracy. We then tested the algorithm on 3D subcortical shapes from the schizophrenia cohort SchizConnect. Our schizophrenia-control predictive model showed significant improvement in ROC AUC compared to baseline parameters.
Information Geometry of Wasserstein Statistics on Shapes and Affine Deformations
Information geometry and Wasserstein geometry are two main structures
introduced in a manifold of probability distributions, and they capture its
different characteristics. We study characteristics of Wasserstein geometry in
the framework of Li and Zhao (2023) for the affine deformation statistical
model, which is a multi-dimensional generalization of the location-scale model.
We compare merits and demerits of estimators based on information geometry and
Wasserstein geometry. The shape of a probability distribution and its affine
deformation are separated in the Wasserstein geometry, showing its robustness
against the waveform perturbation in exchange for the loss in Fisher
efficiency. We show that the Wasserstein estimator is the moment estimator in
the case of the elliptically symmetric affine deformation model. It coincides
with the information-geometrical estimator (maximum-likelihood estimator) when
and only when the waveform is Gaussian. The role of the Wasserstein efficiency
is elucidated in terms of robustness against waveform change.
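The coincidence claimed above is easy to see numerically in a one-dimensional location-scale model: the Wasserstein (moment) estimator always uses the sample mean and standard deviation, whatever the waveform, while the maximum-likelihood estimator agrees with it exactly in the Gaussian case and differs otherwise. A hedged sketch, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def moment_estimator(x):
    """Moment (Wasserstein) estimator for the location-scale model
    x = mu + sigma * z: sample mean and sample standard deviation."""
    return x.mean(), x.std()

# Gaussian waveform: the MLE of (mu, sigma) is exactly the moment estimator.
x_gauss = rng.normal(loc=2.0, scale=1.5, size=100_000)
mu_hat, sigma_hat = moment_estimator(x_gauss)

# Laplace waveform: the location MLE is the sample median -- a different
# functional from the sample mean, so the two estimators no longer coincide,
# trading Fisher efficiency for robustness to the waveform.
x_laplace = rng.laplace(loc=2.0, scale=1.0, size=100_000)
mu_moment, _ = moment_estimator(x_laplace)
mu_mle = np.median(x_laplace)
```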
Density estimation and modeling on symmetric spaces
In many applications, data and/or parameters are supported on non-Euclidean
manifolds. It is important to take into account the geometric structure of
manifolds in statistical analysis to avoid misleading results. Although there
has been a considerable focus on simple and specific manifolds, there is a lack
of general and easy-to-implement statistical methods for density estimation and
modeling on manifolds. In this article, we consider a very broad class of
manifolds: non-compact Riemannian symmetric spaces. For this class, we provide
a very general mathematical result for easily calculating volume changes of the
exponential and logarithm map between the tangent space and the manifold. This
allows one to define statistical models on the tangent space, push these models
forward onto the manifold, and easily calculate induced distributions by
Jacobians. To illustrate the statistical utility of this theoretical result, we
provide a general method to construct distributions on symmetric spaces. In
particular, we define the log-Gaussian distribution as an analogue of the
multivariate Gaussian distribution in Euclidean space. With these new kernels
on symmetric spaces, we also consider the problem of density estimation. Our
proposed approach can use any existing density estimation approach designed for
Euclidean spaces and push it forward to the manifold with an easy-to-calculate
adjustment. We provide theorems showing that the induced density estimators on
the manifold inherit the statistical optimality properties of the parent
Euclidean density estimator; this holds for both frequentist and Bayesian
nonparametric methods. We illustrate the theory and practical utility of the
proposed approach on the space of positive definite matrices.
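The simplest instance of this push-forward construction is the positive real line, where the exponential map is the ordinary exponential and the volume change of the logarithm map is 1/x: pushing a Gaussian on the tangent space forward yields the log-Gaussian (log-normal) density. A minimal illustrative sketch, not the paper's code:

```python
import numpy as np

def log_gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Gaussian density on the tangent space (the real line), pushed
    forward to the positive reals through exp, with the Jacobian
    correction 1/x coming from the logarithm map."""
    z = (np.log(x) - mu) / sigma
    return np.exp(-0.5 * z**2) / (x * sigma * np.sqrt(2.0 * np.pi))

# The Jacobian adjustment makes the induced density integrate to one
# on the manifold (checked here by trapezoidal quadrature).
xs = np.linspace(1e-9, 60.0, 1_000_000)
f = log_gaussian_pdf(xs)
mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(xs))
```

The same recipe — model on the tangent space, push forward by exp, correct by the Jacobian — is what the article generalizes to non-compact symmetric spaces.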
Generalized Linear Models for Geometrical Current predictors. An application to predict garment fit
The aim of this paper is to model an ordinal response variable in terms of vector-valued functional data included in a vector-valued RKHS. In particular, we focus on the vector-valued RKHS obtained when a geometrical object (a body) is characterized by a current, and on the ordinal regression model. A common way to solve this problem in functional data analysis is to express the data in the orthonormal basis given by the decomposition of the covariance operator. But our data present very important differences with respect to the usual functional data setting: on the one hand, they are vector-valued functions, and on the other, they are functions in an RKHS with a previously defined norm. We propose to use three different bases: the orthonormal basis given by the kernel that defines the RKHS, a basis obtained from the decomposition of the integral operator defined using the covariance function, and a third basis that combines the previous two. The three approaches are compared and applied to an interesting problem: building a model to predict the fit of children's garment sizes, based on a 3D database of the Spanish child population. Our proposal has been compared with alternative methods that explore the performance of other classifiers (Support Vector Machines and k-NN), and with the results of applying the classification method proposed in this work to different characterizations of the objects (landmarks and multivariate anthropometric measurements instead of currents), obtaining worse results in all these cases.
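In the discretized setting, the covariance-operator basis mentioned in the abstract is simply the eigenbasis of the sample covariance matrix (functional PCA). A hedged sketch of that construction on synthetic curves — not the paper's data, vector-valued setting, or code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Discretized functional data: 50 curves sampled at 100 grid points.
curves = np.cumsum(rng.standard_normal((50, 100)), axis=1)

# Basis from the decomposition of the empirical covariance operator:
# eigenvectors of the sample covariance matrix of the centered curves.
centered = curves - curves.mean(axis=0)
cov = centered.T @ centered / len(curves)
eigvals, basis = np.linalg.eigh(cov)
basis = basis[:, ::-1]  # reorder to descending eigenvalue order

# Each curve is represented by its coefficients in the leading k basis
# functions; these coefficients are the features fed to a classifier.
k = 5
coeffs = centered @ basis[:, :k]
recon = coeffs @ basis[:, :k].T  # low-rank reconstruction of the curves
```

The paper's kernel-defined and combined bases play the same role: they turn each functional observation into a finite coefficient vector for the ordinal regression model.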
Positive Definite Kernels in Machine Learning
This survey is an introduction to positive definite kernels and the set of methods they have inspired in the machine learning literature, namely kernel methods. We first discuss some properties of positive definite kernels as well as reproducing kernel Hilbert spaces, the natural extension of the set of functions associated with a kernel defined on a given space. We discuss at length the construction of kernel functions that take advantage of well-known statistical models. We provide an overview of numerous data-analysis methods which take advantage of reproducing kernel Hilbert spaces and discuss the idea of combining several kernels to improve performance on certain tasks. We also provide a short cookbook of different kernels which are particularly useful for certain data types such as images, graphs or speech segments.
Comment: draft; corrected a typo in a figure.
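The defining property of such kernels, and the basis for combining them, can be checked directly: the Gram matrix of a positive definite kernel on any finite point set is symmetric positive semi-definite, and sums of kernels preserve this. A small illustrative sketch:

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=0.5):
    # k(x, y) = exp(-gamma * ||x - y||^2), the Gaussian (RBF) kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))

K_rbf = gaussian_kernel(X, X)
K_lin = X @ X.T                  # the linear kernel
K_sum = K_rbf + K_lin            # a sum of kernels is again a kernel

# All three Gram matrices are symmetric positive semi-definite
# (eigenvalues non-negative up to numerical tolerance).
for K in (K_rbf, K_lin, K_sum):
    assert np.allclose(K, K.T)
    assert np.linalg.eigvalsh(K).min() > -1e-8
```

Combining kernels by (weighted) sums, as in the abstract's discussion of multiple-kernel methods, therefore always yields a valid kernel.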