
    Principal subbundles for dimension reduction

    In this paper we demonstrate how sub-Riemannian geometry can be used for manifold learning and surface reconstruction by combining local linear approximations of a point cloud to obtain lower-dimensional bundles. Local approximations obtained by local PCAs are collected into a rank $k$ tangent subbundle on $\mathbb{R}^d$, $k<d$, which we call a principal subbundle. This determines a sub-Riemannian metric on $\mathbb{R}^d$. We show that sub-Riemannian geodesics with respect to this metric can successfully be applied to a number of important problems, such as: explicit construction of an approximating submanifold $M$, construction of a representation of the point cloud in $\mathbb{R}^k$, and computation of distances between observations, taking the learned geometry into account. The reconstruction is guaranteed to equal the true submanifold in the limit case where tangent spaces are estimated exactly. Via simulations, we show that the framework is robust when applied to noisy data. Furthermore, the framework generalizes to observations on an a priori known Riemannian manifold.
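    The first step of this pipeline, fitting a rank-$k$ tangent frame at each point by a local PCA, is simple to illustrate. The sketch below shows that step only; it is not the authors' code, and the function name and the radius-based neighbourhood rule are our own choices:

```python
import numpy as np

def local_tangent_frame(X, x0, k, radius):
    """Estimate an orthonormal basis (d x k) of the tangent space at x0.

    X : (n, d) point cloud; x0 : (d,) query point; k : target rank;
    radius : size of the neighbourhood used for the local PCA.
    """
    nbrs = X[np.linalg.norm(X - x0, axis=1) < radius]
    centred = nbrs - nbrs.mean(axis=0)
    # The top-k right singular vectors are the principal directions of the
    # neighbourhood, i.e. the estimated tangent directions at x0.
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    return Vt[:k].T  # columns span the rank-k fibre of the subbundle at x0
```

    Collecting such frames over $\mathbb{R}^d$ yields the subbundle; the sub-Riemannian metric then restricts motion to these fibres, which is what makes the resulting geodesics follow the point cloud.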

    Geodesics in Heat

    We introduce the heat method for computing the geodesic distance to a specified subset (e.g., point or curve) of a given domain. The heat method is robust, efficient, and simple to implement since it is based on solving a pair of standard linear elliptic problems. The method represents a significant breakthrough in the practical computation of distance on a wide variety of geometric domains, since the resulting linear systems can be prefactored once and subsequently solved in near-linear time. In practice, distance can be updated via the heat method an order of magnitude faster than with state-of-the-art methods while maintaining a comparable level of accuracy. We provide numerical evidence that the method converges to the exact geodesic distance in the limit of refinement; we also explore smoothed approximations of distance suitable for applications where more regularity is required.
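    The three steps are easy to see in a toy setting. The following sketch is our own, not the authors' implementation (which targets general meshes and domains); it runs the method on a flat $n \times n$ grid with a 5-point Laplacian and Dirichlet boundaries: one heat solve, a pointwise gradient normalization, and one Poisson solve.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def heat_geodesic_distance(n, src, h=1.0, t=None):
    """Approximate geodesic distance on an n x n grid from flat index src."""
    D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    L = sp.kronsum(D, D).tocsc() / h**2   # 5-point Laplacian
    if t is None:
        t = h**2                          # short time ~ (mesh spacing)^2
    # Step I: integrate heat flow for time t: (I - t*L) u = delta_src.
    delta = np.zeros(n * n); delta[src] = 1.0
    u = spla.spsolve((sp.eye(n * n) - t * L).tocsc(), delta)
    # Step II: evaluate the normalised vector field X = -grad(u)/|grad(u)|.
    gy, gx = np.gradient(u.reshape(n, n), h)
    norm = np.hypot(gx, gy) + 1e-12
    gx, gy = -gx / norm, -gy / norm
    # Step III: solve the Poisson equation L*phi = div(X), shift to zero.
    div = np.gradient(gx, h, axis=1) + np.gradient(gy, h, axis=0)
    phi = spla.spsolve(L, div.ravel())
    return (phi - phi[src]).reshape(n, n)

# dist = heat_geodesic_distance(64, src=64 * 32 + 32)
```

    Both linear systems have fixed matrices, so they can be prefactored once (e.g., by a sparse Cholesky factorization) and re-solved cheaply for each new source, which is where the speedup claimed above comes from.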

    Estimating the Reach of a Manifold

    Various problems in manifold estimation make use of a quantity called the reach, denoted by $\tau_M$, which is a measure of the regularity of the manifold. This paper is the first investigation into the problem of how to estimate the reach. First, we study the geometry of the reach through an approximation perspective. We derive new geometric results on the reach for submanifolds without boundary. An estimator $\hat{\tau}$ of $\tau_M$ is proposed in a framework where tangent spaces are known, and bounds assessing its efficiency are derived. In the case of an i.i.d. random point cloud $\mathbb{X}_n$, $\hat{\tau}(\mathbb{X}_n)$ is shown to achieve uniform expected loss bounds over a $\mathcal{C}^3$-like model. Finally, we obtain upper and lower bounds on the minimax rate for estimating the reach.
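    For orientation, estimation with known tangent spaces builds on a classical identity going back to Federer, which expresses the reach as an infimum of a pointwise ratio over pairs of points; the block below states it in our own notation, not as a formula quoted from the paper.

```latex
% Classical identity (Federer): for a submanifold M without boundary,
\[
  \tau_M \;=\; \inf_{\substack{p,\, q \in M \\ p \neq q}}
    \frac{\lVert q - p \rVert^2}{2\, d\left(q - p,\; T_p M\right)}
\]
% A plug-in estimator \hat{\tau}(\mathbb{X}_n) replaces M by the sample
% and uses the known (or estimated) tangent spaces at the sample points.
```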

    RSA-INR: Riemannian Shape Autoencoding via 4D Implicit Neural Representations

    Shape encoding and shape analysis are valuable tools for comparing shapes and for dimensionality reduction. A specific framework for shape analysis is the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework, which is capable of shape matching and dimensionality reduction. Researchers have recently introduced neural networks into this framework. However, these works cannot match more than two objects simultaneously, or they have suboptimal performance in shape variability modeling. The latter limitation occurs because these works do not use state-of-the-art shape encoding methods. Moreover, the literature does not discuss the connection between the LDDMM Riemannian distance and the literature on Riemannian geometry for deep learning. Our work aims to bridge this gap by demonstrating how LDDMM can integrate Riemannian geometry into deep learning. Furthermore, we discuss how deep learning solves and generalizes the shape matching and dimensionality reduction formulations of LDDMM. We achieve both goals by designing a novel implicit encoder for shapes. This model extends a neural network-based algorithm for LDDMM-based pairwise registration, yields a nonlinear manifold PCA, and adds a Riemannian geometry aspect to deep learning models for shape variability modeling. Additionally, we demonstrate that the Riemannian geometry component improves the reconstruction procedure of the implicit encoder in terms of reconstruction quality and stability to noise. We hope our discussion paves the way for more research into how Riemannian geometry, shape/image analysis, and deep learning can be combined.
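    For readers unfamiliar with the distance being referred to, the block below gives the standard exact-matching form of the LDDMM Riemannian distance; this is the textbook formulation, not notation taken from the paper.

```latex
% LDDMM Riemannian distance: shapes are deformed by flows of time-dependent
% velocity fields v_t in a reproducing-kernel Hilbert space V.
\[
  d(q_0, q_1)^2 \;=\; \min_{v} \int_0^1 \lVert v_t \rVert_V^2 \, dt
  \quad \text{subject to} \quad
  \dot{\phi}_t = v_t \circ \phi_t, \quad \phi_0 = \mathrm{id}, \quad
  \phi_1 \cdot q_0 = q_1
\]
% Here \phi_1 \cdot q_0 is the action of the final diffeomorphism on the
% source shape; relaxing the endpoint constraint into a data-fidelity term
% gives the registration energies that the neural models approximate.
```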

    RDA-INR: Riemannian Diffeomorphic Autoencoding via Implicit Neural Representations

    Diffeomorphic registration frameworks such as Large Deformation Diffeomorphic Metric Mapping (LDDMM) are used in computer graphics and the medical domain for atlas building, statistical latent modeling, and pairwise and groupwise registration. In recent years, researchers have developed neural network-based approaches to diffeomorphic registration to improve the accuracy and computational efficiency of traditional methods. In this work, we focus on a limitation of neural network-based atlas building and statistical latent modeling methods, namely that they either (i) are resolution dependent or (ii) disregard any data/problem-specific geometry needed for proper mean-variance analysis. In particular, we overcome this limitation by designing a novel encoder based on resolution-independent implicit neural representations. The encoder achieves resolution invariance for LDDMM-based statistical latent modeling. Additionally, the encoder adds LDDMM Riemannian geometry to resolution-independent deep learning models for statistical latent modeling. We showcase that the Riemannian geometry aspect improves latent modeling and is required for a proper mean-variance analysis. Furthermore, to showcase the benefit of resolution independence for LDDMM-based data variability modeling, we show that our approach outperforms another neural network-based LDDMM latent code model. Our work paves the way for more research into how Riemannian geometry, shape/image analysis, and deep learning can be combined.
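    A minimal sketch of why implicit neural representations give resolution independence: a shape is stored as a function $f(x; z)$ queried at arbitrary coordinates rather than on a fixed grid. The code below is a generic conditioned-MLP decoder, not the paper's architecture; all names and sizes are our own.

```python
import torch
import torch.nn as nn

class ImplicitShape(nn.Module):
    """Generic conditioned MLP: f(x; z) -> signed distance / occupancy."""

    def __init__(self, latent_dim=64, hidden=128, coord_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coord_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        # x: (..., coord_dim) query coordinates; z: (latent_dim,) shape code.
        z = z.expand(*x.shape[:-1], -1)
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

# The same latent code decodes at any sampling resolution:
# model = ImplicitShape()
# z = torch.randn(64)
# coarse = model(torch.rand(32**3, 3), z)   # 32^3 query points
# fine = model(torch.rand(64**3, 3), z)     # 64^3 points, same shape code
```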

    Riemannian Multi-Manifold Modeling

    This paper advocates a novel framework for segmenting a dataset in a Riemannian manifold $M$ into clusters lying around low-dimensional submanifolds of $M$. Important examples of $M$, for which the proposed clustering algorithm is computationally efficient, are the sphere, the set of positive definite matrices, and the Grassmannian. The clustering problem with these examples of $M$ is already useful for numerous application domains such as action identification in video sequences, dynamic texture clustering, brain fiber segmentation in medical imaging, and clustering of deformed images. The proposed clustering algorithm constructs a data-affinity matrix by thoroughly exploiting the intrinsic geometry and then applies spectral clustering. The intrinsic local geometry is encoded by local sparse coding and, more importantly, by directional information of local tangent spaces and geodesics. Theoretical guarantees are established for a simplified variant of the algorithm even when the clusters intersect. To avoid complication, these guarantees assume that the underlying submanifolds are geodesic. Extensive validation on synthetic and real data demonstrates the resiliency of the proposed method against deviations from the theoretical model as well as its superior performance over state-of-the-art techniques.
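    A heavily simplified sketch of the affinity-plus-spectral-clustering stage follows; it is not the paper's algorithm (which also uses local sparse coding and geodesic information), and the specific affinity mixing Euclidean proximity with tangent-space alignment is our own illustration.

```python
import numpy as np

def tangent_affinity(X, frames, sigma=1.0):
    """X: (n, d) points; frames: (n, d, k) orthonormal local tangent bases."""
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(X[i] - X[j])
            # Cosines of the principal angles between the two tangent
            # subspaces are the singular values of the k x k overlap matrix.
            s = np.linalg.svd(frames[i].T @ frames[j], compute_uv=False)
            angle = np.arccos(np.clip(s.min(), 0.0, 1.0))  # largest angle
            W[i, j] = W[j, i] = np.exp(-(dist**2 + angle**2) / sigma**2)
    return W

# Cluster on the precomputed affinity, e.g.:
#   from sklearn.cluster import SpectralClustering
#   labels = SpectralClustering(n_clusters=2,
#                               affinity="precomputed").fit_predict(W)
```

    Points that are close but whose tangent spaces point in different directions, as happens near the intersection of two submanifolds, receive a small affinity, which is what lets the spectral step separate intersecting clusters.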

    Density estimation on an unknown submanifold

    We investigate density estimation from an $n$-sample in the Euclidean space $\mathbb{R}^D$, when the data is supported by an unknown submanifold $M$ of possibly unknown dimension $d < D$, under a reach condition. We study nonparametric kernel methods for pointwise and integrated loss, with data-driven bandwidths that incorporate some learning of the geometry via a local dimension estimator. When $f$ has Hölder smoothness $\beta$ and $M$ has regularity $\alpha$, in a sense to be defined, our estimator achieves the rate $n^{-\alpha \wedge \beta/(2\alpha \wedge \beta+d)}$, which does not depend on the ambient dimension $D$, and is asymptotically minimax for $\alpha \geq \beta$. Following Lepski's principle, a bandwidth selection rule is shown to achieve smoothness adaptation. We also investigate the case $\alpha \leq \beta$: by estimating, in some sense, the underlying geometry of $M$, we establish in dimension $d=1$ that the minimax rate is $n^{-\beta/(2\beta+1)}$, proving in particular that it does not depend on the regularity of $M$. Finally, a numerical implementation is conducted on some case studies in order to confirm the practical feasibility of our estimators.
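    The block below gives the generic form of such a dimension-adapted kernel estimator, with an estimated intrinsic dimension $\hat{d}$ replacing the ambient $D$ in the normalization; this is a schematic statement under our reading, not a formula quoted from the paper.

```latex
% Schematic dimension-adapted kernel estimator: the bandwidth normalisation
% uses an estimated intrinsic dimension \hat{d} instead of the ambient D,
% which is why the rates above are free of D.
\[
  \hat{f}_h(x) \;=\; \frac{1}{n\, h^{\hat{d}}} \sum_{i=1}^{n}
    K\!\left( \frac{x - X_i}{h} \right),
  \qquad X_1, \ldots, X_n \in \mathbb{R}^D
\]
```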