
    Towards Persistence-Based Reconstruction in Euclidean Spaces

    Manifold reconstruction has been extensively studied for the last decade or so, especially in two and three dimensions. Recently, significant improvements were made in higher dimensions, leading to new methods to reconstruct large classes of compact subsets of Euclidean space $\mathbb{R}^d$. However, the complexities of these methods scale up exponentially with $d$, which makes them impractical in medium or high dimensions, even for handling low-dimensional submanifolds. In this paper, we introduce a novel approach that stands in between classical reconstruction and topological estimation, and whose complexity scales up with the intrinsic dimension of the data. Specifically, when the data points are sufficiently densely sampled from a smooth $m$-submanifold of $\mathbb{R}^d$, our method retrieves the homology of the submanifold in time at most $c(m)\,n^5$, where $n$ is the size of the input and $c(m)$ is a constant depending solely on $m$. It can also provably handle a wide range of compact subsets of $\mathbb{R}^d$, though with worse complexities. Along the way to proving the correctness of our algorithm, we obtain new results on Čech, Rips, and witness complex filtrations in Euclidean spaces.
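    The headline result (retrieving the homology of a sampled $m$-submanifold) is stated in terms of Čech, Rips, and witness complex filtrations. The sketch below is not the paper's algorithm; it only illustrates, assuming the `gudhi` Python package is available, how the homology of a densely sampled submanifold (here a circle in $\mathbb{R}^3$) can be read off the long intervals of a Vietoris-Rips persistence diagram. The sampling, scale, and persistence cutoff are arbitrary choices for the example.

```python
# Not the paper's algorithm: a minimal illustration of reading off the homology
# of a densely sampled submanifold from the long intervals of a Vietoris-Rips
# persistence diagram. Assumes the `gudhi` package is installed.
import numpy as np
import gudhi

# Dense sample of a circle (a 1-submanifold) embedded in R^3.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
points = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]

# Rips filtration up to a modest, hand-picked scale.
rips = gudhi.RipsComplex(points=points, max_edge_length=0.8)
st = rips.create_simplex_tree(max_dimension=2)

# Long intervals reflect the homology of the underlying circle:
# one connected component (H0) and one 1-cycle (H1).
for dim, (birth, death) in st.persistence():
    if death - birth > 0.5:        # arbitrary cutoff separating long from short bars
        print(f"H{dim}: born {birth:.3f}, dies {death:.3f}")
```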

    Approximating Local Homology from Samples

    Recently, multi-scale notions of local homology (a variant of persistent homology) have been used to study the local structure of spaces around a given point from a point cloud sample. Current reconstruction guarantees rely on constructing embedded complexes, which becomes difficult in high dimensions. We show that the persistence diagrams used for estimating local homology can be approximated using families of Vietoris-Rips complexes, whose simple constructions are robust in any dimension. To the best of our knowledge, our results make applications based on local homology, such as stratification learning, feasible in high dimensions for the first time. Comment: 23 pages, 14 figures.
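    As a rough, hand-rolled illustration of probing local structure with Vietoris-Rips complexes (not the paper's construction, which comes with approximation guarantees): one common recipe restricts the sample to an annulus around the query point and computes Rips persistence there. The function name, radii, and the `ripser` dependency below are assumptions made for the example.

```python
# A rough illustration (not the paper's construction): probe the local structure
# around a query point by computing Rips persistence of the sample restricted to
# an annulus around that point. Assumes the `ripser` package is installed; the
# function name and radii below are made up for the example.
import numpy as np
from ripser import ripser

def local_rips_diagrams(points, query, r_inner, r_outer, maxdim=1):
    """Rips persistence diagrams of the points in the annulus
    r_inner <= |p - query| <= r_outer around `query`."""
    d = np.linalg.norm(points - query, axis=1)
    annulus = points[(d >= r_inner) & (d <= r_outer)]
    return ripser(annulus, maxdim=maxdim)["dgms"]

# Around a point in the interior of a planar patch in R^3, the annulus looks
# like a circle, so H1 of the annular Rips complex should show one long interval.
rng = np.random.default_rng(0)
patch = np.c_[rng.uniform(-1, 1, 2000), rng.uniform(-1, 1, 2000), np.zeros(2000)]
dgms = local_rips_diagrams(patch, query=np.zeros(3), r_inner=0.3, r_outer=0.5)
print(dgms[1])   # birth/death pairs in dimension 1
```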

    Exact Computation of a Manifold Metric, via Lipschitz Embeddings and Shortest Paths on a Graph

    Data-sensitive metrics adapt distances locally based on the density of data points, with the goal of aligning distances with some notion of similarity. In this paper, we give the first exact algorithm for computing a data-sensitive metric called the nearest neighbor metric. In fact, we prove the surprising result that a previously published 3-approximation is an exact algorithm. The nearest neighbor metric can be viewed as a special case of a density-based distance used in machine learning, or it can be seen as an example of a manifold metric. Previous computational research on such metrics despaired of computing exact distances on account of the apparent difficulty of minimizing over all continuous paths between a pair of points. We leverage the exact computation of the nearest neighbor metric to compute sparse spanners and persistent homology. We also explore the behavior of the metric built from point sets drawn from an underlying distribution and consider the more general case of inputs that are finite collections of path-connected compact sets. The main results connect several classical theories, such as the conformal change of Riemannian metrics, the theory of positive definite functions of Schoenberg, and the screw function theory of Schoenberg and von Neumann. We develop novel proof techniques based on the combination of screw functions and Lipschitz extensions that may be of independent interest. Comment: 15 pages.
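    The graph-based computation alluded to in the title (the previously published approximation, now shown to be exact) works with shortest paths on a graph over the sample whose edge weights are driven by squared Euclidean distances. The sketch below is only a hedged illustration of that shape of computation using SciPy shortest paths; the exact edge-weight normalization used in the paper is not reproduced here, so consult the paper before relying on the numbers.

```python
# A hedged sketch of the graph computation: build a complete graph on the sample
# whose edge weights are squared Euclidean distances (the exact normalization in
# the paper is not reproduced here), then take all-pairs shortest paths.
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(1)
P = rng.normal(size=(100, 2))            # sample points in the plane

W = distance_matrix(P, P) ** 2           # squared-distance edge weights
D = shortest_path(W, method="D")         # Dijkstra over the dense complete graph

print(D[0, 1])   # graph-metric distance between the first two sample points
```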

    On the Reconstruction of Geodesic Subspaces of $\mathbb{R}^N$

    We consider the topological and geometric reconstruction of a geodesic subspace of $\mathbb{R}^N$ both from the Čech and Vietoris-Rips filtrations on a finite, Hausdorff-close, Euclidean sample. Our reconstruction technique leverages the intrinsic length metric induced by the geodesics on the subspace. We consider the distortion and convexity radius as our sampling parameters for a successful reconstruction. For a geodesic subspace with finite distortion and positive convexity radius, we guarantee a correct computation of its homotopy and homology groups from the sample. For geodesic subspaces of $\mathbb{R}^2$, we also devise an algorithm to output a homotopy equivalent geometric complex that has a very small Hausdorff distance to the unknown shape of interest.
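    Once a suitable scale is known (the paper derives admissible scales from the distortion and convexity radius, which the snippet below does not compute), homology can be read off a Vietoris-Rips complex built on the sample. A minimal sketch, assuming the `gudhi` package and a hand-picked scale:

```python
# Minimal sketch, assuming `gudhi` and a hand-picked scale: Betti numbers of a
# Rips complex on a noisy sample of two disjoint circles in the plane.
import numpy as np
import gudhi

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
rng = np.random.default_rng(2)
circles = np.vstack([np.c_[np.cos(t) - 2.0, np.sin(t)],
                     np.c_[np.cos(t) + 2.0, np.sin(t)]])
sample = circles + rng.normal(scale=0.01, size=circles.shape)

st = gudhi.RipsComplex(points=sample, max_edge_length=0.4).create_simplex_tree(max_dimension=2)
st.persistence()                 # must be computed before reading Betti numbers
print(st.betti_numbers())        # expect [2, 2]: two components, two loops
```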

    Approximating Loops in a Shortest Homology Basis from Point Data

    Inference of topological and geometric attributes of a hidden manifold from its point data is a fundamental problem arising in many scientific studies and engineering applications. In this paper we present an algorithm to compute a set of loops from point data that presumably sample a smooth manifold $M \subset \mathbb{R}^d$. These loops approximate a shortest basis of the one-dimensional homology group $H_1(M)$ with coefficients in the finite field $\mathbb{Z}_2$. Previous results addressed the issue of computing the rank of the homology groups from point data, but there is no result on approximating the shortest basis of a manifold from its point sample. In arriving at our result, we also present a polynomial time algorithm for computing a shortest basis of $H_1(K)$ for any finite simplicial complex $K$ whose edges have non-negative weights.
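    For intuition about the simplicial-complex result, here is a hedged sketch of the special case where $K$ is a graph (a 1-dimensional complex) with non-negative edge weights: a shortest $\mathbb{Z}_2$ basis of $H_1(K)$ is then a minimum-weight cycle basis, which the classical Horton-style greedy below computes. The paper's algorithm handles general finite simplicial complexes; this snippet assumes the `networkx` package and only illustrates the greedy-independence idea.

```python
# Hedged sketch for the special case where K is a graph with non-negative edge
# weights: a shortest Z_2 basis of H_1(K) is then a minimum-weight cycle basis,
# computed here with a classical Horton-style greedy (candidate cycles from
# shortest-path trees plus a Z_2 independence test). Requires `networkx`.
import networkx as nx

def greedy_shortest_h1_basis(G):
    edges = list(G.edges())
    bit = {frozenset(e): i for i, e in enumerate(edges)}     # edge -> bit position
    rank = (G.number_of_edges() - G.number_of_nodes()
            + nx.number_connected_components(G))             # dim H_1 of a graph

    # Candidate cycles C(v, xy): shortest path v->x, edge xy, shortest path y->v.
    candidates = []
    for v in G.nodes():
        _, path = nx.single_source_dijkstra(G, v, weight="weight")
        for x, y in edges:
            if x not in path or y not in path:
                continue
            cyc = {frozenset((x, y))}
            for p in (path[x], path[y]):
                for a, b in zip(p, p[1:]):
                    cyc ^= {frozenset((a, b))}                # symmetric difference
            if cyc:
                w = sum(G.edges[tuple(e)]["weight"] for e in cyc)
                candidates.append((w, frozenset(cyc)))
    candidates.sort(key=lambda c: c[0])

    # Greedily keep candidates that are Z_2-independent of those already chosen,
    # using an XOR (Gaussian elimination) basis over edge-indicator bitmasks.
    xor_basis, chosen = {}, []
    for w, cyc in candidates:
        vec = 0
        for e in cyc:
            vec |= 1 << bit[e]
        while vec:
            h = vec.bit_length() - 1
            if h not in xor_basis:
                xor_basis[h] = vec
                chosen.append((w, cyc))
                break
            vec ^= xor_basis[h]
        if len(chosen) == rank:
            break
    return chosen

# Two triangles glued along an edge: H_1 has rank 2; both basis loops have length 3.
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 1), ("a", "c", 1),
                           ("c", "d", 1), ("b", "d", 1)])
for w, cyc in greedy_shortest_h1_basis(G):
    print(w, sorted(sorted(e) for e in cyc))
```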

    Topological exploration of artificial neuronal network dynamics

    One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics. We propose a method based on persistent homology to automatically classify network dynamics using topological features of spaces built from various spike-train distances. We investigate the efficacy of our method by simulating activity in three small artificial neural networks with different sets of parameters, giving rise to dynamics that can be classified into four regimes. We then compute three measures of spike train similarity and use persistent homology to extract topological features that are fundamentally different from those used in traditional methods. Our results show that a machine learning classifier trained on these features can accurately predict the regime of the network it was trained on and also generalize to other networks that were not presented during training. Moreover, we demonstrate that using features extracted from multiple spike-train distances systematically improves the performance of our method.
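    In the abstract's own terms the pipeline is: spike-train distances, then persistent homology of the resulting metric space, then topological features fed to a classifier. The sketch below strings those stages together with stand-ins (a binned spike-count distance rather than the paper's spike-train metrics, total persistence as the only feature, synthetic Poisson-like spike trains, and an assumed `ripser`/`scikit-learn` environment); it shows the shape of such a pipeline, not the paper's experiments.

```python
# Hedged end-to-end sketch: (1) a spike-train dissimilarity matrix, (2) persistent
# homology of the resulting metric space, (3) simple summary features, (4) a
# classifier. The distance and features here are stand-ins for the ones used in
# the paper. Requires `ripser` and `scikit-learn`.
import numpy as np
from ripser import ripser
from sklearn.ensemble import RandomForestClassifier

def binned_distance_matrix(spike_trains, t_max, n_bins=50):
    """Pairwise Euclidean distances between binned spike-count vectors."""
    counts = np.stack([np.histogram(s, bins=n_bins, range=(0, t_max))[0]
                       for s in spike_trains]).astype(float)
    return np.linalg.norm(counts[:, None, :] - counts[None, :, :], axis=-1)

def topological_features(D):
    """Total persistence in dimensions 0 and 1 as a crude feature vector."""
    dgms = ripser(D, distance_matrix=True, maxdim=1)["dgms"]
    feats = []
    for dgm in dgms:
        finite = dgm[np.isfinite(dgm[:, 1])]
        feats.append(np.sum(finite[:, 1] - finite[:, 0]) if len(finite) else 0.0)
    return np.array(feats)

# Synthetic stand-in data: each "recording" is a list of spike trains, labeled
# by a hypothetical dynamical regime (0 or 1).
rng = np.random.default_rng(3)
X, y = [], []
for label, rate in [(0, 5.0), (1, 20.0)]:
    for _ in range(30):
        trains = [np.sort(rng.uniform(0, 10, rng.poisson(rate))) for _ in range(40)]
        X.append(topological_features(binned_distance_matrix(trains, t_max=10)))
        y.append(label)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```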

    Geometric Inference on Kernel Density Estimates

    We show that geometric inference of a point cloud can be calculated by examining its kernel density estimate with a Gaussian kernel. This allows one to consider kernel density estimates, which are robust to spatial noise, subsampling, and approximate computation in comparison to raw point sets. This is achieved by examining the sublevel sets of the kernel distance, which isomorphically map to superlevel sets of the kernel density estimate. We prove new properties about the kernel distance, demonstrating stability results and allowing it to inherit reconstruction results from recent advances in distance-based topological reconstruction. Moreover, we provide an algorithm to estimate its topology using weighted Vietoris-Rips complexes. Comment: To appear in SoCG 2015. 36 pages, 5 figures.
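    The structural fact the abstract relies on, that sublevel sets of the kernel distance correspond to superlevel sets of the kernel density estimate, can be checked numerically: for a Gaussian kernel the squared kernel distance to the point set is an affine, decreasing function of the (unnormalized) KDE. A small numpy-only sketch, with an arbitrary bandwidth and sample:

```python
# Numerical check: with a Gaussian kernel, the squared kernel distance to the
# point set is an affine, decreasing function of the (unnormalized) KDE, so
# sublevel sets of one are superlevel sets of the other. Uses numpy only;
# the bandwidth and sample are arbitrary choices for this example.
import numpy as np

rng = np.random.default_rng(4)
P = rng.normal(size=(300, 2))                      # point cloud sample
sigma = 0.5                                        # kernel bandwidth

def k(a, b):                                       # Gaussian kernel
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * sigma ** 2))

def kde(x):                                        # unnormalized KDE at x
    return k(P, x).mean()

self_term = k(P[:, None, :], P[None, :, :]).mean() # (1/n^2) sum_{p,p'} k(p,p')

def kernel_dist_sq(x):                             # squared kernel distance to P
    return self_term + 1.0 - 2.0 * kde(x)          # k(x, x) = 1 for a Gaussian

# The two quantities order query points in exactly opposite ways:
queries = rng.uniform(-3, 3, size=(5, 2))
for q in queries:
    print(f"kde={kde(q):.4f}   kernel_dist^2={kernel_dist_sq(q):.4f}")
```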