The persistence landscape and some of its properties
Persistence landscapes map persistence diagrams into a function space, which
may often be taken to be a Banach space or even a Hilbert space. In the latter
case, it is a feature map and there is an associated kernel. The main advantage
of this summary is that it allows one to apply tools from statistics and
machine learning. Furthermore, the mapping from persistence diagrams to
persistence landscapes is stable and invertible. We introduce a weighted
version of the persistence landscape and define a one-parameter family of
Poisson-weighted persistence landscape kernels that may be useful for learning.
We also demonstrate some additional properties of the persistence landscape.
First, the persistence landscape may be viewed as a tropical rational function.
Second, in many cases it is possible to exactly reconstruct all of the
component persistence diagrams from an average persistence landscape. It
follows that the persistence landscape kernel is characteristic for certain
generic empirical measures. Finally, the persistence landscape distance may be
arbitrarily small compared to the interleaving distance.
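As a concrete illustration of the definition used here (not code from the paper), the following numpy sketch samples the first few landscape functions of a persistence diagram on a grid: each point (b, d) contributes a tent function max(0, min(t - b, d - t)), and the k-th landscape function at t is the k-th largest tent value. The function name, grid, and example diagram are illustrative choices.

```python
import numpy as np

def persistence_landscape(diagram, grid, k_max=3):
    """Sample the first k_max landscape functions of a persistence
    diagram on a grid of t-values.

    Each point (b, d) contributes a tent function
        tent(t) = max(0, min(t - b, d - t)),
    and lambda_k(t) is the k-th largest tent value at t.
    """
    diagram = np.asarray(diagram, dtype=float)
    births, deaths = diagram[:, 0:1], diagram[:, 1:2]            # (n, 1)
    t = np.asarray(grid, dtype=float)[None, :]                   # (1, m)
    tents = np.maximum(0.0, np.minimum(t - births, deaths - t))  # (n, m)
    tents_sorted = -np.sort(-tents, axis=0)                      # descending in k
    landscapes = np.zeros((k_max, t.shape[1]))
    k = min(k_max, tents_sorted.shape[0])
    landscapes[:k] = tents_sorted[:k]
    return landscapes

# Two features: lambda_1 is the upper envelope of the two tents,
# lambda_2 is their overlap.
grid = np.linspace(0.0, 5.0, 501)
lams = persistence_landscape([(0.0, 2.0), (1.0, 4.0)], grid, k_max=2)
```

Sampled landscapes of two diagrams can then be compared with an L2 inner product approximated on the grid (e.g. `np.trapz(lams_a * lams_b, grid)` summed over the landscape levels), which is one simple discretization of the associated kernel.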
Topological Data Analysis of Task-Based fMRI Data from Experiments on Schizophrenia
We use methods from computational algebraic topology to study functional
brain networks, in which nodes represent brain regions and weighted edges
encode the similarity of fMRI time series from each region. With these tools,
which allow one to characterize topological invariants such as loops in
high-dimensional data, we are able to gain insight into low-dimensional
structures in networks in a way that complements traditional approaches that
are based on pairwise interactions. In the present paper, we use persistent
homology to analyze networks that we construct from task-based fMRI data from
schizophrenia patients, healthy controls, and healthy siblings of schizophrenia
patients. We thereby explore the persistence of topological structures such as
loops at different scales in these networks. We use persistence landscapes and
persistence images to create output summaries from our persistent-homology
calculations, and we study the persistence landscapes and images using
k-means clustering and community detection. Based on our analysis of
persistence landscapes, we find that the members of the sibling cohort have
topological features (specifically, their 1-dimensional loops) that are
distinct from the other two cohorts. From the persistence images, we are able
to distinguish all three subject groups and to determine the brain regions in
the loops (with four or more edges) that allow us to make these distinctions.
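The sketch below illustrates a pipeline of this general shape under stated assumptions; it is not the authors' construction. It builds a Vietoris-Rips filtration on 1 - correlation with gudhi (a stand-in for the weighted-network filtration used in the paper), summarizes the 1-dimensional loops with persistence landscapes, and clusters subjects with k-means from scikit-learn. All data shapes, parameters, and function names are hypothetical.

```python
import numpy as np
import gudhi
from sklearn.cluster import KMeans

def loops_from_timeseries(ts):
    """ts: (n_regions, n_timepoints) time series for one subject.
    Builds a weighted network from pairwise correlations, converts the
    weights to a dissimilarity, and returns the 1-dimensional (loop)
    persistence intervals of a Vietoris-Rips filtration.
    """
    corr = np.corrcoef(ts)
    dist = 1.0 - corr                      # similar regions -> short edges
    np.fill_diagonal(dist, 0.0)
    rips = gudhi.RipsComplex(distance_matrix=dist, max_edge_length=2.0)
    st = rips.create_simplex_tree(max_dimension=2)
    st.persistence()
    return st.persistence_intervals_in_dimension(1)

def landscape_vector(intervals, grid, k_max=3):
    """Flatten the first k_max landscape functions into one feature vector."""
    if len(intervals) == 0:
        return np.zeros(k_max * len(grid))
    b, d = intervals[:, 0:1], intervals[:, 1:2]
    tents = np.maximum(0.0, np.minimum(grid[None, :] - b, d - grid[None, :]))
    top = -np.sort(-tents, axis=0)
    out = np.zeros((k_max, len(grid)))
    out[:min(k_max, top.shape[0])] = top[:k_max]
    return out.ravel()

# Cluster subjects by their landscape summaries (synthetic stand-in data).
rng = np.random.default_rng(0)
subjects = [rng.standard_normal((90, 200)) for _ in range(6)]   # 90 regions
grid = np.linspace(0.0, 1.5, 200)
X = np.array([landscape_vector(loops_from_timeseries(s), grid) for s in subjects])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```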
Persistence codebooks for topological data analysis
Persistent homology is a rigorous mathematical theory that provides a robust descriptor of data in the form of persistence diagrams (PDs), which are 2D multisets of points. Their variable size, however, makes them difficult to combine with typical machine learning workflows. In this paper we introduce persistence codebooks, a novel expressive and discriminative fixed-size vectorized representation of PDs that adapts to the inherent sparsity of persistence diagrams. To this end, we adapt bag-of-words, vectors of locally aggregated descriptors, and Fisher vectors for the quantization of PDs. Persistence codebooks represent PDs in a convenient way for machine learning and statistical analysis and have a number of favorable practical and theoretical properties, including 1-Wasserstein stability. We evaluate the presented representations on several heterogeneous datasets and show their (high) discriminative power. Our approach yields comparable, and sometimes even higher, performance in much less time than alternative approaches.
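As a minimal sketch of the bag-of-words variant only (the paper also develops weighted codebooks, VLAD, and Fisher-vector encodings, which are not shown, and this is not the authors' implementation), one can quantize the (birth, persistence) points of training diagrams with k-means and encode each diagram as a histogram over the learned codewords. Function names and parameters below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_codebook(diagrams, n_words=16, random_state=0):
    """Learn a codebook by k-means quantization of all diagram points,
    represented in (birth, persistence) coordinates."""
    pts = np.vstack([np.column_stack([d[:, 0], d[:, 1] - d[:, 0]]) for d in diagrams])
    return KMeans(n_clusters=n_words, n_init=10, random_state=random_state).fit(pts)

def bag_of_words(diagram, codebook):
    """Encode one persistence diagram as a normalized histogram of
    codeword assignments: a fixed-size vector regardless of diagram size."""
    pts = np.column_stack([diagram[:, 0], diagram[:, 1] - diagram[:, 0]])
    words = codebook.predict(pts)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy usage with random diagrams of varying size.
rng = np.random.default_rng(1)
train = [np.sort(rng.uniform(0, 1, (rng.integers(5, 30), 2)), axis=1) for _ in range(20)]
cb = fit_codebook(train, n_words=8)
vec = bag_of_words(train[0], cb)   # shape (8,), the same for every diagram
```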
PLLay: Efficient Topological Layer based on Persistence Landscapes
We propose PLLay, a novel topological layer for general deep learning models based on persistence landscapes, in which we can efficiently exploit the underlying topological features of the input data structure. In this work, we show differentiability with respect to layer inputs for a general persistent homology with arbitrary filtration. Thus, our proposed layer can be placed anywhere in the network and feed critical information on the topological features of the input data into subsequent layers to improve the learnability of the network toward a given task. A task-optimal structure of PLLay is learned during training via backpropagation, without requiring any input featurization or data preprocessing. We provide a novel adaptation for the DTM function-based filtration and show that the proposed layer is robust against noise and outliers through a stability analysis. We demonstrate the effectiveness of our approach via classification experiments on various datasets.
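A minimal sketch of the weighting idea, not the authors' implementation: assuming the persistence landscape values have already been sampled upstream, a layer can learn task-optimal weights over landscape levels and grid points by backpropagation. The differentiable computation of the landscapes from raw inputs, which is the main technical contribution of PLLay, is not reproduced here; all names and shapes are hypothetical (PyTorch assumed).

```python
import torch
import torch.nn as nn

class LandscapeWeightingLayer(nn.Module):
    """Toy stand-in for a landscape-based topological layer: it takes
    pre-sampled landscape values of shape (batch, k_landscapes, n_grid)
    and returns a learned weighted average over landscape levels and
    grid points, followed by a linear projection. The weights are
    trained end to end with the rest of the network."""
    def __init__(self, k_landscapes, n_grid, n_out=16):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(k_landscapes, n_grid))
        self.proj = nn.Linear(k_landscapes, n_out)

    def forward(self, landscapes):
        # Softmax over all (level, grid) positions gives a convex weighting.
        w = torch.softmax(self.weights.view(-1), dim=0).view_as(self.weights)
        pooled = (landscapes * w).sum(dim=2)      # (batch, k_landscapes)
        return self.proj(pooled)

# Usage: plug the layer into an ordinary classifier head.
layer = LandscapeWeightingLayer(k_landscapes=3, n_grid=100, n_out=16)
x = torch.rand(8, 3, 100)    # hypothetical batch of sampled landscapes
features = layer(x)          # (8, 16), trainable by backpropagation
```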