Topological Machine Learning with Persistence Indicator Functions
Techniques from computational topology, in particular persistent homology,
are becoming increasingly relevant for data analysis. Their stable metrics
permit the use of many distance-based data analysis methods, such as
multidimensional scaling, while providing a firm theoretical ground. Many
modern machine learning algorithms, however, are based on kernels. This paper
presents persistence indicator functions (PIFs), which summarize persistence
diagrams, i.e., feature descriptors in topological data analysis. PIFs can be
calculated and compared in linear time and have many beneficial properties,
such as the availability of a kernel-based similarity measure. We demonstrate
their usage in common data analysis scenarios, such as confidence set
estimation and classification of complex structured data.
Comment: Topology-based Methods in Visualization 201
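The abstract states that PIFs summarize persistence diagrams and can be computed and compared in linear time. As a minimal sketch (an assumption about the construction, not the authors' code), a PIF can be read as the step function that counts how many persistence intervals are active at each scale; after sorting the birth/death events, both evaluation and comparison are linear in the number of events:

```python
def pif(diagram):
    """Persistence indicator function of a diagram, returned as sorted
    (x, count) breakpoints: count = number of intervals [birth, death)
    containing x.  Linear after sorting the 2n birth/death events."""
    events = []
    for birth, death in diagram:
        events.append((birth, 1))   # an interval becomes active
        events.append((death, -1))  # an interval dies
    events.sort()
    breakpoints, active = [], 0
    for x, delta in events:
        active += delta
        breakpoints.append((x, active))
    return breakpoints

def pif_value(breakpoints, t):
    """Evaluate the step function at t (0 before the first event)."""
    value = 0
    for x, count in breakpoints:
        if x <= t:
            value = count
        else:
            break
    return value
```

Because PIFs are ordinary step functions, an L^p distance between two of them (integrated over the merged breakpoints) yields the kind of kernel-compatible similarity measure the abstract mentions.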
Topological exploration of artificial neuronal network dynamics
One of the paramount challenges in neuroscience is to understand the dynamics
of individual neurons and how they give rise to network dynamics when
interconnected. Historically, researchers have resorted to graph theory,
statistics, and statistical mechanics to describe the spatiotemporal structure
of such network dynamics. Our novel approach employs tools from algebraic
topology to characterize the global properties of network structure and
dynamics.
We propose a method based on persistent homology to automatically classify
network dynamics using topological features of spaces built from various
spike-train distances. We investigate the efficacy of our method by simulating
activity in three small artificial neural networks with different sets of
parameters, giving rise to dynamics that can be classified into four regimes.
We then compute three measures of spike train similarity and use persistent
homology to extract topological features that are fundamentally different from
those used in traditional methods. Our results show that a machine learning
classifier trained on these features can accurately predict the regime of the
network it was trained on and also generalize to other networks that were not
presented during training. Moreover, we demonstrate that using features
extracted from multiple spike-train distances systematically improves the
performance of our method.
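The pipeline described above builds spaces from spike-train distances and extracts topological features from them. A minimal pure-Python sketch of the zero-dimensional part of that pipeline follows, with a toy binned spike-count distance standing in for the spike-train metrics the paper uses (both the distance and the helper names are illustrative assumptions, not the authors' code):

```python
import itertools

def binned_distance(spikes_a, spikes_b, t_max, n_bins):
    """Toy spike-train distance: Euclidean distance between binned
    spike-count vectors (a crude stand-in for van Rossum or
    Victor-Purpura style distances)."""
    def bin_counts(spikes):
        counts = [0] * n_bins
        for t in spikes:
            idx = min(int(t / t_max * n_bins), n_bins - 1)
            counts[idx] += 1
        return counts
    ca, cb = bin_counts(spikes_a), bin_counts(spikes_b)
    return sum((x - y) ** 2 for x, y in zip(ca, cb)) ** 0.5

def h0_persistence(dist):
    """0-dimensional persistence of the Vietoris-Rips filtration on a
    distance matrix: every point is born at 0, and components die at
    the minimum-spanning-tree edge lengths (Kruskal + union-find)."""
    n = len(dist)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    edges = sorted((dist[i][j], i, j)
                   for i, j in itertools.combinations(range(n), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # one connected component dies here
    # n points yield n - 1 finite bars plus one essential class
    return [(0.0, d) for d in deaths]
```

The resulting bars (birth, death) are exactly the kind of features that can be vectorized and handed to a standard machine learning classifier, as the abstract describes; higher-dimensional homology requires a full Rips complex and is usually delegated to a dedicated library.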
Persistence Flamelets: multiscale Persistent Homology for kernel density exploration
In recent years there has been noticeable interest in the study of the "shape
of data". Among the many ways a "shape" could be defined, topology is the most
general one, as it describes an object in terms of its connectivity structure:
connected components (topological features of dimension 0), cycles (features of
dimension 1) and so on. There is a growing number of techniques, generally
denoted as Topological Data Analysis, aimed at estimating topological
invariants of a fixed object; when we allow this object to change, however,
little has been done to investigate the evolution in its topology. In this work
we define the Persistence Flamelets, a multiscale version of one of the most
popular tools in TDA, the Persistence Landscape. We examine its theoretical
properties and show how it can be used to gain insight into the bandwidth
parameter of kernel density estimators (KDEs).
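The Persistence Landscape that the Flamelet extends across scales has a compact pointwise definition: the k-th landscape function at t is the k-th largest value among the "tent" functions raised over the diagram's points. A minimal sketch (the Flamelet itself would index such landscapes by the KDE bandwidth, which is not shown here):

```python
def landscape(diagram, k, t):
    """k-th persistence landscape function evaluated at t: the k-th
    largest value of the tent functions
    max(0, min(t - birth, death - t)) over all diagram points."""
    tents = sorted(
        (max(0.0, min(t - birth, death - t)) for birth, death in diagram),
        reverse=True,
    )
    return tents[k - 1] if k <= len(tents) else 0.0
```

Evaluating this family of functions on a grid of t-values for each bandwidth in a grid of bandwidths gives a two-parameter summary, which is the multiscale picture the Flamelet construction is after.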