Hardness of Approximation for Morse Matching
Discrete Morse theory has emerged as a powerful tool for a wide range of
problems, including the computation of (persistent) homology. In this context,
discrete Morse theory is used to reduce the problem of computing a topological
invariant of an input simplicial complex to computing the same topological
invariant of a (significantly smaller) collapsed cell or chain complex.
Consequently, devising methods for obtaining gradient vector fields on
complexes to reduce the size of the problem instance has become an emerging
theme over the last decade. While computing the optimal gradient vector field
on a simplicial complex is NP-hard, several heuristics have been observed to
compute near-optimal gradient vector fields on a wide variety of datasets.
Understanding the theoretical limits of these strategies is therefore a
fundamental problem in computational topology. In this paper, we consider the
approximability of maximization and minimization variants of the Morse matching
problem, posed as open problems by Joswig and Pfetsch. We establish hardness
results for Max-Morse matching and Min-Morse matching. In particular, we show
that, for a simplicial complex with n simplices and dimension d ≥ 3, it is
NP-hard to approximate Min-Morse matching within a factor of
O(n^{1-ε}), for any ε > 0. Moreover, using an L-reduction
from Degree 3 Max-Acyclic Subgraph to Max-Morse matching, we show that it is
both NP-hard and UGC-hard to approximate Max-Morse matching for simplicial
complexes of dimension d ≥ 2 within certain explicit constant factors.
Comment: 20 pages, 1 figure
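The reduction pipeline that motivates this line of work (pairing simplices into a gradient vector field and keeping only the unmatched, critical cells) can be illustrated with a minimal greedy heuristic. The sketch below performs elementary free-pair collapses; it is a generic illustration, not one of the specific heuristics or reductions studied in the paper, and the function names are chosen here for exposition.

```python
def faces(simplex):
    """Codimension-1 faces of a simplex, given as a sorted tuple of vertices."""
    return [tuple(v for j, v in enumerate(simplex) if j != i)
            for i in range(len(simplex))]

def greedy_collapse_matching(simplices):
    """Greedily pair each 'free' face (a simplex with exactly one coface)
    with that coface, i.e. perform elementary collapses. The resulting
    pairs form a discrete gradient; unmatched simplices are critical."""
    alive = set(simplices)
    matching = []
    changed = True
    while changed:
        changed = False
        for tau in sorted(alive, key=lambda s: (-len(s), s)):
            if tau not in alive:
                continue
            for sigma in faces(tau):
                # sigma is free if tau is its unique coface among live cells
                if sigma in alive and sum(
                        1 for t in alive
                        if len(t) == len(sigma) + 1 and sigma in faces(t)) == 1:
                    alive.discard(tau)
                    alive.discard(sigma)
                    matching.append((sigma, tau))
                    changed = True
                    break
    return matching, alive
```

On a solid triangle, for instance, this collapses everything down to a single critical vertex, consistent with the triangle being contractible; the hardness results above concern how close such matchings can provably get to the optimum on general complexes.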
Chunk Reduction for Multi-Parameter Persistent Homology
The extension of persistent homology to multi-parameter setups is an algorithmic challenge. Since most computation tasks scale badly with the size of the input complex, an important pre-processing step consists of simplifying the input while maintaining the homological information. We present an algorithm that drastically reduces the size of an input. Our approach is an extension of the chunk algorithm for persistent homology (Bauer et al., Topological Methods in Data Analysis and Visualization III, 2014). We show that our construction produces the smallest multi-filtered chain complex among all the complexes quasi-isomorphic to the input, improving on the guarantees of previous work in the context of discrete Morse theory. Our algorithm also offers an immediate parallelization scheme in shared memory. Already its sequential version compares favorably with existing simplification schemes, as we show by experimental evaluation.
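Algebraically, homology-preserving simplifications of this kind bottom out in eliminating one matched pair at a time from the boundary matrix. The sketch below shows that single elimination step over Z/2 coefficients, i.e. the generic Gaussian-elimination lemma behind Morse-theoretic reductions; it is not the chunk algorithm itself, and the function name and data layout are chosen for illustration.

```python
def eliminate_pair(columns, s, t):
    """One step of algebraic Morse (Gaussian) reduction over Z/2.
    columns: dict mapping each cell to the set of cells in its boundary.
    (s, t): a matched pair with s in the boundary of t.
    Returns the boundary map of the smaller, quasi-isomorphic complex."""
    assert s in columns[t]
    out = {}
    for c, bd in columns.items():
        if c in (s, t):
            continue
        new = set(bd)
        if s in new:          # clear the entry in row s by adding column t
            new ^= columns[t]  # symmetric difference = addition mod 2
        new.discard(s)         # drop rows of the two removed cells
        new.discard(t)
        out[c] = new
    return out
```

Iterating this step over an entire Morse matching produces the collapsed chain complex with the same homology, which is what makes such pre-processing effective before the (expensive) multi-parameter computations.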
Fast Minimal Presentations of Bi-graded Persistence Modules
Multi-parameter persistent homology is a recent branch of topological data
analysis. In this area, data sets are investigated through the lens of homology
with respect to two or more scale parameters. The high computational cost of
many algorithms calls for a preprocessing step to reduce the input size. In
general, a minimal presentation is the smallest possible representation of a
persistence module. Lesnick and Wright recently proposed an algorithm (the
LW-algorithm) for computing minimal presentations based on matrix reduction. In
this work, we propose, implement and benchmark several improvements over the
LW-algorithm. Most notably, we propose the use of priority queues to avoid
extensive scanning of the matrix columns, which constitutes the computational
bottleneck in the LW-algorithm, and we combine their algorithm with ideas from
the multi-parameter chunk algorithm by Fugacci and Kerber. Our extensive
experiments show that our algorithm outperforms the LW-algorithm and computes
the minimal presentation for data sets with millions of simplices within a few
seconds. Our software is publicly available.
Comment: This is an extended version of a paper that will appear at ALENEX 2021.
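The priority-queue idea can be illustrated in the simpler one-parameter setting: store each matrix column as a heap of row indices, so that the pivot (the largest nonzero row index mod 2) is found without rescanning the column after every addition. This is a generic sketch of heap-based matrix reduction over Z/2, not the LW-algorithm or the implementation described above, and the names are illustrative.

```python
import heapq

class HeapColumn:
    """Z/2 column stored as a max-heap of row indices (negated for heapq).
    Duplicate entries cancel in pairs, so the pivot is the largest row
    index occurring an odd number of times."""
    def __init__(self, rows=()):
        self.heap = [-r for r in rows]
        heapq.heapify(self.heap)

    def add(self, other):
        """Add another column mod 2 by pushing its entries lazily."""
        for e in other.heap:
            heapq.heappush(self.heap, e)

    def pivot(self):
        """Pop cancelling pairs until the top entry has odd multiplicity."""
        while self.heap:
            top = heapq.heappop(self.heap)
            if self.heap and self.heap[0] == top:
                heapq.heappop(self.heap)   # even multiplicity: cancels
            else:
                heapq.heappush(self.heap, top)
                return -top
        return None

def reduce_matrix(columns):
    """Standard left-to-right persistence reduction; pivot lookups avoid
    full column scans thanks to the heap representation."""
    cols = [HeapColumn(c) for c in columns]
    pivot_owner = {}
    for j, col in enumerate(cols):
        p = col.pivot()
        while p is not None and p in pivot_owner:
            col.add(cols[pivot_owner[p]])
            p = col.pivot()
        if p is not None:
            pivot_owner[p] = j
    return pivot_owner
```

On the boundary matrix of a filtered triangle, the returned pivot-to-column map encodes the persistence pairs; the same lazy-heap trick is what avoids the column-scanning bottleneck mentioned above.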
Computational and Theoretical Issues of Multiparameter Persistent Homology for Data Analysis
The basic goal of topological data analysis is to apply topology-based descriptors
to understand and describe the shape of data. In this context, homology is one of
the most relevant topological descriptors, well-appreciated for its discrete nature,
computability and dimension independence. A further development is provided
by persistent homology, which makes it possible to track homological features along a
one-parameter increasing sequence of spaces. Multiparameter persistent homology, also
called multipersistent homology, is an extension of the theory of persistent homology
motivated by the need to analyze data naturally described by several parameters,
such as vector-valued functions. Multipersistent homology presents several issues in
terms of feasibility of computations over real-sized data and theoretical challenges
in the evaluation of possible descriptors. The focus of this thesis is on the interplay
between persistent homology theory and discrete Morse theory. Discrete Morse
theory provides methods for reducing the computational cost of homology and persistent
homology by considering the discrete Morse complex generated by the discrete
Morse gradient in place of the original complex. The work of this thesis addresses
the problem of computing multipersistent homology, to make this tool usable in real
application domains. This requires both computational optimizations towards the
applications to real-world data, and theoretical insights for finding and interpreting
suitable descriptors. Our computational contribution is a new
Morse-inspired and fully discrete preprocessing algorithm. We show the feasibility
of our preprocessing over real datasets, and evaluate the impact of the proposed
algorithm as a preprocessing for computing multipersistent homology. A theoretical
contribution of this thesis is a new notion of optimality for such
a preprocessing in the multiparameter context. We show that the proposed notion
generalizes an already known optimality notion from the one-parameter case. Under
this definition, we show that the algorithm we propose as a preprocessing is optimal
in low dimensional domains. In the last part of the thesis, we consider preliminary
applications of the proposed algorithm in the context of topology-based multivariate
visualization by tracking critical features generated by a discrete gradient field compatible
with the multiple scalar fields under study. We discuss (dis)similarities of such
critical features with the state-of-the-art techniques in topology-based multivariate
data visualization.
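For concreteness, one standard way to pass from vector-valued vertex data (as in the multivariate setting above) to a multi-filtration on simplices is to grade each simplex by the entrywise maximum of its vertices' parameter vectors. This is a common lower-star-style convention, sketched here for illustration; it is not necessarily the exact construction used in the thesis.

```python
def simplex_grade(simplex, vertex_values):
    """Entrywise maximum of the vertices' parameter vectors: the earliest
    grade at which all vertices of the simplex are present."""
    grades = [vertex_values[v] for v in simplex]
    return tuple(max(g[i] for g in grades) for i in range(len(grades[0])))
```

Under this grading, every face of a simplex appears no later than the simplex itself in each parameter, so sublevel sets are simplicial complexes at every grade, which is exactly the compatibility a multi-parameter discrete gradient must respect.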
Geometric and Topological Combinatorics
The 2007 Oberwolfach meeting “Geometric and Topological Combinatorics” presented a great variety of investigations where topological and algebraic methods are brought into play to solve combinatorial and geometric problems, but also where geometric and combinatorial ideas are applied to topological questions.