Partitions versus sets: a case of duality
In a recent paper, Amini et al. introduce a general framework for proving
duality theorems between special decompositions and their dual combinatorial
objects, thereby unifying all known ad-hoc proofs in one single theorem. While
this unification is valuable, their main theorem remains quite technical and
offers little insight into why some decompositions admit dual objects while
others do not. The goal of this paper is both to generalise this framework
slightly and to give a simple, enlightening proof of its central theorem.
Nearly Tight Spectral Sparsification of Directed Hypergraphs by a Simple Iterative Sampling Algorithm
Spectral hypergraph sparsification, which is an attempt to extend well-known
spectral graph sparsification to hypergraphs, has been extensively studied over
the past few years. For undirected hypergraphs, Kapralov, Krauthgamer, Tardos,
and Yoshida (2022) have recently obtained an algorithm for constructing an
$\epsilon$-spectral sparsifier of optimal size $\tilde{O}(n\epsilon^{-2})$, where
$\tilde{O}$ suppresses the $\epsilon^{-1}$ and $\log n$ factors, while the optimal
sparsifier size has not been known for directed hypergraphs. In this paper, we
present the first algorithm for constructing an $\epsilon$-spectral
sparsifier for a directed hypergraph with $\tilde{O}(n^2\epsilon^{-2})$ hyperarcs. This improves
the previous bound by Kapralov, Krauthgamer, Tardos, and Yoshida (2021), and it
is optimal up to the $\epsilon^{-1}$ and $\log n$ factors since there is a
lower bound of $\Omega(n^2)$ even for directed graphs. For general directed
hypergraphs, we show the first non-trivial lower bound of
$\Omega(n^2/\epsilon)$.
Our algorithm can be regarded as an extension of the spanner-based graph
sparsification by Koutis and Xu (2016). To exhibit the power of the
spanner-based approach, we also examine a natural extension of Koutis and Xu's
algorithm to undirected hypergraphs. We show that it outputs an
$\epsilon$-spectral sparsifier of an undirected hypergraph with
$\tilde{O}(n r^3 \epsilon^{-2})$ hyperedges, where $r$ is the maximum size of a
hyperedge. Our analysis of the undirected case is based on that of Bansal,
Svensson, and Trevisan (2019), and the bound matches that of the hypergraph
sparsification algorithm by Bansal et al. We further show that our algorithm
inherits the advantages of spanner-based sparsification in that it is fast, can
be implemented in parallel, and can be made fault-tolerant.
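To give a feel for the spanner-based approach the abstract builds on, here is a minimal Python sketch of a Koutis–Xu-style pass for ordinary undirected graphs: repeatedly keep a greedy spanner outright, then coin-flip the remaining edges and compensate survivors' weights. Function names, the stretch value, and the sampling probability are illustrative choices, not taken from the paper.

```python
import random

def bounded_dist(adj, s, t, limit):
    """BFS distance from s to t, cut off beyond `limit` hops."""
    if s == t:
        return 0
    seen, frontier = {s}, [s]
    for d in range(1, limit + 1):
        nxt = []
        for x in frontier:
            for y in adj[x]:
                if y == t:
                    return d
                if y not in seen:
                    seen.add(y)
                    nxt.append(y)
        frontier = nxt
    return limit + 1  # "farther than limit"

def greedy_spanner(nodes, edges, stretch=3):
    """Greedy t-spanner (Althofer et al.): keep an edge only if its
    endpoints are currently more than `stretch` hops apart."""
    adj = {v: set() for v in nodes}
    kept = []
    for u, v in edges:
        if bounded_dist(adj, u, v, stretch) > stretch:
            adj[u].add(v)
            adj[v].add(u)
            kept.append((u, v))
    return kept

def spanner_sparsify(nodes, edges, rounds=2, seed=0):
    """Simplified spanner-based sparsification: per round, keep a spanner
    with its current weights, then keep each remaining edge with
    probability 1/4, scaling its weight by 4 to stay unbiased."""
    rng = random.Random(seed)
    weight = {e: 1.0 for e in edges}
    sparsifier = {}
    current = list(edges)
    for _ in range(rounds):
        spanner = set(greedy_spanner(nodes, current))
        rest = []
        for e in current:
            if e in spanner:
                sparsifier[e] = sparsifier.get(e, 0.0) + weight[e]
            elif rng.random() < 0.25:
                weight[e] *= 4.0
                rest.append(e)
        current = rest
    for e in current:  # edges surviving all rounds are kept as-is
        sparsifier[e] = sparsifier.get(e, 0.0) + weight[e]
    return sparsifier
```

On a triangle, the first two edges form the spanner and are kept at weight 1, while the third edge is either dropped or kept at weight 4, so the sparsifier preserves cut weights in expectation. This 1/4-sampling-with-4x-reweighting step is what keeps the estimator unbiased.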
Multiscale and High-Dimensional Problems
High-dimensional problems appear naturally in various scientific areas. Two primary examples are PDEs describing complex processes in computational chemistry and physics, and stochastic/parameter-dependent PDEs arising in uncertainty quantification and optimal control. Other highly visible examples come from big-data analysis, including regression and classification, which typically encounter high-dimensional data as input and/or output. High-dimensional problems cannot be solved by traditional numerical techniques because of the so-called curse of dimensionality. Rather, they require the development of novel theoretical and computational approaches to make them tractable and to capture fine resolutions and relevant features. Paradoxically, increasing computational power may even heighten this demand, since the wealth of new computational data itself becomes a major obstruction. Extracting essential information from complex structures and developing rigorous models to quantify the quality of information in a high-dimensional setting are challenging tasks from both a theoretical and a numerical perspective.
The last decade has seen the emergence of several new computational methodologies that address the obstacles to solving high-dimensional problems. These include adaptive methods based on mesh refinement or sparsity, random forests, model reduction, compressed sensing, sparse-grid and hyperbolic-wavelet approximations, and various new tensor structures. Their common feature is the nonlinearity of the solution method, which prioritizes variables and separates solution characteristics living on different scales. These methods have already drastically advanced the frontiers of computability for certain problem classes.
This workshop aimed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computational methods and to promote the exchange of ideas emerging in various disciplines about how to treat multiscale and high-dimensional problems.
How to Walk Your Dog in the Mountains with No Magic Leash
We describe an $O(\log n)$-approximation algorithm for computing the
homotopic Fréchet distance between two polygonal curves that lie on the
boundary of a triangulated topological disk. Prior to this work, algorithms
were known only for curves in the Euclidean plane with polygonal obstacles.
A key technical ingredient in our analysis is an $O(\log n)$-approximation
algorithm for computing the minimum height of a homotopy between two curves. No
algorithms were previously known for approximating this parameter.
Surprisingly, it is not even known whether computing either the homotopic
Fréchet distance or the minimum height of a homotopy is in NP.
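For intuition about the distance notion involved, the ordinary (non-homotopic) *discrete* Fréchet distance between two polygonal curves can be computed exactly by the classic Eiter–Mannila dynamic program; the homotopic, continuous variant treated in the paper is far harder. A minimal sketch:

```python
from functools import lru_cache
from math import dist  # Euclidean distance, Python 3.8+

def discrete_frechet(P, Q):
    """Discrete Frechet distance between point sequences P and Q:
    the smallest leash length over all monotone couplings,
    via the Eiter-Mannila recurrence with memoization."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:                      # Q may only advance
            return max(c(0, j - 1), d)
        if j == 0:                      # P may only advance
            return max(c(i - 1, 0), d)
        # either curve (or both) advances one vertex
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)
```

For two parallel unit-separated segments sampled at matching vertices the answer is 1.0, matching the intuition of a dog walked on the shortest possible leash.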
Algorithmic Meta-Theorems
Algorithmic meta-theorems are general algorithmic results applying to a whole
range of problems, rather than just to a single problem alone. They often have
a "logical" and a "structural" component, that is they are results of the form:
every computational problem that can be formalised in a given logic L can be
solved efficiently on every class C of structures satisfying certain
conditions. This paper gives a survey of algorithmic meta-theorems obtained in
recent years and the methods used to prove them. As many meta-theorems use
results from graph minor theory, we give a brief introduction to the theory
developed by Robertson and Seymour for their proof of the graph minor theorem
and state the main algorithmic consequences of this theory as far as they are
needed in the theory of algorithmic meta-theorems.
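To make the "logical" component concrete with a standard example (not drawn from the survey itself): 3-colourability is expressible in monadic second-order logic over graphs, so by Courcelle's theorem it is decidable in linear time on every class of graphs of bounded treewidth. The formula quantifies over three vertex sets that cover the vertices and asserts that no edge is monochromatic:

```latex
\exists R\,\exists G\,\exists B\;
\Bigl[\,\forall v\,\bigl(R(v)\lor G(v)\lor B(v)\bigr)\;\land\;
\forall u\,\forall v\,\Bigl(E(u,v)\rightarrow
\neg\bigl((R(u)\land R(v))\lor(G(u)\land G(v))\lor(B(u)\land B(v))\bigr)\Bigr)\Bigr]
```

Here the second-order quantifiers $\exists R, \exists G, \exists B$ range over sets of vertices, which is exactly what places the property in MSO rather than first-order logic.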