Low Randomness Rumor Spreading via Hashing
We consider the classical rumor spreading problem, where a piece of information must be disseminated from a single node to all n nodes of a given network. We devise two simple push-based protocols, in which nodes choose the neighbor they send the information to in each round using pairwise independent hash functions or a pseudo-random generator, respectively. For several well-studied topologies our algorithms use exponentially fewer random bits than previous protocols. For example, in complete graphs, expanders, and random graphs only a polylogarithmic number of random bits are needed in total to spread the rumor in O(log n) rounds with high probability. Previous explicit algorithms require Omega(n) random bits to achieve the same round complexity. For complete graphs, the amount of randomness used by our hashing-based algorithm is within an O(log n)-factor of the theoretical minimum determined by [Giakkoupis and Woelfel, 2011].
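A minimal simulation of such a hashing-based push protocol on the complete graph, with parameters of my own choosing: the hash family h_t(x) = ((a*x + b) mod p) mod n is the standard pairwise-independent construction, and the paper's actual protocols differ in details.

```python
import random

def push_rumor(n, seed=0):
    """Toy push protocol on the complete graph K_n.

    Each round t draws a fresh pairwise-independent hash
    h_t(x) = ((a*x + b) mod p) mod n, i.e. only O(log n) fresh random
    bits per round, and every informed node u pushes the rumor to
    node h_t(u). Illustrative only; not the paper's exact protocol."""
    rng = random.Random(seed)
    p = 2**31 - 1                      # a prime much larger than n
    informed = {0}                     # node 0 starts with the rumor
    rounds = 0
    while len(informed) < n:
        a, b = rng.randrange(1, p), rng.randrange(p)
        informed |= {((a * u + b) % p) % n for u in informed}
        rounds += 1
    return rounds
```

Since the informed set can at most double per round, at least log2(n) rounds are necessary; in simulations the protocol finishes within a small constant factor of that, while consuming only O(log n) random bits per round instead of fresh bits per node.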
Benchmark Graphs for Practical Graph Isomorphism
The state-of-the-art solvers for the graph isomorphism problem can readily
solve generic instances with tens of thousands of vertices. Indeed, experiments
show that on inputs without particular combinatorial structure the algorithms
scale almost linearly. In fact, it is non-trivial to create challenging
instances for such solvers and the number of difficult benchmark graphs
available is quite limited. We describe a construction to efficiently generate
small instances for the graph isomorphism problem that are difficult or even
infeasible for said solvers. Up to this point the only other available
instances posing challenges for isomorphism solvers were certain incidence
structures of combinatorial objects (such as projective planes, Hadamard
matrices, Latin squares, etc.). Experiments show that starting from 1500
vertices our new instances are several orders of magnitude more difficult than
other instances of comparable size. More importantly, our method is generic and efficient
in the sense that one can quickly create many isomorphism instances on a
desired number of vertices. In contrast to this, said combinatorial objects are
rare and difficult to generate and with the new construction it is possible to
generate an abundance of instances of arbitrary size. Our construction hinges
on the multipedes of Gurevich and Shelah and the Cai-F\"{u}rer-Immerman gadgets
that realize a certain abelian automorphism group and have repeatedly played a
role in the context of graph isomorphism. Exploring limits of such
constructions, we also explain that there are group theoretic obstructions to
generalizing the construction with non-abelian gadgets. Comment: 32 pages
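The Cai-Fürer-Immerman gadget underlying such constructions can be sketched concretely. Below is a toy generator for the gadget attached to a single base vertex of degree d, in the standard textbook rendering (not the paper's multipede construction):

```python
from itertools import product

def cfi_gadget(d):
    """Cai-Fuerer-Immerman gadget for a base vertex of degree d (sketch).

    Middle vertices are the even-parity 0/1-strings of length d; the
    i-th coordinate of a middle vertex selects which of the two endpoint
    vertices (i, 0) / (i, 1) of the i-th incident base edge it joins.
    Automorphisms of the gadget flip the endpoint pairs of an even
    number of incident edges, giving the abelian (Z_2) behaviour the
    abstract refers to."""
    middles = [bits for bits in product((0, 1), repeat=d)
               if sum(bits) % 2 == 0]
    edges = [(m, (i, bit)) for m in middles for i, bit in enumerate(m)]
    return middles, edges
```

For d = 3 the gadget has 4 middle vertices and 12 edges; swapping both endpoints of an even number of incident edge pairs extends to an automorphism, while an odd number of swaps does not.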
A Randomized Polynomial Kernelization for Vertex Cover with a Smaller Parameter
In the Vertex Cover problem we are given a graph G and an integer k
and have to determine whether there is a set of at most k vertices
such that each edge of G has at least one endpoint in the set. The problem can
be easily solved in time O*(2^k), making it fixed-parameter tractable (FPT)
with respect to k. While the fastest known algorithm takes only time
O*(1.2738^k), much stronger improvements have been obtained by studying
parameters that are smaller than k. Apart from treewidth-related results, the
arguably best algorithm for Vertex Cover runs in time O*(2.3146^p), where
p = k - LP(G) is only the excess of the solution size k over the best
fractional vertex cover LP(G) (Lokshtanov et al. TALG 2014). Since p <= k but
k cannot be bounded in terms of p alone, this strictly increases the range of
tractable instances.
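The easy fixed-parameter algorithm referred to above is a two-way branching; a minimal sketch with an edge-list representation (exhaustive, and exponential only in the budget k):

```python
def has_vertex_cover(edges, k):
    """Naive O*(2^k) branching for Vertex Cover (sketch): pick an
    uncovered edge (u, v); any cover must contain u or v, so try both."""
    if not edges:
        return True          # every edge is covered
    if k == 0:
        return False         # edges remain but the budget is exhausted
    u, v = edges[0]
    drop = lambda x: [e for e in edges if x not in e]
    return has_vertex_cover(drop(u), k - 1) or has_vertex_cover(drop(v), k - 1)
```

On the triangle this returns False for k = 1 and True for k = 2; the recursion depth is at most k, which gives the 2^k bound.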
Recently, Garg and Philip (SODA 2016) greatly contributed to understanding
the parameterized complexity of the Vertex Cover problem. They prove that
2LP(G) - MM(G) is a lower bound for the vertex cover size of G, where MM(G)
is the size of a largest matching of G, and proceed to study the parameter
l = k - (2LP(G) - MM(G)). They give an algorithm of running time O*(3^l),
proving that Vertex Cover is FPT in l. It can be easily observed that l <= p
whereas p cannot be bounded in terms of l alone. We complement the work of
Garg and Philip by proving that Vertex Cover admits a randomized polynomial
kernelization in terms of l, i.e., an efficient preprocessing to size
polynomial in l. This improves over the parameter p = k - LP(G), for which
this was previously known (Kratsch and Wahlström FOCS 2012). Comment: Full
version of ESA 2016 paper
An Efficient Parallel Algorithm for Spectral Sparsification of Laplacian and SDDM Matrix Polynomials
For a "large" class of continuous probability density functions (p.d.f.), we
demonstrate that for every such p.d.f. there is a mixture of discrete Binomial
distributions (MDBD) with a small number of distinct Binomial distributions
that approximates a discretized version of the p.d.f. Also, we give two
efficient parallel algorithms to find such an MDBD.
Moreover, we propose a sequential algorithm that, on input an MDBD inducing a
discretized p.d.f., a matrix that is either a Laplacian or an SDDM matrix, and
an approximation parameter, outputs a spectral sparsifier of the corresponding
matrix polynomial. Its running time improves on that of the algorithm of Cheng
et al. [CCLPT15].
Furthermore, our algorithm is parallelizable: it runs in nearly linear work
and poly-logarithmic depth. Our main algorithmic contribution is the first
efficient parallel algorithm that, on input a continuous p.d.f. and a matrix
as above, outputs a spectral sparsifier of the matrix polynomial whose
coefficients approximate component-wise the discretized p.d.f.
Our results yield the first efficient and parallel algorithm that runs in
nearly linear work and poly-logarithmic depth and analyzes the long-term
behaviour of Markov chains in non-trivial settings. In addition, we strengthen
Peng and Spielman's [PS14] parallel SDD solver.
Balanced Allocation on Hypergraphs
We consider a variation of balls-into-bins which randomly allocates m balls
into n bins. Following Godfrey's model (SODA, 2008), we assume that each ball
comes with a hypergraph over the bins, and each edge of this hypergraph
contains at least a logarithmic number of bins. Given a ball, our d-choice
algorithm chooses an edge of the ball's hypergraph uniformly at random, and
then chooses a set of d random bins from the selected edge. The ball is
allocated to a least-loaded bin among these d bins, with ties broken randomly.
We prove that if the hypergraphs satisfy a \emph{balancedness} condition and
have low \emph{pair visibility}, then after allocating all balls, the maximum
number of balls at any bin, called the \emph{maximum load}, remains small,
with high probability. The balancedness condition enforces that bins appear
almost uniformly within the hyperedges of each ball's hypergraph, while the
pair visibility condition measures how frequently a pair of bins is chosen
during the allocation of balls. Moreover, we establish a lower bound for the
maximum load attained by the balanced allocation for a sequence of hypergraphs
in terms of pair visibility, showing the relevance of the visibility parameter
to the maximum load. In Godfrey's model, each ball is forced to probe all bins
in a randomly selected hyperedge and the ball is then allocated in a
least-loaded bin. Godfrey showed that if each ball's hypergraph is balanced
and the number of balls is suitably bounded, then the maximum load is at most
one, with high probability. In contrast, we apply the power of d choices
paradigm and only query the load information of d random bins per ball, while
achieving very slow growth in the maximum load.
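A toy simulation of the d-choice allocation described above, with a stand-in for the balls' hypergraphs: here each hyperedge is simply a uniformly random set of logarithmically many bins, and the paper's balancedness and pair-visibility conditions are not modeled.

```python
import random

def allocate(n, m, d=2, edge_size=None, seed=0):
    """Sketch of d-choice allocation on hyperedges: each of m balls
    draws a random hyperedge (a random set of ~log n bins, a stand-in
    for the ball's hypergraph), samples d bins from it, and goes to the
    least loaded of those d. Returns the maximum load."""
    rng = random.Random(seed)
    if edge_size is None:
        edge_size = max(d, n.bit_length())   # ~ logarithmic edge size
    load = [0] * n
    for _ in range(m):
        edge = rng.sample(range(n), edge_size)   # the chosen hyperedge
        probes = rng.sample(edge, d)             # query only d of its bins
        best = min(probes, key=lambda b: load[b])
        load[best] += 1
    return max(load)
```

The point of the sketch is the query model: only d load queries are issued per ball, in contrast to Godfrey's model, which probes every bin of the selected hyperedge.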
Uniformly automatic classes of finite structures
We investigate the recently introduced concept of uniformly tree-automatic classes in the realm of parameterized complexity theory. Roughly speaking, a class of finite structures is uniformly tree-automatic if it can be presented by a set of finite trees and a tuple of automata. A tree t encodes a structure, and an element of this structure is encoded by a labeling of t. The automata are used to present the relations of the structure. We use this formalism to obtain algorithmic meta-theorems for first-order logic and, in some cases, also monadic second-order logic on classes of finite Boolean algebras, finite groups, and graphs of bounded tree-depth. Our main concern is the efficiency of this approach with respect to the hidden parameter dependence (size of the formula). We develop a method to analyze the complexity of uniformly tree-automatic presentations, which allows us to give upper bounds for the runtime of the automata-based model checking algorithm on the presented class. It turns out that the parameter dependence is elementary for all the above-mentioned classes. Additionally, we show that one can lift the FPT results obtained by our method from a class C to the closure of C under direct products with only a singly exponential blow-up in the parameter dependence.
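As a much simplified illustration of an automatic presentation (word-automatic rather than tree-automatic, with encodings of my own choosing): the elements of the linear order ({0, ..., n-1}, <) can be encoded in unary, and the order relation decided by a tiny automaton reading the convolution of two encodings.

```python
def convolution(u, v):
    """Pad the shorter word with '#' and pair letters position-wise,
    as in the standard definition of automatic presentations."""
    n = max(len(u), len(v))
    u, v = u.ljust(n, '#'), v.ljust(n, '#')
    return list(zip(u, v))

def less_than(u, v):
    """Two-state 'automaton' for '<' on unary encodings: accept iff u is
    a proper prefix of v, i.e. we read ('#', '1') before the input ends."""
    for a, b in convolution(u, v):
        if a == '#' and b == '1':
            return True
        if b == '#':
            return False
    return False  # equal length, hence equal elements
```

In the uniformly tree-automatic setting of the paper, words are replaced by labeled trees and the relation automata run over tree convolutions, but the mechanism is the same.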
On space efficiency of algorithms working on structural decompositions of graphs
Dynamic programming on path and tree decompositions of graphs is a technique
that is ubiquitous in the field of parameterized and exponential-time
algorithms. However, one of its drawbacks is that the space usage is
exponential in the decomposition's width. Following the work of Allender et al.
[Theory of Computing, '14], we investigate whether this space complexity
explosion is unavoidable. Using the idea of reparameterization of Cai and
Juedes [J. Comput. Syst. Sci., '03], we prove that the question is closely
related to a conjecture that the Longest Common Subsequence problem
parameterized by the number of input strings does not admit an algorithm that
simultaneously uses XP time and FPT space. Moreover, we complete the complexity
landscape sketched for pathwidth and treewidth by Allender et al. by
considering the parameter tree-depth. We prove that computations on tree-depth
decompositions correspond to a model of non-deterministic machines that work in
polynomial time and logarithmic space, with access to an auxiliary stack of
maximum height equal to the decomposition's depth. Together with the results of
Allender et al., this describes a hierarchy of complexity classes for
polynomial-time non-deterministic machines with different restrictions on the
access to working space, which mirrors the classic relations between treewidth,
pathwidth, and tree-depth. Comment: An extended abstract appeared in the
proceedings of STACS'16. The new version is augmented with a space-efficient
algorithm for Dominating Set using the Chinese remainder theorem.