Scalable Kernelization for Maximum Independent Sets
The most efficient algorithms for finding maximum independent sets in both
theory and practice use reduction rules to obtain a much smaller problem
instance called a kernel. The kernel can then be solved quickly using exact or
heuristic algorithms---or by repeatedly kernelizing recursively in the
branch-and-reduce paradigm. It is of critical importance for these algorithms
that kernelization is fast and returns a small kernel. Current algorithms are
either slow but produce a small kernel, or fast and give a large kernel. We
attempt to accomplish both of these goals simultaneously, by giving an
efficient parallel kernelization algorithm based on graph partitioning and
parallel bipartite maximum matching. We combine our parallelization techniques
with two techniques to accelerate kernelization further: dependency checking
that prunes reductions that cannot be applied, and reduction tracking that
allows us to stop kernelization when reductions become less fruitful. Our
algorithm produces kernels that are orders of magnitude smaller than the
fastest kernelization methods, while having a similar execution time.
Furthermore, our algorithm is able to compute kernels with size comparable to
the smallest known kernels, but up to two orders of magnitude faster than
previously possible. Finally, we show that our kernelization algorithm can be
used to accelerate existing state-of-the-art heuristic algorithms, allowing us
to find larger independent sets faster on large real-world networks and
synthetic instances.
Comment: Extended version
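Two of the simplest reduction rules in this family can be sketched in a few lines. The following is a minimal sequential illustration of the degree-0 and degree-1 rules only; the paper's algorithm is parallel and uses many more reductions, and the function name and adjacency-dictionary representation here are assumptions for the sketch, not the paper's API.

```python
def kernelize_mis(adj):
    """Degree-0 and degree-1 reductions for Maximum Independent Set.

    adj: dict mapping each vertex to a set of neighbors (undirected).
    Returns (kernel, forced): the reduced graph, plus vertices that can
    safely be placed in some maximum independent set.
    """
    adj = {v: set(ns) for v, ns in adj.items()}
    forced = []
    queue = [v for v in adj if len(adj[v]) <= 1]
    while queue:
        v = queue.pop()
        if v not in adj:
            continue
        if not adj[v]:                 # degree-0: an isolated vertex is always taken
            forced.append(v)
            del adj[v]
        elif len(adj[v]) == 1:         # degree-1: take v, delete its neighbor u
            (u,) = adj[v]
            forced.append(v)
            for w in adj[u]:
                adj[w].discard(u)
                if len(adj[w]) <= 1:   # degrees dropped; re-examine these vertices
                    queue.append(w)
            del adj[u], adj[v]
    return adj, forced
```

On a path a-b-c-d, for instance, the two rules alone already shrink the instance to an empty kernel while forcing two independent vertices.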
Engineering Fast Almost Optimal Algorithms for Bipartite Graph Matching
We consider the maximum cardinality matching problem in bipartite graphs. There are a number of exact, deterministic algorithms for this purpose, whose complexities are high in practice. There are randomized approaches for special classes of bipartite graphs. Random 2-out bipartite graphs, where each vertex chooses two neighbors at random from the other side, form one class for which there is an O(m + n log n)-time Monte Carlo algorithm. Regular bipartite graphs, where all vertices have the same degree, form another class for which there is an expected O(m + n log n)-time Las Vegas algorithm. We investigate these two algorithms and turn them into practical heuristics with randomization. Experimental results show that the heuristics are fast and obtain near-optimal matchings. They are also more robust than the state-of-the-art heuristics used in cardinality matching algorithms, and are generally more useful as initialization routines.
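A random 2-out instance is easy to generate, and its matchings can be checked with any exact routine. The sketch below pairs the construction with Kuhn's classic augmenting-path algorithm rather than the O(m + n log n) randomized algorithms discussed above; all names are illustrative assumptions.

```python
import random

def random_2out_bipartite(n, seed=0):
    """Each of the n left vertices picks two distinct random right-side
    neighbors, producing a random 2-out bipartite graph."""
    rng = random.Random(seed)
    return [rng.sample(range(n), 2) for _ in range(n)]

def max_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm: exact, O(V*E); fine for a sketch.

    adj: adjacency lists of left vertices.  Returns the matching size."""
    match_right = [-1] * n_right       # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its partner can be rematched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))
```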
Fast Almost-Optimal Algorithms for Finding Matchings in Bipartite Graphs
We consider the maximum cardinality matching problem in bipartite graphs. There are a number of exact, deterministic algorithms for this purpose, whose complexities are high in practice. There are randomized approaches for special classes of bipartite graphs. Random 2-out bipartite graphs, where each vertex chooses two neighbors at random from the other side, form one class for which there is an O(m + n log n)-time Monte Carlo algorithm. Regular bipartite graphs, where all vertices have the same degree, form another class for which there is an expected O(m + n log n)-time Las Vegas algorithm. We investigate these two algorithms and turn them into practical heuristics with randomization. Experimental results show that the heuristics are fast and obtain near-optimal matchings. They are also more robust than the state-of-the-art heuristics used in cardinality matching algorithms, and are generally more useful as initialization routines.
The power of linear-time data reduction for matching.
Finding maximum-cardinality matchings in undirected graphs is arguably one of the most central graph primitives. For m-edge and n-vertex graphs, it is well known to be solvable in O(m\sqrt{n}) time; however, for several applications this running time is still too slow. We investigate how linear-time (and almost linear-time) data reduction, used as preprocessing, can alleviate the situation. More specifically, we focus on linear-time kernelization. We initiate a deeper and systematic study, both for general graphs and for bipartite graphs. Our data reduction algorithms can be combined, as a preprocessing step, with every solution strategy (exact, approximate, heuristic), making them attractive in various settings.
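A representative linear-time reduction for matching is the degree-1 rule: a vertex with a single neighbor can always be matched to that neighbor in some maximum matching. A minimal sketch, assuming an adjacency-dictionary input (the paper's kernelization uses further rules not shown here):

```python
def reduce_degree_one(adj):
    """Degree-1 data reduction for maximum-cardinality matching.

    A degree-1 vertex v can always be matched to its unique neighbor u in
    some maximum matching, so the edge (v, u) is forced and both endpoints
    are removed.  Exhaustive application runs in (near) linear time.

    adj: dict mapping each vertex to a set of neighbors (undirected).
    Returns (kernel, forced_edges).
    """
    adj = {v: set(ns) for v, ns in adj.items()}
    forced = []
    stack = [v for v in adj if len(adj[v]) == 1]
    while stack:
        v = stack.pop()
        if v not in adj or len(adj[v]) != 1:
            continue                   # already removed, or degree changed
        (u,) = adj[v]
        forced.append((v, u))
        for w in adj[u]:
            if w != v:
                adj[w].discard(u)
                if len(adj[w]) == 1:   # a new degree-1 vertex appeared
                    stack.append(w)
        del adj[u], adj[v]
    return adj, forced
```

On a path 1-2-3-4 the rule alone empties the graph and forces a maximum matching of two edges.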
Finite size scaling for the core of large random hypergraphs
The (two) core of a hypergraph is the maximal collection of hyperedges within
which no vertex appears only once. It is of importance in tasks such as
efficiently solving a large linear system over GF[2], or iterative decoding of
low-density parity-check codes used over the binary erasure channel. Similar
structures emerge in a variety of NP-hard combinatorial optimization and
decision problems, from vertex cover to satisfiability. For a uniformly chosen
random hypergraph of m = n\rho vertices and n hyperedges, each consisting of
the same fixed number l \geq 3 of vertices, the size of the core exhibits for
large n a first-order phase transition, changing from o(n) for \rho > \rho_c
to a positive fraction of n for \rho < \rho_c, with a transition window of
size \Theta(n^{-1/2}) around \rho_c > 0.
Analyzing the corresponding "leaf removal" algorithm, we determine the
associated finite-size scaling behavior. In particular, if \rho is inside the
scaling window (more precisely, \rho = \rho_c + r n^{-1/2}), the probability
of having a core of size \Theta(n) has a limit strictly between 0 and 1, and a
leading correction of order \Theta(n^{-1/6}). The correction admits a sharp
characterization in terms of the distribution of a Brownian motion with
quadratic shift, from which it inherits the scaling with n. This
behavior is expected to be universal for a wide collection of combinatorial
problems.
Comment: Published in the Annals of Applied Probability
(http://www.imstat.org/aap/) by the Institute of Mathematical Statistics;
http://dx.doi.org/10.1214/07-AAP514
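The leaf-removal algorithm itself is a simple peeling process: repeatedly delete a hyperedge that contains a vertex of degree one, until only the 2-core remains. A minimal sketch (function name and representation are assumptions):

```python
from collections import Counter

def two_core(hyperedges):
    """Leaf removal: repeatedly delete a hyperedge containing a degree-1
    vertex until every remaining vertex appears in at least two hyperedges.

    hyperedges: list of iterables of vertices.
    Returns the indices of the hyperedges surviving in the 2-core."""
    edges = [set(e) for e in hyperedges]
    deg = Counter(v for e in edges for v in e)
    where = {}                          # vertex -> indices of edges containing it
    for i, e in enumerate(edges):
        for v in e:
            where.setdefault(v, []).append(i)
    alive = [True] * len(edges)
    stack = [v for v, d in deg.items() if d == 1]
    while stack:
        v = stack.pop()
        if deg[v] != 1:
            continue                    # stale entry: degree changed meanwhile
        i = next(j for j in where[v] if alive[j])   # unique live edge with v
        alive[i] = False
        for u in edges[i]:
            deg[u] -= 1
            if deg[u] == 1:             # u became a "leaf"; peel it next
                stack.append(u)
    return [i for i, a in enumerate(alive) if a]
```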
Sparse graphs: metrics and random models
Recently, Bollobás, Janson and Riordan introduced a family of random graph
models producing inhomogeneous graphs with n vertices and \Theta(n) edges
whose distribution is characterized by a kernel, i.e., a symmetric measurable
function \kappa:[0,1]^2 \to [0,\infty). To understand these models, we should
like to know when different kernels \kappa give rise to 'similar' graphs, and,
given a real-world network, how 'similar' it is to a typical graph G(n,\kappa)
derived from a given kernel \kappa.
The analogous questions for dense graphs, with \Theta(n^2) edges, are
answered by recent results of Borgs, Chayes, Lovász, Sós, Szegedy and
Vesztergombi, who showed that several natural metrics on graphs are equivalent,
and moreover that any sequence of graphs converges in each metric to a graphon,
i.e., a kernel taking values in [0,1].
Possible generalizations of these results to graphs with o(n^2) but \omega(n)
edges are discussed in a companion paper [arXiv:0708.1919]; here we focus only
on graphs with \Theta(n) edges, which turn out to be much harder
to handle. Many new phenomena occur, and there are a host of plausible metrics
to consider; many of these metrics suggest new random graph models, and vice
versa.Comment: 44 pages, 1 figure. This is a companion paper to arXiv:0708.1919,
consisting of an updated version of part of the original version
(arXiv:0708.1919v1), which has been split into two papers. Since v1,
references updated and other very minor changes. To appear in Random
Structures and Algorithms
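For concreteness, the sparse model under discussion samples vertex types uniformly from [0,1] and joins i and j with probability roughly \kappa(x_i, x_j)/n, which gives \Theta(n) edges in expectation. The quadratic-time sampler below is a naive illustrative sketch (names are assumptions), not an efficient implementation:

```python
import random

def sample_sparse_kernel_graph(n, kappa, seed=0):
    """Sample the Bollobas-Janson-Riordan sparse model: each vertex i gets
    a uniform type x_i in [0,1], and edge {i, j} appears independently with
    probability min(kappa(x_i, x_j) / n, 1)."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]    # vertex types
    edges = []
    for i in range(n):                      # naive O(n^2) pair scan
        for j in range(i + 1, n):
            if rng.random() < min(kappa(x[i], x[j]) / n, 1.0):
                edges.append((i, j))
    return x, edges
```

With the constant kernel \kappa = c this reduces to the Erdős-Rényi graph G(n, c/n), with about cn/2 edges in expectation.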
Kernelization of Vertex Cover by Structural Parameters
In the NP-complete problem Vertex Cover, one is given a graph G and an integer k and is asked whether there exists a vertex set S ⊆ V(G) of size at most k such that every edge of the graph is incident to a vertex in S. In this thesis we explore techniques to solve Vertex Cover using parameterized algorithms, with a particular focus on kernelization by structural parameters. We present two new polynomial kernels for Vertex Cover, one parameterized by the size of a minimum degree-2 modulator, and one parameterized by the size of a minimum pseudoforest modulator. A degree-2 modulator is a vertex set X ⊆ V(G) such that G-X has maximum degree two, and a pseudoforest modulator is a vertex set X ⊆ V(G) such that every connected component of G-X has at most one cycle. Specifically, we provide polynomial-time algorithms that, for an input graph G and an integer k, output a graph G' and an integer k' such that G has a vertex cover of size k if and only if G' has a vertex cover of size k'. Moreover, the number of vertices of G' is bounded by O(|X|^7), where |X| is the size of a minimum degree-2 modulator for G, or by O(|X|^12), where |X| is the size of a minimum pseudoforest modulator for G. Our results extend known results on structural kernelization for Vertex Cover. (Master's thesis in Informatics, MAMN-INFINF39)
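For contrast with the thesis's modulator-based kernels, the classic Buss kernel for Vertex Cover (high-degree rule plus an edge-count bound) can be sketched as follows; it is included only as a baseline illustration, and its names are assumptions:

```python
def buss_kernel(edges, k):
    """Classic Buss kernelization for Vertex Cover (not the thesis's
    modulator-based kernels): any vertex of degree greater than the
    remaining budget must be in every small cover, and a reduced instance
    with more than budget^2 edges has no cover of size <= k.

    Returns (kernel_edges, forced_vertices, remaining_budget), or None if
    no vertex cover of size at most k exists."""
    forced = set()
    while True:
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        budget = k - len(forced)
        high = {v for v, d in deg.items() if d > budget}
        if not high:
            break
        forced |= high                  # high-degree vertices are forced
        edges = [(u, v) for u, v in edges
                 if u not in forced and v not in forced]
    budget = k - len(forced)
    if budget < 0 or len(edges) > budget * budget:
        return None                     # kernel too large: must be a no-instance
    return edges, forced, budget
```

For example, a star K_{1,6} with k = 1 reduces to an empty kernel with the center forced into the cover, while a triangle with k = 1 is correctly rejected.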