A Linear Vertex Kernel for Maximum Internal Spanning Tree
We present a polynomial time algorithm that for any graph G and integer k >=
0, either finds a spanning tree with at least k internal vertices, or outputs a
new graph G' on at most 3k vertices and an integer k' such that G has a
spanning tree with at least k internal vertices if and only if G' has a
spanning tree with at least k' internal vertices. In other words, we show that
the Maximum Internal Spanning Tree problem, parameterized by the number of
internal vertices k, has a 3k-vertex kernel. Our result is based on an
innovative application of a classical min-max result about hypertrees in
hypergraphs, which states that "a hypergraph H contains a hypertree if and
only if H is partition connected."
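To make the objective concrete: a vertex of a spanning tree is internal if its degree in the tree is at least 2, and the problem asks to maximize the number of such vertices over all spanning trees. A minimal Python sketch (illustrative only, with hypothetical helper names; this is not the paper's kernelization) builds a DFS spanning tree of a small graph and counts its internal vertices:

```python
# A vertex of a tree is *internal* if its tree-degree is at least 2.
# Illustrative sketch (hypothetical helpers, not the paper's algorithm):
# build a DFS spanning tree of a small connected graph and count the
# internal vertices, i.e., the quantity the problem maximizes.
from collections import defaultdict

def dfs_spanning_tree(adj, root):
    """Edge list of a DFS spanning tree of a connected graph."""
    tree, seen = [], {root}
    def visit(u):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                visit(v)
    visit(root)
    return tree

def internal_vertices(tree_edges):
    """Vertices of degree >= 2 in the tree (the non-leaves)."""
    deg = defaultdict(int)
    for u, v in tree_edges:
        deg[u] += 1
        deg[v] += 1
    return [v for v, d in deg.items() if d >= 2]

# 4-cycle with a chord; the DFS tree from 0 is the path 0-1-2-3
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
tree = dfs_spanning_tree(adj, 0)
print(len(internal_vertices(tree)))  # 2 (vertices 1 and 2)
```

For a Hamiltonian path the count reaches its maximum possible value of n - 2.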
Scalable Kernelization for Maximum Independent Sets
The most efficient algorithms for finding maximum independent sets in both
theory and practice use reduction rules to obtain a much smaller problem
instance called a kernel. The kernel can then be solved quickly using exact or
heuristic algorithms---or by repeatedly kernelizing recursively in the
branch-and-reduce paradigm. It is of critical importance for these algorithms
that kernelization is fast and returns a small kernel. Current algorithms are
either slow but produce a small kernel, or fast and give a large kernel. We
attempt to accomplish both of these goals simultaneously, by giving an
efficient parallel kernelization algorithm based on graph partitioning and
parallel bipartite maximum matching. We combine our parallelization techniques
with two techniques to accelerate kernelization further: dependency checking
that prunes reductions that cannot be applied, and reduction tracking that
allows us to stop kernelization when reductions become less fruitful. Our
algorithm produces kernels that are orders of magnitude smaller than the
fastest kernelization methods, while having a similar execution time.
Furthermore, our algorithm is able to compute kernels with size comparable to
the smallest known kernels, but up to two orders of magnitude faster than
previously possible. Finally, we show that our kernelization algorithm can be
used to accelerate existing state-of-the-art heuristic algorithms, allowing us
to find larger independent sets faster on large real-world networks and
synthetic instances.
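As an illustration of the kind of reduction rule such kernelizers apply exhaustively, consider the classic degree-0/degree-1 rule: an isolated vertex always belongs to some maximum independent set, and a degree-1 vertex can be taken greedily while its single neighbor is deleted. A minimal sequential Python sketch (the paper's parallel, matching-based reductions are not reproduced here):

```python
# Classic degree-0/degree-1 reduction for independent set: an isolated
# vertex is always in some maximum independent set; a degree-1 vertex
# can be taken greedily, removing its single neighbor. Applied
# exhaustively, this shrinks the instance; the remaining graph is
# (part of) a kernel. Sequential sketch only.

def reduce_low_degree(adj):
    """adj: vertex -> set of neighbors. Returns (kernel_adj, forced),
    where `forced` lists vertices the rule puts into the solution."""
    adj = {v: set(ns) for v, ns in adj.items()}
    forced = []
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj and len(adj[v]) <= 1:
                forced.append(v)
                doomed = {v} | adj[v]          # v and its (<= 1) neighbor
                for u in doomed:
                    for w in adj.pop(u, set()):
                        if w in adj:
                            adj[w].discard(u)
                changed = True
    return adj, forced

# A path 0-1-2-3-4 reduces away completely: {0, 2, 4} is forced.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
kernel, forced = reduce_low_degree(path)
print(sorted(forced), kernel)  # [0, 2, 4] {}
```

Real kernelizers combine many such rules; the point of the rule above is only that each application is local and safe, which is what makes dependency checking and reduction tracking effective.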
A Generalization of Nemhauser and Trotter's Local Optimization Theorem
The Nemhauser-Trotter local optimization theorem applies to the NP-hard
Vertex Cover problem and has applications in approximation as well as
parameterized algorithmics. We present a framework that generalizes Nemhauser
and Trotter's result to vertex deletion and graph packing problems, introducing
novel algorithmic strategies based on purely combinatorial arguments (not
referring to linear programming as the Nemhauser-Trotter result originally
did). We exhibit our framework using a generalization of Vertex Cover, called
Bounded-Degree Deletion, which promises to become an important tool in the
analysis of gene and other biological networks. For some fixed d \geq 0,
Bounded-Degree Deletion asks to delete as few vertices as possible from a graph
in order to transform it into a graph with maximum vertex degree at most d.
Vertex Cover is the special case of d = 0. Our generalization of the
Nemhauser-Trotter theorem implies that Bounded-Degree Deletion has a problem
kernel with a linear number of vertices for every constant d. We also outline
an application of our extremal combinatorial approach to the problem of packing
stars with a bounded number of leaves. Finally, charting the border between
(parameterized) tractability and intractability for Bounded-Degree Deletion, we
provide a W[2]-hardness result for Bounded-Degree Deletion in the case of
unbounded d-values.
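For fixed d, Bounded-Degree Deletion also admits a simple bounded search tree (a standard branching argument, distinct from the paper's kernelization): if some vertex v still has degree greater than d, any solution must delete v or one of any d+1 of its neighbors, giving branching factor d+2 and running time O*((d+2)^k). A minimal Python sketch:

```python
# Bounded search tree for Bounded-Degree Deletion (standard branching,
# not the paper's kernelization): if some vertex v has degree > d, any
# solution deletes v or one of d+1 of its neighbors, so we branch on
# d+2 candidates and recurse with budget k-1: O*((d+2)^k) time.

def bdd(adj, d, k):
    """Can <= k deletions bring the maximum degree down to <= d?
    adj: vertex -> set of neighbors (undirected)."""
    violator = next((v for v, ns in adj.items() if len(ns) > d), None)
    if violator is None:
        return True          # degree bound already satisfied
    if k == 0:
        return False         # a violator remains but no budget is left
    candidates = [violator] + list(adj[violator])[:d + 1]
    for x in candidates:
        smaller = {v: ns - {x} for v, ns in adj.items() if v != x}
        if bdd(smaller, d, k - 1):
            return True
    return False

# Star K_{1,3}: deleting the center leaves three isolated vertices.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(bdd(star, 0, 1))  # True  (d = 0 is exactly Vertex Cover)
print(bdd(star, 1, 0))  # False (degree 3 > 1, no budget)
```

The W[2]-hardness result above shows that no such f(k)-style algorithm is expected once d is part of the input rather than a constant.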
Exploiting c-Closure in Kernelization Algorithms for Graph Problems
A graph is c-closed if every pair of vertices with at least c common
neighbors is adjacent. The c-closure of a graph G is the smallest number c such
that G is c-closed. Fox et al. [ICALP '18] defined c-closure and investigated
it in the context of clique enumeration. We show that c-closure can be applied
in kernelization algorithms for several classic graph problems. We show that
Dominating Set admits a kernel of size k^O(c), that Induced Matching admits a
kernel with O(c^7*k^8) vertices, and that Irredundant Set admits a kernel with
O(c^(5/2)*k^3) vertices. Our kernelization exploits the fact that c-closed
graphs have polynomially bounded Ramsey numbers, as we show.
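The c-closure itself is easy to compute straight from the definition: it is one more than the maximum number of common neighbors over all non-adjacent vertex pairs. A small Python sketch:

```python
# Computing the c-closure from the definition: the smallest c such that
# every non-adjacent pair has fewer than c common neighbors, i.e., one
# more than the maximum common-neighborhood size over non-adjacent
# pairs. Quadratically many pairs; fine for small graphs.
from itertools import combinations

def c_closure(adj):
    """adj: vertex -> set of neighbors. Returns the c-closure of the graph."""
    worst = 0
    for u, v in combinations(adj, 2):
        if v not in adj[u]:
            worst = max(worst, len(adj[u] & adj[v]))
    return worst + 1

# 4-cycle: both diagonals are non-adjacent with 2 common neighbors,
# so C_4 is 3-closed but not 2-closed.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(c_closure(c4))  # 3
```

Social networks tend to have small c-closure (triadic closure), which is what makes the k^O(c)-style kernels above attractive in practice.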
Fixed-Parameter Algorithms in Analysis of Heuristics for Extracting Networks in Linear Programs
We consider the problem of extracting a maximum-size reflected network in a
linear program. This problem has been studied before, and a state-of-the-art
SGA heuristic with two variations has been proposed.
In this paper we apply a new approach to evaluate the quality of SGA. In
particular, we solve the majority of the instances in the testbed to optimality
using a new fixed-parameter algorithm, i.e., an algorithm whose runtime is
polynomial in the input size but exponential in an additional parameter
associated with the given problem.
This analysis allows us to conclude that the existing SGA heuristic, in
fact, produces solutions of very high quality and often reaches the optimal
objective values. However, SGA contains two components that leave some room
for improvement: building a spanning tree and searching for an independent
set in a graph. In the hope of obtaining an even better heuristic, we tried
to replace both of these components with equivalent algorithms.
We first replaced the greedy independent-set search with a fixed-parameter
algorithm, but even an exact solution of this subproblem improved the whole
heuristic only insignificantly. Hence, the crucial part of SGA is the
construction of the spanning tree. We tried three different algorithms, and
Depth-First Search proved clearly superior to the others for building the
spanning tree in SGA.
Thus, by applying fixed-parameter algorithms, we verified that the existing
SGA heuristic is of high quality and identified the component that required
improvement. This allowed us to focus our research in the right direction,
which yielded a superior variation of SGA.
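For reference, the greedy independent-set routine contrasted with the exact fixed-parameter algorithm is typically a minimum-degree greedy: repeatedly take a vertex of minimum degree and discard its neighbors. A minimal Python sketch of this standard heuristic (not necessarily SGA's exact routine):

```python
# Minimum-degree greedy for independent set (a standard heuristic; not
# necessarily SGA's exact routine): repeatedly pick a vertex of minimum
# degree, add it to the solution, and delete it with its neighborhood.

def greedy_independent_set(adj):
    adj = {v: set(ns) for v, ns in adj.items()}
    chosen = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # minimum-degree vertex
        chosen.append(v)
        for u in {v} | adj[v]:                    # delete v and N(v)
            for w in adj.pop(u, set()):
                if w in adj:
                    adj[w].discard(u)
    return chosen

# Path 0-1-2-3-4: greedy recovers an optimal solution of size 3 here.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(len(greedy_independent_set(path)))  # 3
```

The abstract's finding that replacing such a routine with an exact one barely helps suggests the greedy is already near-optimal on the instances arising inside SGA.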
On bounded block decomposition problems for under-specified systems of equations
When solving a system of equations, it can be beneficial not to solve it in its entirety at once, but rather to decompose it into smaller subsystems that can be solved in order. Based on a bisimplicial graph representation, we analyze the parameterized complexity of two problems central to such a decomposition: the Free Square Block problem, related to finding smallest subsystems that can be solved separately, and the Bounded Block Decomposition problem, related to determining a decomposition where the largest subsystem is as small as possible. We show both problems to be W[1]-hard. Finally, we relate these problems to crown structures and settle two open questions regarding them using our results.
Hamiltonicity below Dirac's condition
Dirac's theorem (1952) is a classical result of graph theory, stating that an
n-vertex graph (n >= 3) is Hamiltonian if every vertex has degree at least
n/2. Both the value n/2 and the requirement for every vertex to have high
degree are necessary for the theorem to hold.
In this work we give efficient algorithms for determining Hamiltonicity when
either of the two conditions is relaxed. More precisely, we show that the
Hamiltonian cycle problem can be solved in time c^k * poly(n), for some fixed
constant c, if at least n - k vertices have degree at least n/2, or if all
vertices have degree at least n/2 - k. The running time is, in both cases,
asymptotically optimal, under the exponential-time hypothesis (ETH).
The results extend the range of tractability of the Hamiltonian cycle
problem, showing that it is fixed-parameter tractable when parameterized below
a natural bound. In addition, for the first parameterization we show that a
kernel with O(k) vertices can be found in polynomial time.
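Dirac's threshold is degree at least n/2 for every vertex, so the two relaxed parameters can be read off a graph directly: how many vertices fall below n/2, and the largest deficit below n/2 over all vertices. A small Python sketch (illustrative helper, not from the paper):

```python
# Dirac's condition: every vertex of an n-vertex graph has degree >= n/2.
# Two ways to measure how far a graph is from it: k1 = the number of
# vertices below the threshold, k2 = the largest deficit n/2 - deg(v).
# (Illustrative helper, not from the paper.)

def dirac_parameters(adj):
    n = len(adj)
    threshold = n / 2
    k1 = sum(1 for ns in adj.values() if len(ns) < threshold)
    k2 = max(0, max(threshold - len(ns) for ns in adj.values()))
    return k1, k2

# 5-cycle: every degree is 2 < 5/2, so all 5 vertices miss Dirac's bound,
# each by 0.5 -- yet C_5 is Hamiltonian, illustrating that the condition
# is sufficient but not necessary.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(dirac_parameters(c5))  # (5, 0.5)
```

The first quantity corresponds to parameterizing by the number of low-degree vertices, the second to lowering the degree threshold itself.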