Streaming Kernelization
Kernelization is a formalization of preprocessing for combinatorially hard
problems. We modify the standard definition of kernelization, which allows any
polynomial-time algorithm for the preprocessing, by requiring instead that the
preprocessing runs in a streaming setting and uses O(poly(k) log |x|) bits of
memory on instances (x, k). We obtain
several results in this new setting, depending on the number of passes over the
input that such a streaming kernelization is allowed to make. Edge Dominating
Set turns out to be an interesting example: it admits no single-pass
kernelization, but two passes over the input suffice to match the size bound of
the best standard kernelization.
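As a concrete point of reference, the sketch below shows a single-pass streaming kernel for the simpler Vertex Cover(k) problem. It is a hypothetical illustration of the model, not a construction taken from the paper, and all identifiers are made up: edges arrive one by one and at most O(k^2) of them are retained.

```python
from collections import defaultdict

def streaming_vc_kernel(edge_stream, k):
    """One-pass streaming kernel sketch for Vertex Cover(k).

    Keep an edge only while both endpoints have at most k stored edges, and
    maintain a maximal matching of the kept edges to detect trivial
    no-instances.  At most 2k(k+1) edges are retained.
    """
    kept = []                      # stored edges of the reduced instance
    deg = defaultdict(int)         # degree of each vertex in the stored graph
    matched = set()                # endpoints of a maximal matching of kept edges
    matching_size = 0
    for u, v in edge_stream:
        if deg[u] > k or deg[v] > k:
            # An endpoint with k+1 stored edges must be in every cover of
            # size <= k, so this edge is covered automatically and can be dropped.
            continue
        kept.append((u, v))
        deg[u] += 1
        deg[v] += 1
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching_size += 1
            if matching_size > k:
                return None        # matching larger than k: provably a no-instance
    return kept                    # equivalent Vertex Cover(k) instance
```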
Polynomial Kernels for Weighted Problems
Kernelization is a formalization of efficient preprocessing for NP-hard
problems using the framework of parameterized complexity. Among open problems
in kernelization it has been asked many times whether there are deterministic
polynomial kernelizations for Subset Sum and Knapsack when parameterized by the
number of items.
We answer both questions affirmatively by using an algorithm for compressing
numbers due to Frank and Tardos (Combinatorica 1987). This result was first
used by Marx and Végh (ICALP 2013) in the context of kernelization. We
further illustrate its applicability by giving polynomial kernels also for
weighted versions of several well-studied parameterized problems. Furthermore,
when parameterized by the different item sizes we obtain a polynomial
kernelization for Subset Sum and an exponential kernelization for Knapsack.
Finally, we also obtain kernelization results for polynomial integer programs.
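To see why the number of items is a meaningful parameter here, note that Subset Sum is trivially fixed-parameter tractable in the number n of items, as in the brute-force sketch below (names are illustrative, not from the paper); the obstacle for a kernel is that the weights themselves may be exponentially long, which is exactly what the Frank-Tardos compression addresses.

```python
from itertools import combinations

def subset_sum_bruteforce(weights, target):
    """Decide Subset Sum by trying all 2^n item subsets.

    Illustrative only: with n items this runs in O(2^n * n) arithmetic
    operations, so the problem is fixed-parameter tractable in n.  The
    instance size, however, is dominated by the bit-length of the weights,
    which is why a polynomial kernel needs a number-compression step rather
    than a reduction in the number of items.
    """
    n = len(weights)
    for r in range(n + 1):
        for subset in combinations(weights, r):
            if sum(subset) == target:
                return True
    return False
```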
A shortcut to (sun)flowers: Kernels in logarithmic space or linear time
We investigate whether kernelization results can be obtained if we restrict
kernelization algorithms to run in logarithmic space. This restriction for
kernelization is motivated by the question of what results are attainable for
preprocessing via simple and/or local reduction rules. We find kernelizations
for d-Hitting Set(k), d-Set Packing(k), Edge Dominating Set(k) and a number of
hitting and packing problems in graphs, each running in logspace. Additionally,
we return to the question of linear-time kernelization. For d-Hitting Set(k) a
linear-time kernelization was given by van Bevern [Algorithmica (2014)]. We
give a simpler procedure and save a large constant factor in the size bound.
Furthermore, we show that we can obtain a linear-time kernel for d-Set
Packing(k) as well.
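For orientation, the following sketch applies the classic sunflower-style reduction for d-Hitting Set(k) in the simplest possible way (decidedly neither logspace nor linear time); it is meant only to illustrate the rule the title alludes to, and all names are illustrative.

```python
from itertools import combinations

def hitting_set_kernel(sets, k):
    """Sunflower-style reduction for d-Hitting Set(k) (brute-force sketch).

    If some core C has more than k petals (sets containing C that are pairwise
    disjoint outside C), every hitting set of size <= k must hit C, so all
    supersets of C can be replaced by C itself; an empty core with more than k
    petals means more than k pairwise disjoint sets, i.e. a no-instance.
    """
    sets = {frozenset(s) for s in sets}
    changed = True
    while changed:
        changed = False
        cores = set()
        for s in sets:
            for r in range(len(s)):               # proper subsets only
                cores.update(map(frozenset, combinations(s, r)))
        for core in sorted(cores, key=len, reverse=True):
            used, petals = set(), 0
            for s in sets:
                if core <= s and not ((s - core) & used):
                    used |= s - core              # greedily grow a sunflower
                    petals += 1
            if petals > k:
                if not core:
                    return None                   # > k disjoint sets: no-instance
                sets = {s for s in sets if not core <= s} | {core}
                changed = True
                break
    return [set(s) for s in sets]
```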
Tight Kernel Bounds for Problems on Graphs with Small Degeneracy
In this paper we consider kernelization for problems on d-degenerate graphs,
i.e., graphs in which every subgraph contains a vertex of degree at most d.
This graph class generalizes many classes of graphs for which effective
kernelization is known to exist, e.g. planar graphs, H-minor free graphs, and
H-topological-minor free graphs. We show that for several natural problems on
d-degenerate graphs the best known kernelization upper bounds are essentially
tight.
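Since the parameter here is the degeneracy d itself, a short reminder of how it is computed may help; the sketch below uses the standard minimum-degree removal order (names are illustrative).

```python
def degeneracy(adj):
    """Compute the degeneracy of a graph given as {vertex: set(neighbors)}.

    A graph is d-degenerate iff every subgraph has a vertex of degree at most
    d; equivalently, repeatedly removing a minimum-degree vertex never removes
    a vertex of degree greater than d.  Simple O(n^2) sketch (a bucket-queue
    implementation achieves linear time).
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # minimum-degree vertex
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d

# Example properties: every tree is 1-degenerate; planar graphs are 5-degenerate.
```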
Scalable Kernelization for Maximum Independent Sets
The most efficient algorithms for finding maximum independent sets in both
theory and practice use reduction rules to obtain a much smaller problem
instance called a kernel. The kernel can then be solved quickly using exact or
heuristic algorithms, or by repeatedly kernelizing recursively in the
branch-and-reduce paradigm. It is of critical importance for these algorithms
that kernelization is fast and returns a small kernel. Current algorithms are
either slow but produce a small kernel, or fast but give a large kernel. We
attempt to accomplish both of these goals simultaneously, by giving an
efficient parallel kernelization algorithm based on graph partitioning and
parallel bipartite maximum matching. We combine our parallelization techniques
with two techniques to accelerate kernelization further: dependency checking
that prunes reductions that cannot be applied, and reduction tracking that
allows us to stop kernelization when reductions become less fruitful. Our
algorithm produces kernels that are orders of magnitude smaller than those
computed by the fastest kernelization methods, while having a similar execution
time.
Furthermore, our algorithm is able to compute kernels with size comparable to
the smallest known kernels, but up to two orders of magnitude faster than
previously possible. Finally, we show that our kernelization algorithm can be
used to accelerate existing state-of-the-art heuristic algorithms, allowing us
to find larger independent sets faster on large real-world networks and
synthetic instances.
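As a flavor of what such reduction rules look like, here is a minimal sequential sketch of two classic exact reductions for maximum independent set (isolated and degree-1 vertices). It is an illustration only, not the paper's parallel algorithm, and the identifiers are made up for the example.

```python
def mis_reduce(adj):
    """Apply two classic exact reductions for Maximum Independent Set.

    adj maps each vertex to a set of neighbors (simple graph, no self-loops).
    Isolated vertices always go into the solution, and for a degree-1 vertex v
    it is always safe to take v and delete its neighbor.
    Returns (reduced graph, vertices forced into the solution).
    """
    adj = {v: set(n) for v, n in adj.items()}
    forced = []
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue
            if len(adj[v]) == 0:                 # isolated vertex: always take it
                forced.append(v)
                del adj[v]
                changed = True
            elif len(adj[v]) == 1:               # pendant vertex: take v, drop its neighbor
                (u,) = adj[v]
                forced.append(v)
                for w in adj[u]:
                    adj[w].discard(u)
                del adj[u]
                del adj[v]
                changed = True
    return adj, forced
```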
Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal
The Odd Cycle Transversal problem (OCT) asks whether a given graph can be
made bipartite by deleting at most k of its vertices. In a breakthrough
result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an
O(4^k kmn)-time algorithm for it, the first algorithm with polynomial
runtime of uniform degree for every fixed k. It is known that this implies a
polynomial-time compression algorithm that turns OCT instances into equivalent
instances of size at most O(4^k), a so-called kernelization. Since then,
the existence of a polynomial kernel for OCT, i.e., a kernelization with size
bounded polynomially in k, has turned into one of the main open questions in
the study of kernelization.
This work provides the first (randomized) polynomial kernelization for OCT.
We introduce a novel kernelization approach based on matroid theory, where we
encode all relevant information about a problem instance into a matroid with a
representation of size polynomial in k. For OCT, the matroid is built to
allow us to simulate the computation of the iterative compression step of the
algorithm of Reed, Smith, and Vetta, applied (for only one round) to an
approximate odd cycle transversal which it is aiming to shrink to size k. The
process is randomized, with one-sided error exponentially small in k, where
the result can contain false positives but no false negatives, and the size
guarantee is cubic in the size of the approximate solution. Combined with an
O(sqrt(log n))-approximation (Agarwal et al., STOC 2005), we get a
reduction of the instance to size O(k^{4.5}), implying a randomized
polynomial kernelization.
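For readers unfamiliar with OCT, the brute-force baseline below simply tests every deletion set of size at most k for bipartiteness. It is only meant to make the problem statement concrete (all names are illustrative); the point of the kernelization is to shrink the instance to size O(k^{4.5}) before any such exact search is run.

```python
from itertools import combinations

def has_oct_of_size(adj, k):
    """Check whether deleting at most k vertices makes the graph bipartite.

    adj maps each vertex to an iterable of neighbors.  Brute force over all
    deletion sets of size <= k, roughly O(n^k) time.
    """
    def bipartite(removed):
        color = {}
        for s in adj:
            if s in removed or s in color:
                continue
            color[s] = 0
            stack = [s]
            while stack:
                v = stack.pop()
                for u in adj[v]:
                    if u in removed:
                        continue
                    if u not in color:
                        color[u] = 1 - color[v]
                        stack.append(u)
                    elif color[u] == color[v]:
                        return False          # odd cycle survives the deletion
        return True

    vertices = list(adj)
    for r in range(min(k, len(vertices)) + 1):
        for removed in combinations(vertices, r):
            if bipartite(set(removed)):
                return True
    return False
```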
Paradigms for Parameterized Enumeration
The aim of the paper is to examine the computational complexity and
algorithmics of enumeration, the task of outputting all solutions of a given
problem, from the point of view of parameterized complexity. First, we formally
define different notions of efficient enumeration in the context of
parameterized complexity. Second, we show how different algorithmic paradigms
can be used to obtain parameter-efficient enumeration algorithms in a
number of examples. These paradigms use well-known principles from the design
of parameterized decision algorithms as well as enumeration techniques, such as
kernelization and self-reducibility. The concept of kernelization, in
particular, leads to a characterization of fixed-parameter tractable
enumeration problems.
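A standard example of the branching-based enumeration the abstract alludes to is listing vertex covers of size at most k with a bounded search tree. The sketch below is illustrative (not taken from the paper); among its outputs is every inclusion-minimal vertex cover of size at most k.

```python
def enumerate_vertex_covers(edges, k):
    """Enumerate vertex covers of size at most k via a bounded search tree.

    Branch on an uncovered edge {u, v}: any cover must contain u or v, so the
    search tree has at most 2^k leaves.  Covers found at the leaves are
    collected in a set; every minimal vertex cover of size <= k is among them.
    """
    solutions = set()

    def branch(cover):
        uncovered = next((e for e in edges
                          if e[0] not in cover and e[1] not in cover), None)
        if uncovered is None:
            solutions.add(frozenset(cover))
            return
        if len(cover) == k:
            return                 # budget spent but an edge is still uncovered
        u, v = uncovered
        branch(cover | {u})
        branch(cover | {v})

    branch(frozenset())
    return solutions

# enumerate_vertex_covers([(1, 2), (2, 3), (3, 4)], 2)
# -> {frozenset({1, 3}), frozenset({2, 3}), frozenset({2, 4})}
```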
Meta-Kernelization using Well-Structured Modulators
Kernelization investigates exact preprocessing algorithms with performance
guarantees. The most prevalent type of parameter used in kernelization is the
solution size of optimization problems; however, structural parameters have
also been successfully used to obtain polynomial kernels for a wide range of
problems. Many of these parameters can be defined as the size of a smallest
modulator of the given graph into a fixed graph class (i.e., a set of vertices
whose deletion puts the graph into the graph class). Such parameters admit the
construction of polynomial kernels even when the solution size is large or not
applicable. This work follows up on the research on meta-kernelization
frameworks in terms of structural parameters.
We develop a class of parameters which are based on a more general view on
modulators: instead of size, the parameters employ a combination of rank-width
and split decompositions to measure structure inside the modulator. This allows
us to lift kernelization results from modulator-size to more general
parameters, hence providing smaller kernels. We show (i) how such large but
well-structured modulators can be efficiently approximated, (ii) how they can
be used to obtain polynomial kernels for any graph problem expressible in
Monadic Second Order logic, and (iii) how they allow the extension of previous
results in the area of structural meta-kernelization.
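To make the notion of a modulator concrete, the sketch below finds a smallest modulator into a graph class given as a membership test. It is a brute-force illustration only, with hypothetical names; the paper instead approximates large but well-structured modulators.

```python
from itertools import combinations

def smallest_modulator(adj, in_class, max_size):
    """Find a smallest modulator into a graph class, given a membership test.

    adj maps each vertex to a set of neighbors.  A modulator is a vertex set
    whose deletion puts the graph into the class (e.g. "is a forest" yields a
    feedback vertex set, "is edgeless" yields a vertex cover).  Brute force in
    roughly O(n^max_size) time, only to make the definition concrete.
    """
    vertices = list(adj)
    for size in range(max_size + 1):
        for mod in combinations(vertices, size):
            removed = set(mod)
            rest = {v: adj[v] - removed for v in adj if v not in removed}
            if in_class(rest):
                return removed
    return None

# Example predicate: "edgeless", so the modulator found is a minimum vertex cover.
def edgeless(adj):
    return all(not nbrs for nbrs in adj.values())
```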
Kernelizations for the hybridization number problem on multiple nonbinary trees
Given a finite set, a collection of rooted phylogenetic trees on this set, and
an integer k, the Hybridization Number problem asks if there exists a
phylogenetic network on the same set that displays all trees from the
collection and has reticulation number at most k. We show two kernelization
algorithms for Hybridization Number, with kernel sizes bounded in terms of k,
the number of input trees, and their maximum outdegree.
Experiments on simulated data demonstrate the practical relevance of
these kernelization algorithms. In addition, we present an algorithm whose
running time is the product of a computable function of the parameter k and a
polynomial in the input size …