Lower bounds for kernelizations
"Vegeu el resum a l'inici del document del fitxer adjunt"
Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal
The Odd Cycle Transversal problem (OCT) asks whether a given graph can be
made bipartite by deleting at most k of its vertices. In a breakthrough
result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an
O(4^k kmn) time algorithm for it, the first algorithm with runtime
polynomial of uniform degree for every fixed k. It is known that this
implies a polynomial-time compression algorithm that turns OCT instances
into equivalent instances of size at most O(4^k), a so-called
kernelization. Since then the existence of a polynomial kernel for OCT,
i.e., a kernelization with size bounded polynomially in k, has turned into
one of the main open questions in the study of kernelization.
This work provides the first (randomized) polynomial kernelization for OCT.
We introduce a novel kernelization approach based on matroid theory, where we
encode all relevant information about a problem instance into a matroid with
a representation of size polynomial in k. For OCT, the matroid is built to
allow us to simulate the computation of the iterative compression step of the
algorithm of Reed, Smith, and Vetta, applied (for only one round) to an
approximate odd cycle transversal which it is aiming to shrink to size k. The
process is randomized with one-sided error exponentially small in k: the
result can contain false positives but no false negatives, and the size
guarantee is cubic in the size of the approximate solution. Combined with an
O(√(log n))-approximation (Agarwal et al., STOC 2005), we get a reduction of
the instance to size O(k^{4.5}), implying a randomized polynomial
kernelization.
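For intuition, the O(k^{4.5}) bound can be recovered from the two
ingredients above. The following back-of-the-envelope accounting is a
reconstruction of the standard argument, not a quotation from the paper,
using the usual case distinction on k versus log n:

    \[
      s = O\bigl(k\sqrt{\log n}\bigr)
      \quad\Longrightarrow\quad
      s^3 = O\bigl(k^3 (\log n)^{3/2}\bigr),
    \]

where s is the size of the approximate odd cycle transversal. If
k ≥ log n, then (log n)^{3/2} ≤ k^{3/2} and the bound becomes O(k^{4.5});
if k < log n, then 4^k ≤ n^2, so the O(4^k kmn) algorithm of Reed, Smith,
and Vetta already runs in polynomial time and decides the instance
outright.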
Hierarchies of Inefficient Kernelizability
The framework of Bodlaender et al. (ICALP 2008) and Fortnow and Santhanam
(STOC 2008) allows us to exclude the existence of polynomial kernels for a
range of problems under reasonable complexity-theoretical assumptions. However,
there are also some issues that are not addressed by this framework, including
the existence of Turing kernels, such as the "kernelization" of Leaf
Out-Branching(k) into a disjunction over n instances of size poly(k). Observing
that Turing kernels are preserved by polynomial parametric transformations, we
define a kernelization hardness hierarchy, akin to the M- and W-hierarchy of
ordinary parameterized complexity, by the PPT-closure of problems that seem
likely to be fundamentally hard for efficient Turing kernelization. We find
that several previously considered problems are complete for our fundamental
hardness class, including Min Ones d-SAT(k), Binary NDTM Halting(k), Connected
Vertex Cover(k), and Clique(k log n), the clique problem parameterized by
k log n.
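To make the Leaf Out-Branching example concrete, the following Python
sketch shows the general shape of such a disjunctive Turing kernelization;
the helpers rooted_kernel and oracle are hypothetical placeholders, not an
API from the paper.

    # Sketch of a disjunctive ("OR") Turing kernelization, modeled on the
    # known example for Leaf Out-Branching(k): guessing a root reduces the
    # problem to a rooted variant that has a kernel of size poly(k).
    # rooted_kernel and oracle are hypothetical placeholders.

    def turing_kernel_leaf_out_branching(vertices, k, rooted_kernel, oracle):
        """Decide Leaf Out-Branching(k) with n oracle calls.

        rooted_kernel(root, k) -> a kernelized instance of size poly(k)
        oracle(instance)       -> True/False answer for that instance
        """
        for root in vertices:
            small_instance = rooted_kernel(root, k)  # size poly(k)
            if oracle(small_instance):
                return True
        # The answer is the disjunction of all n oracle answers, which is
        # exactly what an ordinary (many-one) kernel cannot express.
        return False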
A Hierarchy of Polynomial Kernels
In parameterized algorithmics, kernelization is defined as a
polynomial-time algorithm that transforms an instance of a given problem
into an equivalent instance whose size is bounded by a function of the
parameter. Since this smaller instance can afterwards be solved to find an
answer to the original question, kernelization is often presented as a form
of preprocessing. A natural generalization of kernelization is a process
that produces a number of smaller instances, possibly also using negation,
whose answers together provide an answer to the original problem. This
generalization is called Turing kernelization. This immediately raises
questions of equivalence: when is one form possible and not the other?
These have been long-standing open problems in parameterized complexity. In
the present paper, we answer many of them. In particular, we show that
Turing kernelization differs not only from regular kernelization, but also
from intermediate forms such as truth-table kernelization. We obtain
absolute results by diagonalization, as well as results on natural problems
that depend on widely accepted complexity-theoretic assumptions. In
particular, we improve on known lower bounds for the kernel size of
compositional problems under these assumptions.
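As a rough illustration of the intermediate forms mentioned above, the
following Python sketch contrasts the non-adaptive (truth-table) and
adaptive (Turing) query disciplines; the interfaces are simplified
assumptions for illustration, not the paper's formal definitions.

    # Truth-table kernelization: all poly(k)-sized queries are produced up
    # front, and a fixed Boolean function combines the oracle's answers.
    def truth_table_kernel(x, k, make_queries, combine, oracle):
        queries = make_queries(x, k)            # all queries fixed in advance
        answers = [oracle(q) for q in queries]  # each query has size poly(k)
        return combine(answers)

    # Turing kernelization: each poly(k)-sized query may depend on the
    # answers to earlier queries (adaptivity is the extra power).
    def turing_kernel(x, k, next_query, oracle, rounds):
        answers = []
        for _ in range(rounds):
            q = next_query(x, k, answers)       # depends on query history
            answers.append(oracle(q))
        return answers[-1]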
Polynomial Kernels for Weighted Problems
Kernelization is a formalization of efficient preprocessing for NP-hard
problems using the framework of parameterized complexity. Among open problems
in kernelization it has been asked many times whether there are deterministic
polynomial kernelizations for Subset Sum and Knapsack when parameterized by the
number of items.
We answer both questions affirmatively by using an algorithm for compressing
numbers due to Frank and Tardos (Combinatorica 1987). This result was first
used by Marx and Végh (ICALP 2013) in the context of kernelization. We
further illustrate its applicability by giving polynomial kernels also for
weighted versions of several well-studied parameterized problems.
Furthermore, when parameterized by the different item sizes we obtain a
polynomial kernelization for Subset Sum and an exponential kernelization for
Knapsack. Finally, we also obtain kernelization results for polynomial
integer programs.
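What makes such number compression work is that the replacement weights
preserve the outcome of every relevant linear comparison. The following
Python toy checks this equivalence by brute force for small Subset Sum
instances; it treats the Frank and Tardos algorithm itself as a black box
and only illustrates the guarantee a kernelization needs from it.

    # A Subset Sum instance on n items is determined, up to equivalence, by
    # which of the 2^n subsets meet the target. Number compression replaces
    # the weights by ones of bit-length poly(n) that leave all of these
    # comparisons unchanged. Brute-force check, small n only.
    from itertools import product

    def equivalent(weights_a, target_a, weights_b, target_b):
        n = len(weights_a)
        assert len(weights_b) == n
        for x in product((0, 1), repeat=n):
            hit_a = sum(w * xi for w, xi in zip(weights_a, x)) == target_a
            hit_b = sum(w * xi for w, xi in zip(weights_b, x)) == target_b
            if hit_a != hit_b:
                return False
        return True

    # Toy example: huge weights and a hand-picked equivalent instance with
    # small weights (in general, Frank and Tardos guarantee a replacement
    # with poly(n) bits).
    big = [10**50, 2 * 10**50, 3 * 10**50]
    print(equivalent(big, 3 * 10**50, [1, 2, 3], 3))  # True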
A shortcut to (sun)flowers: Kernels in logarithmic space or linear time
We investigate whether kernelization results can be obtained if we restrict
kernelization algorithms to run in logarithmic space. This restriction for
kernelization is motivated by the question of what results are attainable for
preprocessing via simple and/or local reduction rules. We find kernelizations
for d-Hitting Set(k), d-Set Packing(k), Edge Dominating Set(k) and a number of
hitting and packing problems in graphs, each running in logspace. Additionally,
we return to the question of linear-time kernelization. For d-Hitting Set(k) a
linear-time kernelization was given by van Bevern [Algorithmica (2014)]. We
give a simpler procedure and save a large constant factor in the size bound.
Furthermore, we show that we can obtain a linear-time kernel for d-Set
Packing(k) as well.
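For context, the following Python sketch implements the classical
sunflower-based reduction for d-Hitting Set(k) by brute force; the point of
the paper is achieving such reductions in logspace or linear time, which
this naive version makes no attempt at.

    # Classical sunflower reduction for d-Hitting Set(k): if k+1 sets form
    # a sunflower (pairwise intersections all equal to a common core C),
    # then a hitting set of size <= k cannot hit k+1 disjoint petals, so it
    # must hit C; the whole sunflower can be replaced by C alone.
    from itertools import combinations

    def reduce_hitting_set(sets, k):
        """Returns (reduced_family, feasible_flag); preserves all
        solutions of size <= k."""
        family = [frozenset(s) for s in sets]
        changed = True
        while changed:
            changed = False
            cores = {frozenset(c) for s in family
                     for r in range(len(s) + 1)
                     for c in combinations(s, r)}
            for core in cores:
                petals, used = [], set()
                for s in family:          # greedy petal collection
                    rest = s - core
                    if core <= s and not (rest & used):
                        petals.append(s)
                        used |= rest
                if len(petals) >= k + 1:
                    if not core:          # k+1 pairwise disjoint sets
                        return family, False
                    family = [s for s in family if s not in petals]
                    family.append(core)
                    changed = True
                    break
        return family, True

    # Example: three sets pairwise meeting only in {1}, with k = 1; they
    # collapse to their core, leaving [frozenset({1})].
    print(reduce_hitting_set([{1, 2}, {1, 3}, {1, 4}], 1))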
On Kernelization for Edge Dominating Set under Structural Parameters
In the NP-hard Edge Dominating Set problem (EDS) we are given a graph G=(V,E) and an integer k, and need to determine whether there is a set F ⊆ E of at most k edges that are incident with all (other) edges of G. It is known that this problem is fixed-parameter tractable and admits a polynomial kernelization when parameterized by k. A caveat for this parameter is that it needs to be large, i.e., at least equal to half the size of a maximum matching of G, for instances not to be trivially negative. Motivated by this, we study the existence of polynomial kernelizations for EDS when parameterized by structural parameters that may be much smaller than k.
Unfortunately, at first glance this looks rather hopeless: even when parameterized by the deletion distance to a disjoint union of paths P_3 of length two, there is no polynomial kernelization (under standard assumptions), ruling out polynomial kernelizations for many smaller parameters like the feedback vertex set size. In contrast, somewhat surprisingly, there is a polynomial kernelization for deletion distance to a disjoint union of paths P_5 of length four. As our main result, we fully classify, for all finite sets H of graphs, whether a kernelization with size polynomial in |X| is possible when given a set X such that each connected component of G-X is isomorphic to a graph in H.
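To make the problem definition concrete, here is a small, purely
illustrative Python checker for the edge domination property.

    # F is an edge dominating set of G if every edge of G shares an
    # endpoint with some edge in F.
    def is_edge_dominating_set(edges, F):
        covered = {v for e in F for v in e}   # endpoints of edges in F
        return all(u in covered or v in covered for (u, v) in edges)

    # Example: path a-b-c-d; the middle edge dominates all three edges.
    path = [("a", "b"), ("b", "c"), ("c", "d")]
    print(is_edge_dominating_set(path, [("b", "c")]))  # True
    print(is_edge_dominating_set(path, [("a", "b")]))  # False: (c, d) missed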
Towards Work-Efficient Parallel Parameterized Algorithms
Parallel parameterized complexity theory studies how fixed-parameter
tractable (fpt) problems can be solved in parallel. Previous theoretical work
focused on parallel algorithms that are very fast in principle, but did not
take into account that when we only have a small number of processors (between
2 and, say, 1024), it is more important that the parallel algorithms are
work-efficient. In the present paper we investigate how work-efficient fpt
algorithms can be designed. We review standard methods from fpt theory, like
kernelization, search trees, and interleaving, and prove trade-offs for them
between work efficiency and runtime improvements. This results in a toolbox for
developing work-efficient parallel fpt algorithms.
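As a toy illustration of these trade-offs, the following Python sketch
parallelizes the standard 2^k search tree for Vertex Cover(k) by expanding
it sequentially to a small depth and distributing the resulting subtrees;
this is a generic fpt example under our own assumptions, not the paper's
construction.

    # Parallel bounded search tree for Vertex Cover(k): branch on an
    # endpoint of an uncovered edge. Expanding only to depth par_depth
    # keeps the total work within a constant factor of the sequential
    # 2^k bound, which is the work-efficiency concern discussed above.
    from concurrent.futures import ProcessPoolExecutor

    def vc_branch(edges, k):
        """Sequential 2^k search tree."""
        if not edges:
            return True
        if k == 0:
            return False
        u, v = edges[0]
        return (vc_branch([e for e in edges if u not in e], k - 1)
                or vc_branch([e for e in edges if v not in e], k - 1))

    def vc_parallel(edges, k, par_depth=3, workers=4):
        frontier = [(list(edges), k)]
        for _ in range(par_depth):           # cheap sequential expansion
            nxt = []
            for es, kk in frontier:
                if not es:
                    return True              # this branch is already done
                if kk == 0:
                    continue                 # dead branch
                u, v = es[0]
                nxt.append(([e for e in es if u not in e], kk - 1))
                nxt.append(([e for e in es if v not in e], kk - 1))
            frontier = nxt
        if not frontier:
            return False
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return any(pool.map(vc_branch, *zip(*frontier)))

    if __name__ == "__main__":
        g = [(1, 2), (2, 3), (1, 3), (3, 4)]  # triangle plus pendant edge
        print(vc_parallel(g, 2))  # True: {2, 3} covers all edges
        print(vc_parallel(g, 1))  # False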