Parameterized Algorithms on Perfect Graphs for deletion to $(r,\ell)$-graphs
For fixed integers $r, \ell \geq 0$, a graph $G$ is called an {\em
$(r,\ell)$-graph} if the vertex set $V(G)$ can be partitioned into $r$
independent sets and $\ell$ cliques. The class of $(r,\ell)$ graphs
generalizes $r$-colourable graphs (when $\ell = 0$) and hence, not surprisingly,
determining whether a given graph is an $(r,\ell)$-graph is \NP-hard even when
$r \geq 3$ or $\ell \geq 3$ in general graphs.
When $r$ and $\ell$ are part of the input, the recognition problem is
\NP-hard even if the input graph is a perfect graph (where the {\sc Chromatic
Number} problem is solvable in polynomial time). It is also known to be
fixed-parameter tractable (FPT) on perfect graphs when parameterized by $r$ and
$\ell$, i.e., there is an $f(r+\ell) \cdot n^{\Oh(1)}$ algorithm on perfect
graphs on $n$ vertices, where $f$ is some (exponential) function of $r$ and
$\ell$.
In this paper, we consider the parameterized complexity of the following
problem, which we call {\sc Vertex Partization}. Given a perfect graph $G$ and
positive integers $r, \ell, k$, decide whether there exists a set $S \subseteq
V(G)$ of size at most $k$ such that the deletion of $S$ from $G$ results in an
$(r,\ell)$-graph. We obtain the following results: \begin{enumerate} \item {\sc
Vertex Partization} on perfect graphs is FPT when parameterized by $k+r+\ell$.
\item The problem does not admit any polynomial sized kernel when parameterized
by $k+r+\ell$. In other words, in polynomial time, the input graph cannot be
compressed to an equivalent instance of size polynomial in $k+r+\ell$. In fact,
our result holds even when $k = 0$.
\item When $r$ and $\ell$ are universal constants, {\sc Vertex Partization} on
perfect graphs, parameterized by $k$, has a polynomial sized kernel.
\end{enumerate}
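To make the definition concrete, here is a brute-force recognition sketch (not from the paper; the function name and graph representation are ad hoc) that tries every assignment of vertices to $r$ independent-set parts and $\ell$ clique parts:

```python
from itertools import product

def is_rl_graph(n, edges, r, l):
    """Brute-force check whether a graph on vertices 0..n-1 is an
    (r, l)-graph, i.e. its vertex set splits into r independent sets
    and l cliques.  Exponential in n; for illustration only."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True

    # Labels 0..r-1 mean "independent-set part", r..r+l-1 mean "clique part".
    def valid(labeling):
        for u in range(n):
            for v in range(u + 1, n):
                if labeling[u] != labeling[v]:
                    continue
                if labeling[u] < r and adj[u][v]:
                    return False  # edge inside an independent-set part
                if labeling[u] >= r and not adj[u][v]:
                    return False  # missing edge inside a clique part
        return True

    return any(valid(lab) for lab in product(range(r + l), repeat=n))
```

For instance, the 5-cycle is not a split graph (a $(1,1)$-graph) but does admit a partition into two independent sets and one clique.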
Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal
The Odd Cycle Transversal problem (OCT) asks whether a given graph can be
made bipartite by deleting at most $k$ of its vertices. In a breakthrough
result Reed, Smith, and Vetta (Operations Research Letters, 2004) gave a
$\BigOh(4^k kmn)$ time algorithm for it, the first algorithm with polynomial
runtime of uniform degree for every fixed $k$. It is known that this implies a
polynomial-time compression algorithm that turns OCT instances into equivalent
instances of size at most $\BigOh(4^k)$, a so-called kernelization. Since then
the existence of a polynomial kernel for OCT, i.e., a kernelization with size
bounded polynomially in $k$, has turned into one of the main open questions in
the study of kernelization.
This work provides the first (randomized) polynomial kernelization for OCT.
We introduce a novel kernelization approach based on matroid theory, where we
encode all relevant information about a problem instance into a matroid with a
representation of size polynomial in $k$. For OCT, the matroid is built to
allow us to simulate the computation of the iterative compression step of the
algorithm of Reed, Smith, and Vetta, applied (for only one round) to an
approximate odd cycle transversal which it is aiming to shrink to size $k$. The
process is randomized with one-sided error exponentially small in $k$: the
result can contain false positives but no false negatives, and the size
guarantee is cubic in the size of the approximate solution. Combined with an
$\BigOh(\sqrt{\log n})$-approximation (Agarwal et al., STOC 2005), we get a
reduction of the instance to size $\BigOh(k^{4.5})$, implying a randomized
polynomial kernelization.
Comment: Minor changes to agree with the SODA 2012 version of the paper
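For reference, the problem itself (though none of the algorithmic machinery above) fits in a few lines of brute force; `has_oct` and the representation are ad hoc names for this sketch:

```python
from itertools import combinations

def is_bipartite(n, edges, removed):
    """2-colour the graph induced on the vertices outside `removed`."""
    adj = {v: [] for v in range(n) if v not in removed}
    for u, v in edges:
        if u not in removed and v not in removed:
            adj[u].append(v)
            adj[v].append(u)
    colour = {}
    for s in adj:
        if s in colour:
            continue
        colour[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in colour:
                    colour[w] = colour[u] ^ 1
                    stack.append(w)
                elif colour[w] == colour[u]:
                    return False  # odd cycle survives the deletion
    return True

def has_oct(n, edges, k):
    """Is there an odd cycle transversal of size <= k?  Tries every
    candidate deletion set -- exponential, illustration only."""
    return any(
        is_bipartite(n, edges, set(sub))
        for size in range(k + 1)
        for sub in combinations(range(n), size)
    )
```

A 5-cycle needs one deletion; a $K_4$ needs two.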
On the (non-)existence of polynomial kernels for Pl-free edge modification problems
Given a graph G = (V,E) and an integer k, an edge modification problem for a
graph property P consists in deciding whether there exists a set of edges F of
size at most k such that the graph H = (V,E \vartriangle F) satisfies the
property P. In the P edge-completion problem, the set F of edges is constrained
to be disjoint from E; in the P edge-deletion problem, F is a subset of E; no
constraint is imposed on F in the P edge-edition problem. A number of
optimization problems can be expressed in terms of graph modification problems
which have been extensively studied in the context of parameterized complexity.
When parameterized by the size k of the edge set F, it has been proved that if
P is a hereditary property characterized by a finite set of forbidden induced
subgraphs, then the three P edge-modification problems are FPT. It was then
natural to ask whether these problems also admit a polynomial size kernel.
Using recent lower bound techniques, Kratsch and Wahlström answered this
question negatively. However, the problem remains open on many natural graph
classes characterized by forbidden induced subgraphs. Kratsch and Wahlström
asked whether the result holds when the forbidden subgraphs are paths or cycles
and pointed out that the problem is already open in the case of P4-free graphs
(i.e. cographs). This paper provides positive and negative results in that line
of research. We prove that parameterized cograph edge modification problems
have cubic vertex kernels whereas polynomial kernels are unlikely to exist for
the Pl-free and Cl-free edge-deletion problems for large enough l.
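The target class in the positive result can be recognized directly from its forbidden subgraph: a graph is a cograph exactly when it has no induced P4. A brute-force detector (names and representation ad hoc, not from the paper):

```python
from itertools import combinations, permutations

def has_induced_p4(n, edges):
    """Brute-force test for an induced path on four vertices; a graph is
    a cograph (P4-free) iff this returns False.  O(n^4) subsets tried."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    for quad in combinations(range(n), 4):
        # Try each ordering of the four vertices as the path a-b-c-d.
        for a, b, c, d in permutations(quad):
            if (adj[a][b] and adj[b][c] and adj[c][d]
                    and not adj[a][c] and not adj[a][d] and not adj[b][d]):
                return True
    return False
```

Note that a 4-cycle is a cograph (its "chord-less path" 0-1-2-3 is closed by the edge 3-0), while the 5-cycle is not.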
Point Line Cover: The Easy Kernel is Essentially Tight
The input to the NP-hard Point Line Cover problem (PLC) consists of a set $P$
of $n$ points on the plane and a positive integer $k$, and the question is
whether there exists a set of at most $k$ lines which pass through all points
in $P$. A simple polynomial-time reduction reduces any input to one with at
most $k^2$ points. We show that this is essentially tight under standard
assumptions. More precisely, unless the polynomial hierarchy collapses to its
third level, there is no polynomial-time algorithm that reduces every instance
of PLC to an equivalent instance with $O(k^{2-\epsilon})$ points, for
any $\epsilon > 0$. This answers, in the negative, an open problem posed by
Lokshtanov (PhD Thesis, 2009).
Our proof uses the machinery for deriving lower bounds on the size of kernels
developed by Dell and van Melkebeek (STOC 2010). It has two main ingredients:
We first show, by reduction from Vertex Cover, that PLC---conditionally---has
no kernel of total size $O(k^{2-\epsilon})$ bits. This does not directly imply
the claimed lower bound on the number of points, since the best known
polynomial-time encoding of a PLC instance with few points may still require
a large number of bits. To get around this we build on work of Goodman et al.
(STOC 1989) and devise an oracle communication protocol of cost
$O(k^2 \log k)$ for PLC; its main building block is a bound of $2^{O(n \log n)}$
for the order types of $n$ points that are not necessarily in general position,
and an explicit algorithm that enumerates all possible order types of $n$
points. This protocol and the lower bound on total size together yield the
stated lower bound on the number of points.
While a number of essentially tight polynomial lower bounds on total sizes of
kernels are known, our result is---to the best of our knowledge---the first to
show a nontrivial lower bound for structural/secondary parameters.
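The "simple polynomial-time reduction" mentioned above is the classic high-multiplicity rule: a line through at least $k+1$ points must belong to every cover by at most $k$ lines, since any other line meets it in at most one point. A sketch (names ad hoc; integer coordinates and distinct points assumed):

```python
from itertools import combinations

def reduce_plc(points, k):
    """Exhaustively apply the rule 'a line through >= k+1 points is forced'.
    Returns a reduced (points, k) with at most k*k points, or None if the
    instance is a provable NO-instance.  Brute force, illustration only."""
    def collinear(p, q, s):
        return (q[0] - p[0]) * (s[1] - p[1]) == (q[1] - p[1]) * (s[0] - p[0])

    points = list(points)
    while k >= 0:
        forced = None
        for p, q in combinations(points, 2):
            on_line = [s for s in points if collinear(p, q, s)]
            if len(on_line) >= k + 1:   # this line is in every k-line cover
                forced = on_line
                break
        if forced is None:
            break
        points = [s for s in points if s not in forced]
        k -= 1
    # Now every line covers at most k points, so k lines cover <= k*k points.
    if k < 0 or len(points) > k * k:
        return None
    return points, k
```

On five collinear points with $k=1$ the rule solves the instance outright; a $3 \times 3$ grid with $k=2$ is rejected, as three lines are needed.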
Tight Kernel Bounds for Problems on Graphs with Small Degeneracy
In this paper we consider kernelization for problems on d-degenerate graphs,
i.e. graphs such that every subgraph contains a vertex of degree at most $d$.
This graph class generalizes many classes of graphs for which effective
kernelization is known to exist, e.g. planar graphs, H-minor-free graphs, and
H-topological-minor-free graphs. We show that for several natural problems on
d-degenerate graphs the best known kernelization upper bounds are essentially
tight.
Comment: Full version of ESA 201
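As a reminder of the parameter involved (a standard textbook routine, not tied to this paper), the degeneracy of a graph can be computed greedily by repeatedly deleting a vertex of minimum remaining degree:

```python
def degeneracy(n, edges):
    """Smallest d such that every subgraph has a vertex of degree <= d,
    computed by the standard greedy minimum-degree removal order."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    d = 0
    while alive:
        u = min(alive, key=lambda v: len(adj[v]))
        d = max(d, len(adj[u]))       # degree at removal time
        for w in adj[u]:
            adj[w].discard(u)
        adj[u].clear()
        alive.remove(u)
    return d
```

Trees are 1-degenerate, cycles 2-degenerate, and $K_{d+1}$ is $d$-degenerate.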
A Hierarchy of Polynomial Kernels
In parameterized algorithmics, kernelization is defined as a
polynomial-time algorithm that transforms an instance of a given problem into
an equivalent instance whose size is bounded by a function of the parameter.
Since this smaller instance can afterwards be solved to answer the original
question, kernelization is often presented as a form of preprocessing. A
natural generalization of kernelization is a procedure that may produce a
number of smaller instances whose answers, possibly also using negation,
combine to answer the original problem. This generalization is called Turing
kernelization. Immediately, questions of equivalence arise: when is one form
of kernelization possible but not another? These have been long-standing open
problems in parameterized complexity. In the present paper, we answer many of
them. In particular, we show that Turing kernelizations differ not only from
regular kernelizations, but also from intermediate forms such as truth-table
kernelizations. We obtain absolute results by diagonalization, as well as
results on natural problems that depend on widely accepted complexity-theoretic
assumptions. In particular, we improve on known lower bounds for the kernel
size of compositional problems under these assumptions.
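As a baseline for the "regular kernelization" being generalized here, the classic Buss rule for Vertex Cover is the textbook example (standard material, not from this paper): a vertex of degree greater than $k$ must lie in every vertex cover of size at most $k$.

```python
def buss_kernel(edges, k):
    """Classic Buss kernelization for Vertex Cover: repeatedly pick any
    vertex of degree > k; a surviving YES-instance has <= k*k edges.
    Returns (reduced_edges, reduced_k), or None for a provable NO-instance."""
    edges = {frozenset(e) for e in edges}
    while k >= 0:
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = [v for v, dv in deg.items() if dv > k]
        if not high:
            break
        # high[0] is forced into the cover: delete it and spend budget.
        edges = {e for e in edges if high[0] not in e}
        k -= 1
    # Each remaining vertex covers <= k edges, so k of them cover <= k*k.
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k
```

A star $K_{1,5}$ with $k=1$ reduces to the empty instance; a triangle with $k=1$ is rejected, since its vertex cover number is 2.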