
    Parameterized Algorithms on Perfect Graphs for deletion to $(r,\ell)$-graphs

    For fixed integers $r, \ell \geq 0$, a graph $G$ is called an $(r,\ell)$-graph if the vertex set $V(G)$ can be partitioned into $r$ independent sets and $\ell$ cliques. The class of $(r,\ell)$-graphs generalizes $r$-colourable graphs (the case $\ell = 0$), so it is not surprising that determining whether a given graph is an $(r,\ell)$-graph is NP-hard on general graphs even when $r \geq 3$ or $\ell \geq 3$. When $r$ and $\ell$ are part of the input, the recognition problem is NP-hard even if the input graph is a perfect graph (where the Chromatic Number problem is solvable in polynomial time). It is, however, fixed-parameter tractable (FPT) on perfect graphs when parameterized by $r$ and $\ell$: there is an $f(r+\ell) \cdot n^{O(1)}$ algorithm on perfect graphs on $n$ vertices, where $f$ is some (exponential) function of $r$ and $\ell$. In this paper, we consider the parameterized complexity of the following problem, which we call Vertex Partization: given a perfect graph $G$ and positive integers $r, \ell, k$, decide whether there exists a set $S \subseteq V(G)$ of size at most $k$ such that the deletion of $S$ from $G$ results in an $(r,\ell)$-graph. We obtain the following results: (1) Vertex Partization on perfect graphs is FPT when parameterized by $k+r+\ell$. (2) The problem does not admit any polynomial-sized kernel when parameterized by $k+r+\ell$; in other words, in polynomial time the input graph cannot be compressed to an equivalent instance of size polynomial in $k+r+\ell$. In fact, our result holds even when $k=0$. (3) When $r$ and $\ell$ are universal constants, Vertex Partization on perfect graphs, parameterized by $k$, has a polynomial-sized kernel.
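    To make the definition concrete, here is a minimal brute-force sketch (written for this summary, not taken from the paper) that checks the $(r,\ell)$-graph property directly by trying all part assignments. It is exponential in the number of vertices and only illustrates the definition, not the FPT algorithm.

```python
from itertools import product

# Brute-force check of the (r, ell)-graph definition: try every assignment of
# vertices to r independent-set parts and ell clique parts. Exponential in
# |V(G)|; illustrative only.
def is_rl_graph(vertices, edges, r, ell):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    parts = range(r + ell)  # 0..r-1: independent sets; r..r+ell-1: cliques
    vs = list(vertices)
    for assignment in product(parts, repeat=len(vs)):
        label = dict(zip(vs, assignment))
        # adjacent vertices must not share an independent-set part
        if any(label[u] == label[v] and label[u] < r for u, v in edges):
            continue
        # non-adjacent vertices must not share a clique part
        ok = all(
            not (label[vs[i]] == label[vs[j]] and label[vs[i]] >= r
                 and vs[j] not in adj[vs[i]])
            for i in range(len(vs)) for j in range(i + 1, len(vs))
        )
        if ok:
            return True
    return False

# A 4-cycle is a (2,0)-graph (bipartite) but not a (0,1)-graph (not a clique).
c4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
print(is_rl_graph(*c4, r=2, ell=0))  # True
print(is_rl_graph(*c4, r=0, ell=1))  # False
```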

    Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal

    The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most $k$ of its vertices. In a breakthrough result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an $O(4^k \cdot kmn)$-time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed $k$. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most $O(4^k)$, a so-called kernelization. Since then, the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in $k$, has turned into one of the main open questions in the study of kernelization. This work provides the first (randomized) polynomial kernelization for OCT. We introduce a novel kernelization approach based on matroid theory, where we encode all relevant information about a problem instance into a matroid with a representation of size polynomial in $k$. For OCT, the matroid is built to allow us to simulate the computation of the iterative compression step of the algorithm of Reed, Smith, and Vetta, applied (for only one round) to an approximate odd cycle transversal which it is aiming to shrink to size $k$. The process is randomized with one-sided error exponentially small in $k$: the result can contain false positives but no false negatives, and the size guarantee is cubic in the size of the approximate solution. Combined with an $O(\sqrt{\log n})$-approximation (Agarwal et al., STOC 2005), we get a reduction of the instance to size $O(k^{4.5})$, implying a randomized polynomial kernelization.
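    As a point of reference for what OCT asks, the following is a small brute-force sketch (written for this summary; it is emphatically not the matroid-based kernelization from the paper): it tries every vertex subset of size at most $k$ and tests bipartiteness of the rest with a BFS 2-colouring.

```python
from collections import deque
from itertools import combinations

def is_bipartite(vertices, adj):
    # BFS 2-colouring restricted to the given vertex set.
    colour = {}
    for s in vertices:
        if s in colour:
            continue
        colour[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in vertices:
                    continue  # deleted vertex
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False
    return True

def has_oct_of_size(vertices, edges, k):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Exhaustively try every deletion set of size at most k.
    for size in range(k + 1):
        for deleted in combinations(vertices, size):
            if is_bipartite(set(vertices) - set(deleted), adj):
                return True
    return False

# A triangle needs exactly one deletion to become bipartite.
print(has_oct_of_size([0, 1, 2], [(0, 1), (1, 2), (2, 0)], k=0))  # False
print(has_oct_of_size([0, 1, 2], [(0, 1), (1, 2), (2, 0)], k=1))  # True
```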

    On the (non-)existence of polynomial kernels for $P_\ell$-free edge modification problems

    Given a graph $G = (V,E)$ and an integer $k$, an edge modification problem for a graph property $P$ consists in deciding whether there exists a set $F$ of edges of size at most $k$ such that the graph $H = (V, E \vartriangle F)$ satisfies the property $P$. In the $P$ edge-completion problem, the set $F$ of edges is constrained to be disjoint from $E$; in the $P$ edge-deletion problem, $F$ is a subset of $E$; no constraint is imposed on $F$ in the $P$ edge-edition problem. A number of optimization problems can be expressed in terms of graph modification problems, which have been extensively studied in the context of parameterized complexity. When parameterized by the size $k$ of the edge set $F$, it has been proved that if $P$ is a hereditary property characterized by a finite set of forbidden induced subgraphs, then the three $P$ edge-modification problems are FPT. It was then natural to ask whether these problems also admit a polynomial-size kernel. Using recent lower-bound techniques, Kratsch and Wahlstrom answered this question negatively. However, the problem remains open on many natural graph classes characterized by forbidden induced subgraphs. Kratsch and Wahlstrom asked whether the result holds when the forbidden subgraphs are paths or cycles, and pointed out that the problem is already open in the case of $P_4$-free graphs (i.e., cographs). This paper provides positive and negative results in that line of research. We prove that parameterized cograph edge modification problems have cubic vertex kernels, whereas polynomial kernels are unlikely to exist for the $P_\ell$-free and $C_\ell$-free edge-deletion problems for large enough $\ell$.
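    The generic FPT argument mentioned above (a finite forbidden-subgraph characterization yields a branching algorithm) can be made concrete for $P_4$-free edge deletion. The sketch below is the standard $O(3^k)$-branching scheme, written for this summary; it is not the cubic kernel from the paper.

```python
from itertools import permutations

def find_induced_p4(vertices, edges):
    """Return the three edges of some induced P4, or None if the graph is P4-free."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for a, b, c, d in permutations(vertices, 4):
        path = b in adj[a] and c in adj[b] and d in adj[c]
        chord = c in adj[a] or d in adj[a] or d in adj[b]
        if path and not chord:
            return (a, b), (b, c), (c, d)
    return None

def p4_free_edge_deletion(vertices, edges, k):
    p4 = find_induced_p4(vertices, edges)
    if p4 is None:
        return True  # already P4-free, i.e. a cograph
    if k == 0:
        return False
    # Some edge of this induced P4 must be deleted: branch three ways.
    for e in p4:
        rest = [f for f in edges if set(f) != set(e)]
        if p4_free_edge_deletion(vertices, rest, k - 1):
            return True
    return False

# Deleting any one edge of a P4 makes it a cograph.
print(p4_free_edge_deletion([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)], k=1))  # True
```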

    Point Line Cover: The Easy Kernel is Essentially Tight

    The input to the NP-hard Point Line Cover problem (PLC) consists of a set $P$ of $n$ points in the plane and a positive integer $k$, and the question is whether there exists a set of at most $k$ lines which pass through all points in $P$. A simple polynomial-time reduction reduces any input to one with at most $k^2$ points. We show that this is essentially tight under standard assumptions. More precisely, unless the polynomial hierarchy collapses to its third level, there is no polynomial-time algorithm that reduces every instance $(P,k)$ of PLC to an equivalent instance with $O(k^{2-\epsilon})$ points, for any $\epsilon > 0$. This answers, in the negative, an open problem posed by Lokshtanov (PhD Thesis, 2009). Our proof uses the machinery for deriving lower bounds on the size of kernels developed by Dell and van Melkebeek (STOC 2010). It has two main ingredients: we first show, by reduction from Vertex Cover, that PLC, conditionally, has no kernel of total size $O(k^{2-\epsilon})$ bits. This does not directly imply the claimed lower bound on the number of points, since the best known polynomial-time encoding of a PLC instance with $n$ points requires $\omega(n^2)$ bits. To get around this, we build on work of Goodman et al. (STOC 1989) and devise an oracle communication protocol of cost $O(n \log n)$ for PLC; its main building blocks are a bound of $O(n^{O(n)})$ on the number of order types of $n$ points that are not necessarily in general position, and an explicit algorithm that enumerates all possible order types of $n$ points. This protocol and the lower bound on total size together yield the stated lower bound on the number of points. While a number of essentially tight polynomial lower bounds on total sizes of kernels are known, our result is, to the best of our knowledge, the first to show a nontrivial lower bound for structural/secondary parameters.
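    The simple reduction to $k^2$ points mentioned in the abstract is the classical high-multiplicity rule: a line through more than $k$ points must belong to every solution of size at most $k$, since otherwise each of those points would need its own line. A small sketch of that kernel, assuming integer coordinates (written for this summary):

```python
from itertools import combinations

def collinear(p, q, r):
    # Integer cross-product test: no floating-point error for integer inputs.
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return (x2 - x1) * (y3 - y1) == (x3 - x1) * (y2 - y1)

def plc_kernel(points, k):
    points = set(points)
    changed = True
    while changed and k > 0:
        changed = False
        for p, q in combinations(points, 2):
            on_line = {r for r in points if collinear(p, q, r)}
            if len(on_line) > k:
                points -= on_line  # this line is forced into any solution
                k -= 1
                changed = True
                break
    if len(points) > k * k:
        return None  # k lines with <= k points each cannot cover them: "no"
    return points, k  # kernel: at most k^2 points remain

grid = [(x, y) for x in range(3) for y in range(3)]
print(plc_kernel(grid, k=3))  # no line has > 3 points, so all 9 <= 3^2 survive
```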

    Tight Kernel Bounds for Problems on Graphs with Small Degeneracy

    In this paper we consider kernelization for problems on $d$-degenerate graphs, i.e., graphs in which every subgraph contains a vertex of degree at most $d$. This graph class generalizes many classes of graphs for which effective kernelization is known to exist, e.g., planar graphs, $H$-minor-free graphs, and $H$-topological-minor-free graphs. We show that for several natural problems on $d$-degenerate graphs the best known kernelization upper bounds are essentially tight.
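    For reference, the degeneracy $d$ itself is computed by the standard minimum-degree peeling procedure. The sketch below (written for this summary, not from the paper) returns $d$ together with an elimination order witnessing it.

```python
# Degeneracy via peeling: repeatedly remove a vertex of minimum degree in the
# remaining graph; the largest degree seen at removal time is the degeneracy.
def degeneracy(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(vertices)
    d = 0
    order = []
    while remaining:
        v = min(remaining, key=lambda x: len(adj[x] & remaining))
        d = max(d, len(adj[v] & remaining))
        order.append(v)
        remaining.remove(v)
    return d, order  # degeneracy and an elimination order witnessing it

# A cycle is 2-degenerate; a star (like every tree) is 1-degenerate.
print(degeneracy([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])[0])  # 2
print(degeneracy([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)])[0])          # 1
```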

    A Hierarchy of Polynomial Kernels

    In parameterized algorithmics, a kernelization is a polynomial-time algorithm that transforms an instance of a given problem into an equivalent instance whose size is bounded by a function of the parameter. As this smaller instance can afterwards be solved to find an answer to the original question, kernelization is often presented as a form of preprocessing. A natural generalization of kernelization allows a number of smaller instances to be produced to provide an answer to the original problem, possibly also using negation. This generalization is called Turing kernelization. Immediately, questions of equivalence arise: when is one form possible and not the other? These have been long-standing open problems in parameterized complexity. In the present paper, we answer many of them. In particular, we show that Turing kernelization differs not only from regular kernelization but also from intermediate forms such as truth-table kernelization. We achieve absolute results by diagonalization, as well as results on natural problems that depend on widely accepted complexity-theoretic assumptions. In particular, we improve on known lower bounds for the kernel size of compositional problems under these assumptions.
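    As a concrete instance of the (regular) kernelization notion defined above, the classical Buss rules give a polynomial kernel for Vertex Cover; the sketch below is that standard textbook example (not from the paper).

```python
# Buss kernelization for Vertex Cover: a vertex of degree > k must be in any
# cover of size <= k; isolated vertices are dropped (we only store edges).
# A reduced "yes" instance has at most k^2 + k vertices and k^2 edges.
def buss_kernel(edges, k):
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, deg in degree.items():
            if deg > k:  # v is forced into the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None  # provably a "no" instance
    return edges, k  # kernel with at most k^2 edges

# A star K_{1,5} with k=1: the centre is forced, leaving an empty kernel.
star = [(0, i) for i in range(1, 6)]
print(buss_kernel(star, k=1))  # (set(), 0)
```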