    Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal

    The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most k of its vertices. In a breakthrough result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an O(4^k kmn)-time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed k. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most O(4^k), a so-called kernelization. Since then the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in k, has turned into one of the main open questions in the study of kernelization. This work provides the first (randomized) polynomial kernelization for OCT. We introduce a novel kernelization approach based on matroid theory, where we encode all relevant information about a problem instance into a matroid with a representation of size polynomial in k. For OCT, the matroid is built to allow us to simulate the computation of the iterative compression step of the algorithm of Reed, Smith, and Vetta, applied (for only one round) to an approximate odd cycle transversal which it is aiming to shrink to size k. The process is randomized with one-sided error exponentially small in k, where the result can contain false positives but no false negatives, and the size guarantee is cubic in the size of the approximate solution. Combined with an O(sqrt(log n))-approximation (Agarwal et al., STOC 2005), we get a reduction of the instance to size O(k^{4.5}), implying a randomized polynomial kernelization.
    Comment: Minor changes to agree with the SODA 2012 version of the paper.
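
    A minimal sketch of the underlying decision problem in Python, assuming a graph given as an edge list: it verifies bipartiteness after deleting a candidate set and decides OCT by brute force over all subsets of size at most k. This only illustrates what the kernelization must preserve; it is not the matroid-based compression itself.

        from itertools import combinations

        def is_bipartite(n, edges, removed=frozenset()):
            """Try to 2-color the graph induced on the vertices outside `removed`."""
            adj = {v: [] for v in range(n) if v not in removed}
            for u, v in edges:
                if u not in removed and v not in removed:
                    adj[u].append(v)
                    adj[v].append(u)
            color = {}
            for s in adj:
                if s in color:
                    continue
                color[s] = 0
                stack = [s]
                while stack:
                    u = stack.pop()
                    for w in adj[u]:
                        if w not in color:
                            color[w] = 1 - color[u]
                            stack.append(w)
                        elif color[w] == color[u]:
                            return False  # odd cycle found
            return True

        def has_oct_of_size(n, edges, k):
            """Decide OCT naively: try every vertex subset of size at most k."""
            return any(is_bipartite(n, edges, frozenset(S))
                       for r in range(k + 1)
                       for S in combinations(range(n), r))

        # A triangle needs exactly one deletion to become bipartite.
        assert not has_oct_of_size(3, [(0, 1), (1, 2), (2, 0)], 0)
        assert has_oct_of_size(3, [(0, 1), (1, 2), (2, 0)], 1)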

    Hitting forbidden minors: Approximation and Kernelization

    We study a general class of problems called F-deletion problems. In an F-deletion problem, we are asked whether a subset of at most k vertices can be deleted from a graph G such that the resulting graph does not contain as a minor any graph from the family F of forbidden minors. We obtain a number of algorithmic results on the F-deletion problem when F contains a planar graph. We give (1) a linear vertex kernel on graphs excluding the t-claw K_{1,t}, the star with t leaves, as an induced subgraph, where t is a fixed integer, and (2) an approximation algorithm achieving an approximation ratio of O(log^{3/2} OPT), where OPT is the size of an optimal solution on general undirected graphs. Finally, we obtain polynomial kernels for the case when F contains the graph θ_c, for a fixed integer c. The graph θ_c consists of two vertices connected by c parallel edges. Even though this may appear to be a very restricted class of problems, it already encompasses well-studied problems such as Vertex Cover, Feedback Vertex Set, and Diamond Hitting Set. The generic kernelization algorithm is based on a non-trivial application of protrusion techniques, previously used only for problems on topological graph classes.
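
    For a concrete case, take F = {θ_2}: a graph contains θ_2 as a minor exactly when it has a cycle, so θ_2-deletion is Feedback Vertex Set. A minimal brute-force sketch in Python, assuming an edge-list input; the protrusion-based kernelization of the paper is far more involved.

        from itertools import combinations

        def is_forest(n, edges, removed=frozenset()):
            """theta_2-minor-free means acyclic; detect cycles via union-find."""
            parent = {v: v for v in range(n) if v not in removed}
            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v
            for u, v in edges:
                if u in removed or v in removed:
                    continue
                ru, rv = find(u), find(v)
                if ru == rv:
                    return False    # this edge closes a cycle
                parent[ru] = rv
            return True

        def theta2_deletion(n, edges, k):
            """Brute-force F-deletion for F = {theta_2}: Feedback Vertex Set."""
            return any(is_forest(n, edges, frozenset(S))
                       for r in range(k + 1)
                       for S in combinations(range(n), r))

        # Two triangles sharing vertex 0 become a forest after deleting 0.
        g = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
        assert not theta2_deletion(5, g, 0)
        assert theta2_deletion(5, g, 1)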

    Parameterized Distributed Algorithms

    In this work, we initiate a thorough study of graph optimization problems parameterized by the output size in the distributed setting. In such a problem, an algorithm decides whether a solution of size bounded by k exists and, if so, finds one. We study fundamental problems, including Minimum Vertex Cover (MVC), Maximum Independent Set (MaxIS), Maximum Matching (MaxM), and many others, in both the LOCAL and CONGEST distributed computation models. We present lower bounds on the round complexity of solving parameterized problems in both models, together with optimal and near-optimal upper bounds. Our results extend beyond the scope of parameterized problems. We show that any LOCAL (1+epsilon)-approximation algorithm for the above problems must take Omega(epsilon^{-1}) rounds. Combined with the (epsilon^{-1} log n)^{O(1)}-round algorithm of [Ghaffari et al., 2017] and the Omega(sqrt{(log n)/(log log n)}) lower bound of [Kuhn et al., 2016], the lower bounds match the upper bound up to polynomial factors in both parameters. We also show that our parameterized approach reduces the runtime of exact and approximate CONGEST algorithms for MVC and MaxM if the optimal solution is small, without knowing its size beforehand. Finally, we propose the first o(n^2)-round CONGEST algorithms that approximate MVC within a factor strictly smaller than 2.
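
    The parameterized approach hinges on the solution size k being small. As a point of reference only, here is the classic sequential 2^k bounded-search-tree algorithm for k-Vertex Cover in Python; this is a sketch of the underlying FPT idea, not a LOCAL or CONGEST algorithm, and the edge-set encoding is an assumption of the sketch.

        def vc_branch(edges, k):
            """Classic 2^k branching for k-Vertex Cover: pick an uncovered
            edge (u, v); any cover must contain u or v, so branch on both."""
            if not edges:
                return set()        # every edge is covered
            if k == 0:
                return None         # budget exhausted but edges remain
            u, v = next(iter(edges))
            for pick in (u, v):
                rest = {e for e in edges if pick not in e}
                sub = vc_branch(rest, k - 1)
                if sub is not None:
                    return sub | {pick}
            return None

        # A 4-cycle has a vertex cover of size 2 but none of size 1.
        cycle = {(0, 1), (1, 2), (2, 3), (3, 0)}
        assert vc_branch(cycle, 1) is None
        assert len(vc_branch(cycle, 2)) <= 2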

    Improved Parameterized Algorithms for Constraint Satisfaction

    For many constraint satisfaction problems, the algorithm which chooses a random assignment achieves the best possible approximation ratio. For instance, a simple random assignment for Max-E3-Sat allows a 7/8-approximation, and for every epsilon > 0 there is no polynomial-time (7/8 + epsilon)-approximation unless P = NP. Another example is the Permutation CSP of bounded arity. Given the expected fraction ρ of the constraints satisfied by a random assignment (i.e., permutation), there is no (ρ + epsilon)-approximation algorithm for any epsilon > 0, assuming the Unique Games Conjecture (UGC). In this work, we consider the following parameterization of constraint satisfaction problems: given a set of m constraints of constant arity, can we satisfy at least ρm + k constraints, where ρ is the expected fraction of constraints satisfied by a random assignment? Constraint Satisfaction Problems Above Average have been posed in different forms in the literature (Niedermeier 2006; Mahajan, Raman, Sikdar 2009). We present a faster parameterized algorithm for deciding whether m/2 + k/2 equations can be simultaneously satisfied over F_2. As a consequence, we obtain O(k)-variable bikernels for Boolean CSPs of arity c for every fixed c, and for permutation CSPs of arity 3. This implies linear bikernels for many problems under the "above average" parameterization, such as Max-c-Sat, Set-Splitting, Betweenness, and Max Acyclic Subgraph. As a result, all the parameterized problems we consider in this paper admit 2^{O(k)}-time algorithms. We also obtain non-trivial hybrid algorithms for every Max c-CSP: for every instance I, we can either approximate I beyond the random assignment threshold in polynomial time, or we can find an optimal solution to I in subexponential time.
    Comment: A preliminary version of this paper has been accepted for IPEC 2011.
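
    To make the "above average" parameterization concrete: for Max-E3-Sat each clause on three distinct variables is violated by exactly one of its 8 local assignments, so a random assignment satisfies 7m/8 clauses in expectation. A hedged brute-force sketch in Python (exponential in n, unlike the 2^{O(k)}-time algorithms above; the signed-integer clause encoding is an assumption of the sketch):

        from itertools import product

        def above_average_e3sat(clauses, n, k):
            """Can some assignment satisfy at least ceil(7m/8) + k clauses?
            A clause is a triple of signed integers: +v is variable v,
            -v its negation, for v in 1..n."""
            m = len(clauses)
            target = -(-7 * m // 8) + k      # ceil(7m/8) + k
            best = max(
                sum(any((lit > 0) == bits[abs(lit) - 1] for lit in cl)
                    for cl in clauses)
                for bits in product((False, True), repeat=n))
            return best >= target

        # A single clause can always be fully satisfied: 1 >= ceil(7/8) + 0.
        assert above_average_e3sat([(1, -2, 3)], 3, 0)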

    A structural approach to kernels for ILPs: Treewidth and Total Unimodularity

    Kernelization is a theoretical formalization of efficient preprocessing for NP-hard problems. Empirically, preprocessing is highly successful in practice, for example in state-of-the-art ILP solvers like CPLEX. Motivated by this, previous work studied the existence of kernelizations for ILP-related problems, e.g., for testing feasibility of Ax <= b. In contrast to the observed success of CPLEX, however, the results were largely negative. Intuitively, practical instances have far more useful structure than the worst-case instances used to prove these lower bounds. In the present paper, we study the effect that subsystems with (a Gaifman graph of) bounded treewidth or with total unimodularity have on the kernelizability of the ILP feasibility problem. We show that, on the positive side, if these subsystems have a small number of variables on which they interact with the remaining instance, then we can efficiently replace them by smaller subsystems of size polynomial in the domain without changing feasibility. Thus, if large parts of an instance consist of such subsystems, this yields a substantial size reduction. We complement this by proving that relaxations of the considered structures, e.g., larger boundaries of the subsystems, allow worst-case lower bounds against kernelization. Thus, these relaxed structures can be used to build instance families that cannot be efficiently reduced, by any approach.
    Comment: Extended abstract in the Proceedings of the 23rd European Symposium on Algorithms (ESA 2015).
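
    The object being preprocessed is the feasibility question for Ax <= b over a bounded integer domain. A minimal brute-force sketch in Python, for illustration of the question only; the paper's contribution is replacing structured subsystems, which this sketch does not attempt.

        from itertools import product

        def ilp_feasible(A, b, lo, hi):
            """Search exhaustively for an integer x with lo <= x_i <= hi
            and Ax <= b; exponential in the number of variables."""
            n = len(A[0])
            for x in product(range(lo, hi + 1), repeat=n):
                if all(sum(a * xi for a, xi in zip(row, x)) <= rhs
                       for row, rhs in zip(A, b)):
                    return x
            return None

        # x1 + x2 <= 3 with x1, x2 >= 0 is feasible, e.g. at (0, 0).
        assert ilp_feasible([[1, 1], [-1, 0], [0, -1]], [3, 0, 0], 0, 3) == (0, 0)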

    A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms

    Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area from both the algorithmic and hardness perspectives, with emphasis on new techniques and potential future research directions.

    Polynomial Kernels for λ-extendible Properties Parameterized Above the Poljak-Turzík Bound

    Poljak and Turzík (Discrete Mathematics, 1986) introduced the notion of λ-extendible properties of graphs as a generalization of the property of being bipartite. They showed that for any 0 < λ < 1 and any λ-extendible property Π, any connected graph G on n vertices and m edges contains a spanning subgraph H ∈ Π with at least λm + (1-λ)(n-1)/2 edges. The property of being bipartite is λ-extendible for λ = 1/2, and so the Poljak-Turzík bound generalizes the well-known Edwards-Erdős bound for Max-Cut. Other examples of λ-extendible properties include being an acyclic oriented graph, a balanced signed graph, or a q-colorable graph for some integer q. Mnich et al. (FSTTCS 2012) defined the closely related notion of strong λ-extendibility. They showed that the problem of finding a subgraph satisfying a given strongly λ-extendible property Π is fixed-parameter tractable (FPT) when parameterized above the Poljak-Turzík bound (does there exist a spanning subgraph H of a connected graph G such that H ∈ Π and H has at least λm + (1-λ)(n-1)/2 + k edges?), subject to the condition that the problem is FPT on a certain simple class of graphs called almost-forests of cliques. In this paper we settle the kernelization complexity of nearly all problems parameterized above Poljak-Turzík bounds, in the affirmative. We show that these problems admit quadratic kernels (cubic when λ = 1/2), without using the assumption that the problem is FPT on almost-forests of cliques. Thus our results not only remove the technical condition of being FPT on almost-forests of cliques from previous results, but also unify and extend previously known kernelization results in this direction. Our results add to the select list of generic kernelization results known in the literature.
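
    For λ = 1/2 the guaranteed number of edges is m/2 + (n-1)/4, the Edwards-Erdős bound for Max-Cut. The weaker m/2 baseline already follows from plain local search, sketched below in Python: at a local optimum every vertex has at least half of its incident edges crossing the cut, so the cut contains at least m/2 edges; the extra (1-λ)(n-1)/2 term needs a finer argument.

        def local_search_cut(n, edges):
            """Flip vertices while it strictly improves the cut; returns
            the number of cut edges, at least m/2 at a local optimum."""
            side = [0] * n
            improved = True
            while improved:
                improved = False
                for v in range(n):
                    # Net change in cut size if v switches sides.
                    gain = sum(1 if side[u] == side[w] else -1
                               for u, w in edges if v in (u, w))
                    if gain > 0:
                        side[v] ^= 1
                        improved = True
            return sum(1 for u, w in edges if side[u] != side[w])

        # A triangle has m = 3; local search finds a cut with 2 >= 3/2 edges.
        assert local_search_cut(3, [(0, 1), (1, 2), (2, 0)]) >= 1.5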

    Sparsification Upper and Lower Bounds for Graph Problems and Not-All-Equal SAT

    We present several sparsification lower and upper bounds for classic problems in graph theory and logic. For the problems 4-Coloring, (Directed) Hamiltonian Cycle, and (Connected) Dominating Set, we prove that there is no polynomial-time algorithm that reduces any n-vertex input to an equivalent instance, of an arbitrary problem, with bitsize O(n^{2-epsilon}) for epsilon > 0, unless NP is a subset of coNP/poly and the polynomial-time hierarchy collapses. These results imply that existing linear-vertex kernels for k-Nonblocker and k-Max Leaf Spanning Tree (the parametric duals of (Connected) Dominating Set) cannot be improved to have O(k^{2-epsilon}) edges, unless NP is a subset of coNP/poly. We also present a positive result and exhibit a non-trivial sparsification algorithm for d-Not-All-Equal-SAT. We give an algorithm that reduces an n-variable input with clauses of size at most d to an equivalent input with O(n^{d-1}) clauses, for any fixed d. Our algorithm is based on a linear-algebraic proof of Lovász that bounds the number of hyperedges in critically 3-chromatic d-uniform n-vertex hypergraphs by binom(n, d-1). We show that our kernel is tight under the assumption that NP is not a subset of coNP/poly.
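
    A clause is NAE-satisfied when its literals are neither all true nor all false. A brute-force verifier sketch in Python (the signed-integer literal encoding is an assumption of the sketch); the paper's sparsification instead rests on Lovász's linear-algebraic bound.

        from itertools import product

        def nae_satisfiable(clauses, n):
            """Search all assignments for one where every clause has at
            least one true and at least one false literal."""
            for bits in product((False, True), repeat=n):
                def val(lit):
                    return bits[abs(lit) - 1] == (lit > 0)
                if all(any(val(l) for l in cl) and not all(val(l) for l in cl)
                       for cl in clauses):
                    return bits
            return None

        # A 3-clause is NAE-satisfied by any non-constant assignment on it,
        # but a single literal can never take two truth values at once.
        assert nae_satisfiable([(1, 2, 3)], 3) is not None
        assert nae_satisfiable([(1,)], 1) is None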