    Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal

    The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most k of its vertices. In a breakthrough result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an O(4^k · kmn) time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed k. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most O(4^k), a so-called kernelization. Since then the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in k, has turned into one of the main open questions in the study of kernelization. This work provides the first (randomized) polynomial kernelization for OCT. We introduce a novel kernelization approach based on matroid theory, where we encode all relevant information about a problem instance into a matroid with a representation of size polynomial in k. For OCT, the matroid is built to allow us to simulate the computation of the iterative compression step of the algorithm of Reed, Smith, and Vetta, applied (for only one round) to an approximate odd cycle transversal which it is aiming to shrink to size k. The process is randomized with one-sided error exponentially small in k: the result can contain false positives but no false negatives, and the size guarantee is cubic in the size of the approximate solution. Combined with an O(√(log n))-approximation (Agarwal et al., STOC 2005), we get a reduction of the instance to size O(k^{4.5}), implying a randomized polynomial kernelization. Comment: Minor changes to agree with the SODA 2012 version of the paper.
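
    The arithmetic behind the final O(k^{4.5}) bound can be made explicit. The case distinction below is one standard way to combine the two ingredients named above (the cubic size guarantee and the approximation ratio); it is a sketch, not the paper's own accounting, and the constant c and the name X for the approximate transversal are ours. Let X be the approximate odd cycle transversal, so that

    \[ |X| \le c \sqrt{\log n} \cdot k \quad \text{for some constant } c. \]

    If \(\log n > k\), then \(n > 2^k\), so \(4^k < n^2\) and the \(O(4^k \cdot kmn)\) algorithm of Reed, Smith, and Vetta runs in time polynomial in n, deciding the instance outright. Otherwise \(\log n \le k\), hence \(|X| \le c \cdot k^{3/2}\), and the cubic size guarantee yields an instance of size

    \[ O(|X|^3) = O(k^{4.5}). \]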

    Towards Work-Efficient Parallel Parameterized Algorithms

    Parallel parameterized complexity theory studies how fixed-parameter tractable (fpt) problems can be solved in parallel. Previous theoretical work focused on parallel algorithms that are very fast in principle, but did not take into account that when we only have a small number of processors (between 2 and, say, 1024), it is more important that the parallel algorithms are work-efficient. In the present paper we investigate how work-efficient fpt algorithms can be designed. We review standard methods from fpt theory, like kernelization, search trees, and interleaving, and prove trade-offs for them between work efficiency and runtime improvements. This results in a toolbox for developing work-efficient parallel fpt algorithms. Comment: Prior full version of the paper that will appear in Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), February 27 - March 02, 2019, Guwahati, India. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-10564-8_2
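
    As a concrete illustration of the search-tree method reviewed above, the sketch below parallelizes the classical O(2^k) branching algorithm for Vertex Cover across a small worker pool: it expands only the top of the search tree before handing the subtrees to sequential workers, which keeps the total work close to that of the sequential algorithm — the kind of trade-off the paper formalizes. This is a minimal sketch for illustration, not code from the paper; all names and parameters (workers, depth) are ours.

        from concurrent.futures import ProcessPoolExecutor

        def first_uncovered_edge(edges, cover):
            # return an edge with neither endpoint in `cover`, or None
            for u, v in edges:
                if u not in cover and v not in cover:
                    return (u, v)
            return None

        def branch(edges, k, cover=frozenset()):
            # classical O(2^k) search tree: extend `cover` by at most k vertices?
            if k < 0:
                return False
            e = first_uncovered_edge(edges, cover)
            if e is None:
                return True
            u, v = e
            return (branch(edges, k - 1, cover | {u})
                    or branch(edges, k - 1, cover | {v}))

        def frontier(edges, k, cover, depth):
            # expand only the top `depth` levels, collecting open subproblems
            e = first_uncovered_edge(edges, cover)
            if k < 0 or e is None or depth == 0:
                return [(k, cover)]
            u, v = e
            return (frontier(edges, k - 1, cover | {u}, depth - 1)
                    + frontier(edges, k - 1, cover | {v}, depth - 1))

        def parallel_vertex_cover(edges, k, workers=4, depth=5):
            # at most 2^depth subproblems; choosing depth close to log2(workers)
            # keeps the total work close to that of the sequential search tree
            tasks = frontier(edges, k, frozenset(), depth)
            with ProcessPoolExecutor(max_workers=workers) as pool:
                results = list(pool.map(branch,
                                        [edges] * len(tasks),
                                        [kk for kk, _ in tasks],
                                        [cov for _, cov in tasks]))
            return any(results)

        if __name__ == "__main__":
            edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
            print(parallel_vertex_cover(edges, 2))  # True: {1, 3} is a cover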

    Data Reduction for Graph Coloring Problems

    This paper studies the kernelization complexity of graph coloring problems with respect to certain structural parameterizations of the input instances. We are interested in how well polynomial-time data reduction can provably shrink instances of coloring problems, in terms of the chosen parameter. It is well known that deciding 3-colorability is already NP-complete, hence parameterizing by the requested number of colors is not fruitful. Instead, we pick up on a research thread initiated by Cai (DAM, 2003), who studied coloring problems parameterized by the modification distance of the input graph to a graph class on which coloring is polynomial-time solvable; for example, parameterizing by the number k of vertex deletions needed to make the graph chordal. We obtain various upper and lower bounds for kernels of such parameterizations of q-Coloring, complementing Cai's study of the time complexity with respect to these parameters. Our results show that the existence of polynomial kernels for q-Coloring parameterized by the vertex-deletion distance to a graph class F is strongly related to the existence of a function f(q) bounding the number of vertices that are needed to preserve the NO-answer to an instance of q-List-Coloring on F. Comment: Author-accepted manuscript of the article that will appear in the FCT 2011 special issue of Information & Computation.
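
    The parameterization scheme itself is easy to state in code. The sketch below is an illustration under simplifying assumptions, not an algorithm from the paper: it decides q-Coloring given a deletion set D by trying all q^|D| proper colorings of D and reducing what remains to q-List-Coloring on G - D. The leftover instance is solved by brute force here, whereas the point of the parameterization is that it is polynomial-time solvable when G - D belongs to the class F; all function names are ours.

        from itertools import product

        def q_colorable(graph, D, q):
            # graph: dict vertex -> set of neighbors; D: deletion set such
            # that the graph minus D lies in the tractable class F
            D = list(D)
            rest = [v for v in graph if v not in D]
            for d_col in product(range(q), repeat=len(D)):
                col = dict(zip(D, d_col))
                # the coloring of D must itself be proper
                if any(col[u] == col[w] for u in D for w in graph[u] if w in col):
                    continue
                # each remaining vertex keeps the colors unused by its neighbors in D
                lists = {v: [c for c in range(q)
                             if all(col.get(u) != c for u in graph[v])]
                         for v in rest}
                if list_colorable(graph, rest, lists, {}):
                    return True
            return False

        def list_colorable(graph, rest, lists, assigned):
            # brute-force q-List-Coloring on G - D; a stand-in for the
            # polynomial-time algorithm available when G - D belongs to F
            if len(assigned) == len(rest):
                return True
            v = rest[len(assigned)]
            for c in lists[v]:
                if all(assigned.get(u) != c for u in graph[v]):
                    assigned[v] = c
                    if list_colorable(graph, rest, lists, assigned):
                        return True
                    del assigned[v]
            return False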

    On the Approximate Compressibility of Connected Vertex Cover

    The Connected Vertex Cover problem, where the goal is to compute a minimum set of vertices in a given graph which forms a vertex cover and induces a connected subgraph, is a fundamental combinatorial problem and has received extensive attention in various subdomains of algorithmics. In the area of kernelization, it is known that this problem is unlikely to have efficient preprocessing algorithms, also known as polynomial kernelizations. However, it has been shown in a recent work of Lokshtanov et al. [STOC 2017] that if one considers an appropriate notion of approximate kernelization, then this problem parameterized by the solution size does admit an approximate polynomial kernelization. In fact, Lokshtanov et al. were able to obtain a polynomial size approximate kernelization scheme (PSAKS) for Connected Vertex Cover parameterized by the solution size. A PSAKS is essentially a preprocessing algorithm whose error can be made arbitrarily close to 0. In this paper we revisit this problem and consider parameters that are strictly smaller than the size of the solution, obtaining the first polynomial size approximate kernelization schemes for the Connected Vertex Cover problem when parameterized by the deletion distance of the input graph to the class of cographs, the class of bounded treewidth graphs, and the class of all chordal graphs. Comment: 1 figure; revisions from the previous version incorporated based on the comments of anonymous reviewers.
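
    For readers unfamiliar with the notion, the following paraphrases the definition of approximate kernelization due to Lokshtanov et al.; the formalization in the paper itself is more general in its technical details. An $\alpha$-approximate kernelization consists of two polynomial-time algorithms: a reduction algorithm mapping an instance $(I, k)$ to an instance $(I', k')$ of size at most $g(k)$, and a solution-lifting algorithm that, given any $c$-approximate solution to $(I', k')$, outputs a $(c \cdot \alpha)$-approximate solution to $(I, k)$. A PSAKS then provides, for every $\varepsilon > 0$, a $(1 + \varepsilon)$-approximate kernelization whose size bound $g(k)$ is polynomial in $k$, where the degree of the polynomial may depend on $\varepsilon$.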

    On the Kernel and Related Problems in Interval Digraphs

    Given a digraph G, a set X ⊆ V(G) is said to be an absorbing set (resp. dominating set) if every vertex in the graph is either in X or is an in-neighbour (resp. out-neighbour) of a vertex in X. A set S ⊆ V(G) is said to be an independent set if no two vertices in S are adjacent in G. A kernel (resp. solution) of G is an independent and absorbing (resp. dominating) set in G. We explore the algorithmic complexity of these problems in the well-known class of interval digraphs. A digraph G is an interval digraph if a pair of intervals (S_u, T_u) can be assigned to each vertex u of G such that (u,v) ∈ E(G) if and only if S_u ∩ T_v ≠ ∅. Many different subclasses of interval digraphs have been defined and studied in the literature by restricting the kinds of pairs of intervals that can be assigned to the vertices. We observe that several of these classes, like interval catch digraphs, interval nest digraphs, adjusted interval digraphs and chronological interval digraphs, are subclasses of the more general class of reflexive interval digraphs -- which arise when we require that the two intervals assigned to a vertex have to intersect. We show that all the problems mentioned above are efficiently solvable, in most of the cases even linear-time solvable, in the class of reflexive interval digraphs, but are APX-hard on even the very restricted class of interval digraphs called point-point digraphs, where the two intervals assigned to each vertex are required to be degenerate, i.e. they consist of a single point each. The results we obtain improve and generalize several existing algorithms and structural results for subclasses of reflexive interval digraphs. Comment: 26 pages, 3 figures.
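
    The definitions above translate directly into code. The checker below is an illustration built only from those definitions, not an algorithm from the paper: it constructs an interval digraph from interval pairs and tests whether a vertex set is a kernel, i.e. independent and absorbing.

        def interval_digraph(pairs):
            # pairs: {u: (S_u, T_u)} with each interval given as (lo, hi);
            # edge (u, v) exists iff S_u ∩ T_v ≠ ∅ (loops omitted for simplicity)
            def meets(a, b):
                return max(a[0], b[0]) <= min(a[1], b[1])
            V = list(pairs)
            return {(u, v) for u in V for v in V
                    if u != v and meets(pairs[u][0], pairs[v][1])}

        def is_kernel(V, E, X):
            # kernel = independent + absorbing, exactly as defined above
            independent = all((u, v) not in E for u in X for v in X if u != v)
            absorbing = all(v in X or any((v, x) in E for x in X) for v in V)
            return independent and absorbing

        pairs = {1: ((0, 2), (1, 3)), 2: ((2, 4), (0, 1)), 3: ((5, 6), (4, 6))}
        E = interval_digraph(pairs)
        print(is_kernel(list(pairs), E, {1, 3}))  # True: 2 is an in-neighbour of 1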

    The Parameterized Complexity of Degree Constrained Editing Problems

    This thesis examines degree constrained editing problems within the framework of parameterized complexity. A degree constrained editing problem takes as input a graph and a set of constraints and asks whether the graph can be altered in at most k editing steps such that the degrees of the remaining vertices are within the given constraints. Parameterized complexity gives a framework for examining problems that are traditionally considered intractable and developing efficient exact algorithms for them, or showing that it is unlikely that they have such algorithms, by introducing an additional component to the input, the parameter, which gives additional information about the structure of the problem. If the problem has an algorithm that is exponential in the parameter, but polynomial, with constant degree, in the size of the input, then it is considered to be fixed-parameter tractable. Parameterized complexity also provides an intractability framework for identifying problems that are likely not to have such an algorithm. Degree constrained editing problems provide natural parameterizations in terms of the total cost k of vertex deletions, edge deletions and edge additions allowed, and the upper bound r on the degree of the vertices remaining after editing. We define a class of degree constrained editing problems, WDCE, which generalises several well-known problems, such as Degree r Deletion, Cubic Subgraph, r-Regular Subgraph, f-Factor and General Factor. We show that in general if both k and r are part of the parameter, problems in the WDCE class are fixed-parameter tractable, and if parameterized by k or r alone, the problems are intractable in a parameterized sense. We further show cases of WDCE that have polynomial time kernelizations, and in particular, when all the degree constraints are a single number and the editing operations include vertex deletion and edge deletion, we show that there is a kernel with at most O(kr(k + r)) vertices. If we allow vertex deletion and edge addition, we show that despite remaining fixed-parameter tractable when parameterized by k and r together, the problems are unlikely to have polynomial sized kernelizations, or polynomial time kernelizations of a certain form, under certain complexity theoretic assumptions. We also examine a more general case where, given an input graph, the question is whether with at most k deletions the graph can be made r-degenerate. We show that in this case the problems are intractable, even when r is a constant.
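
    As a concrete example of the combined (k, r) parameterization, the sketch below gives a standard branching algorithm for the simplest problem in this family, Degree r Deletion with vertex deletions only: delete at most k vertices so that every remaining vertex has degree at most r. It illustrates why k and r together yield fixed-parameter tractability; it is not an algorithm taken from the thesis, and all names are ours.

        def degree_r_deletion(adj, k, r):
            # adj: dict vertex -> set of neighbours (undirected);
            # can at most k vertex deletions bring every degree down to <= r?
            high = next((v for v in adj if len(adj[v]) > r), None)
            if high is None:
                return True    # every degree is already within the bound
            if k == 0:
                return False   # a violation remains but the budget is spent
            # if `high` stays, at most r of its neighbours may stay, so among
            # any r+1 of them at least one is deleted; branching on `high` or
            # one of r+1 fixed neighbours is therefore exhaustive, giving an
            # O((r+2)^k)-leaf search tree
            for v in [high] + list(adj[high])[:r + 1]:
                if degree_r_deletion(remove(adj, v), k - 1, r):
                    return True
            return False

        def remove(adj, v):
            return {u: nbrs - {v} for u, nbrs in adj.items() if u != v}

        K4 = {v: {u for u in range(4) if u != v} for v in range(4)}
        print(degree_r_deletion(K4, 2, 1))  # True: two deletions leave an edge
        print(degree_r_deletion(K4, 1, 1))  # False: a triangle always remains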

    Hierarchies of Inefficient Kernelizability

    The framework of Bodlaender et al. (ICALP 2008) and Fortnow and Santhanam (STOC 2008) allows us to exclude the existence of polynomial kernels for a range of problems under reasonable complexity-theoretical assumptions. However, there are also some issues that are not addressed by this framework, including the existence of Turing kernels such as the "kernelization" of Leaf Out Branching(k) into a disjunction over n instances of size poly(k). Observing that Turing kernels are preserved by polynomial parametric transformations, we define a kernelization hardness hierarchy, akin to the M- and W-hierarchy of ordinary parameterized complexity, by the PPT-closure of problems that seem likely to be fundamentally hard for efficient Turing kernelization. We find that several previously considered problems are complete for our fundamental hardness class, including Min Ones d-SAT(k), Binary NDTM Halting(k), Connected Vertex Cover(k), and Clique(k log n), the clique problem parameterized by k log n.
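
    The observation driving the hierarchy can be phrased as follows (a paraphrase, in notation not taken from the paper). A polynomial parametric transformation (PPT) from $L$ to $L'$ is a polynomial-time map $(x, k) \mapsto (x', k')$ such that $(x, k) \in L \iff (x', k') \in L'$ and $k' \le p(k)$ for some polynomial $p$. If $L'$ admits a Turing kernelization with size bound $g$, i.e. a polynomial-time algorithm deciding $L'$ with the help of oracle queries of size at most $g(k')$, then composing the PPT with that algorithm decides $L$ using queries of size at most $g(p(k))$. Hence polynomial Turing kernels are inherited backwards along PPTs, and hardness for efficient Turing kernelization propagates forwards, which is what makes PPT-closures of hard problems a sensible basis for a hardness hierarchy.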