
    On Polynomial Kernels for Integer Linear Programs: Covering, Packing and Feasibility

    We study the existence of polynomial kernels for the problem of deciding feasibility of integer linear programs (ILPs), and for finding good solutions for covering and packing ILPs. Our main results are as follows: First, we show that the ILP Feasibility problem admits no polynomial kernelization when parameterized by both the number of variables and the number of constraints, unless NP $\subseteq$ coNP/poly. This extends to the restricted cases of bounded variable degree and bounded number of variables per constraint, and to covering and packing ILPs. Second, we give a polynomial kernelization for the Cover ILP problem, asking for a solution to $Ax \ge b$ with $c^T x \le k$, parameterized by $k$, when $A$ is row-sparse; this generalizes a known polynomial kernelization for the special case with 0/1-variables and coefficients (d-Hitting Set).
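
    To make the covering format concrete, here is a minimal sketch (not the paper's kernelization) that encodes a small, made-up 3-Hitting Set instance as a row-sparse covering ILP $Ax \ge b$, $c^T x \le k$ with 0/1 variables and checks feasibility by brute force; all instance data below are illustrative assumptions.

    from itertools import product

    import numpy as np

    # Made-up example: universe {0,...,4}; each set of size at most 3 becomes one
    # covering constraint, so every row of A has at most 3 nonzero entries (row-sparse).
    sets = [(0, 1, 2), (1, 3), (2, 3, 4), (0, 4)]
    n = 5                                   # number of 0/1 variables
    A = np.zeros((len(sets), n), dtype=int)
    for i, S in enumerate(sets):
        A[i, list(S)] = 1                   # x_j appears in row i iff element j is in set i
    b = np.ones(len(sets), dtype=int)       # hit every set at least once
    c = np.ones(n, dtype=int)               # unit costs, so c^T x is the solution size
    k = 2                                   # budget on c^T x

    def feasible(A, b, c, k):
        """Return a 0/1 vector x with A x >= b and c^T x <= k, or None."""
        for bits in product((0, 1), repeat=A.shape[1]):
            x = np.array(bits)
            if (A @ x >= b).all() and c @ x <= k:
                return x
        return None

    print(feasible(A, b, c, k))             # prints a hitting set of size at most 2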

    Efficient Parameterized Algorithms for Computing All-Pairs Shortest Paths

    Computing all-pairs shortest paths is a fundamental and much-studied problem with many applications. Unfortunately, despite intense study, there are still no significantly faster algorithms for it than the $\mathcal{O}(n^3)$ time algorithm due to Floyd and Warshall (1962). Somewhat faster algorithms exist for the vertex-weighted version if fast matrix multiplication may be used. Yuster (SODA 2009) gave an algorithm running in time $\mathcal{O}(n^{2.842})$, but no combinatorial, truly subcubic algorithm is known. Motivated by the recent framework of efficient parameterized algorithms (or "FPT in P"), we investigate the influence of the graph parameters clique-width ($cw$) and modular-width ($mw$) on the running times of algorithms for solving All-Pairs Shortest Paths. We obtain efficient (and combinatorial) parameterized algorithms on non-negative vertex-weighted graphs with running times $\mathcal{O}(cw^2 n^2)$, resp. $\mathcal{O}(mw^2 n + n^2)$. If fast matrix multiplication is allowed, then the latter can be improved to $\mathcal{O}(mw^{1.842} n + n^2)$ using the algorithm of Yuster as a black box. The algorithm relative to modular-width is adaptive, meaning that its running time matches the best unparameterized algorithm for parameter value $mw$ equal to $n$, and it already outperforms that algorithm for $mw \in \mathcal{O}(n^{1-\varepsilon})$ for any $\varepsilon > 0$.
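
    For reference, the cubic baseline mentioned above is easy to state; the sketch below is the textbook Floyd-Warshall recurrence on a distance matrix (illustration only, not the clique-width or modular-width algorithms of the paper), run on a small made-up weighted digraph.

    import math

    def floyd_warshall(w):
        """All-pairs shortest paths in O(n^3) time.
        w[i][j] is the edge weight, math.inf if absent, and 0 on the diagonal."""
        n = len(w)
        dist = [row[:] for row in w]            # work on a copy of the matrix
        for m in range(n):                      # allow vertex m as an intermediate
            for i in range(n):
                for j in range(n):
                    if dist[i][m] + dist[m][j] < dist[i][j]:
                        dist[i][j] = dist[i][m] + dist[m][j]
        return dist

    # Made-up 4-vertex example.
    inf = math.inf
    w = [[0, 3, inf, 7],
         [8, 0, 2, inf],
         [5, inf, 0, 1],
         [2, inf, inf, 0]]
    print(floyd_warshall(w))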

    Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal

    The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most $k$ of its vertices. In a breakthrough result Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an $\mathcal{O}(4^k kmn)$ time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed $k$. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most $\mathcal{O}(4^k)$, a so-called kernelization. Since then the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in $k$, has turned into one of the main open questions in the study of kernelization. This work provides the first (randomized) polynomial kernelization for OCT. We introduce a novel kernelization approach based on matroid theory, where we encode all relevant information about a problem instance into a matroid with a representation of size polynomial in $k$. For OCT, the matroid is built to allow us to simulate the computation of the iterative compression step of the algorithm of Reed, Smith, and Vetta, applied (for only one round) to an approximate odd cycle transversal which it is aiming to shrink to size $k$. The process is randomized with one-sided error exponentially small in $k$, where the result can contain false positives but no false negatives, and the size guarantee is cubic in the size of the approximate solution. Combined with an $\mathcal{O}(\sqrt{\log n})$-approximation (Agarwal et al., STOC 2005), we get a reduction of the instance to size $\mathcal{O}(k^{4.5})$, implying a randomized polynomial kernelization.
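
    For orientation, the sketch below is only a brute-force illustration of the problem definition (delete at most $k$ vertices to make the graph bipartite), not of the matroid-based kernelization; the example graph is a made-up assumption.

    from itertools import combinations

    def is_bipartite(adj, removed):
        """Try to 2-colour the graph induced on the vertices not in `removed`."""
        colour = {}
        for s in adj:
            if s in removed or s in colour:
                continue
            colour[s] = 0
            stack = [s]
            while stack:
                u = stack.pop()
                for v in adj[u]:
                    if v in removed:
                        continue
                    if v not in colour:
                        colour[v] = 1 - colour[u]
                        stack.append(v)
                    elif colour[v] == colour[u]:
                        return False
        return True

    def odd_cycle_transversal(adj, k):
        """Return a set of at most k vertices whose deletion makes the graph
        bipartite, or None if no such set exists."""
        vertices = list(adj)
        for size in range(k + 1):
            for cand in combinations(vertices, size):
                if is_bipartite(adj, set(cand)):
                    return set(cand)
        return None

    # Made-up example: a triangle 0-1-2 with a pendant vertex 3; one deletion suffices.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(odd_cycle_transversal(adj, 1))    # prints {0} here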

    Solving Connectivity Problems Parameterized by Treedepth in Single-Exponential Time and Polynomial Space

    A breakthrough result of Cygan et al. (FOCS 2011) showed that connectivity problems parameterized by treewidth can be solved much faster than the previously best known time $\mathcal{O}^*(2^{\mathcal{O}(tw \log tw)})$. Using their inspired Cut&Count technique, they obtained $\mathcal{O}^*(\alpha^{tw})$ time algorithms for many such problems. Moreover, they proved these running times to be optimal assuming the Strong Exponential-Time Hypothesis. Unfortunately, like other dynamic programming algorithms on tree decompositions, these algorithms also require exponential space, and this is widely believed to be unavoidable. In contrast, for the slightly larger parameter called treedepth, there are already several examples of matching the time bounds obtained for treewidth, but using only polynomial space. Nevertheless, this has remained open for connectivity problems. In the present work, we close this knowledge gap by applying the Cut&Count technique to graphs of small treedepth. While the general idea is unchanged, we have to design novel procedures for counting consistently cut solution candidates using only polynomial space. Concretely, we obtain time $\mathcal{O}^*(3^d)$ and polynomial space for Connected Vertex Cover, Feedback Vertex Set, and Steiner Tree on graphs of treedepth $d$. Similarly, we obtain time $\mathcal{O}^*(4^d)$ and polynomial space for Connected Dominating Set and Connected Odd Cycle Transversal.
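
    For intuition about the parameter, the sketch below evaluates the standard recursive definition of treedepth (exponential time, small graphs only); it is not the polynomial-space Cut&Count machinery of the paper, and the example graphs are made up.

    def components(adj):
        """Split a graph (dict: vertex -> set of neighbours) into its connected
        components, each returned as a restricted adjacency dict."""
        seen, comps = set(), []
        for s in adj:
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                u = stack.pop()
                if u in comp:
                    continue
                comp.add(u)
                stack.extend(adj[u] - comp)
            seen |= comp
            comps.append({v: adj[v] & comp for v in comp})
        return comps

    def treedepth(adj):
        """td(empty) = 0; for a disconnected graph take the maximum over its
        components; for a connected graph td = 1 + min over vertices v of
        td(G - v), and a single vertex has treedepth 1."""
        if not adj:
            return 0
        comps = components(adj)
        if len(comps) > 1:
            return max(treedepth(c) for c in comps)
        if len(adj) == 1:
            return 1
        return 1 + min(treedepth({u: adj[u] - {v} for u in adj if u != v})
                       for v in adj)

    # A path on 4 vertices has treedepth 3; a star K_{1,3} has treedepth 2.
    path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
    print(treedepth(path), treedepth(star))    # prints: 3 2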

    Space-Efficient Biconnected Components and Recognition of Outerplanar Graphs

    We present space-efficient algorithms for computing cut vertices in a given graph with $n$ vertices and $m$ edges in linear time using $O(n+\min\{m, n\log\log n\})$ bits. With the same time and using $O(n+m)$ bits, we can compute the biconnected components of a graph. We use this result to show an algorithm for the recognition of (maximal) outerplanar graphs in $O(n\log\log n)$ time using $O(n)$ bits.
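
    For contrast, the sketch below is the classical DFS-based cut-vertex computation with low-points, which stores $\Theta(n)$ words rather than the paper's $O(n+\min\{m, n\log\log n\})$ bits; it only illustrates what is being computed, and the example graph is a made-up assumption.

    def cut_vertices(adj):
        """Return the set of cut vertices (articulation points) of an undirected
        graph given as a dict: vertex -> list of neighbours."""
        disc, low, cuts = {}, {}, set()
        timer = 0

        def dfs(u, parent):
            nonlocal timer
            disc[u] = low[u] = timer
            timer += 1
            children = 0
            for v in adj[u]:
                if v not in disc:
                    children += 1
                    dfs(v, u)
                    low[u] = min(low[u], low[v])
                    # u separates v's subtree unless a back edge climbs above u.
                    if parent is not None and low[v] >= disc[u]:
                        cuts.add(u)
                elif v != parent:
                    low[u] = min(low[u], disc[v])
            # A DFS root is a cut vertex iff it has at least two children.
            if parent is None and children >= 2:
                cuts.add(u)

        for s in adj:
            if s not in disc:
                dfs(s, None)
        return cuts

    # Made-up example: two triangles sharing vertex 2, whose removal disconnects them.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
    print(cut_vertices(adj))    # prints: {2}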

    Point Line Cover: The Easy Kernel is Essentially Tight

    The input to the NP-hard Point Line Cover problem (PLC) consists of a set $P$ of $n$ points on the plane and a positive integer $k$, and the question is whether there exists a set of at most $k$ lines which pass through all points in $P$. A simple polynomial-time reduction reduces any input to one with at most $k^2$ points. We show that this is essentially tight under standard assumptions. More precisely, unless the polynomial hierarchy collapses to its third level, there is no polynomial-time algorithm that reduces every instance $(P,k)$ of PLC to an equivalent instance with $O(k^{2-\epsilon})$ points, for any $\epsilon>0$. This answers, in the negative, an open problem posed by Lokshtanov (PhD Thesis, 2009). Our proof uses the machinery for deriving lower bounds on the size of kernels developed by Dell and van Melkebeek (STOC 2010). It has two main ingredients: We first show, by reduction from Vertex Cover, that PLC (conditionally) has no kernel of total size $O(k^{2-\epsilon})$ bits. This does not directly imply the claimed lower bound on the number of points, since the best known polynomial-time encoding of a PLC instance with $n$ points requires $\omega(n^{2})$ bits. To get around this we build on work of Goodman et al. (STOC 1989) and devise an oracle communication protocol of cost $O(n\log n)$ for PLC; its main building block is a bound of $O(n^{O(n)})$ for the order types of $n$ points that are not necessarily in general position, and an explicit algorithm that enumerates all possible order types of $n$ points. This protocol and the lower bound on total size together yield the stated lower bound on the number of points. While a number of essentially tight polynomial lower bounds on total sizes of kernels are known, our result is, to the best of our knowledge, the first to show a nontrivial lower bound for structural/secondary parameters.
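
    The simple reduction mentioned above can be made concrete: a line through at least $k+1$ of the points must be in every solution with at most $k$ lines, since no other single line covers two of those points; applying this rule exhaustively leaves at most $k^2$ points in any yes-instance. The sketch below implements just this rule on integer points (illustration only, with a made-up point set; it is not the communication-protocol lower bound of the paper).

    from itertools import combinations
    from math import gcd

    def line_through(p, q):
        """Canonical (a, b, c) with a*x + b*y = c for the line through p != q."""
        (x1, y1), (x2, y2) = p, q
        a, b = y2 - y1, x1 - x2
        c = a * x1 + b * y1
        g = gcd(gcd(abs(a), abs(b)), abs(c))
        a, b, c = a // g, b // g, c // g
        if a < 0 or (a == 0 and b < 0):
            a, b, c = -a, -b, -c
        return a, b, c

    def reduce_plc(points, k):
        """Exhaustively remove lines covering more than k points; return the
        reduced (points, k) or None if the instance is recognised as a no-instance."""
        pts = set(points)
        while k >= 0:
            best = set()
            for p, q in combinations(pts, 2):
                a, b, c = line_through(p, q)
                cover = {(x, y) for (x, y) in pts if a * x + b * y == c}
                if len(cover) > len(best):
                    best = cover
            if len(best) <= k:            # no line is forced any more
                break
            pts -= best                   # this heavy line must be in every solution
            k -= 1
        if k < 0 or len(pts) > k * k:     # a yes-instance keeps at most k^2 points
            return None
        return pts, k

    # Made-up example: four collinear points force the diagonal, then two points remain.
    points = [(0, 0), (1, 1), (2, 2), (3, 3), (0, 3), (3, 0)]
    print(reduce_plc(points, 2))          # reduces all the way to (set(), 0) here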