
    On Polynomial Kernels for Integer Linear Programs: Covering, Packing and Feasibility

    We study the existence of polynomial kernels for the problem of deciding feasibility of integer linear programs (ILPs), and for finding good solutions for covering and packing ILPs. Our main results are as follows: First, we show that the ILP Feasibility problem admits no polynomial kernelization when parameterized by both the number of variables and the number of constraints, unless $NP \subseteq coNP/poly$. This extends to the restricted cases of bounded variable degree and bounded number of variables per constraint, and to covering and packing ILPs. Second, we give a polynomial kernelization for the Cover ILP problem, asking for a solution to $Ax \geq b$ with $c^T x \leq k$, parameterized by $k$, when $A$ is row-sparse; this generalizes a known polynomial kernelization for the special case with 0/1-variables and coefficients ($d$-Hitting Set).
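    To make the covering shape concrete, the following minimal Python sketch (the encoding is illustrative, not taken from the paper) writes a $d$-Hitting Set instance in exactly the Cover ILP form $Ax \geq b$ with $c^T x \leq k$: one 0/1 variable per element and one covering row per set, so the row-sparsity bound equals the maximum set size $d$.

```python
def hitting_set_as_cover_ilp(universe_size, sets, k):
    """Encode d-Hitting Set as a covering ILP (Ax >= b, c^T x <= k,
    x in {0,1}^n): one variable per element, one row per set S
    demanding sum_{e in S} x_e >= 1. Each row has at most
    d = max |S| nonzero entries, i.e., A is row-sparse."""
    n = universe_size
    A = [[1 if e in S else 0 for e in range(n)] for S in sets]
    b = [1] * len(sets)          # every set must be hit at least once
    c = [1] * n                  # objective counts picked elements
    return A, b, c, k

# Example: hit {0,1}, {1,2}, {0,2} with at most k = 2 elements (d = 2).
A, b, c, k = hitting_set_as_cover_ilp(3, [{0, 1}, {1, 2}, {0, 2}], 2)
```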

    Efficient Parameterized Algorithms for Computing All-Pairs Shortest Paths

    Computing all-pairs shortest paths is a fundamental and much-studied problem with many applications. Unfortunately, despite intense study, there are still no significantly faster algorithms for it than the $\mathcal{O}(n^3)$ time algorithm due to Floyd and Warshall (1962). Somewhat faster algorithms exist for the vertex-weighted version if fast matrix multiplication may be used: Yuster (SODA 2009) gave an algorithm running in time $\mathcal{O}(n^{2.842})$, but no combinatorial, truly subcubic algorithm is known. Motivated by the recent framework of efficient parameterized algorithms (or "FPT in P"), we investigate the influence of the graph parameters clique-width ($cw$) and modular-width ($mw$) on the running times of algorithms for solving All-Pairs Shortest Paths. We obtain efficient (and combinatorial) parameterized algorithms on non-negative vertex-weighted graphs running in times $\mathcal{O}(cw^2 n^2)$ and $\mathcal{O}(mw^2 n + n^2)$, respectively. If fast matrix multiplication is allowed, then the latter can be improved to $\mathcal{O}(mw^{1.842} n + n^2)$ using the algorithm of Yuster as a black box. The algorithm relative to modular-width is adaptive, meaning that its running time matches the best unparameterized algorithm for parameter value $mw$ equal to $n$, and it outperforms the unparameterized algorithms already for $mw \in \mathcal{O}(n^{1-\varepsilon})$ for any $\varepsilon > 0$.
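    For reference, the $\mathcal{O}(n^3)$ baseline named above is the textbook Floyd-Warshall recurrence $d_{ij} \leftarrow \min(d_{ij}, d_{im} + d_{mj})$; the sketch below is that classic algorithm, not the parameterized algorithms of the paper.

```python
INF = float("inf")

def floyd_warshall(w):
    """Classic O(n^3) all-pairs shortest paths. w is an n x n matrix
    with w[i][i] = 0, edge weights elsewhere, and INF for missing
    edges; assumes no negative cycles."""
    n = len(w)
    d = [row[:] for row in w]            # distance matrix, copied
    for m in range(n):                   # allow m as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]
    return d
```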

    Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal

    The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most $k$ of its vertices. In a breakthrough result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an $\mathcal{O}(4^k \cdot kmn)$ time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed $k$. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most $\mathcal{O}(4^k)$, a so-called kernelization. Since then the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in $k$, has turned into one of the main open questions in the study of kernelization. This work provides the first (randomized) polynomial kernelization for OCT. We introduce a novel kernelization approach based on matroid theory, where we encode all relevant information about a problem instance into a matroid with a representation of size polynomial in $k$. For OCT, the matroid is built to allow us to simulate the computation of the iterative compression step of the algorithm of Reed, Smith, and Vetta, applied (for a single round) to an approximate odd cycle transversal that it aims to shrink to size $k$. The process is randomized with one-sided error exponentially small in $k$, where the result can contain false positives but no false negatives, and the size guarantee is cubic in the size of the approximate solution. Combined with an $\mathcal{O}(\sqrt{\log n})$-approximation (Agarwal et al., STOC 2005), we get a reduction of the instance to size $\mathcal{O}(k^{4.5})$, implying a randomized polynomial kernelization.
    Comment: minor changes to agree with the SODA 2012 version of the paper.
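    For context, verifying that a set $X$ is an odd cycle transversal is just a bipartiteness check on $G - X$ via BFS 2-coloring; a minimal sketch of that definition follows (the kernelization itself is far more involved).

```python
from collections import deque

def is_odd_cycle_transversal(adj, X):
    """Return True iff deleting the vertex set X leaves a bipartite
    graph. adj maps every vertex to a list of its neighbors."""
    color = {}
    for s in adj:
        if s in X or s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in X:
                    continue
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False     # an odd cycle survives in G - X
    return True
```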

    Solving Connectivity Problems Parameterized by Treedepth in Single-Exponential Time and Polynomial Space

    A breakthrough result of Cygan et al. (FOCS 2011) showed that connectivity problems parameterized by treewidth can be solved much faster than the previously best known time $\mathcal{O}^*(2^{\mathcal{O}(tw \log tw)})$. Using their inspired Cut&Count technique, they obtained $\mathcal{O}^*(\alpha^{tw})$ time algorithms for many such problems. Moreover, they proved these running times to be optimal assuming the Strong Exponential-Time Hypothesis. Unfortunately, like other dynamic programming algorithms on tree decompositions, these algorithms also require exponential space, and this is widely believed to be unavoidable. In contrast, for the slightly larger parameter called treedepth, there are already several examples of matching the time bounds obtained for treewidth, but using only polynomial space. Nevertheless, this has remained open for connectivity problems. In the present work, we close this knowledge gap by applying the Cut&Count technique to graphs of small treedepth. While the general idea is unchanged, we have to design novel procedures for counting consistently cut solution candidates using only polynomial space. Concretely, we obtain time $\mathcal{O}^*(3^d)$ and polynomial space for Connected Vertex Cover, Feedback Vertex Set, and Steiner Tree on graphs of treedepth $d$. Similarly, we obtain time $\mathcal{O}^*(4^d)$ and polynomial space for Connected Dominating Set and Connected Odd Cycle Transversal.
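    The identity at the heart of Cut&Count can be checked at toy scale: a candidate vertex set whose induced subgraph has $cc$ connected components admits exactly $2^{cc-1}$ cuts with no crossing edge (fixing an anchor vertex on the left side), so summing over all candidate-cut pairs modulo 2 cancels every disconnected candidate. The brute-force sketch below only illustrates this cancellation; the paper's contribution is performing such counts in time $\mathcal{O}^*(3^d)$ or $\mathcal{O}^*(4^d)$ and polynomial space.

```python
from itertools import combinations

def consistent_cuts(X, edges):
    """Count cuts (L, R) of the vertex set X such that min(X) lies in
    L and no edge inside X crosses the cut. Each connected component
    of the induced graph must stay on one side and the anchor's side
    is fixed, giving 2^(cc - 1) cuts for cc components."""
    anchor = min(X)
    others = [v for v in X if v != anchor]
    inner = [(u, v) for u, v in edges if u in X and v in X]
    total = 0
    for r in range(len(others) + 1):
        for right in combinations(others, r):
            R = set(right)
            if not any((u in R) != (v in R) for u, v in inner):
                total += 1
    return total

def parity_of_connected_sets(n, edges, size):
    """Parity of the number of connected vertex sets of the given
    size (size >= 1), computed purely by counting cuts: disconnected
    sets contribute an even count and vanish modulo 2."""
    return sum(consistent_cuts(set(X), edges)
               for X in combinations(range(n), size)) % 2

# Path 0-1-2: the connected 2-sets are {0,1} and {1,2}, so parity is 0.
print(parity_of_connected_sets(3, [(0, 1), (1, 2)], 2))
```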

    Parameterized Complexity and Kernelizability of Max Ones and Exact Ones Problems

    For a finite set $\Gamma$ of Boolean relations, MAX ONES SAT($\Gamma$) and EXACT ONES SAT($\Gamma$) are generalized satisfiability problems where every constraint relation is from $\Gamma$, and the task is to find a satisfying assignment with at least/exactly $k$ variables set to 1, respectively. We study the parameterized complexity of these problems, including the question whether they admit polynomial kernels. For MAX ONES SAT($\Gamma$), we give a classification into five different complexity levels: polynomial-time solvable, admits a polynomial kernel, fixed-parameter tractable, solvable in polynomial time for fixed $k$, and NP-hard already for $k = 1$. For EXACT ONES SAT($\Gamma$), we refine the classification obtained earlier by taking a closer look at the fixed-parameter tractable cases and classifying the sets $\Gamma$ for which EXACT ONES SAT($\Gamma$) admits a polynomial kernel.
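    A minimal brute-force sketch of the problem being classified (illustrative only, not part of the paper's case analysis): a relation in $\Gamma$ is a set of allowed Boolean tuples, a constraint applies one relation to a tuple of variables, and we search for a satisfying assignment with at least $k$ ones.

```python
from itertools import product

def max_ones_sat(n, constraints, k):
    """Decide MAX ONES SAT by brute force: constraints is a list of
    (relation, scope) pairs, where relation is a set of allowed
    0/1-tuples and scope is a tuple of variable indices. Returns
    True iff some satisfying assignment sets at least k variables
    to 1."""
    for a in product((0, 1), repeat=n):
        if sum(a) >= k and all(tuple(a[i] for i in scope) in relation
                               for relation, scope in constraints):
            return True
    return False

# Gamma = {binary OR}: constraints (x0 or x1) and (x1 or x2).
OR2 = {(0, 1), (1, 0), (1, 1)}
print(max_ones_sat(3, [(OR2, (0, 1)), (OR2, (1, 2))], 2))  # True
```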

    Preprocessing under uncertainty

    In this work we study preprocessing for tractable problems when part of the input is unknown or uncertain. This comes up naturally if, e.g., the load of some machines or the congestion of some roads is not known far enough in advance, or if we have to regularly solve a problem over instances that are largely similar, e.g., daily airport scheduling with few charter flights. Unlike robust optimization, which also studies settings like this, our goal lies not in computing solutions that are (approximately) good for every instantiation. Rather, we seek to preprocess the known parts of the input, to speed up finding an optimal solution once the missing data is known. We present efficient algorithms that, given an instance with partially uncertain input, generate an instance of size polynomial in the amount of uncertain data that is equivalent for every instantiation of the unknown part. Concretely, we obtain such algorithms for Minimum Spanning Tree, Minimum Weight Matroid Basis, and Maximum Cardinality Bipartite Matching, where respectively the weight of edges, the weight of elements, and the availability of vertices is unknown for part of the input. Furthermore, we show that there are tractable problems, such as Small Connected Vertex Cover, for which one cannot hope to obtain similar results.
    Comment: 18 pages.
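    One classical rule already shows the flavor of such preprocessing for Minimum Spanning Tree (a hedged toy, not the paper's actual construction): a known-weight edge that is the strict maximum on a cycle of known-weight edges lies in no minimum spanning tree under any instantiation of the unknown weights, so it can be discarded before the missing data arrives.

```python
def prune_known_edges(known_edges):
    """Drop every known edge (u, v, w) whose endpoints are already
    connected by known edges of strictly smaller weight: such an edge
    is the unique heaviest edge on some cycle and therefore belongs
    to no MST, regardless of how unknown weights are later revealed.
    Edges of unknown weight are simply left untouched."""
    def lighter_path(u, v, w):
        adj = {}
        for a, b, wa in known_edges:
            if wa < w:
                adj.setdefault(a, []).append(b)
                adj.setdefault(b, []).append(a)
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            if x == v:
                return True
            for y in adj.get(x, ()):
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

    return [(u, v, w) for u, v, w in known_edges if not lighter_path(u, v, w)]

# The edge (0, 2, 5) closes a cycle of lighter known edges and is pruned.
print(prune_known_edges([(0, 1, 1), (1, 2, 2), (0, 2, 5)]))
```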

    Polynomial Kernelizations for MIN F^+Pi_1 and MAX NP

    The relation of constant-factor approximability to fixed-parameter tractability and kernelization is a long-standing open question. We prove that two large classes of constant-factor approximable problems, namely $\textsc{MIN F}^+\Pi_1$ and $\textsc{MAX NP}$, including the well-known subclass $\textsc{MAX SNP}$, admit polynomial kernelizations for their natural decision versions. This extends results of Cai and Chen (JCSS 1997), stating that the standard parameterizations of problems in $\textsc{MAX SNP}$ and $\textsc{MIN F}^+\Pi_1$ are fixed-parameter tractable, and complements recent research on problems that do not admit polynomial kernelizations (Bodlaender et al., ICALP 2008).
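    The flavor of these kernelizations can be seen in a standard observation for MAX SAT, whose decision version lies in MAX NP: any assignment and its complement together satisfy every nonempty clause at least once, so some assignment satisfies at least half the clauses. The toy reduction below uses only this textbook argument, not the paper's general construction.

```python
def maxsat_kernelize(clauses, k):
    """Toy kernelization for MAX SAT parameterized by k (assuming
    every clause is nonempty). If there are m >= 2k clauses, then for
    any assignment a, sat(a) + sat(complement of a) >= m, so one of
    the two satisfies >= m/2 >= k clauses: answer YES outright.
    Otherwise the remaining instance has fewer than 2k clauses."""
    if len(clauses) >= 2 * k:
        return "YES"
    return clauses, k   # kernel of size bounded in k
```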

    A Randomized Polynomial Kernelization for Vertex Cover with a Smaller Parameter

    In the Vertex Cover problem we are given a graph $G=(V,E)$ and an integer $k$ and have to determine whether there is a set $X \subseteq V$ of size at most $k$ such that each edge in $E$ has at least one endpoint in $X$. The problem can be easily solved in time $O^*(2^k)$, making it fixed-parameter tractable (FPT) with respect to $k$. While the fastest known algorithm takes only time $O^*(1.2738^k)$, much stronger improvements have been obtained by studying parameters that are smaller than $k$. Apart from treewidth-related results, the arguably best algorithm for Vertex Cover runs in time $O^*(2.3146^p)$, where $p = k - LP(G)$ is only the excess of the solution size $k$ over the best fractional vertex cover (Lokshtanov et al., TALG 2014). Since $p \leq k$ but $k$ cannot be bounded in terms of $p$ alone, this strictly increases the range of tractable instances. Recently, Garg and Philip (SODA 2016) greatly contributed to understanding the parameterized complexity of the Vertex Cover problem. They prove that $2LP(G) - MM(G)$ is a lower bound for the vertex cover size of $G$, where $MM(G)$ is the size of a largest matching of $G$, and proceed to study the parameter $\ell = k - (2LP(G) - MM(G))$. They give an algorithm of running time $O^*(3^\ell)$, proving that Vertex Cover is FPT in $\ell$. It can be easily observed that $\ell \leq p$ whereas $p$ cannot be bounded in terms of $\ell$ alone. We complement the work of Garg and Philip by proving that Vertex Cover admits a randomized polynomial kernelization in terms of $\ell$, i.e., an efficient preprocessing to size polynomial in $\ell$. This improves over the parameter $p = k - LP(G)$ for which this was previously known (Kratsch and Wahlström, FOCS 2012).
    Comment: full version of the ESA 2016 paper.
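    On small instances, the parameters above can be computed directly. The vertex cover LP always has a half-integral optimum (Nemhauser and Trotter), so brute force over $\{0, 1/2, 1\}$ assignments finds $LP(G)$ exactly; a toy matching routine then yields $p$ and $\ell$. A minimal sketch, illustrative only and not the kernelization itself:

```python
from itertools import product

def lp_vc(n, edges):
    """Optimum of the vertex cover LP. Since the relaxation has a
    half-integral optimal solution, searching x in {0, 1/2, 1}^V is
    exact (and fine for toy-sized n)."""
    best = float("inf")
    for x in product((0.0, 0.5, 1.0), repeat=n):
        if all(x[u] + x[v] >= 1 for u, v in edges):
            best = min(best, sum(x))
    return best

def mm(edges):
    """Maximum matching size by include/exclude branching (toy)."""
    if not edges:
        return 0
    (u, v), rest = edges[0], edges[1:]
    disjoint = [(a, b) for a, b in rest if u not in (a, b) and v not in (a, b)]
    return max(mm(rest), 1 + mm(disjoint))

def below_guarantee_parameters(n, edges, k):
    """Return (p, l) with p = k - LP(G) and l = k - (2 LP(G) - MM(G));
    l <= p always holds."""
    lp, matching = lp_vc(n, edges), mm(edges)
    return k - lp, k - (2 * lp - matching)

# Triangle with k = 2: LP = 1.5 and MM = 1, so p = 0.5 and l = 0.
print(below_guarantee_parameters(3, [(0, 1), (1, 2), (0, 2)], 2))
```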