
    Completeness Results for Parameterized Space Classes

    The parameterized complexity of a problem is considered "settled" once it has been shown to lie in FPT or to be complete for a class in the W-hierarchy or a similar parameterized hierarchy. Several natural parameterized problems have, however, resisted such a classification. At least in some cases, the reason is that upper and lower bounds for their parameterized space complexity have recently been obtained that rule out completeness results for parameterized time classes. In this paper, we make progress in this direction by proving that the associative generability problem and the longest common subsequence problem are complete for parameterized space classes. These classes are defined in terms of different forms of bounded nondeterminism and in terms of simultaneous time–space bounds. As a technical tool we introduce a "union operation" that translates between problems complete for classical complexity classes and for W-classes.
    Comment: IPEC 201

    On space efficiency of algorithms working on structural decompositions of graphs

    Dynamic programming on path and tree decompositions of graphs is a technique that is ubiquitous in the field of parameterized and exponential-time algorithms. However, one of its drawbacks is that the space usage is exponential in the decomposition's width. Following the work of Allender et al. [Theory of Computing, '14], we investigate whether this space complexity explosion is unavoidable. Using the idea of reparameterization of Cai and Juedes [J. Comput. Syst. Sci., '03], we prove that the question is closely related to a conjecture that the Longest Common Subsequence problem parameterized by the number of input strings does not admit an algorithm that simultaneously uses XP time and FPT space. Moreover, we complete the complexity landscape sketched for pathwidth and treewidth by Allender et al. by considering the parameter tree-depth. We prove that computations on tree-depth decompositions correspond to a model of non-deterministic machines that work in polynomial time and logarithmic space, with access to an auxiliary stack of maximum height equal to the decomposition's depth. Together with the results of Allender et al., this describes a hierarchy of complexity classes for polynomial-time non-deterministic machines with different restrictions on the access to working space, which mirrors the classic relations between treewidth, pathwidth, and tree-depth.
    Comment: An extended abstract appeared in the proceedings of STACS'16. The new version is augmented with a space-efficient algorithm for Dominating Set using the Chinese remainder theorem.
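    To make the space question concrete, here is a minimal Python sketch (our own illustration, not from the paper) of the textbook dynamic program for Longest Common Subsequence of k input strings. Memoizing over index tuples gives XP running time but also XP space, on the order of n^k table entries for strings of length n; the conjecture above says the space cannot be lowered to FPT while keeping XP time. The name lcs_k is ours.

        from functools import lru_cache

        def lcs_k(strings):
            # Length of the longest common subsequence of k strings.
            # The memo table ranges over index tuples, so it can hold up to
            # prod(len(s) + 1) entries: XP time *and* XP space in k.
            k = len(strings)

            @lru_cache(maxsize=None)
            def f(idx):
                # idx[i] = number of characters of strings[i] already consumed
                if any(idx[i] == len(strings[i]) for i in range(k)):
                    return 0
                # Option 1: skip the next character of some string.
                best = max(f(idx[:i] + (idx[i] + 1,) + idx[i + 1:])
                           for i in range(k))
                # Option 2: if the next characters of all strings agree, match them.
                ch = strings[0][idx[0]]
                if all(s[j] == ch for s, j in zip(strings, idx)):
                    best = max(best, 1 + f(tuple(j + 1 for j in idx)))
                return best

            return f((0,) * k)

        # Tiny check: LCS("ABCBDAB", "BDCABA") has length 4 (e.g. "BCBA").
        assert lcs_k(("ABCBDAB", "BDCABA")) == 4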

    Lossy Kernelization

    In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I,k) to a parameterized problem, and outputs another instance (I',k') to the same problem, such that |I'| + k' ≤ k^O(1). Additionally, for every c ≥ 1, a c-approximate solution s' to the pre-processed instance (I',k') can be turned in polynomial time into a (c·α)-approximate solution s to the original instance (I,k). Our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation.
    Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.
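    To fix ideas, the definition sketched above can be restated in display form (our paraphrase in LaTeX, not the paper's verbatim wording; the cost/OPT formulation below is for minimization problems):

        % Paraphrase of the α-approximate kernel definition from the abstract.
        A polynomial-size $\alpha$-approximate kernel consists of two
        polynomial-time algorithms: a reduction $\mathcal{R}$ with
        \[
          \mathcal{R}(I,k) = (I',k'), \qquad |I'| + k' \le k^{O(1)},
        \]
        and a solution-lifting algorithm mapping, for every $c \ge 1$, any
        $c$-approximate solution $s'$ of $(I',k')$ to a solution $s$ of
        $(I,k)$ with
        \[
          \frac{\mathrm{cost}(I,s)}{\mathrm{OPT}(I)} \le c \cdot \alpha .
        \]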

    Inapproximability of maximal strip recovery

    In comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks that are segments of homologous chromosomes. For the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. Maximal Strip Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities. Given d genomic maps as sequences of gene markers, the objective of MSR-d is to find d subsequences, one subsequence of each genomic map, such that the total length of syntenic blocks in these subsequences is maximized. For any constant d ≥ 2, a polynomial-time 2d-approximation for MSR-d was previously known. In this paper, we show that for any d ≥ 2, MSR-d is APX-hard, even for the most basic version of the problem in which all gene markers are distinct and appear in positive orientation in each genomic map. Moreover, we provide the first explicit lower bounds on approximating MSR-d for all d ≥ 2. In particular, we show that MSR-d is NP-hard to approximate within Ω(d/log d). From the other direction, we show that the previous 2d-approximation for MSR-d can be optimized into a polynomial-time algorithm even if d is not a constant but is part of the input. We then extend our inapproximability results to several related problems including CMSR-d, δ-gap-MSR-d, and δ-gap-CMSR-d.
    Comment: A preliminary version of this paper appeared in two parts in the Proceedings of the 20th International Symposium on Algorithms and Computation (ISAAC 2009) and the Proceedings of the 4th International Frontiers of Algorithmics Workshop (FAW 2010).
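    For intuition about the objective in the basic variant (all markers distinct, positive orientation), the following Python sketch scores a candidate solution, i.e. d already-chosen subsequences, by the total length of the maximal strips of length at least 2 that occur contiguously and in the same order in every subsequence. The helper strip_score and its input conventions are ours, not the paper's.

        def strip_score(seqs):
            # seqs: d subsequences (lists of distinct markers), assumed to be
            # permutations of one another, positive orientation only.
            pos = [{m: i for i, m in enumerate(s)} for s in seqs]
            ref = seqs[0]
            score, run = 0, 1
            for i in range(1, len(ref)):
                # Extend the current strip if ref[i] immediately follows
                # ref[i-1] in every sequence.
                if all(p[ref[i]] == p[ref[i - 1]] + 1 for p in pos):
                    run += 1
                else:
                    if run >= 2:
                        score += run
                    run = 1
            if run >= 2:
                score += run
            return score

        # Two subsequences sharing the strips (1 2 3) and (7 8): score 5.
        print(strip_score([[1, 2, 3, 7, 8], [7, 8, 1, 2, 3]]))  # -> 5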

    Algorithms for Low-Distortion Embeddings into Arbitrary 1-Dimensional Spaces

    We study the problem of finding a minimum-distortion embedding of the shortest path metric of an unweighted graph into a "simpler" metric X. Computing such an embedding (exactly or approximately) is a non-trivial task even when X is the metric induced by a path, or, equivalently, the real line. In this paper we give approximation and fixed-parameter tractable (FPT) algorithms for minimum-distortion embeddings into the metric of a subdivision of some fixed graph H, or, equivalently, into any fixed 1-dimensional simplicial complex. More precisely, we study the following problem: For given graphs G, H and integer c, is it possible to embed G with distortion c into a graph homeomorphic to H? Then embedding into the line is the special case H=K_2, and embedding into the cycle is the case H=K_3, where K_k denotes the complete graph on k vertices. For this problem we give
    - an approximation algorithm, which in time f(H) * poly(n), for some function f, either correctly decides that there is no embedding of G with distortion c into any graph homeomorphic to H, or finds an embedding with distortion poly(c);
    - an exact algorithm, which in time f'(H, c) * poly(n), for some function f', either correctly decides that there is no embedding of G with distortion c into any graph homeomorphic to H, or finds an embedding with distortion c.
    Prior to our work, poly(OPT)-approximation or FPT algorithms were known only for embedding into paths and trees of bounded degrees
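    As a concrete reading of "embedding with distortion c" (under the non-contracting convention common in this line of work; the paper's exact definition may differ in details, and all names below are ours), a short Python checker:

        from itertools import combinations

        def distortion_ok(d, f, dx, c):
            # d:  dict-of-dicts shortest-path metric of the guest graph G
            # f:  dict mapping each vertex of G to a point of the host space
            # dx: host metric, e.g. lambda x, y: abs(x - y) for the line
            # Accepts iff f is non-contracting and stretches no pair by more
            # than a factor of c.
            for u, v in combinations(f, 2):
                host = dx(f[u], f[v])
                if host < d[u][v] or host > c * d[u][v]:
                    return False
            return True

        # The 4-cycle a-b-c-d embeds into the line with distortion 3.
        edges = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")}
        d = {u: {v: 0 if u == v else
                 (1 if (u, v) in edges or (v, u) in edges else 2)
                 for v in "abcd"} for u in "abcd"}
        f = {"a": 0.0, "b": 1.0, "c": 2.0, "d": 3.0}
        print(distortion_ok(d, f, lambda x, y: abs(x - y), c=3))  # True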

    Hardness magnification for natural problems

    We show that for several natural problems of interest, complexity lower bounds that are barely non-trivial imply super-polynomial or even exponential lower bounds in strong computational models. We term this phenomenon "hardness magnification". Our examples of hardness magnification include:
    1. Let MCSP[s] be the decision problem whose YES instances are truth tables of functions with circuit complexity at most s(n). We show that if MCSP[2^√n] cannot be solved on average with zero error by formulas of linear (or even sub-linear) size, then NP does not have polynomial-size formulas. In contrast, Hirahara and Santhanam (2017) recently showed that MCSP[2^√n] cannot be solved in the worst case by formulas of nearly quadratic size.
    2. If there is a c > 0 such that for each positive integer d there is an ε > 0 such that the problem of checking if an n-vertex graph in the adjacency matrix representation has a vertex cover of size (log n)^c cannot be solved by depth-d AC^0 circuits of size m^(1+ε), where m = Θ(n^2), then NP does not have polynomial-size formulas.
    3. Let (α, β)-MCSP[s] be the promise problem whose YES instances are truth tables of functions that are α-approximable by a circuit of size s(n), and whose NO instances are truth tables of functions that are not β-approximable by a circuit of size s(n). We show that for arbitrary 1/2 < β < α ≤ 1, if (α, β)-MCSP[2^√n] cannot be solved by randomized algorithms with random access to the input running in sublinear time, then NP is not contained in BPP.
    4. If for each probabilistic quasi-linear time machine M using poly-logarithmically many random bits that is claimed to solve Satisfiability, there is a deterministic polynomial-time machine that on infinitely many input lengths n either identifies a satisfiable instance of bit-length n on which M does not accept with high probability or an unsatisfiable instance of bit-length n on which M does not reject with high probability, then NEXP is not contained in BPP.
    5. Given functions s, c : N → N where s > c, let MKtP[c, s] be the promise problem whose YES instances are strings of Kt complexity at most c(N) and NO instances are strings of Kt complexity greater than s(N). We show that if there is a δ > 0 such that for each ε > 0, MKtP[N^ε, N^ε + 5 log(N)] requires Boolean circuits of size N^(1+δ), then EXP is not contained in SIZE(poly).
    For each of the cases of magnification above, we observe that standard hardness assumptions imply much stronger lower bounds for these problems than we require for magnification. We further explore magnification as an avenue to proving strong lower bounds, and argue that magnification circumvents the "natural proofs" barrier of Razborov and Rudich (1997). Examining some standard proof techniques, we find that they fall just short of proving lower bounds via magnification. As one of our main open problems, we ask whether there are other meta-mathematical barriers to proving lower bounds that rule out approaches.
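    For reference, the central object in items 1 and 3 can be written out as follows (our paraphrase in LaTeX; the notation is ours):

        % MCSP[s] as described above: tt(f) is the N = 2^n-bit truth table
        % of f, and CC(f) its circuit complexity.
        \[
          \mathrm{MCSP}[s] \;=\; \{\, \mathrm{tt}(f) \;:\;
            f\colon\{0,1\}^n \to \{0,1\},\; \mathsf{CC}(f) \le s(n) \,\}.
        \]
        Each magnification result above then has the schematic form
        \[
          \text{barely non-trivial lower bound for a special problem}
          \;\Longrightarrow\;
          \text{super-polynomial lower bound in a strong model.}
        \]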