24 research outputs found

    Faster space-efficient algorithms for Subset Sum, k-Sum, and related problems

    Get PDF
    We present randomized algorithms that solve subset sum and knapsack instances with n items in O*(2^{0.86n}) time and polynomial space, where the O*(·) notation suppresses factors polynomial in the input size, assuming random read-only access to exponentially many random bits. These results can be extended to solve binary integer programming on n variables with few constraints in a similar running time. We also show that for any constant k ≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(log n) space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^2) time algorithm if no value occurs too often in the same list.
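
    The core subproblem mentioned at the end of this abstract, deciding whether two read-only lists share a common value, already illustrates the time/space tension the paper targets. Below is a minimal sketch (ours, for illustration only) of the two trivial baselines: an O(n^2)-time scan that needs only O(log n) bits of working memory, and a linear-time hash-set approach that needs Θ(n) words of space. The paper's contribution is to beat n^2 time while staying in O(log n) space, which neither baseline does.

# Two baselines for the "common value" problem described above; neither is the
# paper's algorithm, they only illustrate the time/space trade-off it improves on.

def common_value_small_space(a, b):
    """O(n^2) time, O(log n) bits of working memory (just the loop state)."""
    for x in a:                 # read-only passes over the input lists
        for y in b:
            if x == y:
                return True
    return False

def common_value_linear_time(a, b):
    """O(n) expected time, but Theta(n) words of space for the hash set."""
    seen = set(a)
    return any(y in seen for y in b)

if __name__ == "__main__":
    print(common_value_small_space([3, 1, 4], [9, 2, 6]))  # False
    print(common_value_linear_time([3, 1, 4], [9, 4, 6]))  # True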

    New Tools and Connections for Exponential-Time Approximation

    Get PDF
    In this paper, we develop new tools and connections for exponential-time approximation. In this setting, we are given a problem instance and an integer r > 1, and the goal is to design an approximation algorithm with the fastest possible running time. We give randomized algorithms that establish an approximation ratio of (1) r for maximum independent set in O*(exp(Õ(n/(r log^2 r) + r log^2 r))) time, (2) r for chromatic number in O*(exp(Õ(n/(r log r) + r log^2 r))) time, (3) (2 − 1/r) for minimum vertex cover in O*(exp(n/r^{Ω(r)})) time, and (4) (k − 1/r) for minimum k-hypergraph vertex cover in O*(exp(n/(kr)^{Ω(kr)})) time. (Throughout, Õ and O* omit polyloglog(r) factors and factors polynomial in the input size, respectively.) The best known time bounds for all of these problems were O*(2^{n/r}) (Bourgeois et al. in Discret Appl Math 159(17):1954–1970, 2011; Cygan et al. in Exponential-time approximation of hard problems, 2008). For maximum independent set and chromatic number, these bounds were complemented by exp(n^{1−o(1)}/r^{1+o(1)}) lower bounds under the Exponential Time Hypothesis (ETH) (Chalermsook et al. in Foundations of Computer Science, FOCS, pp. 370–379, 2013; Laekhanukit in Inapproximability of combinatorial problems in subexponential-time, Ph.D. thesis, 2014). Our results show that the natural-looking O*(2^{n/r}) bounds are not tight for all these problems. The key to these results is a sparsification procedure that reduces a problem to a bounded-degree variant, allowing the use of approximation algorithms for bounded-degree graphs. To obtain the first two results, we introduce a new randomized branching rule. Finally, we show a connection between PCP parameters and exponential-time approximation algorithms. This connection, together with our independent set algorithm, rules out the possibility of overly reducing the size of Chan's PCP (Chan in J. ACM 63(3):27:1–27:32, 2016). It also implies that a (significant) improvement over our result would refute the Gap-ETH conjecture (Dinur in Electron Colloq Comput Complex (ECCC) 23:128, 2016; Manurangsi and Raghavendra in A birthday repetition theorem and complexity of approximating dense CSPs, 2016).
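
    The O*(2^{n/r}) baseline that these results improve on comes from a simple folklore scheme: split the vertices into r blocks of roughly n/r vertices, solve each block exactly by brute force, and return the best block solution. Some block contains at least a 1/r fraction of any optimal independent set, so the output is an r-approximation. A hedged sketch of that baseline for maximum independent set (the previous bound referenced above, not the paper's new algorithm):

from itertools import combinations

def max_independent_set_bruteforce(vertices, adj):
    """Exact maximum independent set within `vertices`, in O*(2^|vertices|) time."""
    for size in range(len(vertices), 0, -1):
        for cand in combinations(vertices, size):
            if all(v not in adj[u] for u, v in combinations(cand, 2)):
                return list(cand)
    return []

def approx_independent_set(n, adj, r):
    """Folklore r-approximation in O*(2^{ceil(n/r)}) time via block partitioning."""
    block_size = -(-n // r)  # ceil(n / r)
    blocks = [list(range(i, min(i + block_size, n))) for i in range(0, n, block_size)]
    best = []
    for block in blocks:
        sol = max_independent_set_bruteforce(block, adj)
        if len(sol) > len(best):
            best = sol
    return best

if __name__ == "__main__":
    # 5-cycle: the optimum independent set has size 2.
    adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
    print(approx_independent_set(5, adj, r=2))  # e.g. [0, 2]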

    Parameterized Complexity of Streaming Diameter and Connectivity Problems

    Get PDF
    We initiate the investigation of the parameterized complexity of Diameter and Connectivity in the streaming paradigm. On the positive end, we show that knowing a vertex cover of size k allows for algorithms in the Adjacency List (AL) streaming model whose number of passes is constant and memory is O(log n) for any fixed k. Underlying these algorithms is a method to execute a breadth-first search in O(k) passes and O(k log n) bits of memory. On the negative end, we show that many other parameters lead to lower bounds in the AL model, where Ω(n/p) bits of memory are needed for any p-pass algorithm even for constant parameter values. In particular, this holds for graphs with a known modulator (deletion set) of constant size to a graph that has no induced subgraph isomorphic to a fixed graph H, for most H. For some cases, we can also show one-pass, Ω(n log n)-bit memory lower bounds. We also prove a much stronger Ω(n^2/p) lower bound for Diameter on bipartite graphs. Finally, using the insights we developed into streaming parameterized graph exploration algorithms, we show a new streaming kernelization algorithm for computing a vertex cover of size k. This yields a kernel of 2k vertices (with O(k^2) edges) produced as a stream in poly(k) passes and only O(k log n) bits of memory.
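
    To make the breadth-first-search subroutine above concrete, here is a hedged sketch of how a known vertex cover C with |C| ≤ k can drive a multi-pass BFS with small memory. The non-cover vertices form an independent set, so every neighbour of a non-cover vertex lies in C; it therefore suffices to store tentative distances only for the source and the at most k cover vertices (O(k log n) bits), re-reading the adjacency-list stream in each pass. The passes are simulated below with an in-memory adjacency list; this illustrates the idea under our own assumptions and is not necessarily the paper's exact algorithm.

import math

def stream_bfs_distances(adjacency_stream, cover, source, k):
    """Distances from `source` to the cover vertices (and to the source itself).

    adjacency_stream: callable returning, per call, a fresh iterator over
                      (vertex, list_of_neighbours) pairs, i.e. one AL-model pass.
    cover:            a known vertex cover of the graph, |cover| <= k.
    The distance of a non-cover vertex can be recovered in one extra pass as
    1 + the minimum stored distance among its (cover) neighbours.
    """
    INF = math.inf
    dist = {v: INF for v in cover}        # O(k log n) bits of state in total
    dist[source] = 0
    for _ in range(2 * k + 2):            # O(k) passes suffice; 2k + 2 is a safe bound
        changed = False
        for v, neighbours in adjacency_stream():
            if v in dist and dist[v] < INF:
                for u in neighbours:      # relax edges between stored vertices
                    if u in dist and dist[v] + 1 < dist[u]:
                        dist[u] = dist[v] + 1
                        changed = True
            elif v not in dist:
                # v lies outside the cover, so all its neighbours are cover
                # vertices: route distance information through v in one step.
                d_v = 1 + min((dist[w] for w in neighbours if w in dist), default=INF)
                for w in neighbours:
                    if w in dist and d_v + 1 < dist[w]:
                        dist[w] = d_v + 1
                        changed = True
        if not changed:
            break
    return dist

if __name__ == "__main__":
    # Star with centre 1 (a vertex cover of size k = 1) and leaves 0, 2, 3.
    graph = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
    print(stream_bfs_distances(lambda: iter(graph.items()), cover={1}, source=0, k=1))
    # {1: 1, 0: 0}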

    A short note on Merlin-Arthur protocols for subset sum

    No full text
    Given n positive integers we show how to construct a proof that the number of subsets summing to a particular integer t equals a claimed quantity. The proof is of size O*(√t), can be constructed in O*(t) time and can be probabilistically verified in time O*(√t) with at most 1/2 one-sided error probability. Here O*(·) omits factors polynomial in the input size.
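
    The quantity being certified here, the number of subsets summing to t, is the coefficient of x^t in the product (1 + x^{a_1})···(1 + x^{a_n}). For reference, the sketch below computes that count directly with the standard O(n·t)-time dynamic program; it shows the object the verifier checks a claim about, not the Merlin-Arthur protocol itself.

def count_subsets_with_sum(items, t):
    """Number of subsets of `items` (positive integers) summing exactly to t."""
    counts = [0] * (t + 1)    # counts[s] = number of subsets seen so far with sum s
    counts[0] = 1             # the empty subset
    for a in items:
        for s in range(t, a - 1, -1):   # go downwards so each item is used at most once
            counts[s] += counts[s - a]
    return counts[t]

if __name__ == "__main__":
    print(count_subsets_with_sum([1, 2, 3, 4], 5))  # 2, namely {1, 4} and {2, 3}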

    Computing the chromatic number using graph decompositions via matrix rank

    No full text
    Computing the smallest number q such that the vertices of a given graph can be properly q-colored, known as the chromatic number, is one of the oldest and most fundamental problems in combinatorial optimization. The q-COLORING problem has been studied intensively using the framework of parameterized algorithmics, resulting in a very good understanding of the best-possible algorithms for several parameterizations based on the structure of the graph. For example, algorithms are known to solve the problem on graphs of treewidth tw in time O*(q^tw), while a running time of O*((q−ε)^tw) is impossible assuming the Strong Exponential Time Hypothesis (SETH). While there is an abundance of work for parameterizations based on decompositions of the graph by vertex separators, almost nothing is known about parameterizations based on edge separators. We fill this gap by studying q-COLORING parameterized by cutwidth, and parameterized by pathwidth in bounded-degree graphs. Our research uncovers interesting new ways to exploit small edge separators. We present two algorithms for q-COLORING parameterized by cutwidth ctw: a deterministic one that runs in time O*(2^{ω·ctw}), where ω is the square matrix multiplication exponent, and a randomized one with runtime O*(2^ctw). In sharp contrast to earlier work, the running time is independent of q. The dependence on cutwidth is optimal: we prove that even 3-COLORING cannot be solved in O*((2−ε)^ctw) time assuming SETH. Our algorithms rely on a new rank bound for a matrix that describes compatible colorings. Combined with a simple communication protocol for evaluating a product of two polynomials, this also yields an O*((⌊d/2⌋+1)^pw) time randomized algorithm for q-COLORING on graphs of pathwidth pw and maximum degree d. Such a runtime was first obtained by Björklund, but only for graphs with few proper colorings. We also prove that this result is optimal in the sense that no O*((⌊d/2⌋+1−ε)^pw)-time algorithm exists assuming SETH.

    Hamiltonicity below Dirac's condition

    No full text
    Dirac's theorem (1952) is a classical result of graph theory, stating that an n-vertex graph (n ≥ 3) is Hamiltonian if every vertex has degree at least n/2. Both the value n/2 and the requirement for every vertex to have high degree are necessary for the theorem to hold. In this work we give efficient algorithms for determining Hamiltonicity when either of the two conditions is relaxed. More precisely, we show that the Hamiltonian Cycle problem can be solved in time c^k · n^{O(1)}, for some fixed constant c, if at least n−k vertices have degree at least n/2, or if all vertices have degree at least n/2−k. The running time is, in both cases, asymptotically optimal under the Exponential Time Hypothesis (ETH). The results extend the range of tractability of the Hamiltonian Cycle problem, showing that it is fixed-parameter tractable when parameterized below a natural bound. In addition, for the first parameterization we show that a kernel with O(k) vertices can be found in polynomial time.
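
    Both parameterizations measure how far the input graph is from satisfying Dirac's condition, and the corresponding parameter k can be read directly off the degree sequence. A small illustrative helper (the function name and interface are ours, not from the paper):

def dirac_parameters(n, adj):
    """Return (k1, k2) for an n-vertex graph given as adj: vertex -> set of neighbours.

    k1 = number of vertices of degree below n/2 (so n - k1 vertices meet Dirac's bound),
    k2 = smallest k >= 0 such that every vertex has degree at least n/2 - k.
    """
    degrees = [len(adj[v]) for v in range(n)]
    k1 = sum(1 for d in degrees if d < n / 2)
    k2 = max(0, -(-n // 2) - min(degrees))   # ceil(n/2) minus the minimum degree
    return k1, k2

if __name__ == "__main__":
    # 4-cycle: every degree equals n/2 = 2, so Dirac's condition holds and both are 0.
    cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(dirac_parameters(4, cycle4))  # (0, 0)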

    More consequences of falsifying SETH and the orthogonal vectors conjecture

    No full text
    The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states that there is no ε > 0 for which an O(N^{2−ε}) · poly(D)-time algorithm can decide whether there is a pair of orthogonal vectors in a given set of N D-dimensional binary vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed ε > 0 such that: (1) For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. (2) For all c, the satisfiability of sparse TC^1 circuits on n inputs (that is, circuits with cn wires, depth c·log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^n).
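
    For reference, the orthogonal-vectors problem in the conjecture is exactly what the exhaustive O(N^2 · D)-time search below decides; the conjecture asserts that no O(N^{2−ε}) · poly(D)-time algorithm exists. (Illustrative baseline only, not from the paper.)

def has_orthogonal_pair(vectors):
    """vectors: a list of N binary vectors of dimension D (tuples of 0/1)."""
    n = len(vectors)
    for i in range(n):                      # exhaustive search over all pairs
        for j in range(i + 1, n):
            if all(a * b == 0 for a, b in zip(vectors[i], vectors[j])):
                return True
    return False

if __name__ == "__main__":
    print(has_orthogonal_pair([(1, 0, 1), (0, 1, 0), (1, 1, 1)]))  # True: the first two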