    Independent Set, Induced Matching, and Pricing: Connections and Tight (Subexponential Time) Approximation Hardnesses

    We present a series of almost-settled inapproximability results for three fundamental problems. The first in our series is the subexponential-time inapproximability of the maximum independent set problem, a question studied in the area of parameterized complexity. The second is the hardness of approximating the maximum induced matching problem on bounded-degree bipartite graphs. The last in our series is the tight hardness of approximating the k-hypergraph pricing problem, a fundamental problem arising from the area of algorithmic game theory. In particular, assuming the Exponential Time Hypothesis, our two main results are:

    - For any r larger than some constant, any r-approximation algorithm for the maximum independent set problem must run in at least 2^{n^{1-\epsilon}/r^{1+\epsilon}} time. This nearly matches the upper bound of 2^{n/r} (Cygan et al., 2008). It also improves some hardness results in the domain of parameterized complexity (e.g., Escoffier et al., 2012 and Chitnis et al., 2013).

    - For any k larger than some constant, there is no polynomial-time min(k^{1-\epsilon}, n^{1/2-\epsilon})-approximation algorithm for the k-hypergraph pricing problem, where n is the number of vertices in an input graph. This almost matches the upper bound of min(O(k), \tilde{O}(\sqrt{n})) (by Balcan and Blum, 2007, and an algorithm in this paper).

    We note the interesting fact that, in contrast to the n^{1/2-\epsilon} hardness for polynomial-time algorithms, the k-hypergraph pricing problem admits an n^{\delta}-approximation for any \delta > 0 in quasi-polynomial time. This puts the problem in a rare approximability class in which the approximability threshold can be improved significantly by allowing algorithms to run in quasi-polynomial time.

    Comment: The full version of FOCS 2013.
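    For context, the 2^{n/r} upper bound cited above can be realized by a folklore partition argument: split the vertex set into r blocks of size about n/r, brute-force a maximum independent set inside each block, and return the largest one found. Some block contains at least OPT/r vertices of an optimum, so this is an r-approximation. A minimal Python sketch of that baseline (an illustration of the trade-off, not claimed to be Cygan et al.'s exact algorithm):

```python
from itertools import combinations

def is_independent(adj, subset):
    """True iff no edge joins two vertices of `subset` (adj: vertex -> set of neighbours)."""
    return all(v not in adj[u] for u, v in combinations(subset, 2))

def partition_approx_mis(adj, r):
    """Folklore r-approximation for maximum independent set in roughly O*(2^{n/r}) time."""
    vertices = list(adj)
    block_size = -(-len(vertices) // r)  # ceil(n / r), so there are at most r blocks
    best = ()
    for start in range(0, len(vertices), block_size):
        block = vertices[start:start + block_size]
        # Enumerate subsets of this block from largest to smallest; keep the first independent one.
        for size in range(len(block), len(best), -1):
            hit = next((c for c in combinations(block, size) if is_independent(adj, c)), None)
            if hit is not None:
                best = hit
                break
    return set(best)
```

    Larger r shrinks the blocks, so the brute force gets exponentially faster while the guarantee degrades linearly; the lower bound above says that, under the ETH, no algorithm can beat this trade-off by much.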

    Exponential Time Complexity of the Permanent and the Tutte Polynomial

    We show conditional lower bounds for well-studied #P-hard problems: (a) The number of satisfying assignments of a 2-CNF formula with n variables cannot be counted in time exp(o(n)), and the same is true for computing the number of all independent sets in an n-vertex graph. (b) The permanent of an n × n matrix with entries 0 and 1 cannot be computed in time exp(o(n)). (c) The Tutte polynomial of an n-vertex graph cannot be computed in time exp(o(n)) at most evaluation points (x, y) in the case of multigraphs, and it cannot be computed in time exp(o(n/polylog n)) in the case of simple graphs. Our lower bounds are relative to (variants of) the Exponential Time Hypothesis (ETH), which says that the satisfiability of n-variable 3-CNF formulas cannot be decided in time exp(o(n)). We relax this hypothesis by introducing its counting version #ETH, namely that the satisfying assignments cannot be counted in time exp(o(n)). In order to use #ETH for our lower bounds, we transfer the sparsification lemma for d-CNF formulas to the counting setting.
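    For comparison with lower bound (b), the 0/1 permanent can be computed in O(2^n · n²) time by Ryser's classical inclusion-exclusion formula, perm(A) = (−1)^n Σ_{S⊆[n]} (−1)^{|S|} Π_i Σ_{j∈S} a_{ij}, so the exp(o(n)) bound is tight up to the constant in the exponent. A short Python sketch of this textbook formula (standard material, not taken from the paper):

```python
def permanent(A):
    """Ryser's inclusion-exclusion formula: exact permanent in O(2^n * n^2) time."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):  # nonempty column subsets S; S = {} contributes 0
        cols = [j for j in range(n) if mask >> j & 1]
        prod = 1
        for row in A:
            prod *= sum(row[j] for j in cols)
        total += (-1) ** len(cols) * prod
    return (-1) ** n * total

assert permanent([[1, 1], [1, 1]]) == 2  # 2x2 all-ones matrix: both permutations contribute
```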

    On the Closest Vector Problem with a Distance Guarantee

    We present a substantially more efficient variant, both in terms of running time and size of preprocessing advice, of the algorithm by Liu, Lyubashevsky, and Micciancio for solving CVPP (the preprocessing version of the Closest Vector Problem, CVP) with a distance guarantee. For instance, for any α < 1/2, our algorithm finds the (unique) closest lattice point for any target point whose distance from the lattice is at most α times the length of the shortest nonzero lattice vector, requires as preprocessing advice only N ≈ Õ(n · exp(α²n/(1−2α)²)) vectors, and runs in time Õ(nN). As our second main contribution, we present reductions showing that it suffices to solve CVP, both in its plain and preprocessing versions, when the input target point is within some bounded distance of the lattice. The reductions are based on ideas due to Kannan and a recent sparsification technique due to Dadush and Kun. Combining our reductions with the LLM algorithm gives an approximation factor of O(n/√(log n)) for search CVPP, improving on the previous best of O(n^{1.5}) due to Lagarias, Lenstra, and Schnorr. When combined with our improved algorithm, we obtain, somewhat surprisingly, that only O(n) vectors of preprocessing advice are sufficient to solve CVPP with the (only slightly worse) approximation factor of O(n).

    Comment: An early version of the paper was titled "On Bounded Distance Decoding and the Closest Vector Problem with Preprocessing". Conference on Computational Complexity (2014).
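    As a toy illustration of the CVPP setting (preprocess the lattice, then answer closest-vector queries), here is the classical Babai round-off heuristic in NumPy. This is not the Liu-Lyubashevsky-Micciancio-style algorithm the paper improves; it merely shows the query interface, with a well-reduced basis standing in for the preprocessing advice, and its approximation factor depends entirely on the basis quality.

```python
import numpy as np

def babai_round_off(B, t):
    """Return the lattice point B @ round(B^{-1} t); quality depends on how reduced B is."""
    coeffs = np.rint(np.linalg.solve(B, t))
    return B @ coeffs

# Toy query on the lattice 2Z^2: the target (4.3, 5.8) decodes to (4, 6).
B = np.array([[2.0, 0.0], [0.0, 2.0]])
print(babai_round_off(B, np.array([4.3, 5.8])))  # -> [4. 6.]
```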

    New Tools and Connections for Exponential-Time Approximation

    In this paper, we develop new tools and connections for exponential-time approximation. In this setting, we are given a problem instance and an integer r > 1, and the goal is to design an approximation algorithm with the fastest possible running time. We give randomized algorithms that establish an approximation ratio of

    1. r for maximum independent set in O∗(exp(Õ(n/(r log²r) + r log²r))) time,
    2. r for chromatic number in O∗(exp(Õ(n/(r log r) + r log²r))) time,
    3. (2 − 1/r) for minimum vertex cover in O∗(exp(n/r^{Ω(r)})) time, and
    4. (k − 1/r) for minimum k-hypergraph vertex cover in O∗(exp(n/(kr)^{Ω(kr)})) time.

    (Throughout, Õ and O∗ omit polyloglog(r) factors and factors polynomial in the input size, respectively.) The best known time bounds for all these problems were O∗(2^{n/r}) (Bourgeois et al. in Discret Appl Math 159(17):1954–1970, 2011; Cygan et al. in Exponential-time approximation of hard problems, 2008). For maximum independent set and chromatic number, these bounds were complemented by exp(n^{1−o(1)}/r^{1+o(1)}) lower bounds under the Exponential Time Hypothesis (ETH) (Chalermsook et al. in Foundations of Computer Science, FOCS, pp. 370–379, 2013; Laekhanukit in Inapproximability of Combinatorial Problems in Subexponential-Time, Ph.D. thesis, 2014). Our results show that the natural-looking O∗(2^{n/r}) bounds are not tight for any of these problems.

    The key to these results is a sparsification procedure that reduces a problem to a bounded-degree variant, allowing the use of approximation algorithms for bounded-degree graphs. To obtain the first two results, we introduce a new randomized branching rule. Finally, we show a connection between PCP parameters and exponential-time approximation algorithms. This connection, together with our independent set algorithm, rules out the possibility of significantly reducing the size of Chan's PCP (Chan in J. ACM 63(3):27:1–27:32, 2016). It also implies that a (significant) improvement over our result would refute the Gap-ETH conjecture (Dinur in Electron Colloq Comput Complex (ECCC) 23:128, 2016; Manurangsi and Raghavendra in A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs, 2016).
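    A minimal sketch of the standard high-degree branching that underlies such sparsification steps, here for maximum independent set (the paper's randomized branching rule refines this idea; `bounded_degree_solver` is a hypothetical stand-in for any approximation routine on graphs of maximum degree below d):

```python
def remove(adj, dead):
    """Copy of the graph (vertex -> set of neighbours) with the vertices in `dead` deleted."""
    return {v: adj[v] - dead for v in adj if v not in dead}

def mis_sparsify(adj, d, bounded_degree_solver):
    """Branch on any vertex of degree >= d; leaves have max degree < d and are
    delegated to `bounded_degree_solver`, which returns an independent set (as a list)."""
    v = next((u for u in adj if len(adj[u]) >= d), None)
    if v is None:
        return bounded_degree_solver(adj)
    # Branch 1: v is excluded from the solution, so only v disappears.
    out = mis_sparsify(remove(adj, {v}), d, bounded_degree_solver)
    # Branch 2: v is included, so v and all of its neighbours disappear.
    inn = [v] + mis_sparsify(remove(adj, {v} | adj[v]), d, bounded_degree_solver)
    return max(out, inn, key=len)
```

    Each branch deletes either one vertex or at least d + 1 vertices, giving the recurrence T(n) ≤ T(n−1) + T(n−d−1), and if the leaf solver is an α-approximation then so is the whole procedure.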

    Tail bounds for all eigenvalues of a sum of random matrices

    This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in "User-friendly tail bounds for sums of random matrices" (arXiv:1004.4389v6), that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogues of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows; here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy in the estimation of the eigenvalues of the covariance matrix of a random process. Standard results on the convergence of sample covariance matrices provide bounds on the number of samples needed to obtain relative accuracy in the spectral norm, but these results only guarantee relative accuracy in the estimate of the maximum eigenvalue. The minimax Laplace transform argument establishes that, if the lowest eigenvalues decay sufficiently fast, then on the order of K²·r·log(p)/ε² samples, where K is the condition number of an optimal rank-r approximation to C, suffice to ensure that the dominant r eigenvalues of the covariance matrix of an N(0, C) random vector are estimated to within a factor of 1 ± ε with high probability.

    Comment: 20 pages, 1 figure; see also arXiv:1004.4389v6.
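    To make the sample-complexity claim concrete, here is a small NumPy experiment under invented parameters (p, r, ε, and the eigenvalue profile are illustrative assumptions, not values from the paper): draw on the order of K²·r·log(p)/ε² samples from N(0, C) and compare the top r sample eigenvalues against the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
p, r, eps = 50, 5, 0.25  # illustrative dimensions and accuracy target

# Ground-truth covariance: r dominant eigenvalues and a fast-decaying tail.
eig_true = np.concatenate([np.linspace(10.0, 8.0, r), 0.01 * np.ones(p - r)])
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
C = Q @ np.diag(eig_true) @ Q.T

# Sample size suggested by the K^2 * r * log(p) / eps^2 scaling,
# with K the condition number of the best rank-r approximation to C.
K = eig_true[0] / eig_true[r - 1]
m = int(K**2 * r * np.log(p) / eps**2)

X = rng.multivariate_normal(np.zeros(p), C, size=m)
top_sample = np.linalg.eigvalsh(X.T @ X / m)[::-1][:r]
# Relative errors on the dominant r eigenvalues; the theorem predicts
# these stay below eps with high probability.
print(np.abs(top_sample / eig_true[:r] - 1))
```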