
    Hardness Amplification of Optimization Problems

    In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: if there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships between α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification for problems in various classes: NP-hard problems such as Max-Clique, Knapsack, and Max-SAT; problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication; and even problems in TFNP such as Factoring and computing a Nash equilibrium.
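
    As a concrete illustration of the direct product feasibility condition, the sketch below (my own toy encoding, not the paper's construction) aggregates k Max-SAT instances on disjoint variable sets into one instance; since the number of satisfied clauses decomposes as a sum over the blocks, an optimal assignment to the combined instance projects back to optimal assignments for every small instance. The function names and the clause encoding are assumptions of this sketch.

# A minimal sketch of "direct product feasibility" for Max-SAT. CNF instances
# are lists of clauses, each clause a list of signed variable indices
# (positive = literal x_i, negative = NOT x_i). Names are illustrative.
from itertools import product

def aggregate(instances):
    """Combine k Max-SAT instances on disjoint variable sets into one."""
    big, offsets, offset = [], [], 0
    for clauses in instances:
        offsets.append(offset)
        nvars = max(abs(l) for c in clauses for l in c)
        big.extend([[l + offset if l > 0 else l - offset for l in c] for c in clauses])
        offset += nvars
    return big, offsets

def split_solution(assignment, instances, offsets):
    """Project an optimal assignment of the big instance back onto each block.
    Because the objective is a sum over disjoint blocks, each projection is
    optimal for its own instance."""
    pieces = []
    for clauses, off in zip(instances, offsets):
        nvars = max(abs(l) for c in clauses for l in c)
        pieces.append({i: assignment[i + off] for i in range(1, nvars + 1)})
    return pieces

def brute_force_max_sat(clauses):
    """Exponential-time solver, used only to make the sketch self-contained."""
    nvars = max(abs(l) for c in clauses for l in c)
    best, best_assign = -1, None
    for bits in product([False, True], repeat=nvars):
        assign = {i + 1: bits[i] for i in range(nvars)}
        sat = sum(any(assign[l] if l > 0 else not assign[-l] for l in c) for c in clauses)
        if sat > best:
            best, best_assign = sat, assign
    return best_assign

if __name__ == "__main__":
    inst1 = [[1, 2], [-1, 2], [-2]]
    inst2 = [[1], [-1]]
    big, offsets = aggregate([inst1, inst2])
    opt = brute_force_max_sat(big)
    print(split_solution(opt, [inst1, inst2], offsets))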

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
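
    To make one of the surveyed notions concrete, the toy sketch below (mine, not from the survey) pairs a samplable distribution with an errorless heuristic: the algorithm may answer "don't know" but never errs, and its quality is the fraction of sampled inputs on which it gives up within its time budget. The problem, the distribution, and the budget are all illustrative assumptions.

# A toy distributional problem (L, D) with an errorless heuristic. The
# heuristic either answers correctly or returns None ("don't know"); we
# measure the empirical failure rate over inputs drawn from D.
import random

def sample_input(n, rng):
    """A samplable distribution D: a uniformly random n-bit integer."""
    return rng.getrandbits(n)

def errorless_heuristic(x):
    """Decide 'is x composite?' by trial division under a small time budget.
    Returns True/False when sure, None when the budget runs out."""
    if x < 2:
        return False
    budget = 1000  # crude stand-in for a t(n) time bound
    d = 2
    while d * d <= x and budget > 0:
        if x % d == 0:
            return True
        d += 1
        budget -= 1
    return None if d * d <= x else False

def failure_rate(n, trials=10000, seed=0):
    """Empirical fraction of inputs from D on which the heuristic gives up."""
    rng = random.Random(seed)
    failures = sum(errorless_heuristic(sample_input(n, rng)) is None for _ in range(trials))
    return failures / trials

if __name__ == "__main__":
    # A small failure rate corresponds to "easy on average" in this toy sense.
    print(failure_rate(40))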

    AM with Multiple Merlins

    We introduce and study a new model of interactive proofs: AM(k), or Arthur-Merlin with k non-communicating Merlins. Unlike with the better-known MIP, here the assumption is that each Merlin receives an independent random challenge from Arthur. One motivation for this model (which we explore in detail) comes from the close analogies between it and the quantum complexity class QMA(k), but the AM(k) model is also natural in its own right. We illustrate the power of multiple Merlins by giving an AM(2) protocol for 3SAT, in which the Merlins' challenges and responses consist of only n^{1/2+o(1)} bits each. Our protocol has the consequence that, assuming the Exponential Time Hypothesis (ETH), any algorithm for approximating a dense CSP with a polynomial-size alphabet must take n^{(log n)^{1-o(1)}} time. Algorithms nearly matching this lower bound are known, but their running times had not previously been explained. Brandao and Harrow have also recently used our 3SAT protocol to show quasipolynomial hardness for approximating the values of certain entangled games. In the other direction, we give a simple quasipolynomial-time approximation algorithm for free games, and use it to prove that, assuming the ETH, our 3SAT protocol is essentially optimal. More generally, we show that multiple Merlins never provide more than a polynomial advantage over one: that is, AM(k) = AM for all k = poly(n). The key to this result is a subsampling theorem for free games, which follows from powerful results by Alon et al. and Barak et al. on subsampling dense CSPs, and which says that the value of any free game can be closely approximated by the value of a logarithmic-sized random subgame.
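
    The notion of the value of a free game, which the subsampling theorem approximates, can be made concrete with the brute-force sketch below (my own toy, not the paper's protocol): Arthur draws independent uniform questions, the two non-communicating Merlins answer via deterministic strategies, and the value is the best achievable acceptance probability.

# Value of a free game by brute force over deterministic Merlin strategies.
# Exponential in the question sets; the subsampling theorem cited in the
# abstract says this value is well approximated on a small random subgame.
from itertools import product

def free_game_value(X, Y, A, B, V):
    """max over f: X->A, g: Y->B of E_{x,y uniform}[ V(x, y, f(x), g(y)) ]."""
    best = 0.0
    for f_choices in product(A, repeat=len(X)):        # all strategies for Merlin 1
        f = dict(zip(X, f_choices))
        for g_choices in product(B, repeat=len(Y)):    # all strategies for Merlin 2
            g = dict(zip(Y, g_choices))
            val = sum(V(x, y, f[x], g[y]) for x in X for y in Y) / (len(X) * len(Y))
            best = max(best, val)
    return best

if __name__ == "__main__":
    # Tiny CHSH-style predicate: accept iff a XOR b == x AND y.
    X = Y = A = B = [0, 1]
    V = lambda x, y, a, b: int((a ^ b) == (x & y))
    print(free_game_value(X, Y, A, B, V))  # classical value 0.75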

    NP-hardness of circuit minimization for multi-output functions

    Can we design efficient algorithms for finding fast algorithms? This question is captured by various circuit minimization problems, and algorithms for the corresponding tasks have significant practical applications. Following the work of Cook and Levin in the early 1970s, a central question is whether minimizing the circuit size of an explicitly given function is NP-complete. While this is known to hold in restricted models such as DNFs, making progress with respect to more expressive classes of circuits has been elusive. In this work, we establish the first NP-hardness result for circuit minimization of total functions in the setting of general (unrestricted) Boolean circuits. More precisely, we show that computing the minimum circuit size of a given multi-output Boolean function f : {0,1}^n → {0,1}^m is NP-hard under many-one polynomial-time randomized reductions. Our argument builds on a simpler NP-hardness proof for the circuit minimization problem for (single-output) Boolean functions under an extended set of generators. Complementing these results, we investigate the computational hardness of minimizing communication. We establish that several variants of this problem are NP-hard under deterministic reductions. In particular, unless P = NP, no polynomial-time computable function can approximate the deterministic two-party communication complexity of a partial Boolean function up to a polynomial. This has consequences for the class of structural results that one might hope to show about the communication complexity of partial functions.
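
    For intuition about what the multi-output Minimum Circuit Size Problem asks (the hardness proof itself is far more involved), here is a brute-force sketch over the AND/OR/NOT basis. The gate set, the convention of counting NOT gates, and all names are assumptions of this illustration, not the paper's.

# Brute-force multi-output MCSP: given truth tables of f: {0,1}^n -> {0,1}^m,
# find the fewest AND/OR/NOT gates in one circuit whose wires include every
# output. Truth tables are bitmasks over the 2^n inputs; practical only for
# very small n and m.
def min_circuit_size(n, outputs, max_gates=8):
    width = 1 << n
    full = (1 << width) - 1
    # Truth tables of the n input variables.
    inputs = []
    for i in range(n):
        t = 0
        for a in range(width):
            if (a >> i) & 1:
                t |= 1 << a
        inputs.append(t)
    targets = set(outputs)

    def search(wires, gates_left):
        if targets <= set(wires):
            return True
        if gates_left == 0:
            return False
        seen = set(wires)
        candidates = set()
        for i, u in enumerate(wires):
            candidates.add(full ^ u)              # NOT u
            for v in wires[i + 1:]:
                candidates.add(u & v)             # u AND v
                candidates.add(u | v)             # u OR v
        return any(search(wires + [w], gates_left - 1) for w in candidates - seen)

    for size in range(max_gates + 1):             # iterative deepening on gate count
        if search(list(inputs), size):
            return size
    return None  # no circuit found within max_gates

if __name__ == "__main__":
    # Two-output function on 2 inputs: (XOR, AND), i.e. a half adder.
    width = 4
    xor = sum(1 << a for a in range(width) if (a & 1) ^ ((a >> 1) & 1))
    and_ = sum(1 << a for a in range(width) if (a & 1) & ((a >> 1) & 1))
    print(min_circuit_size(2, [xor, and_]))  # -> 4 (AND, OR, NOT, AND)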

    Independent Set, Induced Matching, and Pricing: Connections and Tight (Subexponential Time) Approximation Hardnesses

    We present a series of almost settled inapproximability results for three fundamental problems. The first in our series is the subexponential-time inapproximability of the maximum independent set problem, a question studied in the area of parameterized complexity. The second is the hardness of approximating the maximum induced matching problem on bounded-degree bipartite graphs. The last in our series is the tight hardness of approximating the k-hypergraph pricing problem, a fundamental problem arising from the area of algorithmic game theory. In particular, assuming the Exponential Time Hypothesis, our two main results are:
    - For any r larger than some constant, any r-approximation algorithm for the maximum independent set problem must run in at least 2^{n^{1-\epsilon}/r^{1+\epsilon}} time. This nearly matches the upper bound of 2^{n/r} (Cygan et al., 2008). It also improves some hardness results in the domain of parameterized complexity (e.g., Escoffier et al., 2012 and Chitnis et al., 2013).
    - For any k larger than some constant, there is no polynomial-time min(k^{1-\epsilon}, n^{1/2-\epsilon})-approximation algorithm for the k-hypergraph pricing problem, where n is the number of vertices in an input graph. This almost matches the upper bound of min(O(k), \tilde O(\sqrt{n})) (by Balcan and Blum, 2007 and an algorithm in this paper).
    We note the interesting fact that, in contrast to the n^{1/2-\epsilon} hardness for polynomial-time algorithms, the k-hypergraph pricing problem admits an n^{\delta}-approximation for any \delta > 0 in quasi-polynomial time. This puts the problem in a rare approximability class in which approximability thresholds can be improved significantly by allowing algorithms to run in quasi-polynomial time.
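
    The 2^{n/r}-time upper bound mentioned in the first result comes from a simple folklore scheme: split the vertices into r blocks, solve each induced subgraph by brute force, and return the best block solution; some block contains at least OPT/r vertices of an optimum solution, so this is an r-approximation. The sketch below is my own rendition of that idea, not code from the paper.

# r-approximation for Maximum Independent Set in roughly 2^{n/r} * poly(n) time.
from itertools import combinations

def is_independent(vertices, adj):
    return all(v not in adj[u] for u, v in combinations(vertices, 2))

def max_independent_set_in_block(block, adj):
    """Brute force over subsets of one block, largest first."""
    for size in range(len(block), 0, -1):
        for subset in combinations(block, size):
            if is_independent(subset, adj):
                return list(subset)
    return []

def approx_independent_set(n, edges, r):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    blocks = [list(range(i, n, r)) for i in range(r)]   # r blocks of size about n/r
    best = []
    for block in blocks:
        candidate = max_independent_set_in_block(block, adj)
        if len(candidate) > len(best):
            best = candidate
    # Some block holds >= OPT/r vertices of an optimum solution, and those
    # vertices stay independent in the induced subgraph, so |best| >= OPT/r.
    return best

if __name__ == "__main__":
    # 6-cycle: optimum is 3; with r = 2 the guarantee is >= 3/2, i.e. at least 2.
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
    print(approx_independent_set(6, edges, r=2))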

    Conspiracies between learning algorithms, circuit lower bounds, and pseudorandomness

    We prove several results giving new and stronger connections between learning theory, circuit complexity, and pseudorandomness. Let C be any typical class of Boolean circuits, and let C[s(n)] denote n-variable C-circuits of size ≤ s(n). We show:
    Learning Speedups. If C[poly(n)] admits a randomized weak learning algorithm under the uniform distribution with membership queries that runs in time 2^n/n^{ω(1)}, then for every k ≥ 1 and ε > 0 the class C[n^k] can be learned to high accuracy in time O(2^{n^ε}). There is ε > 0 such that C[2^{n^ε}] can be learned in time 2^n/n^{ω(1)} if and only if C[poly(n)] can be learned in time 2^{(log n)^{O(1)}}.
    Equivalences between Learning Models. We use learning speedups to obtain equivalences between various randomized learning and compression models, including sub-exponential time learning with membership queries, sub-exponential time learning with membership and equivalence queries, probabilistic function compression, and probabilistic average-case function compression.
    A Dichotomy between Learnability and Pseudorandomness. In the non-uniform setting, there is non-trivial learning for C[poly(n)] if and only if there are no exponentially secure pseudorandom functions computable in C[poly(n)].
    Lower Bounds from Nontrivial Learning. If for each k ≥ 1, (depth-d)-C[n^k] admits a randomized weak learning algorithm with membership queries under the uniform distribution that runs in time 2^n/n^{ω(1)}, then for each k ≥ 1, BPE ⊄ (depth-d)-C[n^k]. If for some ε > 0 there are P-natural proofs useful against C[2^{n^ε}], then ZPEXP ⊄ C[poly(n)].
    Karp-Lipton Theorems for Probabilistic Classes. If there is a k > 0 such that BPE ⊆ i.o.Circuit[n^k], then BPEXP ⊆ i.o.EXP/O(log n). If ZPEXP ⊆ i.o.Circuit[2^{n/3}], then ZPEXP ⊆ i.o.ESUBEXP.
    Hardness Results for MCSP. All functions in non-uniform NC^1 reduce to the Minimum Circuit Size Problem via truth-table reductions computable by TC^0 circuits. In particular, if MCSP ∈ TC^0 then NC^1 = TC^0.
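
    The dichotomy between learnability and pseudorandomness has a simple operational reading: a learner for a class doubles as a distinguisher against any pseudorandom function family computable in that class, because a hypothesis that predicts fresh inputs well is impossible for a truly random function. The toy below (entirely mine, using a trivially learnable class of "dictator" functions) illustrates only that reading; it is not the paper's argument.

# Learner-as-distinguisher toy: the learnable class is {f_j(x) = x_j}.
import random

N = 16

def learn_dictator(oracle):
    """Membership-query learner: probe the unit vectors to find the coordinate."""
    for j in range(N):
        e_j = [1 if i == j else 0 for i in range(N)]
        if oracle(e_j) == 1:
            return j
    return 0  # fallback if the oracle is not actually a dictator

def distinguisher(oracle, tests=200, seed=0):
    """Accept (True) if the learned hypothesis predicts fresh random inputs well,
    something no truly random function allows."""
    rng = random.Random(seed)
    j = learn_dictator(oracle)
    hits = sum(oracle(x) == x[j]
               for x in ([rng.randint(0, 1) for _ in range(N)] for _ in range(tests)))
    return hits / tests > 0.9

if __name__ == "__main__":
    dictator = lambda x: x[3]          # a member of the learnable class
    table = {}
    def random_function(x):            # lazily sampled truly random function
        key = tuple(x)
        if key not in table:
            table[key] = random.randint(0, 1)
        return table[key]
    print(distinguisher(dictator), distinguisher(random_function))  # True False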