    An Atypical Survey of Typical-Case Heuristic Algorithms

    Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice.

    Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News.
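    One concrete example of the kind of collapse result this survey covers (an illustration we add here; the theorem is due to Ogiwara and Watanabe, 1991, both named above): if SAT is polynomial-time bounded-truth-table reducible to a sparse set, then P = NP. In LaTeX,

        \[
            \mathrm{SAT} \le_{\mathrm{btt}}^{p} S \ \text{ for some sparse } S
            \;\Longrightarrow\; \mathrm{P} = \mathrm{NP}.
        \]

    This bears directly on heuristics: a polynomial-time heuristic for SAT that erred only on a sparse set E of inputs would yield a 1-truth-table reduction from SAT to E (run the heuristic, then flip its answer exactly when the input lies in E), so even that level of heuristic success would already force P = NP.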

    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: We show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
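    The “boosting” theorem can be restated as a single implication (our paraphrase of the abstract; NSIZE(poly) denotes poly-size nondeterministic circuits, and SIZE^{NP_||}(poly) denotes poly-size circuits making non-adaptive NP oracle queries):

        \[
            \mathrm{EXP} \not\subseteq \mathrm{NSIZE}(\mathrm{poly})
            \;\Longrightarrow\;
            \mathrm{EXP} \not\subseteq \mathrm{SIZE}^{\mathrm{NP}_{\parallel}}(\mathrm{poly}).
        \]

    Since a single non-adaptive NP oracle query can decide whether a nondeterministic circuit accepts, the converse containment is immediate; this is why the various hardness assumptions used to derandomize AM all turn out to be equivalent.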

    New Lower Bounds and Derandomization for ACC, and a Derandomization-Centric View on the Algorithmic Method

    In this paper, we obtain several new results on lower bounds and derandomization for ACC^0 circuits (constant-depth circuits consisting of AND/OR/MOD_m gates for a fixed constant m, a frontier class in circuit complexity):

    1) We prove that any polynomial-time Merlin-Arthur proof system with an ACC^0 verifier (denoted by MA_{ACC^0}) can be simulated by a nondeterministic proof system with quasi-polynomial running time and polynomial proof length, on infinitely many input lengths. This improves the previous simulation by [Chen, Lyu, and Williams, FOCS 2020], which requires both quasi-polynomial running time and proof length.

    2) We show that MA_{ACC^0} cannot be computed by fixed-polynomial-size ACC^0 circuits, and our hard languages are hard on a sufficiently dense set of input lengths.

    3) We show that NEXP (nondeterministic exponential time) does not have ACC^0 circuits of sub-half-exponential size, improving the previous sub-third-exponential size lower bound for NEXP against ACC^0 by [Williams, J. ACM 2014].

    Combining our first and second results gives a conceptually simpler and derandomization-centric proof of the recent breakthrough result NQP := NTIME[2^polylog(n)] ⊄ ACC^0 by [Murray and Williams, SICOMP 2020]: instead of going through an easy witness lemma as they did, we first prove an ACC^0 lower bound for a subclass of MA, and then derandomize that subclass into NQP while retaining its hardness against ACC^0. Moreover, since our derandomization of MA_{ACC^0} achieves polynomial proof length, we in fact prove that nondeterministic quasi-polynomial time with n^O(1) nondeterminism bits (denoted as NTIMEGUESS[2^polylog(n), n^O(1)]) has no poly(n)-size ACC^0 circuits, giving a new proof of a result by Vyas. Combining this with a win-win argument based on randomized encodings from [Chen and Ren, STOC 2020], we also prove that NTIMEGUESS[2^polylog(n), n^O(1)] cannot be (1/2+1/poly(n))-approximated by poly(n)-size ACC^0 circuits, improving the recent strongly average-case lower bounds for NQP against ACC^0 by [Chen and Ren, STOC 2020].

    One interesting technical ingredient behind our second result is the construction of a PSPACE-complete language that is paddable, downward self-reducible, same-length checkable, and weakly error correctable. Moreover, all its reducibility properties have corresponding AC^0[2] non-adaptive oracle circuits. Our construction builds on and improves upon similar constructions from [Trevisan and Vadhan, Complexity 2007] and [Chen, FOCS 2019], all of which require at least TC^0 oracle circuits for implementing these properties.
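    In symbols, the two headline lower bounds read as follows (our rendering of the abstract's statements; a function f is sub-half-exponential if, roughly, composing it twice, with polynomial slack, still grows more slowly than 2^n):

        \[
            \mathrm{NQP} := \mathrm{NTIME}[2^{\mathrm{polylog}(n)}] \not\subseteq \mathrm{ACC}^0,
            \qquad
            \mathrm{NEXP} \not\subseteq \mathrm{ACC}^0\text{-}\mathrm{SIZE}(f(n)) \ \text{ for every sub-half-exponential } f.
        \]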

    Succinct Hitting Sets and Barriers to Proving Lower Bounds for Algebraic Circuits

    We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich (1997) for Boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike in the Boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining super-polynomial lower bounds for general algebraic circuits, as there is little understanding of whether algebraic circuits are expressive enough to support “cryptography” secure against algebraic circuits.

    Following a similar result of Williams (2016) in the Boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem, that is, to the existence of a hitting set, for the class of poly(N)-degree poly(N)-size circuits, which consists of coefficient vectors of polynomials of polylog(N) degree with polylog(N)-size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices.

    We also assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them is succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier.

    Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni (2001), except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry. Nevertheless, our succinct hitting sets have relevance to the GCT program, as they imply lower bounds for the complexity of the defining equations of polynomials computed by small circuits.

    A conference version of this paper appeared in the Proceedings of the 49th Annual ACM Symposium on Theory of Computing (STOC 2017).
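    Spelled out (our formalization of the definition used in the abstract): a hitting set H for a class C of N-variate polynomials satisfies

        \[
            \forall f \in \mathcal{C} \setminus \{0\} \;\; \exists \alpha \in \mathcal{H} : \; f(\alpha) \neq 0,
        \]

    and H is succinct if every point α ∈ H is the coefficient vector of a polynomial of polylog(N) degree computed by a polylog(N)-size circuit. A natural-proofs-style distinguisher would be a nonzero low-complexity polynomial vanishing on the coefficient vectors of all easy polynomials, and this is exactly what a succinct hitting set rules out.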

    Instance-Wise Hardness Versus Randomness Tradeoffs for Arthur-Merlin Protocols
