17 research outputs found

    An Atypical Survey of Typical-Case Heuristic Algorithms

    Heuristic approaches often do so well that they seem to nearly always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice. Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News.

    On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization

    The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G: {0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP = P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B, and functions computable in time 2^{B · n} that cannot be computed by circuits of size 2^{β · n}. Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) construct "extreme high-end PRGs" with seed length r = (1+o(1)) · log m, under qualitatively stronger assumptions. We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1-o(1) and B = 1+o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:
    - The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor). To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and this may explain why it is difficult to design PRG constructions.
    - The construction of Chen and Tell composes two PRGs: G_1: {0,1}^{(1+o(1)) · log m} → {0,1}^{r_2}, where r_2 = m^{Ω(1)}, and G_2: {0,1}^{r_2} → {0,1}^m. The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)} and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to 1/2 + 1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G_1 from the extreme high-end hardness assumption. The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
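
    For orientation, the fooling requirement and the two hardness regimes referred to above can be written out as follows; this is a paraphrase for readability (with the error bound 1/m taken as the conventional choice), not the paper's exact formalization.

        % A generator G: {0,1}^r -> {0,1}^m fools circuits of size m if, for every circuit C of size m,
        \Bigl|\; \Pr_{x \sim \{0,1\}^r}\bigl[C(G(x)) = 1\bigr] \;-\; \Pr_{y \sim \{0,1\}^m}\bigl[C(y) = 1\bigr] \;\Bigr| \;\le\; \frac{1}{m}.
        % High-end hardness: there are constants 0 < \beta < 1 < B and a function computable
        % in time 2^{B n} that has no circuits of size 2^{\beta n}.
        % Extreme high-end hardness: the same statement with \beta = 1 - o(1) and B = 1 + o(1),
        % the regime targeted by PRGs with seed length r = (1 + o(1)) \log m.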

    Quantified Derandomization of Linear Threshold Circuits

    One of the prominent current challenges in complexity theory is the attempt to prove lower bounds for TC^0, the class of constant-depth, polynomial-size circuits with majority gates. Relying on the results of Williams (2013), an appealing approach to prove such lower bounds is to construct a non-trivial derandomization algorithm for TC^0. In this work we take a first step towards the latter goal, by proving the first positive results regarding the derandomization of TC^0 circuits of depth d > 2.
    Our first main result is a quantified derandomization algorithm for TC^0 circuits with a super-linear number of wires. Specifically, we construct an algorithm that gets as input a TC^0 circuit C over n input bits with depth d and n^{1+exp(-d)} wires, runs in almost-polynomial time, and distinguishes between the case that C rejects at most 2^{n^{1-1/5d}} inputs and the case that C accepts at most 2^{n^{1-1/5d}} inputs. In fact, our algorithm works even when the circuit C is a linear threshold circuit, rather than just a TC^0 circuit (i.e., C is a circuit with linear threshold gates, which are stronger than majority gates).
    Our second main result is that even a modest improvement of our quantified derandomization algorithm would yield a non-trivial algorithm for standard derandomization of all of TC^0, and would consequently imply that NEXP ⊈ TC^0. Specifically, if there exists a quantified derandomization algorithm that gets as input a TC^0 circuit with depth d and n^{1+O(1/d)} wires (rather than n^{1+exp(-d)} wires), runs in time at most 2^{n^{exp(-d)}}, and distinguishes between the case that C rejects at most 2^{n^{1-1/5d}} inputs and the case that C accepts at most 2^{n^{1-1/5d}} inputs, then there exists an algorithm with running time 2^{n^{1-Ω(1)}} for standard derandomization of TC^0. Comment: Changes in this revision: an additional result (a PRG for quantified derandomization of depth-2 LTF circuits); rewrite of some of the exposition; minor corrections.
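
    For a sense of scale, the trivial quantified derandomization algorithm (not the paper's) already works for any bound B(n) that is small enough to enumerate: under the promise that C either rejects at most B inputs or accepts at most B inputs, evaluating C on any 2B+1 distinct inputs and taking the majority vote decides which case holds. A minimal sketch, with the circuit modeled as a plain Python callable:

        from itertools import product

        def trivial_quantified_derand(circuit, n, B):
            """Decide the promise problem: `circuit` (a map {0,1}^n -> {0,1}) either
            rejects at most B inputs or accepts at most B inputs.  Evaluate it on
            2B+1 distinct inputs and return the majority answer; this costs about
            B(n) evaluations, which is why B(n) = 2^{n^{1-1/5d}} is a non-trivial regime."""
            assert 2 * B + 1 <= 2 ** n, "need 2B+1 distinct inputs"
            accept = reject = 0
            for i, x in enumerate(product((0, 1), repeat=n)):
                if i == 2 * B + 1:
                    break
                if circuit(x):
                    accept += 1
                else:
                    reject += 1
            # Under the promise, the majority among 2B+1 distinct inputs is correct:
            # e.g. if at most B inputs are rejected, at least B+1 of the samples accept.
            return "accepts all but at most B inputs" if accept > reject \
                else "rejects all but at most B inputs"

        # Toy usage: a "circuit" that rejects only the all-zeros input (so B = 1 suffices).
        print(trivial_quantified_derand(lambda x: any(x), n=6, B=1))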

    Improved Bounds for Quantified Derandomization of Constant-Depth Circuits and Polynomials

    This work studies the question of quantified derandomization, which was introduced by Goldreich and Wigderson (STOC 2014). The generic quantified derandomization problem is the following: for a circuit class \mathcal{C} and a parameter B = B(n), given a circuit C in \mathcal{C} with n input bits, decide whether C rejects all of its inputs, or accepts all but at most B(n) of its inputs. In the current work we consider three settings for this question. In each setting, we bring closer together the parameter settings for which we can unconditionally construct relatively fast quantified derandomization algorithms, and the "threshold" values (of the parameters) for which any quantified derandomization algorithm implies a similar algorithm for standard derandomization.
    For constant-depth circuits, we construct an algorithm for quantified derandomization that works for a parameter B(n) that is only slightly smaller than a "threshold" parameter, and is significantly faster than the best currently-known algorithms for standard derandomization. On the way to this result we establish a new derandomization of the switching lemma, which significantly improves on previous results when the width of the formula is small.
    For constant-depth circuits with parity gates, we lower a "threshold" of Goldreich and Wigderson from depth five to depth four, and construct algorithms for quantified derandomization of a remaining type of layered depth-3 circuit that they left as an open problem. We also consider the question of constructing hitting-set generators for multivariate polynomials over large fields that vanish rarely, and prove two lower bounds on the seed length of such generators.
    Several of our proofs rely on an interesting technique, which we call the randomized tests technique. Intuitively, a standard technique to deterministically find a "good" object is to construct a simple deterministic test that decides the set of good objects, and then "fool" that test using a pseudorandom generator. We show that a similar approach works also if the simple deterministic test is replaced with a distribution over simple tests, and demonstrate the benefits in using a distribution instead of a single test.
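
    The "randomized tests" paragraph above refers to a standard pattern that is easy to sketch: to deterministically find a good object, enumerate the seeds of a generator that fools a simple test recognizing good objects. The generator and test below are placeholders (and the paper's actual contribution, replacing the single test by a distribution over tests, is not reproduced here):

        from itertools import product

        def find_good_object(generator, seed_len, test):
            """Deterministic search by fooling a test.  If most strings pass `test`
            and `generator` fools `test` (its outputs pass `test` almost as often as
            uniform inputs do), then some seed yields a passing object, and scanning
            the 2^seed_len seeds finds it."""
            for seed in product((0, 1), repeat=seed_len):
                candidate = generator(seed)
                if test(candidate):
                    return candidate
            return None  # unreachable if the generator really fools the test

        # Placeholder instantiation: "generator" pads the seed with zeros, "test" wants a 1-bit.
        pad_with_zeros = lambda seed: tuple(seed) + (0,) * 10
        has_a_one = lambda x: any(x)
        print(find_good_object(pad_with_zeros, seed_len=3, test=has_a_one))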

    Deterministic search for CNF satisfying assignments in almost polynomial time

    We consider the fundamental derandomization problem of deterministically finding a satisfying assignment to a CNF formula that has many satisfying assignments. We give a deterministic algorithm which, given an n-variable poly(n)-clause CNF formula F that has at least ε · 2^n satisfying assignments, runs in time n^{\tilde{O}(\log\log n)^2} for ε ≥ 1/polylog(n) and outputs a satisfying assignment of F. Prior to our work the fastest known algorithm for this problem was simply to enumerate over all seeds of a pseudorandom generator for CNFs; using the best known PRGs for CNFs [DETT10], this takes time n^{\tilde{\Omega}(\log n)} even for constant ε. Our approach is based on a new general framework relating deterministic search and deterministic approximate counting, which we believe may find further applications.
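
    The enumeration baseline mentioned above is simple to sketch. The generator below is a stand-in: a real instantiation would be a PRG fooling CNFs with error below ε (e.g. the one from [DETT10]), and its seed length is what drives the quoted running time.

        from itertools import product

        def search_by_prg_enumeration(cnf, num_vars, generator, seed_len):
            """Baseline deterministic search: if `cnf` is satisfied by at least an
            eps fraction of assignments and `generator` fools CNFs with error < eps,
            some output of the generator satisfies `cnf`; enumerate all 2^seed_len seeds.
            A clause is a list of signed, 1-based literals."""
            def satisfies(assignment):
                return all(
                    any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                    for clause in cnf
                )
            for seed in product((0, 1), repeat=seed_len):
                assignment = generator(seed, num_vars)
                if satisfies(assignment):
                    return assignment
            return None

        # Placeholder "generator": repeat the seed cyclically (not an actual PRG for CNFs).
        cyclic_stretch = lambda seed, n: [bool(seed[i % len(seed)]) for i in range(n)]
        formula = [[1, 2], [-1, 3], [2, -3]]   # (x1 v x2) & (~x1 v x3) & (x2 v ~x3)
        print(search_by_prg_enumeration(formula, num_vars=3, generator=cyclic_stretch, seed_len=3))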

    On Hitting-Set Generators for Polynomials That Vanish Rarely

    The problem of constructing hitting-set generators for polynomials of low degree is fundamental in complexity theory and has numerous well-known applications. We study the following question, which is a relaxation of this problem: Is it easier to construct a hitting-set generator for polynomials p: 𝔽^n → 𝔽 of degree d if we are guaranteed that the polynomial vanishes on at most an ε > 0 fraction of its inputs? We will specifically be interested in tiny values of ε ≪ d/|𝔽|. This question was first considered by Goldreich and Wigderson (STOC 2014), who studied a specific setting geared for a particular application, and another specific setting was later studied by the third author (CCC 2017).
    In this work our main interest is a systematic study of the relaxed problem, in its general form, and we prove results that significantly improve and extend the two previously-known results. Our contributions are of two types:
    - Over fields of size 2 ≤ |𝔽| ≤ poly(n), we show that the seed length of any hitting-set generator for polynomials of degree d ≤ n^{.49} that vanish on at most ε = |𝔽|^{-t} of their inputs is at least Ω((d/t) · log(n)).
    - Over 𝔽_2, we show that there exists a (non-explicit) hitting-set generator for polynomials of degree d ≤ n^{.99} that vanish on at most ε = |𝔽|^{-t} of their inputs with seed length O((d-t) · log(n)). We also show a polynomial-time computable hitting-set generator with seed length O((d-t) · (2^{d-t} + log(n))).
    In addition, we prove that the problem we study is closely related to the following question: "Does there exist a small set S ⊆ 𝔽^n whose degree-d closure is very large?", where the degree-d closure of S is the variety induced by the set of degree-d polynomials that vanish on S.
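
    For concreteness, the relaxed object studied above can be phrased as follows (a paraphrase of the standard definition, with ε the bound on the vanishing probability):

        % H: {0,1}^s -> F^n is a hitting-set generator for degree-d polynomials that vanish
        % rarely if, for every nonzero polynomial p of degree at most d with
        % Pr_{x ~ F^n}[ p(x) = 0 ] <= eps, some seed hits a non-root of p:
        \exists z \in \{0,1\}^s : \; p\bigl(H(z)\bigr) \neq 0 .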

    Applications of Derandomization Theory in Coding

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.] Comment: EPFL PhD Thesis.
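
    As background for the group-testing part, the classical noiseless decoder is easy to state: with a d-disjunct test matrix, an item is defective exactly when every test containing it comes back positive. The schemes constructed in the thesis handle unreliable outcomes and threshold queries, which this baseline does not; the matrix below (the incidence matrix of the Fano plane, which is 2-disjunct) and the defective set are purely illustrative.

        def decode_group_tests(test_matrix, outcomes):
            """Naive non-adaptive group-testing decoder for exact (noiseless) outcomes.
            test_matrix[i][j] == 1 means item j participates in test i; outcomes[i] is the
            OR of the defectiveness of the items in test i.  Declare item j defective iff
            every test containing j is positive; with a d-disjunct matrix this recovers any
            defective set of size at most d exactly."""
            num_items = len(test_matrix[0])
            return [
                j for j in range(num_items)
                if all(outcomes[i] for i, row in enumerate(test_matrix) if row[j])
            ]

        # 7 tests x 7 items: lines of the Fano plane as tests; defective set {1, 4}.
        tests = [
            [1, 1, 1, 0, 0, 0, 0],
            [1, 0, 0, 1, 1, 0, 0],
            [1, 0, 0, 0, 0, 1, 1],
            [0, 1, 0, 1, 0, 1, 0],
            [0, 1, 0, 0, 1, 0, 1],
            [0, 0, 1, 1, 0, 0, 1],
            [0, 0, 1, 0, 1, 1, 0],
        ]
        defectives = {1, 4}
        outcomes = [any(row[j] for j in defectives) for row in tests]
        print(decode_group_tests(tests, outcomes))  # prints [1, 4]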

    The Exact Complexity of Pseudorandom Functions and Tight Barriers to Lower Bound Proofs

    How much computational resource do we need for cryptography? This is an important question of both theoretical and practical interest. In this paper, we study the problem for pseudorandom functions (PRFs) in the context of circuit complexity. Perhaps surprisingly, we prove extremely tight upper and lower bounds in various circuit models.
    * In general B_2 circuits, assuming the existence of PRFs, PRFs can be constructed in 2n + o(n) size, simplifying and improving the O(n) bound by Ishai et al. (STOC 2008). We show that such a construction is almost optimal by giving an unconditional 2n - O(1) lower bound.
    * In logarithmic-depth circuits, assuming the existence of NC^1 PRFs, PRFs can be constructed in 2n + o(n) size and (1+ε) log n depth simultaneously.
    * In constant-depth linear threshold circuits, assuming the existence of TC^0 PRFs, PRFs can be constructed with wire complexity n^{1+O(1.61^{-d})}. We also give an n^{1+Ω(c^{-d})} wire complexity lower bound for some constant c.
    The upper bounds are proved with generalized Levin's trick and novel constructions of almost universal hash functions; the lower bound for general circuits is proved via a tricky but elementary wire-counting argument; and the lower bound for TC^0 circuits is proved by extracting a black-box property of TC^0 circuits from the white-box restriction lemma of Chen, Santhanam, and Srinivasan (Theory Comput. 2018). As a byproduct, we prove unconditional tight upper and lower bounds for almost universal hashing, which we believe to be of independent interest.
    Following Natural Proofs by Razborov and Rudich (J. Comput. Syst. Sci. 1997), our results make progress toward explaining the difficulty of improving known circuit lower bounds, a difficulty that has recently become more significant due to the discovery of several bootstrapping results. In TC^0, this reveals the limitation of the current restriction-based methods; in particular, it brings new insights into understanding the strange phenomenon of sharp threshold results such as the one presented by Chen and Tell (STOC 2019).
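
    The "generalized Levin's trick" mentioned above is, at its core, domain reduction by almost-universal hashing: hash the long input down to a short string and feed it to a PRF on short inputs; up to hash collisions, the result is indistinguishable from the short-input PRF. The sketch below uses a pairwise-independent GF(2)-affine hash and a placeholder short-input PRF; it illustrates the generic pattern only, not the paper's size-optimized circuits.

        import hashlib
        import random

        def sample_affine_hash(n, m, rng):
            """Pairwise-independent hash family over GF(2): h(x) = A.x XOR b, with A a
            random m-by-n bit matrix and b a random m-bit vector."""
            A = [[rng.randrange(2) for _ in range(n)] for _ in range(m)]
            b = [rng.randrange(2) for _ in range(m)]
            def h(x):  # x is a tuple of n bits
                return tuple((sum(A[i][j] & x[j] for j in range(n)) + b[i]) % 2 for i in range(m))
            return h

        def levin_style_prf(key, short_prf, n, m, rng):
            """Levin-style domain reduction: a PRF on n-bit inputs obtained by hashing to m
            bits with an (almost-)universal hash and applying a PRF on m-bit inputs; the
            security loss is bounded by the hash collision probability over the queries."""
            h = sample_affine_hash(n, m, rng)
            return lambda x: short_prf(key, h(x))

        # Placeholder short-input "PRF": one output bit derived from SHA-256 (illustration only).
        toy_short_prf = lambda key, y: hashlib.sha256(key + bytes(y)).digest()[0] & 1

        rng = random.Random(0)
        F = levin_style_prf(b"toy-key", toy_short_prf, n=64, m=16, rng=rng)
        print(F(tuple(1 if i % 3 == 0 else 0 for i in range(64))))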

    On Randomness Extraction in AC0

    We consider randomness extraction by AC0 circuits. The main parameter, n, is the length of the source, and all other parameters are functions of it. The additional extraction parameters are the min-entropy bound k = k(n), the seed length r = r(n), the output length m = m(n), and the (output) deviation bound epsilon = epsilon(n). We show that AC0-extraction of even one bit beyond the seed (i.e., m >= r+1) is possible if and only if k * r > n/poly(log(n)). For k >= n/log^(O(1))(n), we show that AC0-extraction of r + Omega(r) bits is possible when r = O(log(n)), but leave open the question of whether more bits can be extracted in this case. The impossibility result is for constant epsilon, and the possibility result supports epsilon = 1/poly(n). The impossibility result is for (possibly) non-uniform AC0, whereas the possibility result holds for uniform AC0. All our impossibility results hold even for the model of bit-fixing sources, where k coincides with the number of non-fixed (i.e., random) bits.
    We also consider deterministic AC0 extraction from various classes of restricted sources. In particular, for any constant delta > 0, we give explicit AC0 extractors for poly(1/delta) independent sources that are each of min-entropy rate delta; and four sources suffice for delta = 0.99. Also, we give non-explicit AC0 extractors for bit-fixing sources of entropy rate 1/poly(log(n)) (i.e., having n/poly(log(n)) unfixed bits). This shows that the known analysis of the "restriction method" (for making a circuit constant by fixing as few variables as possible) is tight for AC0 even if the restriction is picked deterministically depending on the circuit.
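
    For reference, the parameters n, k, r, m, and epsilon above are those of the standard seeded-extractor definition, paraphrased here (the abstract counts the output length m against the seed, so the interesting regime is m > r):

        % Ext: {0,1}^n x {0,1}^r -> {0,1}^m is a (k, eps)-extractor if, for every source X
        % over {0,1}^n with min-entropy H_\infty(X) >= k,
        \mathrm{Ext}(X, U_r) \;\text{ is } \varepsilon\text{-close in statistical distance to } U_m ,
        % and a strong extractor if this holds even jointly with the seed:
        \bigl(U_r, \mathrm{Ext}(X, U_r)\bigr) \approx_{\varepsilon} \bigl(U_r, U_m\bigr).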