
    Two-Source Condensers with Low Error and Small Entropy Gap via Entropy-Resilient Functions

    In their seminal work, Chattopadhyay and Zuckerman (STOC'16) constructed a two-source extractor with error epsilon for n-bit sources having min-entropy polylog(n/epsilon). Unfortunately, the construction's running time is poly(n/epsilon), which means that with polynomial-time constructions, only polynomially small errors are possible. Our main result is a poly(n, log(1/epsilon))-time computable two-source condenser. For any k >= polylog(n/epsilon), our condenser transforms two independent (n,k)-sources into a distribution over m = k - O(log(1/epsilon)) bits that is epsilon-close to having min-entropy m - o(log(1/epsilon)), hence achieving an entropy gap of o(log(1/epsilon)). The bottleneck for obtaining low error in recent constructions of two-source extractors lies in the use of resilient functions. Informally, a resilient function receives input bits from r players and has the property that its output has small bias even if a bounded number of corrupted players feed adversarial inputs after seeing the inputs of the other players. The drawback of using resilient functions is that the error cannot be smaller than ln r/r. This, in turn, forces the running time of the construction to be polynomial in 1/epsilon. A key component in our construction is a variant of resilient functions which we call entropy-resilient functions. This variant can be seen as playing the above game for several rounds, each round outputting one bit. The goal of the corrupted players is to reduce, with as high probability as they can, the min-entropy accumulated throughout the rounds. We show that while the bias decreases only polynomially with the number of players in a one-round game, their success probability decreases exponentially in the entropy gap they are attempting to incur in a repeated game.
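
    The one-round game described above is easiest to picture on a concrete example. The sketch below is only an illustration of that game, assuming majority as a stand-in for the resilient functions actually used and with arbitrary toy parameters: a coalition sees the honest players' bits and then votes so as to push the output toward 1.

```python
import random

def majority(bits):
    """Output 1 iff strictly more than half of the bits are 1."""
    return 1 if 2 * sum(bits) > len(bits) else 0

def estimate_bias(r, coalition_size, trials=5000, seed=0):
    """Estimate Pr[majority = 1] when `coalition_size` of the r players
    see the honest bits first and then all vote 1 (the adversarial
    strategy that pushes majority toward 1 as hard as possible)."""
    rng = random.Random(seed)
    ones = 0
    for _ in range(trials):
        honest = [rng.randint(0, 1) for _ in range(r - coalition_size)]
        ones += majority(honest + [1] * coalition_size)
    return ones / trials

if __name__ == "__main__":
    r = 301  # number of players (illustrative)
    for b in (0, 5, 30, 100):
        print(f"coalition of {b:3d} corrupted players: Pr[maj = 1] ~ {estimate_bias(r, b):.3f}")
```

    The entropy-resilient variant repeats this game over several rounds, one output bit per round, and measures how much min-entropy the coalition can destroy rather than how far it can move a single output bit.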

    Biasing Boolean Functions and Collective Coin-Flipping Protocols over Arbitrary Product Distributions

    The seminal result of Kahn, Kalai and Linial shows that a coalition of O(n/(log n)) players can bias the outcome of any Boolean function {0,1}^n -> {0,1} with respect to the uniform measure. We extend their result to arbitrary product measures on {0,1}^n, by combining their argument with a completely different argument that handles very biased input bits. We view this result as a step towards proving a conjecture of Friedgut, which states that Boolean functions on the continuous cube [0,1]^n (or, equivalently, on {1,...,n}^n) can be biased using coalitions of o(n) players. This is the first step taken in this direction since Friedgut proposed the conjecture in 2004. Russell, Saks and Zuckerman extended the result of Kahn, Kalai and Linial to multi-round protocols, showing that when the number of rounds is o(log^* n), a coalition of o(n) players can bias the outcome with respect to the uniform measure. We extend this result as well to arbitrary product measures on {0,1}^n. The argument of Russell et al. relies on the fact that a coalition of o(n) players can boost the expectation of any Boolean function from epsilon to 1-epsilon with respect to the uniform measure. This fails for general product distributions, as the example of the AND function with respect to mu_{1-1/n} shows. Instead, we use a novel boosting argument alongside a generalization of our first result to arbitrary finite ranges.
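
    Spelled out, the AND example above is a one-line calculation (standard, not taken from the paper): if a coalition of t = o(n) players fixes its bits to 1, the function still needs all remaining honest bits to be 1, so under mu_{1-1/n}

```latex
\Pr_{x \sim \mu_{1-1/n}}\!\bigl[\mathrm{AND}(x) = 1 \;\big|\; t \text{ bits fixed to } 1\bigr]
  \;=\; \Bigl(1 - \tfrac{1}{n}\Bigr)^{\,n - t}
  \;\longrightarrow\; e^{-1}
  \qquad (n \to \infty,\ t = o(n)),
```

    and hence no coalition of o(n) players can boost the expectation of AND anywhere near 1 - epsilon, which is exactly where the uniform-measure boosting step breaks down.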

    Affine Extractors and AC0-Parity

    We study a simple and general template for constructing affine extractors by composing a linear transformation with resilient functions. Using this we show that good affine extractors can be computed by non-explicit circuits of various types, including AC0-Xor circuits: AC0 circuits with a layer of parity gates at the input. We also show that one-sided extractors can be computed by small DNF-Xor circuits, and separate these circuits from other well-studied classes. As a further motivation for studying DNF-Xor circuits, we show that if they can approximate inner product then small AC0-Xor circuits can compute it exactly, a long-standing open problem.
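
    For orientation, here is a minimal sketch of the template as described above: a linear transformation over GF(2) followed by a resilient-style function. Majority and the random matrix below are illustrative stand-ins, not the instantiation or parameters analyzed in the paper.

```python
import random

def random_gf2_matrix(rows, cols, rng):
    """A uniformly random rows x cols matrix over GF(2)."""
    return [[rng.randint(0, 1) for _ in range(cols)] for _ in range(rows)]

def apply_gf2(matrix, x):
    """Matrix-vector product over GF(2)."""
    return [sum(a & b for a, b in zip(row, x)) % 2 for row in matrix]

def majority(bits):
    """Stand-in for a resilient function."""
    return 1 if 2 * sum(bits) > len(bits) else 0

def affine_extractor_template(x, matrix):
    """Template: compress the source with a linear map, then apply a
    resilient-style function to the resulting bits."""
    return majority(apply_gf2(matrix, x))

if __name__ == "__main__":
    rng = random.Random(1)
    n, r = 64, 33   # source length and number of bits produced by the linear map (arbitrary)
    A = random_gf2_matrix(r, n, rng)
    x = [rng.randint(0, 1) for _ in range(n)]
    print(affine_extractor_template(x, A))
```

    A layer of parity gates at the input computes exactly such GF(2) linear forms, which is why the AC0-Xor model fits this template naturally.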

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.

    Randomness Extraction in AC0 and with Small Locality

    Randomness extractors, which extract high quality (almost-uniform) random bits from biased random sources, are important objects both in theory and in practice. While there has been significant progress in obtaining near-optimal constructions of randomness extractors in various settings, the computational complexity of randomness extractors is still much less studied. In particular, it is not clear whether randomness extractors with good parameters can be computed in several interesting complexity classes that are much weaker than P. In this paper we study randomness extractors in the following two models of computation: (1) constant-depth circuits (AC0), and (2) the local computation model. Previous work in these models, such as [Vio05a], [GVW15] and [BG13], only achieves constructions with weak parameters. In this work we give explicit constructions of randomness extractors with much better parameters. As an application, we use our AC0 extractors to study pseudorandom generators in AC0, and show that we can construct both cryptographic pseudorandom generators (under reasonable computational assumptions) and unconditional pseudorandom generators for space-bounded computation with very good parameters. Our constructions combine several previous techniques in randomness extractors, as well as introduce new techniques to reduce or preserve the complexity of extractors, which may be of independent interest. These include (1) a general way to reduce the error of strong seeded extractors while preserving the AC0 property and small locality, and (2) a seeded randomness condenser with small locality.
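
    For reference, the standard notions behind these parameters (textbook definitions, not this paper's constructions) are the following.

```latex
\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m
\text{ is a strong } (k,\varepsilon)\text{-extractor if for every source } X \text{ with } H_\infty(X) \ge k:
\quad \bigl(U_d,\ \mathrm{Ext}(X, U_d)\bigr) \approx_{\varepsilon} \bigl(U_d,\ U_m\bigr).
```

    It has locality ell if, for every fixed seed, each output bit depends on at most ell bits of the source, and it is an AC0 extractor if the map is computable by constant-depth, polynomial-size circuits.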

    On Randomness Extraction in AC0

    We consider randomness extraction by AC0 circuits. The main parameter, n, is the length of the source, and all other parameters are functions of it. The additional extraction parameters are the min-entropy bound k=k(n), the seed length r=r(n), the output length m=m(n), and the (output) deviation bound epsilon=epsilon(n). We show that AC0-extraction of r+1 bits (i.e., m >= r+1) is possible if and only if k * r > n/poly(log(n)). For k >= n/log^{O(1)}(n), we show that AC0-extraction of r+Omega(r) bits is possible when r=O(log(n)), but leave open the question of whether more bits can be extracted in this case. The impossibility result is for constant epsilon, and the possibility result supports epsilon=1/poly(n). The impossibility result is for (possibly) non-uniform AC0, whereas the possibility result holds for uniform AC0. All our impossibility results hold even for the model of bit-fixing sources, where k coincides with the number of non-fixed (i.e., random) bits. We also consider deterministic AC0 extraction from various classes of restricted sources. In particular, for any constant delta > 0, we give explicit AC0 extractors for poly(1/delta) independent sources that are each of min-entropy rate delta; and four sources suffice for delta=0.99. Also, we give non-explicit AC0 extractors for bit-fixing sources of entropy rate 1/poly(log(n)) (i.e., having n/poly(log(n)) unfixed bits). This shows that the known analysis of the "restriction method" (for making a circuit constant by fixing as few variables as possible) is tight for AC0 even if the restriction is picked deterministically depending on the circuit.
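
    As background for the bit-fixing results (standard facts, not claims of this work): in an (n,k) bit-fixing source, n-k coordinates are fixed to constants and the remaining k coordinates are independent uniform bits, so the parity of all n bits is a perfectly uniform bit whenever k >= 1; however, parity is not computable by polynomial-size AC0 circuits, which is what makes AC0 extraction from such sources non-trivial.

```latex
X = (x_1,\dots,x_n) \text{ with } k \ge 1 \text{ uniform coordinates, the rest fixed}
\;\Longrightarrow\;
x_1 \oplus x_2 \oplus \cdots \oplus x_n \sim U_1,
\qquad \text{yet parity} \notin \mathrm{AC}^0 .
```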

    An average-case depth hierarchy theorem for Boolean circuits

    We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates. Our hierarchy theorem says that for every d >= 2, there is an explicit n-variable Boolean function f, computed by a linear-size depth-d formula, which is such that any depth-(d-1) circuit that agrees with f on a (1/2 + o_n(1)) fraction of all inputs must have size exp(n^{Omega(1/d)}). This answers an open question posed by Håstad in his Ph.D. thesis. Our average-case depth hierarchy theorem implies that the polynomial hierarchy is infinite relative to a random oracle with probability 1, confirming a conjecture of Håstad, Cai, and Babai. We also use our result to show that there is no "approximate converse" to the results of Linial, Mansour, Nisan and Boppana on the total influence of small-depth circuits, thus answering a question posed by O'Donnell, Kalai, and Hatami. A key ingredient in our proof is a notion of random projections which generalize random restrictions.
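
    The random projections mentioned above generalize classical random restrictions. As a reading aid, the sketch below (standard definition, nothing specific to this paper) samples a p-random restriction, which fixes each variable to a uniform bit with probability 1-p and leaves it free otherwise, and applies it to a Boolean function.

```python
import random

def random_restriction(n, p, rng):
    """Sample a p-random restriction on n variables: each coordinate is
    left free ('*') with probability p, otherwise fixed to a uniform bit."""
    return ['*' if rng.random() < p else rng.randint(0, 1) for _ in range(n)]

def restrict(f, rho):
    """Return the restriction of f under rho, as a function of the free
    coordinates, together with the number of free coordinates."""
    free = [i for i, v in enumerate(rho) if v == '*']
    def g(y):
        x = list(rho)
        for i, bit in zip(free, y):
            x[i] = bit
        return f(x)
    return g, len(free)

if __name__ == "__main__":
    rng = random.Random(0)
    n = 16
    def parity(x):                      # example target function
        return sum(x) % 2
    rho = random_restriction(n, p=0.25, rng=rng)
    g, k = restrict(parity, rho)
    print(f"{k} variables left free; restricted value on the all-zeros input: {g([0] * k)}")
```

    Switching-lemma style arguments use such restrictions to simplify small-depth circuits; the projections of the paper additionally allow variables to be identified with one another, which is, roughly, the generalization referred to in the abstract.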