
    Efficiently Extracting Randomness from Imperfect Stochastic Processes

    We study the problem of extracting a prescribed number of random bits by reading the smallest possible number of symbols from non-ideal stochastic processes. The related interval algorithm proposed by Han and Hoshi has asymptotically optimal performance; however, it assumes that the distribution of the input stochastic process is known. The motivation for our work is the fact that, in practice, sources of randomness have inherent correlations and are affected by measurement noise, so it is hard to obtain an accurate estimate of the distribution. This challenge was addressed by the concepts of seeded and seedless extractors, which can handle general random sources with unknown distributions. However, known seeded and seedless extractors achieve extraction efficiencies that are substantially smaller than Shannon's entropy limit. Our main contribution is the design of extractors that have variable input length and fixed output length, are efficient in the consumption of symbols from the source, are capable of generating random bits from general stochastic processes, and approach the information-theoretic upper bound on efficiency. Comment: 2 columns, 16 pages
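
    The paper's constructions are not reproduced here, but as a minimal illustration of the variable-input-length, fixed-output-length interface the abstract describes, the classical von Neumann procedure below reads symbol pairs from an i.i.d. binary source of unknown bias and stops once the prescribed number of unbiased bits has been produced. Its efficiency is far below the entropy limit that the paper's extractors approach; the source model and target length in the sketch are illustrative assumptions.

        import random

        def von_neumann_extract(next_bit, m):
            """Toy variable-input, fixed-output extractor: reads pairs of bits
            from next_bit() (an i.i.d. source with unknown bias) and emits one
            unbiased bit per discordant pair, stopping after m output bits.
            Returns (output_bits, number_of_source_symbols_read)."""
            out, consumed = [], 0
            while len(out) < m:
                a, b = next_bit(), next_bit()
                consumed += 2
                if a != b:            # Pr[01] == Pr[10] for any fixed bias
                    out.append(a)
            return out, consumed

        # Example: a source with unknown bias (here 0.7 toward 1).
        biased = lambda: 1 if random.random() < 0.7 else 0
        bits, n_read = von_neumann_extract(biased, 16)
        print(bits, "using", n_read, "source symbols")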

    Non-Malleable Extractors and Codes, with their Many Tampered Extensions

    Randomness extractors and error-correcting codes are fundamental objects in computer science. Recently, there have been several natural generalizations of these objects in the context of tamper-resilient cryptography: seeded non-malleable extractors, introduced in [DW09]; seedless non-malleable extractors, introduced in [CG14b]; and non-malleable codes, introduced in [DPW10]. However, explicit constructions of non-malleable extractors appear to be hard, and the known constructions are far behind their non-tampered counterparts. In this paper we make progress towards solving the above problems. Our contributions are as follows. (1) We construct an explicit seeded non-malleable extractor for min-entropy $k \geq \log^2 n$. This dramatically improves all previous results and gives a simpler 2-round privacy amplification protocol with optimal entropy loss, matching the best known result in [Li15b]. (2) We construct the first explicit non-malleable two-source extractor for min-entropy $k \geq n-n^{\Omega(1)}$, with output size $n^{\Omega(1)}$ and error $2^{-n^{\Omega(1)}}$. (3) We initiate the study of two natural generalizations of seedless non-malleable extractors and non-malleable codes, where the sources or the codeword may be tampered many times. We construct the first explicit non-malleable two-source extractor with tampering degree $t$ up to $n^{\Omega(1)}$, which works for min-entropy $k \geq n-n^{\Omega(1)}$, with output size $n^{\Omega(1)}$ and error $2^{-n^{\Omega(1)}}$. We show that we can efficiently sample uniformly from any pre-image. By the connection in [CG14b], we also obtain the first explicit non-malleable codes with tampering degree $t$ up to $n^{\Omega(1)}$, relative rate $n^{\Omega(1)}/n$, and error $2^{-n^{\Omega(1)}}$. Comment: 50 pages; see paper for full abstract
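
    As context for what "seeded non-malleable" means (a definition, not the paper's construction): the extractor's output on the true seed must remain close to uniform even given the output produced on an adversarially tampered seed. The toy Python experiment below makes that definition concrete for tiny parameters, using a hypothetical mixing-based extractor and a bit-flip tampering function, and computes the relevant statistical distance exactly; the extractor itself is illustrative only and is certainly not non-malleable in general.

        from itertools import product

        N, D, M = 8, 4, 2                     # source bits, seed bits, output bits

        def ext(x, s):
            """Toy seeded extractor: each output bit is the parity of x on a
            seed-dependent mask (hypothetical mixing constants).  Used only to
            make the tampering experiment concrete."""
            bits = []
            for i in range(M):
                mask = ((s * 0x9E37 + i * 0x7F4A) >> 3) & ((1 << N) - 1)
                bits.append(bin(x & mask).count("1") & 1)
            return tuple(bits)

        def tamper(s):
            """Adversarial seed tampering with f(s) != s (here: flip the low bit)."""
            return s ^ 1

        # Non-malleability experiment: compare the joint distribution of
        # (Ext(X,S), Ext(X,f(S)), S) with (uniform, Ext(X,f(S)), S) for a flat
        # source X with min-entropy N-2 (64 equally likely values).
        source = [x for x in range(1 << N) if x >> (N - 2) == 0b11]

        real, ideal = {}, {}
        for x, s in product(source, range(1 << D)):
            w = 1.0 / (len(source) * (1 << D))
            r, rt = ext(x, s), ext(x, tamper(s))
            real[(r, rt, s)] = real.get((r, rt, s), 0.0) + w
            for u in product((0, 1), repeat=M):
                ideal[(u, rt, s)] = ideal.get((u, rt, s), 0.0) + w / (1 << M)

        keys = set(real) | set(ideal)
        sd = 0.5 * sum(abs(real.get(k, 0.0) - ideal.get(k, 0.0)) for k in keys)
        print("statistical distance of the tampering experiment:", round(sd, 4))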

    Trevisan's extractor in the presence of quantum side information

    Randomness extraction involves the processing of purely classical information and is therefore usually studied in the framework of classical probability theory. However, such a classical treatment is generally too restrictive for applications, where side information about the values taken by classical random variables may be represented by the state of a quantum system. This is particularly relevant in the context of cryptography, where an adversary may make use of quantum devices. Here, we show that the well-known construction paradigm for extractors proposed by Trevisan is sound in the presence of quantum side information. We exploit the modularity of this paradigm to give several concrete extractor constructions, which, e.g., extract all the conditional (smooth) min-entropy of the source using a seed of length poly-logarithmic in the input, or only require the seed to be weakly random. Comment: 20+10 pages; v2: extract more min-entropy, use weakly random seed; v3: extended introduction, matches published version with sections somewhat reordered
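
    The modular paradigm referenced here combines a one-bit extractor (a predicate derived from a list-decodable code, e.g. the Goldreich-Levin inner product from the Hadamard code) with a combinatorial design that tells each output bit which seed positions to read. The Python sketch below shows only that classical structure, under the simplifying assumption of disjoint seed blocks (a degenerate design with zero overlap, so no seed-length savings), and says nothing about the quantum-proof analysis that is the paper's subject.

        import random

        def one_bit_ext(x_bits, z_bits):
            """One-bit extractor from the Hadamard code: the Goldreich-Levin
            predicate <x, z> mod 2."""
            return sum(a & b for a, b in zip(x_bits, z_bits)) % 2

        def trevisan_sketch(x_bits, seed_bits, m):
            """Structural sketch of Trevisan's paradigm: the i-th output bit
            applies the one-bit extractor to the seed bits indexed by the i-th
            design set.  Here the design sets are disjoint blocks; the real
            construction uses a weak design so a short seed serves many bits."""
            t = len(x_bits)                   # each design set has size t = |x|
            assert len(seed_bits) >= m * t, "sketch needs m disjoint blocks of size t"
            design = [range(i * t, (i + 1) * t) for i in range(m)]
            return [one_bit_ext(x_bits, [seed_bits[j] for j in S]) for S in design]

        # Usage: a 16-bit source, 4 output bits.
        n, m = 16, 4
        x = [random.randint(0, 1) for _ in range(n)]
        seed = [random.randint(0, 1) for _ in range(m * n)]
        print(trevisan_sketch(x, seed, m))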

    Three-Source Extractors for Polylogarithmic Min-Entropy

    We continue the study of constructing explicit extractors for independent general weak random sources. The ultimate goal is to give a construction that matches what is given by the probabilistic method: an extractor for two independent $n$-bit weak random sources with min-entropy as small as $\log n+O(1)$. Previously, the best known result in the two-source case is an extractor by Bourgain \cite{Bourgain05}, which works for min-entropy $0.49n$; and the best known result in the general case is an earlier work of the author \cite{Li13b}, which gives an extractor for a constant number of independent sources with min-entropy $\mathsf{polylog}(n)$. However, the constant in the construction of \cite{Li13b} depends on the hidden constant in the best known seeded extractor and can be large; moreover, the error in that construction is only $1/\mathsf{poly}(n)$. In this paper, we make two important improvements over the result in \cite{Li13b}. First, we construct an explicit extractor for \emph{three} independent sources on $n$ bits with min-entropy $k \geq \mathsf{polylog}(n)$. In fact, our extractor works for one independent source with poly-logarithmic min-entropy and another independent block source with two blocks, each having poly-logarithmic min-entropy. Thus, our result is nearly optimal, and the next step would be to break the $0.49n$ barrier in two-source extractors. Second, we improve the error of the extractor from $1/\mathsf{poly}(n)$ to $2^{-k^{\Omega(1)}}$, which is almost optimal and crucial for cryptographic applications. Some of the techniques developed here may be of independent interest.
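
    For readers unfamiliar with multi-source extraction, the simplest two-source extractor is the Chor-Goldreich inner-product extractor sketched below. It only works when the two sources' min-entropies sum to noticeably more than $n$, so it is far from the polylogarithmic regime this paper targets; the toy flat sources in the usage example are illustrative assumptions.

        import random

        def inner_product_ext(x_bits, y_bits):
            """Chor-Goldreich two-source extractor: one output bit <x, y> mod 2."""
            return sum(a & b for a, b in zip(x_bits, y_bits)) % 2

        # Toy check: two independent flat 12-bit sources, each with 4 fixed bits
        # (min-entropy 8 each); estimate the bias of the output bit empirically.
        n, trials = 12, 20000

        def sample_source(fixed_prefix):
            free = [random.randint(0, 1) for _ in range(n - len(fixed_prefix))]
            return list(fixed_prefix) + free

        ones = sum(
            inner_product_ext(sample_source([1, 0, 1, 1]), sample_source([0, 0, 1, 0]))
            for _ in range(trials)
        )
        print("empirical Pr[output = 1] ~", ones / trials)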

    Samplers and Extractors for Unbounded Functions

    Blasiok (SODA'18) recently introduced the notion of a subgaussian sampler, defined as an averaging sampler for approximating the mean of functions f from {0,1}^m to the real numbers such that f(U_m) has subgaussian tails, and asked for explicit constructions. In this work, we give the first explicit constructions of subgaussian samplers (and in fact averaging samplers for the broader class of subexponential functions) that match the best known constructions of averaging samplers for [0,1]-bounded functions in the regime of parameters where the approximation error epsilon and failure probability delta are subconstant. Our constructions are established via an extension of the standard notion of randomness extractor (Nisan and Zuckerman, JCSS'96) where the error is measured by an arbitrary divergence rather than total variation distance, and a generalization of Zuckerman's equivalence (Random Struct. Alg.'97) between extractors and samplers. We believe that the framework we develop, and specifically the notion of an extractor for the Kullback-Leibler (KL) divergence, are of independent interest. In particular, KL-extractors are stronger than both standard extractors and subgaussian samplers, but we show that they exist with essentially the same parameters (constructively and non-constructively) as standard extractors.
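
    The extractor-to-sampler correspondence invoked here (Zuckerman's equivalence) has a simple shape: the sampler's sample points are the extractor's outputs over all seeds, and the mean of f is estimated by averaging f over those points. The sketch below uses a made-up seed-keyed hash (toy_ext, a hypothetical name) in place of a real extractor, so it carries no accuracy guarantee; it only shows the structural translation.

        import random, statistics

        def toy_ext(x, s, m):
            """Placeholder seeded 'extractor' mapping a source value x and seed s
            to m output bits via an ad hoc affine hash; illustrative only."""
            return ((x * (2 * s + 1) + s * s) >> 2) & ((1 << m) - 1)

        def sampler_from_extractor(x, m, d):
            """Zuckerman-style correspondence: evaluate the extractor on every
            seed to obtain the sampler's sample points."""
            return [toy_ext(x, s, m) for s in range(1 << d)]

        # Usage: estimate E[f(U_m)] for f(z) = popcount(z)/m from one sample x.
        m, d = 10, 8
        f = lambda z: bin(z).count("1") / m
        x = random.getrandbits(2 * m)
        points = sampler_from_extractor(x, m, d)
        estimate = statistics.fmean(f(z) for z in points)
        print("estimate of E[f] =", round(estimate, 3), " true value = 0.5")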

    Two-Source Condensers with Low Error and Small Entropy Gap via Entropy-Resilient Functions

    In their seminal work, Chattopadhyay and Zuckerman (STOC'16) constructed a two-source extractor with error epsilon for n-bit sources having min-entropy polylog(n/epsilon). Unfortunately, the construction's running time is poly(n/epsilon), which means that with polynomial-time constructions, only polynomially small errors are possible. Our main result is a poly(n, log(1/epsilon))-time computable two-source condenser. For any k >= polylog(n/epsilon), our condenser transforms two independent (n,k)-sources into a distribution over m = k - O(log(1/epsilon)) bits that is epsilon-close to having min-entropy m - o(log(1/epsilon)), hence achieving an entropy gap of o(log(1/epsilon)). The bottleneck for obtaining low error in recent constructions of two-source extractors lies in the use of resilient functions. Informally, a resilient function receives input bits from r players and has the property that its output has small bias even if a bounded number of corrupted players feed adversarial inputs after seeing the inputs of the other players. The drawback of using resilient functions is that the error cannot be smaller than ln r/r. This, in turn, forces the running time of the construction to be polynomial in 1/epsilon. A key component in our construction is a variant of resilient functions which we call entropy-resilient functions. This variant can be seen as playing the above game for several rounds, each round outputting one bit. The goal of the corrupted players is to reduce, with as high probability as they can, the min-entropy accumulated throughout the rounds. We show that while the bias decreases only polynomially with the number of players in a one-round game, their success probability decreases exponentially in the entropy gap they are attempting to incur in a repeated game.
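
    To make the one-round resilient-function game concrete: r players each contribute a bit, the honest players' bits are uniform, and the corrupted players choose theirs after seeing everything else. Majority is the textbook example with some resilience (each corrupted player can shift the output probability by only O(1/sqrt(r))), while a dictator function has none. The multi-round, entropy-accumulating game that defines the paper's entropy-resilient functions is not implemented here; the sketch below is only a Monte-Carlo estimate of the one-round game.

        import random

        majority = lambda bits: int(sum(bits) > len(bits) // 2)
        dictator = lambda bits: bits[-1]          # output copies one player's bit

        def adversarial_mean(f, r, q, trials=20000):
            """Estimate Pr[f = 1] when q of the r players are corrupted and,
            after seeing the honest players' uniform bits, all play 1 (a
            worst-case strategy for monotone f such as majority)."""
            hits = 0
            for _ in range(trials):
                honest = [random.randint(0, 1) for _ in range(r - q)]
                hits += f(honest + [1] * q)
            return hits / trials

        r = 101
        print("majority, 5 corrupted players :", adversarial_mean(majority, r, 5))
        print("dictator, 1 corrupted player  :", adversarial_mean(dictator, r, 1))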

    Randomness Extraction in AC0 and with Small Locality

    Randomness extractors, which extract high-quality (almost-uniform) random bits from biased random sources, are important objects both in theory and in practice. While there has been significant progress in obtaining near-optimal constructions of randomness extractors in various settings, the computational complexity of randomness extractors is still much less studied. In particular, it is not clear whether randomness extractors with good parameters can be computed in several interesting complexity classes that are much weaker than P. In this paper we study randomness extractors in the following two models of computation: (1) constant-depth circuits (AC0), and (2) the local computation model. Previous work in these models, such as [Vio05a], [GVW15] and [BG13], only achieves constructions with weak parameters. In this work we give explicit constructions of randomness extractors with much better parameters. As an application, we use our AC0 extractors to study pseudorandom generators in AC0, and show that we can construct both cryptographic pseudorandom generators (under reasonable computational assumptions) and unconditional pseudorandom generators for space-bounded computation with very good parameters. Our constructions combine several previous techniques in randomness extractors, as well as introduce new techniques to reduce or preserve the complexity of extractors, which may be of independent interest. These include (1) a general way to reduce the error of strong seeded extractors while preserving the AC0 property and small locality, and (2) a seeded randomness condenser with small locality. Comment: 62 pages
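
    To illustrate what "small locality" means for an extractor (each output bit reads only a few positions of the source), the sketch below XORs a constant number of seed-selected source bits per output bit. It is a structural illustration only, with no claimed parameters, and is not the construction of the paper; the function name and parameter choices are assumptions made for the example.

        import random

        def local_xor_ext(x_bits, seed, m, locality=3):
            """Each output bit reads only `locality` positions of the source,
            chosen by the seed, and XORs them - a picture of small locality,
            not an extractor with proven guarantees."""
            rng = random.Random(seed)             # seed drives position choices
            n = len(x_bits)
            out = []
            for _ in range(m):
                positions = rng.sample(range(n), locality)
                bit = 0
                for j in positions:
                    bit ^= x_bits[j]
                out.append(bit)
            return out

        # Usage: a 64-bit source whose last 16 positions are fixed (low entropy).
        x = [random.randint(0, 1) for _ in range(48)] + [0] * 16
        print(local_xor_ext(x, seed=2024, m=8))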