
    Linear Transformations for Randomness Extraction

    Information-efficient approaches for extracting randomness from imperfect sources have been studied extensively, but simpler and faster ones are needed for high-speed random number generation. In this paper, we focus on linear constructions, namely, applying linear transformations for randomness extraction. We show that linear transformations based on sparse random matrices are asymptotically optimal for extracting randomness from independent sources and bit-fixing sources, and that they are efficient (though not necessarily optimal) for extracting randomness from hidden Markov sources. Further study demonstrates the flexibility of such constructions with respect to source models as well as their excellent information-preserving capabilities. Since linear transformations based on sparse random matrices are computationally fast and easy to implement in hardware such as FPGAs, they are very attractive for high-speed applications. In addition, we explore explicit constructions of transformation matrices. We show that the generator matrices of primitive BCH codes are good choices, but linear transformations based on such matrices require more computational time due to their high densities. Comment: 2 columns, 14 pages.
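
    A minimal sketch of the kind of linear extraction described above: the raw bits are multiplied by a sparse random 0/1 matrix and the result is reduced modulo 2. The matrix density, output length, and RNG seeds below are illustrative choices, not parameters taken from the paper.

        import numpy as np

        def sparse_gf2_extract(bits: np.ndarray, m: int, density: float = 0.05,
                               seed: int = 0) -> np.ndarray:
            """Extract m bits via a sparse random linear map over GF(2).

            Sketch only: density, seed and output length are illustrative,
            not the constructions analyzed in the paper."""
            rng = np.random.default_rng(seed)
            n = bits.shape[0]
            # sparse random 0/1 matrix; each entry is 1 with probability `density`
            A = (rng.random((m, n)) < density).astype(np.int64)
            # GF(2) matrix-vector product: integer dot products reduced mod 2
            return (A @ bits.astype(np.int64)) % 2

        # usage: condense 1024 biased raw bits into 256 output bits
        raw = (np.random.default_rng(42).random(1024) < 0.3).astype(np.int64)
        print(sparse_gf2_extract(raw, m=256)[:16])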

    Efficiently Extracting Randomness from Imperfect Stochastic Processes

    We study the problem of extracting a prescribed number of random bits by reading the smallest possible number of symbols from non-ideal stochastic processes. The related interval algorithm proposed by Han and Hoshi has asymptotically optimal performance; however, it assumes that the distribution of the input stochastic process is known. The motivation for our work is the fact that, in practice, sources of randomness have inherent correlations and are affected by measurement noise, so it is hard to obtain an accurate estimate of their distribution. This challenge was addressed by the concepts of seeded and seedless extractors, which can handle general random sources with unknown distributions. However, known seeded and seedless extractors provide extraction efficiencies that are substantially smaller than Shannon's entropy limit. Our main contribution is the design of extractors that have variable input length and fixed output length, are efficient in the consumption of symbols from the source, are capable of generating random bits from general stochastic processes, and approach the information-theoretic upper bound on efficiency. Comment: 2 columns, 16 pages.
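
    For reference, a simplified sketch of the Han-Hoshi interval algorithm mentioned above, restricted to an i.i.d. source whose symbol probabilities are known: each symbol read refines an interval, and output bits are emitted once the interval fits inside a dyadic interval of width 2^-m. This illustrates the known-distribution baseline only, not the variable-to-fixed-length extractors constructed in the paper.

        import random

        def interval_algorithm(source, probs, m):
            """Simplified Han-Hoshi interval algorithm for an i.i.d. source with
            *known* symbol probabilities, producing m output bits.  Illustrative
            baseline; the exact algorithm's stopping rules are simplified here."""
            # cumulative distribution used to split the current interval
            cum, acc = {}, 0.0
            for s in probs:
                cum[s] = acc
                acc += probs[s]
            low, high = 0.0, 1.0
            for sym in source:
                width = high - low
                low = low + width * cum[sym]
                high = low + width * probs[sym]
                k = int(low * (1 << m))            # candidate dyadic interval [k, k+1) / 2^m
                if high <= (k + 1) / (1 << m):     # interval settled: emit the m bits of k
                    return format(k, '0{}b'.format(m))
            raise RuntimeError("source exhausted before the interval settled")

        # usage: a biased coin with known bias 0.3
        rng = random.Random(0)
        biased = (('H' if rng.random() < 0.3 else 'T') for _ in range(10000))
        print(interval_algorithm(biased, {'H': 0.3, 'T': 0.7}, 8))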

    Extracting the Kolmogorov Complexity of Strings and Sequences from Sources with Limited Independence

    An infinite binary sequence has randomness rate at least σ\sigma if, for almost every nn, the Kolmogorov complexity of its prefix of length nn is at least σn\sigma n. It is known that for every rational σ(0,1)\sigma \in (0,1), on one hand, there exists sequences with randomness rate σ\sigma that can not be effectively transformed into a sequence with randomness rate higher than σ\sigma and, on the other hand, any two independent sequences with randomness rate σ\sigma can be transformed into a sequence with randomness rate higher than σ\sigma. We show that the latter result holds even if the two input sequences have linear dependency (which, informally speaking, means that all prefixes of length nn of the two sequences have in common a constant fraction of their information). The similar problem is studied for finite strings. It is shown that from any two strings with sufficiently large Kolmogorov complexity and sufficiently small dependence, one can effectively construct a string that is random even conditioned by any one of the input strings
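
    In symbols, with $C$ denoting Kolmogorov complexity and $x[1..n]$ the length-$n$ prefix of the sequence $x$ (the shorthand $\mathrm{rate}(x)$ is ours, not the abstract's):

        \[
          \mathrm{rate}(x) \ge \sigma
          \quad\Longleftrightarrow\quad
          C\bigl(x[1..n]\bigr) \ge \sigma n \ \text{ for all but finitely many } n .
        \]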

    Impossibility of independence amplification in Kolmogorov complexity theory

    The paper studies randomness extraction from sources with bounded independence and the issue of independence amplification of sources, using the framework of Kolmogorov complexity. The dependency of strings $x$ and $y$ is ${\rm dep}(x,y) = \max\{C(x) - C(x \mid y),\, C(y) - C(y \mid x)\}$, where $C(\cdot)$ denotes Kolmogorov complexity. It is shown that there exists a computable Kolmogorov extractor $f$ such that, for any two $n$-bit strings with complexity $s(n)$ and dependency $\alpha(n)$, it outputs a string of length $s(n)$ with complexity $s(n) - \alpha(n)$ conditioned on either one of the input strings. It is proven that these are the optimal parameters a Kolmogorov extractor can achieve. It is shown that independence amplification cannot be effectively realized. Specifically, if (after excluding a trivial case) there exist computable functions $f_1$ and $f_2$ such that ${\rm dep}(f_1(x,y), f_2(x,y)) \leq \beta(n)$ for all $n$-bit strings $x$ and $y$ with ${\rm dep}(x,y) \leq \alpha(n)$, then $\beta(n) \geq \alpha(n) - O(\log n)$.
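
    Loosely restated in the same notation (a paraphrase of the two statements above, not a quotation of the paper's theorems; all strings have length $n$):

        % Positive result: a computable Kolmogorov extractor f exists.
        \[
          C(x) = C(y) = s(n),\ \mathrm{dep}(x,y) \le \alpha(n)
          \;\Longrightarrow\;
          |f(x,y)| = s(n),\quad
          C\bigl(f(x,y)\mid x\bigr),\, C\bigl(f(x,y)\mid y\bigr) \ge s(n) - \alpha(n).
        \]
        % Negative result: dependency cannot be effectively reduced by much.
        \[
          \Bigl[\,\forall x,y:\ \mathrm{dep}(x,y) \le \alpha(n) \Rightarrow
          \mathrm{dep}\bigl(f_1(x,y), f_2(x,y)\bigr) \le \beta(n)\,\Bigr]
          \;\Longrightarrow\;
          \beta(n) \ge \alpha(n) - O(\log n).
        \]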

    Physical Randomness Extractors: Generating Random Numbers with Minimal Assumptions

    How can provably true randomness be generated with minimal assumptions? This question is important not only for the efficiency and security of information processing, but also for understanding how extremely unpredictable events are possible in Nature. All current solutions require special structures in the initial source of randomness, or a certain independence relation among two or more sources. Both types of assumptions are impossible to test and difficult to guarantee in practice. Here we show how this fundamental limit can be circumvented by extractors that base their security on the validity of physical laws and extract randomness from untrusted quantum devices. In conjunction with the recent work of Miller and Shi (arXiv:1402.0489), our physical randomness extractor uses just a single, general weak source and produces an arbitrarily long, near-uniform output with close-to-optimal error, secure against all-powerful quantum adversaries and tolerant of a constant level of implementation imprecision. The source must be unpredictable to the devices, but otherwise can even be known to the adversary. Our central technical contribution, the Equivalence Lemma, provides a general principle for proving composition security of untrusted-device protocols. It implies that unbounded randomness expansion can be achieved simply by cross-feeding any two expansion protocols. In particular, such unbounded expansion can be made robust, which was not known before. Another significant implication is that it enables secure randomness generation and key distribution using public randomness, such as that broadcast by NIST's Randomness Beacon. Our protocol also provides a method for refuting local hidden-variable theories under a weak assumption on the randomness available for choosing the measurement settings. Comment: A substantial rewriting of V2, especially on model definitions. An abstract model of robustness is added and the robustness claim in V2 is made rigorous. Focuses on quantum security. A future update is planned to address non-signaling security.
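
    The cross-feeding idea mentioned above can be pictured with a toy data-flow sketch: two expansion protocols, modelled here simply as functions that stretch a seed, alternately consume each other's output. The function names and round count are hypothetical; the actual protocols run untrusted quantum devices, and their composition security rests on the Equivalence Lemma, none of which is modelled here.

        def cross_feed(expand_a, expand_b, seed, rounds):
            """Toy illustration of cross-feeding two randomness-expansion
            protocols: each round, one protocol's output becomes the other's
            fresh seed, so the string keeps growing without bound."""
            current = seed
            for i in range(rounds):
                protocol = expand_a if i % 2 == 0 else expand_b
                current = protocol(current)
            return current

        # usage with stand-in "protocols" that merely double the seed length
        grow = lambda s: s + s[::-1]
        print(len(cross_feed(grow, grow, "01101", rounds=6)))   # 5 * 2**6 = 320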

    Trevisan's extractor in the presence of quantum side information

    Randomness extraction involves the processing of purely classical information and is therefore usually studied in the framework of classical probability theory. However, such a classical treatment is generally too restrictive for applications where side information about the values taken by classical random variables may be represented by the state of a quantum system. This is particularly relevant in the context of cryptography, where an adversary may make use of quantum devices. Here, we show that the well-known construction paradigm for extractors proposed by Trevisan is sound in the presence of quantum side information. We exploit the modularity of this paradigm to give several concrete extractor constructions, which, e.g., extract all the conditional (smooth) min-entropy of the source using a seed of length poly-logarithmic in the input, or only require the seed to be weakly random. Comment: 20+10 pages; v2: extract more min-entropy, use weakly random seed; v3: extended introduction, matches published version with sections somewhat reordered.
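
    A structural sketch of the Trevisan paradigm discussed above: each output bit is one bit of an encoding of the source, addressed by the seed bits lying in one set of a combinatorial design. Both the encoder and the design below are placeholders (an identity map and disjoint blocks); a real instantiation would use a list-decodable code and a weak design with small intersections, and nothing here reflects the paper's quantum-proof analysis.

        from typing import List

        def block_design(m: int, t: int) -> List[List[int]]:
            """Placeholder 'design': m disjoint blocks of t seed positions.
            A real construction uses a weak design with small pairwise
            intersections so that a short seed suffices."""
            return [list(range(i * t, (i + 1) * t)) for i in range(m)]

        def encode(x_bits: List[int]) -> List[int]:
            """Placeholder for the list-decodable binary code applied to the
            source; the identity map is only here so the indexing runs."""
            return x_bits

        def trevisan_extract(x_bits: List[int], seed: List[int], m: int) -> List[int]:
            """Each output bit is the codeword bit addressed by the seed bits
            restricted to one design set, read as a binary integer."""
            codeword = encode(x_bits)
            t = (len(codeword) - 1).bit_length()   # bits needed to index the codeword
            out = []
            for s_i in block_design(m, t):
                idx = 0
                for pos in s_i:
                    idx = (idx << 1) | seed[pos]
                out.append(codeword[idx % len(codeword)])
            return out

        # usage: 16 source bits, 4 output bits, seed of length m * t = 16
        import random
        rng = random.Random(1)
        x = [rng.randint(0, 1) for _ in range(16)]
        y = [rng.randint(0, 1) for _ in range(16)]
        print(trevisan_extract(x, y, m=4))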

    Counting dependent and independent strings

    The paper gives estimates for the sizes of the following sets: (1) the set of strings that have a given dependency with a fixed string, (2) the set of strings that are pairwise $\alpha$-independent, and (3) the set of strings that are mutually $\alpha$-independent. The relevant definitions are as follows: $C(x)$ is the Kolmogorov complexity of the string $x$. A string $y$ has $\alpha$-dependency with a string $x$ if $C(y) - C(y \mid x) \geq \alpha$. A set of strings $\{x_1, \ldots, x_t\}$ is pairwise $\alpha$-independent if, for all $i \neq j$, $C(x_i) - C(x_i \mid x_j) \leq \alpha$. A tuple of strings $(x_1, \ldots, x_t)$ is mutually $\alpha$-independent if $C(x_{\pi(1)} \ldots x_{\pi(t)}) \geq C(x_1) + \ldots + C(x_t) - \alpha$ for every permutation $\pi$ of $[t]$.