
    Linear Transformations for Randomness Extraction

    Information-efficient approaches for extracting randomness from imperfect sources have been studied extensively, but simpler and faster methods are needed for high-speed random number generation. In this paper, we focus on linear constructions, namely applying linear transformations for randomness extraction. We show that linear transformations based on sparse random matrices are asymptotically optimal for extracting randomness from independent sources and bit-fixing sources, and that they are efficient (though not necessarily optimal) for extracting randomness from hidden Markov sources. Further study demonstrates the flexibility of such constructions with respect to source models, as well as their excellent information-preserving capabilities. Since linear transformations based on sparse random matrices are computationally fast and easy to implement in hardware such as FPGAs, they are very attractive for high-speed applications. In addition, we explore explicit constructions of transformation matrices. We show that the generator matrices of primitive BCH codes are good choices, but linear transformations based on such matrices require more computational time because of their high densities.
    Comment: 2 columns, 14 pages
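A minimal sketch of the kind of construction the abstract describes: multiplying an input block by a sparse random binary matrix over GF(2), so each output bit is the XOR of a few input bits. The dimensions, density, and source bias below are illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 1024, 256   # input length, output length (illustrative)
density = 0.05     # fraction of ones per entry -> sparse matrix

# Sparse random binary matrix A (m x n): each entry is 1 with probability `density`.
A = (rng.random((m, n)) < density).astype(np.uint8)

# A biased "independent source": i.i.d. bits with Pr[bit = 1] = 0.8.
x = (rng.random(n) < 0.8).astype(np.uint8)

# Linear extraction over GF(2): y = A x mod 2, i.e. each output bit is an
# XOR of roughly density * n input bits, which washes out the input bias.
y = (A @ x) % 2

print(y.mean())  # close to 0.5: the extracted bits are nearly unbiased
```

Since each output bit XORs about 50 biased bits, its residual bias is exponentially small in the row weight, which is why even sparse matrices suffice for long inputs.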

    Linear extractors for extracting randomness from noisy sources

    Linear transformations have many applications in information theory, such as data compression and the design of error-correcting codes. In this paper, we study the power of linear transformations in randomness extraction, namely linear extractors, as another important application. Compared with most existing methods for randomness extraction, linear extractors (especially those constructed from sparse matrices) are computationally fast and can be implemented simply in hardware such as FPGAs, which makes them very attractive in practice. We focus mainly on simple, efficient, and sparse constructions of linear extractors. Specifically, we demonstrate that random matrices can generate random bits very efficiently from a variety of noisy sources, including noisy coin sources, bit-fixing sources, noisy (hidden) Markov sources, and their mixtures. We show that low-density random matrices are almost as efficient as high-density random matrices when the input sequence is long, which simplifies hardware and software implementation. Note that although the matrices are constructed randomly, they are deterministic (seedless) extractors: once constructed, the same matrix can be reused any number of times without any seed. Another way to construct linear extractors is based on the generator matrices of primitive BCH codes. This method is more explicit, but less practical due to its computational complexity and dimensional constraints.
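The explicit BCH-based alternative mentioned at the end can be sketched concretely. The code below builds the generator matrix of the (15, 5) primitive BCH code from its generator polynomial and applies it as a seedless linear extractor to blocks of a biased source; the source bias and block count are illustrative, and this small code is only a toy stand-in for the larger codes a real design would use.

```python
import numpy as np

# Generator polynomial of the (15, 5) primitive BCH code:
# g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
# Coefficients listed from degree 10 down to degree 0.
g = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1], dtype=np.uint8)

n, k = 15, 5
G = np.zeros((k, n), dtype=np.uint8)
for i in range(k):
    G[i, i:i + len(g)] = g  # row i is x^i * g(x)

rng = np.random.default_rng(1)
# Biased source: blocks of 15 bits with Pr[bit = 1] = 0.7 (illustrative).
blocks = (rng.random((20000, n)) < 0.7).astype(np.uint8)

# Seedless extraction: y = G x mod 2, yielding 5 output bits per 15-bit block.
out = (blocks @ G.T) % 2

print(out.mean())  # close to 0.5
```

Note that g(x) has weight 7, so each row of G is dense relative to n = 15; this illustrates the abstract's point that BCH-based extractors are explicit but computationally heavier than sparse random matrices.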

    Dimension Extractors and Optimal Decompression

    A *dimension extractor* is an algorithm designed to increase the effective dimension -- i.e., the amount of computational randomness -- of an infinite binary sequence, in order to turn a "partially random" sequence into a "more random" sequence. Extractors are exhibited for various effective dimensions, including constructive, computable, space-bounded, time-bounded, and finite-state dimension. Using similar techniques, the Kučera-Gács theorem is examined from the perspective of decompression: it is shown that every infinite sequence S is Turing reducible to a Martin-Löf random sequence R such that the asymptotic number of bits of R needed to compute n bits of S, divided by n, is precisely the constructive dimension of S, which is shown to be the optimal ratio of query bits to computed bits achievable with Turing reductions. The extractors and decompressors developed here lead directly to new characterizations of some effective dimensions in terms of optimal decompression by Turing reductions.
    Comment: This report was combined with a different conference paper, "Every Sequence is Decompressible from a Random One" (cs.IT/0511074, at http://dx.doi.org/10.1007/11780342_17), and both titles were changed, with the conference paper incorporated as section 5 of the new combined paper. The combined paper was accepted to the journal Theory of Computing Systems, as part of a special issue of invited papers from the second conference on Computability in Europe, 200
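Effective dimension is closely tied to compressibility (finite-state dimension, in particular, corresponds to finite-state compressibility). As a very rough, finite illustration of that connection, not anything from the paper itself, one can compare how well a general-purpose compressor shrinks a uniformly random bit sequence versus a heavily biased (low-dimension) one:

```python
import random
import zlib

random.seed(0)

def compression_ratio(bits):
    """Crude finite proxy for dimension: compressed size / raw size."""
    raw = bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, len(bits) - 7, 8)
    )
    return len(zlib.compress(raw, 9)) / len(raw)

n = 2 ** 16
uniform = [random.getrandbits(1) for _ in range(n)]          # "fully random"
biased = [1 if random.random() < 0.9 else 0 for _ in range(n)]  # low entropy rate

print(compression_ratio(uniform))  # near 1: essentially incompressible
print(compression_ratio(biased))   # well below 1: "dimension" below 1
```

The biased sequence compresses substantially, mirroring the intuition that a partially random sequence has effective dimension strictly below 1.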

    Randomness amplification against no-signaling adversaries using two devices

    Recently, a physically realistic protocol amplifying the randomness of Santha-Vazirani sources into cryptographically secure random bits was proposed; however, for reasons of practical relevance, the crucial question remained open whether this can be accomplished under the minimal conditions necessary for the task. Namely, is it possible to achieve randomness amplification using only two no-signaling components, in a situation where the violation of a Bell inequality only guarantees that some outcomes of the device for specific inputs exhibit randomness? Here, we answer this question and present a device-independent protocol for randomness amplification of Santha-Vazirani sources using a device consisting of two no-signaling components. We show that the protocol can amplify any such source that is not fully deterministic into a fully random source while tolerating a constant noise rate, and we prove the composable security of the protocol against general no-signaling adversaries. Our main innovation is the proof that even the partial randomness certified by the two-party Bell test (a single input-output pair $(\textbf{u}^*, \textbf{x}^*)$ for which the conditional probability $P(\textbf{x}^* | \textbf{u}^*)$ is bounded away from $1$ for all no-signaling strategies that optimally violate the Bell inequality) can be used for amplification. We introduce the methodology of a partial tomographic procedure on the empirical statistics obtained in the Bell test, which ensures that the outputs constitute a linear min-entropy source of randomness. As a technical novelty that may be of independent interest, we prove that the Santha-Vazirani source satisfies an exponential concentration property given by a recently discovered generalized Chernoff bound.
    Comment: 15 pages, 3 figures
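To make the source model concrete: an ε-Santha-Vazirani source emits bits whose conditional probability of being 1, given the history, always lies in [1/2 − ε, 1/2 + ε]. The simulation below, a hedged illustration of the concentration phenomenon rather than the paper's proof, pits an adversary who biases each bit based on the parity so far against the empirical frequency of ones, which nonetheless concentrates near 1/2 within roughly ε:

```python
import random

random.seed(42)

eps = 0.1  # SV parameter: Pr[x_i = 1 | history] must lie in [1/2 - eps, 1/2 + eps]
n, trials = 1000, 2000

deviations = []
for _ in range(trials):
    ones = parity = 0
    for _ in range(n):
        # Adversarial SV strategy (illustrative): bias direction tracks parity.
        p = 0.5 + eps if parity == 0 else 0.5 - eps
        b = 1 if random.random() < p else 0
        ones += b
        parity ^= b
    deviations.append(abs(ones / n - 0.5))

# Concentration: across all runs, the frequency of ones stays within
# roughly eps of 1/2, up to sampling fluctuation.
print(max(deviations))
```

This is the shape of statement the generalized Chernoff bound makes rigorous: no SV adversary can push block averages far outside the ε-band except with exponentially small probability.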

    Detection-Loophole-Free Test of Quantum Nonlocality, and Applications

    We present a source of entangled photons that violates a Bell inequality free of the "fair-sampling" assumption, by over 7 standard deviations. This is the first photonic experiment to close the detection loophole, and we demonstrate enough "efficiency" overhead to eventually perform a fully loophole-free test of local realism. The entanglement quality is verified by maximally violating additional Bell tests, probing the upper limit of quantum correlations. Finally, we use the source to generate secure private quantum random numbers at rates over 4 orders of magnitude beyond previous experiments.
    Comment: Main text: 5 pages, 2 figures, 1 table. Supplementary Information: 7 pages, 2 figures
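For reference, the "upper limit of quantum correlations" for the CHSH Bell test is Tsirelson's bound, 2√2 ≈ 2.83, versus the local-realist bound of 2. The short calculation below (standard textbook angles for a singlet state, not the experiment's actual settings) shows where the maximal violation comes from:

```python
import numpy as np

# Quantum correlation for the singlet state: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH-optimal measurement angles (illustrative).
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

S = abs(E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1))
print(S)  # 2*sqrt(2) ~ 2.828, above the local-realist bound of 2
```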

    Realistic noise-tolerant randomness amplification using finite number of devices

    Randomness is a fundamental concept, with implications ranging from the security of modern data systems to the fundamental laws of nature, and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, until now it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology.
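The link between degree of violation and noise tolerance can be illustrated with the simplest noise model (not the paper's specific inequality or threshold): mixing white noise into a maximally entangled state with visibility v scales the CHSH value to 2√2·v, so any violation at all requires v > 1/√2:

```python
import math

# Werner-state noise model (illustrative): correlations shrink by the
# visibility v, so the CHSH value becomes S(v) = 2*sqrt(2)*v.
def chsh(v):
    return 2 * math.sqrt(2) * v

# Violation (S > 2) requires v > 1/sqrt(2) ~ 0.7071.
v_threshold = 1 / math.sqrt(2)

print(round(v_threshold, 4))
print(chsh(0.9) > 2, chsh(0.6) > 2)  # above vs. below the noise threshold
```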

    Detecting the Boundaries of Urban Areas in India: A Dataset for Pixel-Based Image Classification in Google Earth Engine

    Urbanization often occurs in an unplanned and uneven manner, resulting in profound changes in patterns of land cover and land use. Understanding these changes is fundamental to devising environmentally responsible approaches to economic development in the rapidly urbanizing countries of the emerging world. One indicator of urbanization is built-up land cover, which can be detected and quantified at scale using satellite imagery and cloud-based computational platforms. This process requires reliable and comprehensive ground-truth data for supervised classification and for validation of classification products. We present a new dataset for India, consisting of 21,030 polygons from across the country that were manually classified as “built-up” or “not built-up,” which we use for supervised image classification and detection of urban areas. As a large and geographically diverse country that has been undergoing an urban transition, India represents an ideal context in which to develop and test approaches for the detection of features related to urbanization. We perform the analysis in Google Earth Engine (GEE) using three types of classifiers, with imagery from Landsat 7 and Landsat 8 as inputs. The methodology produces high-quality maps of built-up areas across space and time. Although the dataset can facilitate supervised image classification in any platform, we highlight its potential use in GEE for temporal large-scale analysis of the urbanization process. Our methodology can easily be applied to other countries and regions.
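The supervised-classification workflow can be sketched outside GEE. The toy example below trains a minimal nearest-centroid classifier on synthetic "pixels" with hypothetical spectral-band features standing in for Landsat bands; the band values, class labels, and classifier are all illustrative assumptions, a stand-in for the random-forest/CART/SVM classifiers GEE provides:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "pixels": rows of band reflectances (hypothetical stand-ins
# for the Landsat bands used in built-up classification).
n = 1000
built_up = rng.normal([0.18, 0.22, 0.30], 0.03, size=(n, 3))      # bright, low NIR
not_built_up = rng.normal([0.08, 0.40, 0.15], 0.03, size=(n, 3))  # vegetated, high NIR

X = np.vstack([built_up, not_built_up])
y = np.array([1] * n + [0] * n)  # 1 = built-up, 0 = not built-up

# Minimal supervised classifier: assign each pixel to the nearest class centroid.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(pixels):
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

accuracy = (classify(X) == y).mean()
print(accuracy)  # high on this well-separated synthetic data
```

A real pipeline would replace the synthetic arrays with sampled polygon/pixel pairs from the ground-truth dataset and a stronger classifier, but the train-on-labeled-polygons, predict-per-pixel structure is the same.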