A Universal Scheme for Transforming Binary Algorithms to Generate Random Bits from Loaded Dice
In this paper, we present a universal scheme for transforming an arbitrary
algorithm for biased 2-face coins to generate random bits from the general
source of an m-sided die, hence enabling the application of existing algorithms
to general sources. In addition, we study approaches of efficiently generating
a prescribed number of random bits from an arbitrary biased coin. This
contrasts with most existing works, which typically assume that the number of
coin tosses is fixed and generate a variable number of random bits.
Comment: 2 columns, 10 pages
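As one concrete instance of a "binary algorithm" of the kind the scheme transforms, the sketch below shows von Neumann's classic 1951 procedure for extracting unbiased bits from a biased 2-face coin. The function names and the 0.8 bias are illustrative choices, not taken from the paper.

```python
import random

def von_neumann_bits(flip, n):
    """Von Neumann's scheme: toss the biased coin twice; output 0 on
    the pair (0, 1), output 1 on (1, 0), and discard (0, 0) and (1, 1).
    The two mixed pairs each occur with probability p*(1-p), so the
    output bits are unbiased regardless of the coin's bias p."""
    out = []
    while len(out) < n:
        a, b = flip(), flip()
        if a != b:
            out.append(a)
    return out

random.seed(1)
coin = lambda: 1 if random.random() < 0.8 else 0  # illustrative bias
bits = von_neumann_bits(coin, 1000)
```

Note that this scheme consumes a variable number of tosses to produce a fixed number of bits, which is the fixed-output regime the paper studies.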
Efficiently Generating Random Bits from Finite State Markov Chains
The problem of random number generation from an uncorrelated random source (of unknown probability distribution) dates back to von Neumann's 1951 work. Elias (1972) generalized von Neumann's scheme and showed how to achieve optimal efficiency in generating unbiased random bits. A natural question, then, is what happens when the sources are correlated. Both Elias and Samuelson proposed methods for generating unbiased random bits from correlated sources (of unknown probability distribution); specifically, they considered finite Markov chains. However, their methods are either inefficient or difficult to implement. Blum (1986) devised an algorithm for efficiently generating random bits from degree-2 finite Markov chains in expected linear time; however, his elegant method is still far from optimal in information efficiency. In this paper, we generalize Blum's algorithm to finite Markov chains of arbitrary degree and combine it with Elias's method for efficient generation of unbiased bits. As a result, we provide the first known algorithm that generates unbiased random bits from an arbitrary finite Markov chain, operates in expected linear time, and achieves the information-theoretic upper bound on efficiency.
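The idea underlying Blum-style extraction can be illustrated with a simplified sketch: record, for each state, the sequence of next states observed on leaving it (its "exit sequence"), then apply von Neumann pairing within each sequence. This toy version ignores the stopping-time subtlety that makes naive use of exit sequences from a fixed-length trajectory slightly biased, which is precisely the issue Blum's algorithm handles carefully; the chain, transition matrix, and names below are invented for illustration.

```python
import random
from collections import defaultdict

def extract_bits(trajectory):
    """Split a Markov-chain trajectory into per-state exit sequences
    (the next state observed each time a state is left), then apply
    von Neumann pairing within each sequence: for a pair (a, b) with
    a != b, output 0 if a < b and 1 otherwise. For i.i.d. pairs, the
    orderings (a, b) and (b, a) are equally likely, so the two
    outcomes are equiprobable."""
    exits = defaultdict(list)
    for cur, nxt in zip(trajectory, trajectory[1:]):
        exits[cur].append(nxt)
    bits = []
    for seq in exits.values():
        for a, b in zip(seq[::2], seq[1::2]):
            if a != b:
                bits.append(0 if a < b else 1)
    return bits

# Simulate a 3-state Markov chain (transition matrix is illustrative).
random.seed(2)
P = {0: [0.5, 0.3, 0.2], 1: [0.1, 0.6, 0.3], 2: [0.4, 0.4, 0.2]}
state, traj = 0, [0]
for _ in range(5000):
    state = random.choices([0, 1, 2], weights=P[state])[0]
    traj.append(state)
bits = extract_bits(traj)
```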
Efficiently Extracting Randomness from Imperfect Stochastic Processes
We study the problem of extracting a prescribed number of random bits by
reading the smallest possible number of symbols from non-ideal stochastic
processes. The related interval algorithm proposed by Han and Hoshi has
asymptotically optimal performance; however, it assumes that the distribution
of the input stochastic process is known. The motivation for our work is the
fact that, in practice, sources of randomness have inherent correlations and
are affected by measurement noise; consequently, it is hard to obtain an
accurate estimate of the distribution. This challenge was addressed by the concepts of
seeded and seedless extractors that can handle general random sources with
unknown distributions. However, known seeded and seedless extractors provide
extraction efficiencies that are substantially smaller than Shannon's entropy
limit. Our main contribution is the design of extractors that have a variable
input length and a fixed output length, are efficient in the consumption of
symbols from the source, are capable of generating random bits from general
stochastic processes and approach the information theoretic upper bound on
efficiency.
Comment: 2 columns, 16 pages
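For reference, the Han-Hoshi interval algorithm that the paper compares against can be sketched as follows, for a source with a known distribution. This floating-point version is a simplification (a careful implementation uses exact arithmetic), and the signature is illustrative.

```python
def interval_algorithm(symbols, probs, n):
    """Simplified Han-Hoshi interval algorithm: maintain an interval
    [lo, hi) inside [0, 1); each input symbol shrinks it to the
    subinterval whose width is proportional to that symbol's
    probability; whenever the interval lies entirely in one half of
    [0, 1), emit the corresponding bit and rescale."""
    cum = [0.0]
    for p in probs:               # cumulative distribution of the source
        cum.append(cum[-1] + p)
    lo, hi, out = 0.0, 1.0, []
    for x in symbols:
        w = hi - lo
        lo, hi = lo + w * cum[x], lo + w * cum[x + 1]
        while len(out) < n:
            if hi <= 0.5:         # interval inside [0, 1/2): emit 0
                out.append(0); lo, hi = 2 * lo, 2 * hi
            elif lo >= 0.5:       # interval inside [1/2, 1): emit 1
                out.append(1); lo, hi = 2 * lo - 1, 2 * hi - 1
            else:
                break
        if len(out) >= n:
            return out
    return out

# For a fair binary source the algorithm reduces to the identity map.
bits = interval_algorithm([0, 1, 1, 0], [0.5, 0.5], 4)
```

The algorithm reads a variable number of symbols until the prescribed n bits are produced, which matches the fixed-output framing above, but it requires `probs` to be known exactly.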
Linear Transformations for Randomness Extraction
Information-efficient approaches for extracting randomness from imperfect
sources have been studied extensively, but simpler and faster constructions are
needed for high-speed random number generation. In this paper, we
focus on linear constructions, namely, applying linear transformation for
randomness extraction. We show that linear transformations based on sparse
random matrices are asymptotically optimal to extract randomness from
independent sources and bit-fixing sources, and that they are efficient (though
not necessarily optimal) for extracting randomness from hidden Markov sources. Further study
demonstrates the flexibility of such constructions on source models as well as
their excellent information-preserving capabilities. Since linear
transformations based on sparse random matrices are computationally fast and
can be easily implemented in hardware such as FPGAs, they are very attractive for
high-speed applications. In addition, we explore explicit constructions of
transformation matrices. We show that the generator matrices of primitive BCH
codes are good choices, but linear transformations based on such matrices
require more computational time due to their high densities.
Comment: 2 columns, 14 pages
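A minimal sketch of the linear construction, assuming extraction means multiplying the input bit vector by a sparse random binary matrix over GF(2); the dimensions, density, and names are illustrative.

```python
import random

def random_sparse_matrix(k, m, density=0.25, seed=0):
    """k x m random binary matrix, each entry equal to 1 with the
    given probability, so the matrix is sparse for small densities."""
    rng = random.Random(seed)
    return [[1 if rng.random() < density else 0 for _ in range(m)]
            for _ in range(k)]

def extract(matrix, bits):
    """Linear extraction y = M x over GF(2): each output bit is the
    parity (XOR) of the input bits selected by one matrix row."""
    return [sum(mij * xj for mij, xj in zip(row, bits)) % 2
            for row in matrix]

M = random_sparse_matrix(8, 16, density=0.25, seed=3)  # 16 bits -> 8 bits
x = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
y = extract(M, x)
```

In hardware, each output bit is just an XOR of the few input wires selected by a sparse row, which is why such constructions map well to FPGAs.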
Synthesis of Stochastic Flow Networks
A stochastic flow network is a directed graph with incoming edges (inputs)
and outgoing edges (outputs); tokens enter through the input edges, travel
stochastically in the network, and can exit the network through the output
edges. Each node in the network is a splitter, namely, a token can enter a node
through an incoming edge and exit on one of the output edges according to a
predefined probability distribution. Stochastic flow networks can be easily
implemented by DNA-based chemical reactions, with promising applications in
molecular computing and stochastic computing. In this paper, we address a
fundamental synthesis question: Given a finite set of possible splitters and an
arbitrary rational probability distribution, design a stochastic flow network,
such that every token that enters the input edge will exit the outputs with the
prescribed probability distribution.
The problem of probability transformation dates back to von Neumann's 1951
work and was followed, among others, by Knuth and Yao in 1976. Most existing
works have been focusing on the "simulation" of target distributions. In this
paper, we design optimal-sized stochastic flow networks for "synthesizing"
target distributions. We show that when each splitter has two outgoing edges
and is unbiased, an arbitrary rational probability a/b with a ≤ b ≤ 2^n
can be realized by a stochastic flow network of size n, and this size is optimal.
Compared with other stochastic systems, feedback (cycles in networks)
significantly improves the expressive power of stochastic flow networks.
Comment: 2 columns, 15 pages
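In software, the kind of probability transformation such networks realize can be illustrated with a Knuth-Yao-style sketch: sample an outcome with probability a/b from fair coin flips by comparing the binary expansion of a uniform real against that of a/b, digit by digit. The function below is an illustration of the underlying principle, not a construction of an actual flow network.

```python
import random

def bernoulli_rational(a, b, flip):
    """Return 1 with probability a/b (0 < a < b) using only fair coin
    flips: generate a uniform U in [0, 1) bit by bit and compare it
    with a/b; the first position where the bits differ decides whether
    U < a/b. The expected number of flips is at most 2."""
    while True:
        u_bit = flip()                 # next bit of U
        a *= 2
        p_bit = 1 if a >= b else 0     # next bit of a/b's expansion
        if a >= b:
            a -= b
        if u_bit != p_bit:
            return 1 if u_bit < p_bit else 0  # u_bit < p_bit iff U < a/b

random.seed(4)
samples = [bernoulli_rational(1, 3, lambda: random.getrandbits(1))
           for _ in range(20000)]
freq = sum(samples) / len(samples)     # close to 1/3
```

A fair binary splitter plays the role of `flip` in a flow network; cycles in the network correspond to the unbounded `while` loop here, which is one way to see why feedback adds expressive power.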