Derandomization and Group Testing
The rapid development of derandomization theory, which is a fundamental area
in theoretical computer science, has recently led to many surprising
applications beyond its original scope. We will review some such recent
developments related to combinatorial group testing. In its most basic setting,
the aim of group testing is to identify a set of "positive" individuals in a
population of items by taking groups of items and asking whether there is a
positive in each group.
In particular, we will discuss explicit constructions of optimal or
nearly-optimal group testing schemes using "randomness-conducting" functions.
Among such developments are constructions of error-correcting group testing
schemes using randomness extractors and condensers, as well as threshold group
testing schemes from lossless condensers.
Comment: Invited Paper in Proceedings of 48th Annual Allerton Conference on Communication, Control, and Computing, 201
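The basic setting described above is easy to simulate. The sketch below is a toy with random pools and naive decoding, not one of the explicit schemes the paper surveys; all names and parameters are illustrative. Each test records the OR outcome of a random group, and decoding rules out every item that appears in a negative group:

```python
import random

def group_test(population, positives, num_tests, seed=0):
    """Non-adaptive group testing with random pools and naive decoding.

    Each test asks whether a random group of items contains at least one
    positive (an OR/disjunctive measurement); `positives` is used only to
    simulate the test outcomes. Decoding rules out every item that appears
    in some negative group; the survivors are the candidate positives.
    """
    rng = random.Random(seed)
    candidates = set(population)
    # Each item joins a pool independently with probability ~1/#positives,
    # a standard density choice for random designs.
    p = 1.0 / max(1, len(positives))
    for _ in range(num_tests):
        pool = {x for x in population if rng.random() < p}
        if not (pool & positives):   # negative outcome
            candidates -= pool       # members of a negative pool are not positive
    return candidates

items = set(range(100))
positives = {3, 41, 77}
decoded = group_test(items, positives, num_tests=60)
```

Because an item in a negative pool is provably not positive, the true positives always survive; with enough tests the surviving set typically shrinks down to (approximately) the set of positives.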
Two-sources Randomness Extractors for Elliptic Curves
This paper studies two-source randomness extractors for elliptic curves
defined over finite fields, where the field can be a prime field or a binary
field. In fact, we introduce new constructions of functions over elliptic
curves which take as input two random points drawn from two different
subgroups. In other words, for a given elliptic curve defined over a finite
field and two random points, one from each of two subgroups of the curve's
group of points, our function extracts the least significant bits of the
abscissa of the sum of the two points in the prime-field case, and the first
coefficients of the abscissa of the sum in the extension-field case. We show
that the extracted bits are close to uniform.
Our construction extends some interesting randomness extractors for elliptic
curves, namely those defined in \cite{op} and \cite{ciss1,ciss2}. The proposed
constructions can be used in any cryptographic scheme that requires the
extraction of random bits from two sources over elliptic curves, for instance
in key exchange protocols and in the design of strong pseudo-random number
generators.
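The extraction step itself is simple to sketch. The toy below covers the prime-field flavor only; the curve, field size, and point choices are illustrative and nothing here is cryptographic-strength. It adds two points on a small curve and keeps the least significant bits of the abscissa of the sum:

```python
# Toy sketch: "add two points, keep low-order bits of the x-coordinate".
P_MOD = 10007          # a prime with P_MOD % 4 == 3, so square roots are easy
A, B = 2, 3            # illustrative curve y^2 = x^3 + A*x + B over F_p

def ec_add(p1, p2):
    """Affine point addition; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # P + (-P) = O
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow((x2 - x1) % P_MOD, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def find_point(start):
    """Find a curve point with x >= start (P_MOD % 4 == 3 gives a direct root)."""
    x = start
    while True:
        rhs = (x * x * x + A * x + B) % P_MOD
        y = pow(rhs, (P_MOD + 1) // 4, P_MOD)
        if y * y % P_MOD == rhs:
            return (x, y)
        x += 1

def extract_bits(p_point, q_point, k):
    """Extractor sketch: k least significant bits of the abscissa of P + Q."""
    s = ec_add(p_point, q_point)
    return s[0] & ((1 << k) - 1)

Pt = find_point(1)
Qt = find_point(50)
bits = extract_bits(Pt, Qt, 8)
```

In the actual construction the two points come from two independent weak sources supported on two subgroups, and the uniformity of the output is what the paper proves.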
Extensions to the Method of Multiplicities, with applications to Kakeya Sets and Mergers
We extend the "method of multiplicities" to get the following results, of
interest in combinatorics and randomness extraction. (A) We show that every
Kakeya set (a set of points that contains a line in every direction) in
\F_q^n must be of size at least $q^n/2^n$. This bound is tight to within a
$2 + o(1)$ factor for every $q$ as $n \to \infty$, compared to previous bounds
that were off by exponential factors in $n$. (B) We give improved randomness
extractors and "randomness mergers". Mergers are seeded functions that take as
input $\Lambda$ (possibly correlated) random variables in $\{0,1\}^N$ and a
short random seed, and output a single random variable in $\{0,1\}^N$ that is
statistically close to having entropy $(1-\delta) \cdot N$ when one of the
input variables is distributed uniformly. The seed we require is only
$(1/\delta) \cdot \log \Lambda$ bits long, which significantly improves upon
previous constructions of mergers. (C) Using our new mergers, we show how to
construct randomness extractors that use logarithmic length seeds while
extracting a $1 - o(1)$ fraction of the min-entropy of the source.
The "method of multiplicities", as used in prior work, analyzed subsets of
vector spaces over finite fields by constructing somewhat low degree
interpolating polynomials that vanish on every point in the subset {\em with
high multiplicity}. The typical use of this method involved showing that the
interpolating polynomial also vanished on some points outside the subset, and
then using simple bounds on the number of zeroes to complete the analysis. Our
augmentation to this technique is that we prove, under appropriate conditions,
that the interpolating polynomial vanishes {\em with high multiplicity} outside
the set. This novelty leads to significantly tighter analyses.
Comment: 26 pages, now includes extractors with sublinear entropy loss
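The zero-counting that the method sharpens can already be seen in the univariate case: a nonzero polynomial of degree $d$ over $\F_q$ has at most $d$ zeroes counted with multiplicity. The small sketch below is univariate only; the paper's machinery is the multivariate, high-multiplicity analogue:

```python
def poly_eval(coeffs, a, q):
    """Evaluate a polynomial over F_q at a (coeffs[i] is the coefficient of X^i)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * a + c) % q
    return acc

def divide_by_root(coeffs, a, q):
    """Synthetic division by (X - a) over F_q; assumes a is a root."""
    d = len(coeffs) - 1
    quot = [0] * d
    quot[d - 1] = coeffs[d]
    for i in range(d - 1, 0, -1):
        quot[i - 1] = (coeffs[i] + a * quot[i]) % q
    return quot

def multiplicity(coeffs, a, q):
    """Multiplicity of a as a zero of the polynomial (0 if a is not a zero)."""
    m = 0
    while len(coeffs) > 1 and poly_eval(coeffs, a, q) == 0:
        coeffs = divide_by_root(coeffs, a, q)
        m += 1
    return m

# (X - 1)^2 (X - 2) over F_5 expands to X^3 + X^2 + 3 (low-to-high order):
f = [3, 0, 1, 1]
```

Summing `multiplicity(f, a, 5)` over all of $\F_5$ gives 3, matching the degree: zeroes counted with multiplicity never exceed the degree, which is the "simple bound" the interpolation argument leans on.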
Trevisan's extractor in the presence of quantum side information
Randomness extraction involves the processing of purely classical information
and is therefore usually studied in the framework of classical probability
theory. However, such a classical treatment is generally too restrictive for
applications, where side information about the values taken by classical random
variables may be represented by the state of a quantum system. This is
particularly relevant in the context of cryptography, where an adversary may
make use of quantum devices. Here, we show that the well-known construction
paradigm for extractors proposed by Trevisan is sound in the presence of
quantum side information.
We exploit the modularity of this paradigm to give several concrete extractor
constructions which, e.g., extract all the conditional (smooth) min-entropy of
the source using a seed of length poly-logarithmic in the input, or only
require the seed to be weakly random.
Comment: 20+10 pages; v2: extract more min-entropy, use weakly random seed; v3: extended introduction, matches published version with sections somewhat reordered
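Trevisan's paradigm is modular enough to sketch in a few lines. In the toy below, random subsets stand in for the combinatorial (weak) design and the source is used without the error-correcting encoding, so it illustrates only the structure, not an extractor with provable guarantees; all parameter names are illustrative:

```python
import random

def trevisan_style(source_bits, seed_bits, m, subset_size=8, design_rng_seed=0):
    """Sketch of Trevisan's modular paradigm (illustrative only).

    Each output bit depends on the seed through its own subset S_i of seed
    positions. In the real construction the S_i come from a combinatorial
    (weak) design and the source is first encoded with a list-decodable code;
    here random subsets replace the design and the source is used unencoded.
    """
    rng = random.Random(design_rng_seed)
    n = len(source_bits)
    out = []
    for _ in range(m):
        s_i = rng.sample(range(len(seed_bits)), subset_size)
        idx = 0
        for j in s_i:                     # seed bits in S_i select a position
            idx = (idx << 1) | seed_bits[j]
        out.append(source_bits[idx % n])  # output bit = selected source bit
    return out

src = [1, 0, 1, 1, 0, 0, 1, 0] * 32      # 256 "weakly random" bits (toy)
seed = [0, 1] * 8                         # 16-bit seed
bits = trevisan_style(src, seed, m=10)
```

The modularity the paper exploits is visible here: each output bit is produced independently by a one-bit extractor, so the quantum-soundness analysis can proceed bit by bit.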
Efficiently Extracting Randomness from Imperfect Stochastic Processes
We study the problem of extracting a prescribed number of random bits by
reading the smallest possible number of symbols from non-ideal stochastic
processes. The related interval algorithm proposed by Han and Hoshi has
asymptotically optimal performance; however, it assumes that the distribution
of the input stochastic process is known. The motivation for our work is the
fact that, in practice, sources of randomness have inherent correlations and
are affected by measurement noise; consequently, it is hard to obtain an accurate
estimation of the distribution. This challenge was addressed by the concepts of
seeded and seedless extractors that can handle general random sources with
unknown distributions. However, known seeded and seedless extractors provide
extraction efficiencies that are substantially smaller than Shannon's entropy
limit. Our main contribution is the design of extractors that have a variable
input-length and a fixed output length, are efficient in the consumption of
symbols from the source, are capable of generating random bits from general
stochastic processes and approach the information theoretic upper bound on
efficiency.
Comment: 2 columns, 16 pages
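The variable input-length, fixed output-length interface can be illustrated with von Neumann's classic trick, which is far below the entropy limit and hence an example of the inefficiency the paper addresses, not of its constructions:

```python
import random

def extract_fixed_output(read_bit, out_len):
    """Variable input-length, fixed output-length extraction via von Neumann's
    trick: read bit pairs until out_len output bits are ready. For i.i.d.
    bits of unknown bias, '01' and '10' are equally likely, so each emitted
    bit is exactly unbiased; '00' and '11' are discarded.
    """
    out = []
    consumed = 0
    while len(out) < out_len:
        a, b = read_bit(), read_bit()
        consumed += 2
        if a != b:
            out.append(a)   # 0 for the pair '01', 1 for the pair '10'
    return out, consumed

rng = random.Random(7)
source = lambda: 1 if rng.random() < 0.8 else 0  # bias unknown to the extractor
bits, used = extract_fixed_output(source, 32)
```

For an i.i.d. source of unknown bias the output is exactly uniform, but at most one bit is produced per two symbols read, well below the Shannon entropy of the source; closing that gap for general processes is precisely the paper's goal.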
Noise-Resilient Group Testing: Limitations and Constructions
We study combinatorial group testing schemes for learning $d$-sparse Boolean
vectors using highly unreliable disjunctive measurements. We consider an
adversarial noise model that only limits the number of false observations, and
show that any noise-resilient scheme in this model can only approximately
reconstruct the sparse vector. On the positive side, we turn this barrier to
our advantage and show that approximate reconstruction (within a satisfactory
degree of approximation) allows us to break the information-theoretic lower
bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of
$d$-sparse vectors of length $n$ via non-adaptive measurements, by a
multiplicative factor $\tilde{\Omega}(d)$.
Specifically, we give simple randomized constructions of non-adaptive
measurement schemes, with $m = O(d \log n)$ measurements, that allow efficient
reconstruction of $d$-sparse vectors up to $O(d)$ false positives even in the
presence of $\delta m$ false positives and $O(m/d)$ false negatives within the
measurement outcomes, for any constant $\delta < 1$. We show that, information
theoretically, none of these parameters can be substantially improved without
dramatically affecting the others. Furthermore, we obtain several explicit
constructions, in particular one matching the randomized trade-off but using
$m = O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions
that allow fast reconstruction in time $\mathrm{poly}(m)$, which would be
sublinear in $n$ for sufficiently sparse vectors. The main tool used in our
construction is
the list-decoding view of randomness condensers and extractors.
Comment: Full version. A preliminary summary of this work appears (under the same title) in proceedings of the 17th International Symposium on Fundamentals of Computation Theory (FCT 2009).
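The flavor of the randomized trade-off can be simulated directly. The sketch below uses on the order of $d \log n$ random pools and a simple threshold decoder that tolerates flipped outcomes; the constants, threshold, and noise model are illustrative choices, not the paper's explicit constructions:

```python
import math
import random

def noisy_scheme(n, positives, d, tests_per_log=40, noise_frac=0.05, seed=1):
    """Randomized non-adaptive pools with threshold decoding under noise.

    A true positive sees a negative outcome only through noise (~noise_frac
    of its tests); a non-positive item sees many more negative outcomes, so
    a threshold between the two rates separates them with high probability.
    """
    rng = random.Random(seed)
    m = int(tests_per_log * d * math.log2(n))
    pools = [{i for i in range(n) if rng.random() < 1.0 / d} for _ in range(m)]
    outcomes = [bool(pool & positives) for pool in pools]
    for j in rng.sample(range(m), int(noise_frac * m)):  # flip some outcomes
        outcomes[j] = not outcomes[j]
    decoded = set()
    for i in range(n):
        mine = [j for j in range(m) if i in pools[j]]
        if not mine:
            continue
        neg = sum(1 for j in mine if not outcomes[j])
        if neg / len(mine) < 0.15:       # threshold between the two rates
            decoded.add(i)
    return decoded

found = noisy_scheme(n=200, positives={5, 17, 99}, d=3)
```

The decoder may admit a few false positives, which is exactly the approximate-reconstruction regime the abstract describes.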
Linear Transformations for Randomness Extraction
Information-efficient approaches for extracting randomness from imperfect
sources have been extensively studied, but simpler and faster ones are
required in high-speed applications of random number generation. In this paper, we
focus on linear constructions, namely, applying linear transformation for
randomness extraction. We show that linear transformations based on sparse
random matrices are asymptotically optimal for extracting randomness from
independent sources and bit-fixing sources, and that they are efficient
(though possibly not optimal) for extracting randomness from hidden Markov
sources. Further study
demonstrates the flexibility of such constructions on source models as well as
their excellent information-preserving capabilities. Since linear
transformations based on sparse random matrices are computationally fast and
easy to implement using hardware like FPGAs, they are very attractive for
high-speed applications. In addition, we explore explicit constructions of
transformation matrices. We show that the generator matrices of primitive BCH
codes are good choices, but linear transformations based on such matrices
require more computational time due to their high densities.
Comment: 2 columns, 14 pages
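Extraction by a sparse binary matrix is just a handful of XORs per output bit, which is what makes the approach hardware-friendly. A minimal sketch, with matrix density and sizes as illustrative choices:

```python
import random

def sparse_matrix(m, n, ones_per_row=8, seed=0):
    """A sparse random binary matrix, stored as the positions of the ones in
    each row; the density (ones_per_row) is an illustrative choice."""
    rng = random.Random(seed)
    return [rng.sample(range(n), ones_per_row) for _ in range(m)]

def linear_extract(matrix, bits):
    """y = M x over GF(2): each output bit is the parity (XOR) of the input
    bits selected by one sparse row, i.e., m sparse parities in total."""
    out = []
    for row in matrix:
        b = 0
        for j in row:
            b ^= bits[j]
        out.append(b)
    return out

# Bit-fixing source (toy): positions 0..99 fixed to 0, the rest uniform.
rng = random.Random(1)
x = [0] * 100 + [rng.randrange(2) for _ in range(156)]
M = sparse_matrix(m=64, n=256)
y = linear_extract(M, x)
```

Each output bit is an independent XOR tree over a few input taps, so the whole map pipelines naturally in an FPGA; linearity also makes the information-preserving analysis tractable.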
Simple extractors via constructions of cryptographic pseudo-random generators
Trevisan has shown that constructions of pseudo-random generators from hard
functions (the Nisan-Wigderson approach) also produce extractors. We show that
constructions of pseudo-random generators from one-way permutations (the
Blum-Micali-Yao approach) can be used for building extractors as well. Using
this new technique we build extractors that use neither designs nor
polynomial-based error-correcting codes, and that are very simple and efficient.
For example, one extractor produces each output bit separately and
efficiently. These extractors work for weak sources with min-entropy
$\lambda n$, for arbitrary constant $\lambda > 0$, have seed length
$O(\log^2 n)$, and their output length is $\approx n^{\lambda/3}$.
Comment: 21 pages, an extended abstract will appear in Proc. ICALP 2005; small corrections, some comments and references added
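For context, the Blum-Micali-Yao paradigm that the paper starts from can be sketched with a modular-exponentiation map and a Goldreich-Levin hardcore bit. Parameters below are toy-sized and insecure, and the generator choice is a hypothetical example; this illustrates the PRG paradigm, not the paper's extractor construction:

```python
def blum_micali_bits(x0, r, p, g, out_len):
    """Toy Blum-Micali generator: iterate x -> g^x mod p (a permutation of
    {1,...,p-1} when g generates the multiplicative group) and emit a
    Goldreich-Levin hardcore bit <x, r> mod 2 at each step.
    """
    def gl_bit(x):
        return bin(x & r).count("1") % 2  # inner product of bit strings mod 2
    out = []
    x = x0
    for _ in range(out_len):
        out.append(gl_bit(x))
        x = pow(g, x, p)
    return out

# Toy parameters: p = 8191 is prime; g = 17 is assumed (hypothetically) to
# generate the multiplicative group mod p.
stream = blum_micali_bits(x0=1234, r=0b101101, p=8191, g=17, out_len=16)
```

In the PRG setting the next-bit unpredictability of each hardcore bit is what makes the stream pseudo-random; the paper's observation is that this same structure can be repurposed to extract bits from a weak source, with each output bit computable separately.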