Noise-Resilient Group Testing: Limitations and Constructions
We study combinatorial group testing schemes for learning $d$-sparse Boolean
vectors using highly unreliable disjunctive measurements. We consider an
adversarial noise model that only limits the number of false observations, and
show that any noise-resilient scheme in this model can only approximately
reconstruct the sparse vector. On the positive side, we take this barrier to
our advantage and show that approximate reconstruction (within a satisfactory
degree of approximation) allows us to break the information-theoretic lower
bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of
$d$-sparse vectors of length $n$ via non-adaptive measurements, by a
multiplicative factor $\tilde{\Omega}(d)$.
Specifically, we give simple randomized constructions of non-adaptive
measurement schemes, with $m = O(d \log n)$ measurements, that allow efficient
reconstruction of $d$-sparse vectors up to $O(d)$ false positives even in the
presence of $\delta m$ false positives and $O(m/d)$ false negatives within the
measurement outcomes, for any constant $\delta < 1$. We show that, information
theoretically, none of these parameters can be substantially improved without
dramatically affecting the others. Furthermore, we obtain several explicit
constructions, in particular one matching the randomized trade-off but using
$m = O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions
that allow fast reconstruction in time $\poly(m)$, which would be sublinear in
$n$ for sufficiently sparse vectors. The main tool used in our constructions is
the list-decoding view of randomness condensers and extractors.

Comment: Full version. A preliminary summary of this work appears (under the
same title) in the proceedings of the 17th International Symposium on
Fundamentals of Computation Theory (FCT 2009).
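To make the measurement model concrete, here is a minimal NumPy simulation of noisy non-adaptive group testing: a random Bernoulli design, disjunctive (OR) outcomes, adversarially flipped tests, and a simple distance decoder. The matrix, decoder, and all parameters below are generic illustrations of the model, not the condenser-based constructions of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 500, 5, 300                 # population size, sparsity, number of tests

x = np.zeros(n, dtype=bool)
x[rng.choice(n, d, replace=False)] = True      # hidden d-sparse vector

# Each item joins each test with probability 1/d, so a test comes out
# negative with constant probability; outcomes are disjunctions (ORs).
A = rng.random((m, n)) < 1.0 / d
y = A[:, x].any(axis=1)               # noiseless OR outcomes

# Noise within the outcomes: e0 false positives and e1 false negatives.
e0, e1 = 10, 6
neg, pos = np.flatnonzero(~y), np.flatnonzero(y)
y_noisy = y.copy()
y_noisy[rng.choice(neg, e0, replace=False)] = True
y_noisy[rng.choice(pos, e1, replace=False)] = False

# Distance decoding: discard an item only if it lies in clearly too many
# negative tests. A defective hits negatives only through the e1 flips,
# while a typical non-defective hits many genuinely negative tests.
neg_hits = A[~y_noisy].sum(axis=0)    # per item: negative tests containing it
x_hat = neg_hits <= 9                 # threshold separating the two regimes

print("missed defectives:", int((x & ~x_hat).sum()))
print("extra positives  :", int((~x & x_hat).sum()))  # a few are tolerated
```

Approximate reconstruction is visible directly: the threshold forgives the flipped outcomes at the price of a handful of false positives in the estimate, which is exactly the trade-off the limitation result above shows to be unavoidable.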
Derandomization and Group Testing
The rapid development of derandomization theory, which is a fundamental area
in theoretical computer science, has recently led to many surprising
applications outside its initial intention. We will review some recent such
developments related to combinatorial group testing. In its most basic setting,
the aim of group testing is to identify a set of "positive" individuals in a
population of items by taking groups of items and asking whether there is a
positive in each group.
In particular, we will discuss explicit constructions of optimal or
nearly-optimal group testing schemes using "randomness-conducting" functions.
Among such developments are constructions of error-correcting group testing
schemes using randomness extractors and condensers, as well as threshold group
testing schemes from lossless condensers.

Comment: Invited paper in the proceedings of the 48th Annual Allerton
Conference on Communication, Control, and Computing, 2010.
Randomness Extraction in AC0 and with Small Locality
Randomness extractors, which extract high quality (almost-uniform) random
bits from biased random sources, are important objects both in theory and in
practice. While there has been significant progress in obtaining near-optimal
constructions of randomness extractors in various settings, the computational
complexity of randomness extractors is still much less studied. In particular,
it is not clear whether randomness extractors with good parameters can be
computed in several interesting complexity classes that are much weaker than P.
In this paper we study randomness extractors in the following two models of
computation: (1) constant-depth circuits (AC0), and (2) the local computation
model. Previous work in these models, such as [Vio05a], [GVW15] and [BG13],
only achieves constructions with weak parameters. In this work we give explicit
constructions of randomness extractors with much better parameters. As an
application, we use our AC0 extractors to study pseudorandom generators in AC0,
and show that we can construct both cryptographic pseudorandom generators
(under reasonable computational assumptions) and unconditional pseudorandom
generators for space bounded computation with very good parameters.
Our constructions combine several previous techniques in randomness
extractors, as well as introduce new techniques to reduce or preserve the
complexity of extractors, which may be of independent interest. These include
(1) a general way to reduce the error of strong seeded extractors while
preserving the AC0 property and small locality, and (2) a seeded randomness
condenser with small locality.

Comment: 62 pages.
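As a point of reference for what "small locality" means, the following toy sketch combines two standard ingredients that predate the paper: seed-driven sampling of a few source positions, followed by a Toeplitz (2-universal) hash as in the Leftover Hash Lemma. Every output bit then depends on only t source bits. The split of the seed and all parameters are ours for illustration; this is not the paper's construction.

```python
import numpy as np

def toeplitz_hash(bits, seed_bits, out_len):
    """GF(2) Toeplitz-matrix hash; uses len(bits) + out_len - 1 seed bits."""
    t = len(bits)
    out = np.empty(out_len, dtype=np.uint8)
    for i in range(out_len):
        # Row i of the Toeplitz matrix: T[i, j] = seed_bits[i + t - 1 - j],
        # so entries are constant along diagonals.
        row = seed_bits[i : i + t][::-1]
        out[i] = np.bitwise_xor.reduce(row & bits)
    return out

def sample_then_extract(source, seed, t=64, out_len=8):
    # The first seed bytes drive the sampler (pseudorandomly, purely for
    # illustration); the rest seeds the hash, one bit per byte for simplicity.
    sampler = np.random.default_rng(int.from_bytes(seed[:8], "big"))
    pos = sampler.choice(len(source), t, replace=False)
    hash_seed = np.frombuffer(seed[8:], dtype=np.uint8) % 2
    return toeplitz_hash(source[pos], hash_seed[: t + out_len - 1], out_len)

rng = np.random.default_rng(1)
source = (rng.random(4096) < 0.6).astype(np.uint8)   # biased weak source
seed = rng.bytes(8 + 64 + 8 - 1)                     # sampler part + hash part
print(sample_then_extract(source, seed))             # 8 near-uniform bits
```

Each output bit here reads only the t sampled positions, which is the locality resource the paper optimizes; a full Toeplitz extractor applied to the raw source would instead have every output bit depend on all n input bits.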
From Low-Distortion Norm Embeddings to Explicit Uncertainty Relations and Efficient Information Locking
The existence of quantum uncertainty relations is the essential reason that
some classically impossible cryptographic primitives become possible when
quantum communication is allowed. One direct operational manifestation of these
uncertainty relations is a purely quantum effect referred to as information
locking. A locking scheme can be viewed as a cryptographic protocol in which a
uniformly random n-bit message is encoded in a quantum system using a classical
key of size much smaller than n. Without the key, no measurement of this
quantum state can extract more than a negligible amount of information about
the message, in which case the message is said to be "locked". Furthermore,
knowing the key, it is possible to recover, that is "unlock", the message. In
this paper, we make the following contributions by exploiting a connection
between uncertainty relations and low-distortion embeddings of L2 into L1. We
introduce the notion of metric uncertainty relations and connect it to
low-distortion embeddings of L2 into L1. A metric uncertainty relation also
implies an entropic uncertainty relation. We prove that random bases satisfy
uncertainty relations with a stronger definition and better parameters than
previously known. Our proof is also considerably simpler than earlier proofs.
We apply this result to show the existence of locking schemes with key size
independent of the message length. We give efficient constructions of metric
uncertainty relations. The bases defining these metric uncertainty relations
are computable by quantum circuits of almost linear size. This leads to the
first explicit construction of a strong information locking scheme. Moreover,
we present a locking scheme that is close to being implementable with current
technology. We apply our metric uncertainty relations to exhibit communication
protocols that perform quantum equality testing.

Comment: 60 pages, 5 figures. v4: published version.
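For intuition about the entropic side, the short check below numerically verifies the textbook Maassen-Uffink relation H(Z) + H(X) >= n for the computational and Hadamard bases on n qubits, whose maximal overlap is 2^{-n}. The paper's metric uncertainty relations are a stronger statement; this sanity check is ours, not the paper's proof.

```python
import numpy as np

def shannon(p):
    p = p[p > 1e-15]
    return float(-(p * np.log2(p)).sum())

n = 6
dim = 2 ** n
rng = np.random.default_rng(0)

# Hadamard transform on n qubits, H^{(x)n}, built by Kronecker powers.
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)

for _ in range(5):
    # Haar-like random pure state: normalized complex Gaussian vector.
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    hz = shannon(np.abs(psi) ** 2)        # computational-basis measurement
    hx = shannon(np.abs(H @ psi) ** 2)    # Hadamard-basis measurement
    print(f"H(Z)={hz:.3f}  H(X)={hx:.3f}  sum={hz + hx:.3f}  >= {n}")
```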
Applications of Derandomization Theory in Coding
Randomized techniques play a fundamental role in theoretical computer science
and discrete mathematics, in particular for the design of efficient algorithms
and construction of combinatorial objects. The basic goal in derandomization
theory is to eliminate or reduce the need for randomness in such randomized
constructions. In this thesis, we explore some applications of the fundamental
notions in derandomization theory to problems outside the core of theoretical
computer science, and in particular, certain problems related to coding theory.
First, we consider the wiretap channel problem which involves a communication
system in which an intruder can eavesdrop on a limited portion of the
transmissions, and construct efficient and information-theoretically optimal
communication protocols for this model. Then we consider the combinatorial
group testing problem. In this classical problem, one aims to determine a set
of defective items within a large population by asking a number of queries,
where each query reveals whether a defective item is present within a specified
group of items. We use randomness condensers to explicitly construct optimal,
or nearly optimal, group testing schemes for a setting where the query outcomes
can be highly unreliable, as well as the threshold model where a query returns
positive if the number of defectives passes a certain threshold. Finally, we
design ensembles of error-correcting codes that achieve the
information-theoretic capacity of a large class of communication channels, and
then use the obtained ensembles for construction of explicit capacity achieving
codes.
[This is a shortened version of the actual abstract in the thesis.]

Comment: EPFL PhD thesis.
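As a taste of the wiretap setting, here is the classic coset-coding idea (in the style of Ozarow and Wyner's wiretap channel II), reduced to a one-bit toy; the thesis's protocols are far more general, and this sketch is ours. A secret bit is encoded as a random word of matching parity, so any n-1 leaked positions are uniformly distributed and reveal nothing, while the full word decodes exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

def encode(m: int) -> np.ndarray:
    x = rng.integers(0, 2, size=n)
    x[0] ^= (int(x.sum()) & 1) ^ m   # fix the word's parity to equal m
    return x

def decode(x: np.ndarray) -> int:
    return int(x.sum()) & 1

m = 1
x = encode(m)
assert decode(x) == m
leak = x[:-1]   # an eavesdropper seeing any n-1 positions observes a
print(leak)     # uniform distribution regardless of m: zero leakage
```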
Polarization of the Renyi Information Dimension with Applications to Compressed Sensing
In this paper, we show that the Hadamard matrix acts as an extractor over the
reals of the Renyi information dimension (RID), in an analogous way to how it
acts as an extractor of the discrete entropy over finite fields. More
precisely, we prove that the RID of an i.i.d. sequence of mixture random
variables polarizes to the extremal values of 0 and 1 (corresponding to
discrete and continuous distributions) when transformed by a Hadamard matrix.
Further, we prove that the polarization pattern of the RID admits a closed form
expression and follows exactly the Binary Erasure Channel (BEC) polarization
pattern in the discrete setting. We also extend the results from the single- to
the multi-terminal setting, obtaining a Slepian-Wolf counterpart of the RID
polarization. We discuss applications of the RID polarization to Compressed
Sensing of i.i.d. sources. In particular, we use the RID polarization to
construct a family of deterministic $\pm 1$-valued sensing matrices for
Compressed Sensing. We run numerical simulations to compare the performance of
the resulting matrices with that of random Gaussian and random Hadamard
matrices. The results indicate that the proposed matrices afford competitive
performances while being explicitly constructed.

Comment: 12 pages, 2 figures.
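For a rough sense of the end application, the sketch below builds a Sylvester-Hadamard matrix, keeps m of its $\pm 1$ rows, and recovers a sparse vector with orthogonal matching pursuit. The paper's RID polarization pattern prescribes which rows to keep; lacking that rule here, a random row subset stands in purely for illustration, so only the $\pm 1$ Hadamard structure is faithful to the paper.

```python
import numpy as np

def hadamard(p):
    """2^p x 2^p Sylvester-Hadamard matrix with +-1 entries."""
    H = np.array([[1.0]])
    for _ in range(p):
        H = np.block([[H, H], [H, -H]])
    return H

def omp(A, y, k):
    """Orthogonal matching pursuit for k-sparse recovery."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 5
H = hadamard(8)
rows = rng.choice(n, m, replace=False)   # stand-in for polarization-chosen rows
A = H[rows] / np.sqrt(m)                 # +-1 sensing matrix (scaled)

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
print("recovery error:", np.linalg.norm(omp(A, A @ x, k) - x))
```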
Using and saving randomness
Randomness is ubiquitous and exceedingly useful in computer science. For example, in sparse recovery, randomized algorithms are more efficient and robust than their deterministic counterparts. At the same time, because random sources from the real world are often biased and defective with limited entropy, high-quality randomness is a precious resource. This motivates the study of pseudorandomness and randomness extraction. In this thesis, we explore the role of randomness in these areas. Our research contributions broadly fall into two categories: learning structured signals and constructing pseudorandom objects.

Learning a structured signal. One common task in audio signal processing is to compress an interval of observation by finding the dominating k frequencies in its Fourier transform. We study the problem of learning a Fourier-sparse signal from noisy samples, where [0, T] is the observation interval and the frequencies can be "off-grid". Previous methods for this problem required the gap between frequencies to be above 1/T, which is necessary to robustly identify individual frequencies. We show that this gap is not necessary to recover the signal as a whole: for arbitrary k-Fourier-sparse signals under bounded noise, we provide a learning algorithm with a constant factor growth of the noise and sample complexity polynomial in k and logarithmic in the bandwidth and signal-to-noise ratio. In addition, we introduce a general method to avoid a condition number depending on the signal family F and the distribution D of measurements in the sample complexity. In particular, for any linear family F with dimension d and any distribution D over the domain of F, we show that this method provides a robust learning algorithm with O(d log d) samples. Furthermore, we improve the sample complexity to O(d) via spectral sparsification (optimal up to a constant factor), which provides the best known result for a range of linear families such as low-degree multivariate polynomials. Next, we generalize this result to an active learning setting, where we get a large number of unlabeled points from an unknown distribution and choose a small subset to label. We design a learning algorithm optimizing both the number of unlabeled points and the number of labels.

Pseudorandomness. Next, we study hash families, which have simple forms in theory and efficient implementations in practice. The size of a hash family is crucial for many applications such as derandomization. In this thesis, we study the upper bound on the size of hash families needed to fulfill their applications in various problems. We first investigate the number of hash functions needed to constitute a randomness extractor, which is equivalent to the degree of the extractor. We present a general probabilistic method that reduces the degree of any given strong extractor to almost optimal, at least when outputting few bits. For various almost-universal hash families, including Toeplitz matrices, Linear Congruential Hash, and Multiplicative Universal Hash, this approach significantly improves the upper bound on the degree of strong extractors in these hash families. Then we consider explicit hash families and multiple-choice schemes in the classical problem of placing balls into bins. We construct explicit hash families of almost-polynomial size that derandomize two classical multiple-choice schemes, matching the maximum loads of a perfectly random hash function.
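On that final point, this small simulation shows the fully random baselines that the explicit hash families are matching: placing each ball in the lighter of two random bins drops the maximum load from roughly ln n / ln ln n to roughly ln ln n / ln 2 (the "power of two choices"). The thesis's derandomized schemes achieve the same loads with far fewer random bits; the simulation itself uses full randomness.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                     # n balls into n bins

# One choice: each ball picks a single uniformly random bin.
one = np.bincount(rng.integers(0, n, size=n), minlength=n)

# Two choices: each ball inspects two random bins and joins the lighter one.
two = np.zeros(n, dtype=int)
for i, j in rng.integers(0, n, size=(n, 2)):
    two[i if two[i] <= two[j] else j] += 1

print("one-choice max load:", one.max())   # ~ ln n / ln ln n
print("two-choice max load:", two.max())   # ~ ln ln n / ln 2, much smaller
```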