Noise-Resilient Group Testing: Limitations and Constructions
We study combinatorial group testing schemes for learning d-sparse Boolean
vectors using highly unreliable disjunctive measurements. We consider an
adversarial noise model that only limits the number of false observations, and
show that any noise-resilient scheme in this model can only approximately
reconstruct the sparse vector. On the positive side, we turn this barrier to
our advantage and show that approximate reconstruction (within a satisfactory
degree of approximation) allows us to break the information-theoretic lower bound of Omega(d^2 log n / log d) that is known for exact reconstruction of d-sparse vectors of length n via non-adaptive measurements, by a multiplicative factor of roughly d.
Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with m = O(d log n) measurements, that allow efficient reconstruction of d-sparse vectors up to O(d) false positives even in the presence of a constant fraction of false positives and an O(1/d) fraction of false negatives within the measurement outcomes. We show that, information-theoretically, none of these parameters can be substantially improved without
dramatically affecting the others. Furthermore, we obtain several explicit
constructions, in particular one matching the randomized trade-off but using O(d^{1+o(1)} log n) measurements. We also obtain explicit constructions
that allow fast reconstruction in time poly(m), which would be sublinear in n for sufficiently sparse vectors. The main tool used in our construction is
the list-decoding view of randomness condensers and extractors.
Comment: Full version. A preliminary summary of this work appears (under the same title) in the proceedings of the 17th International Symposium on Fundamentals of Computation Theory (FCT 2009).
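The measurement model in this abstract is easy to simulate. The sketch below is illustrative only: the Bernoulli pool design, the noise injection, and the mismatch-counting decoder are my assumptions, not the paper's explicit construction. It draws random pools, flips a fraction of the disjunctive (OR) outcomes, and approximately recovers the sparse vector by discarding any item that appears in too many negative pools.

```python
import random

def noisy_group_testing(n=200, d=4, m=300, flip=0.02, seed=7):
    """Approximate recovery of a d-sparse vector of length n from m noisy
    disjunctive (OR) measurements; an illustrative sketch, not the paper's
    explicit construction."""
    rng = random.Random(seed)
    defectives = set(rng.sample(range(n), d))
    # Each pool contains each item independently with probability 1/d.
    pools = [{j for j in range(n) if rng.random() < 1.0 / d} for _ in range(m)]
    # Noiseless disjunctive outcomes: OR over the defectives in each pool.
    outcomes = [bool(pool & defectives) for pool in pools]
    # Noise: flip a fixed fraction of the measurement outcomes.
    for i in rng.sample(range(m), int(flip * m)):
        outcomes[i] = not outcomes[i]
    # Approximate decoding: keep item j unless it lies in many negative pools.
    threshold = 2 * flip * m
    estimate = {j for j in range(n)
                if sum(1 for pool, out in zip(pools, outcomes)
                       if not out and j in pool) <= threshold}
    return defectives, estimate
```

With these parameters the estimate contains every defective plus at most a few false positives, mirroring the approximate-reconstruction guarantee; under such noise, exact reconstruction is impossible, which is the barrier the abstract describes.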
Pooling designs with surprisingly high degree of error correction in a finite vector space
Pooling designs are standard experimental tools in many biotechnical
applications. It is well-known that all famous pooling designs are constructed
from mathematical structures by the "containment matrix" method. In particular,
Macula's designs (resp. Ngo and Du's designs) are constructed by the
containment relation of subsets (resp. subspaces) in a finite set (resp. vector
space). Recently, we generalized Macula's designs and obtained a family of pooling designs with a higher degree of error correction, using subsets of a finite set. In this paper, as a generalization of Ngo and Du's designs, we
study the corresponding problems in a finite vector space and obtain a family
of pooling designs with a surprisingly high degree of error correction. Our designs and Ngo and Du's designs have the same numbers of items and pools, but the error-tolerance of our designs is much better than that of Ngo and Du's designs, which was determined by D'yachkov et al. \cite{DF}, when the dimension of the space is large enough.
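Macula's containment construction referenced above is short to state in code. The sketch below uses small parameters chosen purely for illustration: it builds the containment matrix whose rows are the d-subsets and whose columns are the k-subsets of an n-set (entry 1 iff the row subset is contained in the column subset), and brute-force verifies the d-disjunct property that makes such matrices error-tolerant pooling designs.

```python
from itertools import combinations

def macula_matrix(n, d, k):
    """Macula-style containment design: rows are d-subsets, columns are
    k-subsets of {0, ..., n-1}; entry is 1 iff row-subset <= column-subset."""
    rows = list(combinations(range(n), d))
    cols = list(combinations(range(n), k))
    matrix = [[int(set(r) <= set(c)) for c in cols] for r in rows]
    return matrix, rows, cols

def is_d_disjunct(M, d):
    """Brute-force check that no column is covered by the union of any d
    other columns (the defining property of a d-disjunct matrix)."""
    ncols = len(M[0])
    for c0 in range(ncols):
        for others in combinations([c for c in range(ncols) if c != c0], d):
            # Column c0 is covered if every row supporting it also supports
            # at least one of the other chosen columns.
            if all(any(row[c] for c in others) for row in M if row[c0]):
                return False
    return True
```

For n = 6, d = 2, k = 3 this yields a 15 x 20 matrix, and the exhaustive check confirms 2-disjunctness; the designs in this paper replace subsets by subspaces of a finite vector space but follow the same containment-matrix recipe.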
Lectures on Designing Screening Experiments
Designing Screening Experiments (DSE) is a class of information-theoretic
models for multiple-access channels (MAC). We discuss the combinatorial model
of DSE called the disjunct channel model. This model is the most important for applications and is closely connected with the superimposed code concept. We give
a detailed survey of lower and upper bounds on the rate of superimposed codes.
The best known constructions of superimposed codes are considered in the paper. We
also discuss the development of these codes (non-adaptive pooling designs)
intended for the clone-library screening problem. We obtain lower and upper
bounds on the rate of binary codes for the combinatorial model of DSE called an
adder channel model. We also consider the concept of universal decoding for the
probabilistic DSE model called a symmetric model of DSE.
Comment: 66 pages.
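The disjunct and adder channel models surveyed above differ only in how a pool's outcome is formed, which a few lines of code make concrete. The sketch below is an assumption-laden illustration (a random design with parameters I chose, not one of the surveyed constructions): it compares the two outcome types and applies the standard naive "cover" decoder for the disjunct channel, which keeps an item only if every pool containing it tests positive.

```python
import random

def cover_decode(pools, outcomes, n):
    """Naive decoder for the disjunct (OR) channel: an item survives only if
    every pool containing it tested positive. It never drops a true
    defective, and on a d-disjunct design it is exact."""
    return {j for j in range(n)
            if all(out for pool, out in zip(pools, outcomes) if j in pool)}

def demo(n=100, d=3, m=200, seed=5):
    rng = random.Random(seed)
    defectives = set(rng.sample(range(n), d))
    pools = [{j for j in range(n) if rng.random() < 1.0 / d} for _ in range(m)]
    disjunct = [bool(pool & defectives) for pool in pools]  # OR channel
    adder = [len(pool & defectives) for pool in pools]      # adder channel
    # The adder outcome refines the disjunct one: it reports multiplicities.
    assert all((a > 0) == o for a, o in zip(adder, disjunct))
    return defectives, cover_decode(pools, disjunct, n), adder
```

With these parameters the random design is dense enough that the cover decoder typically recovers the defective set exactly; the adder outcomes carry strictly more information per test, which is one way to see why the achievable rates for the two channel models differ.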
Applications of Derandomization Theory in Coding
Randomized techniques play a fundamental role in theoretical computer science
and discrete mathematics, in particular for the design of efficient algorithms
and construction of combinatorial objects. The basic goal in derandomization
theory is to eliminate or reduce the need for randomness in such randomized
constructions. In this thesis, we explore some applications of the fundamental
notions in derandomization theory to problems outside the core of theoretical
computer science, and in particular, certain problems related to coding theory.
First, we consider the wiretap channel problem which involves a communication
system in which an intruder can eavesdrop on a limited portion of the
transmissions, and construct efficient and information-theoretically optimal
communication protocols for this model. Then we consider the combinatorial
group testing problem. In this classical problem, one aims to determine a set
of defective items within a large population by asking a number of queries,
where each query reveals whether a defective item is present within a specified
group of items. We use randomness condensers to explicitly construct optimal,
or nearly optimal, group testing schemes for a setting where the query outcomes
can be highly unreliable, as well as the threshold model where a query returns
positive if the number of defectives passes a certain threshold. Finally, we
design ensembles of error-correcting codes that achieve the
information-theoretic capacity of a large class of communication channels, and
then use the obtained ensembles for the construction of explicit capacity-achieving
codes.
[This is a shortened version of the actual abstract in the thesis.]
Comment: EPFL PhD Thesis.
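The threshold variant mentioned in the thesis abstract changes only the pool outcome rule, which a tiny sketch makes explicit. In the code below (parameter names are mine, not the thesis's), a pool reads positive when it contains at least u defectives; u = 1 recovers classical disjunctive testing, and raising u can only turn positive outcomes negative.

```python
def threshold_outcomes(pools, defectives, u):
    """Threshold group testing: a pool is positive iff it contains at least
    u defective items; u = 1 is the classical disjunctive (OR) model."""
    return [int(len(pool & defectives) >= u) for pool in pools]

# Toy instance: four pools over items {0, ..., 5}, defectives {1, 2}.
pools = [{0, 1, 2}, {1, 3}, {2, 4}, {3, 4, 5}]
defectives = {1, 2}
classical = threshold_outcomes(pools, defectives, 1)  # == [1, 1, 1, 0]
strict = threshold_outcomes(pools, defectives, 2)     # == [1, 0, 0, 0]
```

Raising the threshold silences the pools containing a single defective, so sets that never meet u defectives in one pool become indistinguishable; this is why the threshold model needs designs beyond classical disjunct matrices.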
Poisson Group Testing: A Probabilistic Model for Boolean Compressed Sensing
We introduce a novel probabilistic group testing framework, termed Poisson
group testing, in which the number of defectives follows a right-truncated
Poisson distribution. The Poisson model has a number of new applications,
including dynamic testing with diminishing relative rates of defectives. We
consider both nonadaptive and semi-adaptive identification methods. For
nonadaptive methods, we derive a lower bound on the number of tests required to
identify the defectives with a probability of error that asymptotically
converges to zero; in addition, we propose test matrix constructions for which
the number of tests closely matches the lower bound. For semi-adaptive methods,
we describe a lower bound on the expected number of tests required to identify
the defectives with zero error probability. In addition, we propose a
stage-wise reconstruction algorithm for which the expected number of tests is
only a constant factor away from the lower bound. The methods rely only on an
estimate of the average number of defectives, rather than on the individual
probabilities of subjects being defective.
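The defining ingredient of the Poisson model, a right-truncated Poisson prior on the number of defectives, can be sampled with a few lines of inverse-transform code. The sketch below is illustrative (parameter names and values are my assumptions, and the uniform choice of defective identities is mine): it truncates Poisson(lam) to the population size n and then draws a defective set of the sampled size.

```python
import math
import random

def truncated_poisson(lam, n, rng):
    """Sample from Poisson(lam) right-truncated at n via inverse transform
    over the finite support {0, ..., n}; the common factor e^{-lam} cancels
    in the normalization, so it is omitted from the weights."""
    weights = [lam ** k / math.factorial(k) for k in range(n + 1)]
    u = rng.random() * sum(weights)
    acc = 0.0
    for k, w in enumerate(weights):
        acc += w
        if u <= acc:
            return k
    return n  # numerical safety fallback

def draw_defectives(lam, n, rng):
    """Draw a defective set: its size follows the truncated Poisson prior,
    and its members are chosen uniformly (an illustrative assumption)."""
    return set(rng.sample(range(n), truncated_poisson(lam, n, rng)))
```

Note that only lam, the average number of defectives, enters the sampler: a test design built against this prior uses exactly the information the abstract says the methods require, an estimate of the average rather than per-subject defect probabilities.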