Group testing is the process of pooling arbitrary subsets from a set of n
items so as to identify, with a minimal number of tests, a "small" subset of
d defective items. In "classical" non-adaptive group testing, it is known
that when d is substantially smaller than n, Θ(d log(n)) tests are
both information-theoretically necessary and sufficient to guarantee recovery
with high probability. Group testing schemes in the literature meeting this
bound require most items to be tested Ω(log(n)) times, and most tests
to incorporate Ω(n/d) items.
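For intuition, here is a minimal sketch (illustrative, not one of the paper's constructions) of classical non-adaptive group testing: a Bernoulli random design, in which each item joins each test independently with probability roughly 1/d, decoded with the standard COMP rule. The test budget of order d log(n) and its constant are arbitrary illustrative choices.

    # Minimal sketch (illustrative, not the paper's construction):
    # Bernoulli random pooling design with COMP decoding.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 1000, 10                      # items, defectives
    t = int(np.ceil(2 * d * np.log(n)))  # O(d log n) tests; constant is arbitrary

    # Each item joins each test independently with probability ~1/d, so a
    # typical test pools ~n/d items and an item appears in ~t/d tests.
    A = rng.random((t, n)) < 1.0 / d

    defectives = rng.choice(n, size=d, replace=False)
    x = np.zeros(n, dtype=bool)
    x[defectives] = True
    y = (A & x).any(axis=1)              # a test is positive iff it pools a defective

    # COMP: any item appearing in some negative test is declared non-defective.
    in_negative_test = A[~y].any(axis=0)
    estimate = ~in_negative_test         # superset of the true defective set
    print(sorted(np.flatnonzero(estimate)))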
Motivated by physical considerations, we study group testing models in which
the testing procedure is constrained to be "sparse". Specifically, we consider
(separately) scenarios in which (a) items are finitely divisible and hence may
participate in at most γ∈o(log(n)) tests; or (b) tests are
size-constrained to pool no more than ρ∈o(n/d) items per test. For both
scenarios we provide information-theoretic lower bounds on the number of tests
required to guarantee high probability recovery. In both scenarios we provide
both randomized constructions (under both ϵ-error and zero-error
reconstruction guarantees) and explicit constructions of designs with
computationally efficient reconstruction algorithms that require a number of
tests that is optimal up to constant or small polynomial factors in some
regimes of n, d, γ, and ρ. The randomized design/reconstruction
algorithm in the ρ-sized test scenario is universal -- independent of the
value of d, as long as ρ∈o(n/d). We also investigate the effect of
unreliability/noise in test outcomes.
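As a hypothetical illustration of scenario (a), the sketch below enforces the divisibility constraint directly: every item participates in exactly γ tests chosen uniformly at random, and decoding reuses the same COMP rule. The test budget here is an illustrative guess, not a bound from the paper.

    # Hypothetical gamma-divisible design (illustrative only): each item is
    # split across exactly gamma randomly chosen tests, decoded with COMP.
    import numpy as np

    rng = np.random.default_rng(1)
    n, d, gamma = 1000, 10, 4            # gamma in o(log n): few tests per item
    t = int(np.ceil(gamma * d * (n / d) ** (1.0 / gamma)))  # illustrative budget

    A = np.zeros((t, n), dtype=bool)
    for item in range(n):
        tests = rng.choice(t, size=gamma, replace=False)
        A[tests, item] = True            # item participates in exactly gamma tests

    defectives = rng.choice(n, size=d, replace=False)
    x = np.zeros(n, dtype=bool)
    x[defectives] = True
    y = (A & x).any(axis=1)              # noiseless OR of the pooled items

    # COMP again: items touching any negative test are cleared.
    estimate = ~A[~y].any(axis=0)
    print(sorted(np.flatnonzero(estimate)))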