Performance of Group Testing Algorithms With Near-Constant Tests-per-Item
We consider nonadaptive group testing with N items, of which K = Θ(N^θ) are defective. We study a test design in which each item appears in nearly the same number of tests. For each item, we independently pick L tests uniformly at random with replacement, and place the item in those tests. We analyse the performance of these designs with simple and practical decoding algorithms in a range of sparsity regimes, and show that the performance is consistently improved in comparison with standard Bernoulli designs. We show that our new design requires roughly 23% fewer tests than a Bernoulli design when paired with the simple decoding algorithms known as COMP and DD. This gives the best known nonadaptive group testing performance for θ > 0.43, and the best proven performance with a practical decoding algorithm for all θ ∈ (0, 1). We also give a converse result showing that the DD algorithm is optimal with respect to our randomised design when θ > 1/2. We complement our theoretical results with simulations that show a notable improvement over Bernoulli designs in both sparse and dense regimes.
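The design and the COMP and DD decoders referred to above are simple enough to sketch. Below is a minimal Python illustration, not the authors' implementation; the parameter values in the toy run at the end are assumptions chosen only for demonstration.

import numpy as np

rng = np.random.default_rng(0)

def near_constant_design(n_items, n_tests, L):
    # Each item is placed in L tests drawn uniformly at random WITH
    # replacement, so an item may land in slightly fewer than L
    # distinct tests (hence "near-constant" tests-per-item).
    A = np.zeros((n_tests, n_items), dtype=bool)
    for item in range(n_items):
        A[rng.integers(0, n_tests, size=L), item] = True
    return A

def comp_decode(A, outcomes):
    # COMP: any item appearing in a negative test is non-defective;
    # every remaining item is declared defective.
    in_negative = A[~outcomes].any(axis=0)
    return ~in_negative

def dd_decode(A, outcomes):
    # DD: restrict to COMP's "possible defectives", then declare
    # defective any item that is the sole possible defective in
    # some positive test.
    pd = comp_decode(A, outcomes)
    estimate = np.zeros_like(pd)
    for t in np.flatnonzero(outcomes):
        members = np.flatnonzero(A[t] & pd)
        if len(members) == 1:
            estimate[members[0]] = True
    return estimate

# Toy run with assumed sizes: N = 500 items, K = 5 defectives,
# T = 150 tests, L = 8 tests per item.
N, T, K, L = 500, 150, 5, 8
A = near_constant_design(N, T, L)
truth = np.zeros(N, dtype=bool)
truth[rng.choice(N, size=K, replace=False)] = True
outcomes = (A.astype(int) @ truth.astype(int)) > 0  # positive iff a defective is pooled
print("COMP errors:", int((comp_decode(A, outcomes) != truth).sum()))
print("DD errors:  ", int((dd_decode(A, outcomes) != truth).sum()))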
Nearly Optimal Sparse Group Testing
Group testing is the process of pooling arbitrary subsets from a set of n
items so as to identify, with a minimal number of tests, a "small" subset of
d defective items. In "classical" non-adaptive group testing, it is known
that when d is substantially smaller than n, Θ(d log n) tests are
both information-theoretically necessary and sufficient to guarantee recovery
with high probability. Group testing schemes in the literature meeting this
bound require most items to be tested Ω(log n) times, and most tests
to incorporate Ω(n/d) items.
Motivated by physical considerations, we study group testing models in which
the testing procedure is constrained to be "sparse". Specifically, we consider
(separately) scenarios in which (a) items are finitely divisible and hence may
participate in at most γ = o(log n) tests; or (b) tests are
size-constrained to pool no more than ρ = o(n/d) items per test. For both
scenarios we provide information-theoretic lower bounds on the number of tests
required to guarantee high probability recovery. In both scenarios we provide
both randomized constructions (under both ε-error and zero-error
reconstruction guarantees) and explicit constructions of designs with
computationally efficient reconstruction algorithms that require a number of
tests that is optimal up to constant or small polynomial factors in some
regimes of n, d, γ, and ρ. The randomized design/reconstruction
algorithm in the ρ-sized test scenario is universal -- independent of the
value of d, as long as ρ = o(n/d). We also investigate the effect of
unreliability/noise in test outcomes. For the full abstract, please see the
full text PDF.
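As a small illustration of scenario (b), here is a hedged sketch of a randomized design in which every test pools at most ρ items. It is not the paper's explicit construction; the pool size and other parameters are assumptions for demonstration.

import numpy as np

rng = np.random.default_rng(1)

def size_constrained_design(n_items, n_tests, rho):
    # Each test pools exactly rho distinct items chosen uniformly at
    # random, so the size constraint of scenario (b) holds by construction.
    A = np.zeros((n_tests, n_items), dtype=bool)
    for t in range(n_tests):
        A[t, rng.choice(n_items, size=rho, replace=False)] = True
    return A

A = size_constrained_design(n_items=200, n_tests=400, rho=10)
assert int(A.sum(axis=1).max()) <= 10  # no test exceeds the pool-size budget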
Group Testing with Runlength Constraints for Topological Molecular Storage
Motivated by applications in topological DNA-based data storage, we introduce
and study a novel setting of Non-Adaptive Group Testing (NAGT) with runlength
constraints on the columns of the test matrix, in the sense that any two 1's
must be separated by a run of at least d 0's. We describe and analyze a
probabilistic construction of a runlength-constrained scheme in the zero-error
and vanishing error settings, and show that the number of tests required by
this construction is optimal up to logarithmic factors in the runlength
constraint d and the number of defectives k in both cases. Surprisingly, our
results show that runlength-constrained NAGT is not more demanding than
unconstrained NAGT when d = O(k), and that for almost all choices of d and k it
is not more demanding than NAGT with a column Hamming weight constraint only.
Towards obtaining runlength-constrained Quantitative NAGT (QNAGT) schemes with
good parameters, we also provide lower bounds for this setting and a nearly
optimal probabilistic construction of a QNAGT scheme with a column Hamming
weight constraint.
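To make the runlength constraint concrete, the following sketch generates and checks a single test-matrix column in which any two 1's are separated by at least d 0's. It is an illustration only, not the probabilistic construction analyzed in the paper; the column weight and sizes are assumed values.

import numpy as np

rng = np.random.default_rng(2)

def satisfies_runlength(col, d):
    # Any two 1's must be separated by at least d 0's, i.e. the gap
    # between consecutive 1-positions must be at least d + 1.
    ones = np.flatnonzero(col)
    return bool(np.all(np.diff(ones) >= d + 1))

def random_runlength_column(n_tests, weight, d):
    # Greedily place `weight` 1's at random positions, skipping any
    # position within distance d of an already-placed 1.
    col = np.zeros(n_tests, dtype=bool)
    placed = 0
    for pos in rng.permutation(n_tests):
        if not col[max(0, pos - d): pos + d + 1].any():
            col[pos] = True
            placed += 1
            if placed == weight:
                break
    return col

col = random_runlength_column(n_tests=60, weight=5, d=3)
assert satisfies_runlength(col, d=3)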