8,178 research outputs found
Nearly Optimal Sparse Group Testing
Group testing is the process of pooling arbitrary subsets from a set of n items so as to identify, with a minimal number of tests, a "small" subset of d defective items. In "classical" non-adaptive group testing, it is known that when d is substantially smaller than n, Θ(d log(n)) tests are both information-theoretically necessary and sufficient to guarantee recovery with high probability. Group testing schemes in the literature meeting this bound require most items to be tested Ω(log(n)) times, and most tests to incorporate Ω(n/d) items.
Motivated by physical considerations, we study group testing models in which the testing procedure is constrained to be "sparse". Specifically, we consider (separately) scenarios in which (a) items are finitely divisible and hence may participate in at most γ tests; or (b) tests are size-constrained to pool no more than ρ items per test. For both scenarios we provide information-theoretic lower bounds on the number of tests required to guarantee high-probability recovery. In both scenarios we provide both randomized constructions (under both ε-error and zero-error reconstruction guarantees) and explicit constructions of designs with computationally efficient reconstruction algorithms that require a number of tests that is optimal up to constant or small polynomial factors in some regimes of n, d, γ and ρ. The randomized design/reconstruction algorithm in the ρ-sized-test scenario is universal -- independent of the value of d, provided d is not too large. We also investigate the effect of unreliability/noise in test outcomes. For the full abstract, please see the full text PDF.
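The classical non-adaptive setting described above can be illustrated with a minimal simulation (not this paper's construction): a random Bernoulli pooling design decoded with the standard COMP rule, which clears every item that appears in a negative test and declares everything else defective. The function name and parameters here are illustrative choices, not from the paper.

```python
import random

def comp_group_testing(n, defectives, num_tests, seed=0):
    """Non-adaptive group testing: random Bernoulli pools decoded with
    COMP (clear every item that appears in some negative test)."""
    rng = random.Random(seed)
    p = 1.0 / max(len(defectives), 1)   # per-item inclusion probability ~1/d
    pools = [{i for i in range(n) if rng.random() < p} for _ in range(num_tests)]
    outcomes = [bool(pool & defectives) for pool in pools]
    cleared = set()
    for pool, positive in zip(pools, outcomes):
        if not positive:
            cleared |= pool             # items in a negative pool are non-defective
    return set(range(n)) - cleared      # superset of the true defectives

n, defectives = 1000, {3, 141, 592}
estimate = comp_group_testing(n, defectives, num_tests=200)
print(defectives <= estimate)           # COMP never misses a true defective
```

Note that COMP has no false negatives by construction; with Θ(d log n) tests the number of false positives also vanishes with high probability.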
On Detecting Some Defective Items in Group Testing
Group testing is an approach aimed at identifying up to d defective items among a total of n elements. This is accomplished by examining subsets of items to determine whether at least one defective item is present. In our study, we focus on the problem of identifying a subset of ℓ ≤ d defective items. We develop upper and lower bounds on the number of tests required to detect ℓ defective items in both the adaptive and non-adaptive settings, considering both scenarios where no prior knowledge of d is available and situations where an estimate of d, or at least some non-trivial upper bound on d, is available.
When no prior knowledge of d is available, we prove a lower bound on the number of tests in the randomized non-adaptive setting, along with an upper bound for the same setting. Furthermore, we demonstrate a lower bound on the number of tests that any non-adaptive deterministic algorithm must ask, signifying a fundamental limitation in this scenario. For adaptive algorithms, we establish tight bounds in both the deterministic and the randomized settings.
When d, or at least some non-trivial estimate of d, is known, we prove tight bounds for both the deterministic and the randomized non-adaptive settings. In the adaptive case, we present an upper bound for the deterministic setting together with a corresponding lower bound. Additionally, we establish a tight bound for the randomized adaptive setting.
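To make the adaptive/non-adaptive distinction concrete, here is a generic binary-splitting sketch (a standard textbook technique, not this paper's algorithm) that adaptively isolates one defective item using one test per halving, i.e. O(log n) tests, assuming the full set tests positive:

```python
def find_one_defective(items, is_positive):
    """Adaptive binary splitting: test the first half of the current
    candidate list and recurse into a half known to hold a defective."""
    assert is_positive(items), "the full set must contain a defective"
    while len(items) > 1:
        half = items[:len(items) // 2]
        # A positive first half must contain a defective; otherwise the
        # defective that made the parent positive lies in the second half.
        items = half if is_positive(half) else items[len(items) // 2:]
    return items[0]

defectives = {7, 42}
tests = []
def is_positive(pool):
    tests.append(1)                     # count tests used
    return any(x in defectives for x in pool)

found = find_one_defective(list(range(100)), is_positive)
print(found, len(tests))                # finds item 7 after 8 tests
```

Non-adaptive schemes cannot branch like this; they must commit to all pools in advance, which is exactly why the deterministic non-adaptive setting carries a much larger test cost.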
Lower bounds for identifying subset members with subset queries
An instance of a group testing problem is a set of objects O and an unknown subset P of O. The task is to determine P by using queries of the type "does P intersect Q?", where Q is a subset of O. This problem occurs in areas such as fault detection, multiaccess communications, optimal search, blood testing and chromosome mapping. Consider the two-stage algorithm for solving a group testing problem: in the first stage a predetermined set of queries is asked in parallel, and in the second stage, P is determined by testing individual objects. Let n = |O|. Suppose that P is generated by independently adding each x ∈ O to P with probability p. Let q_1 (q_2) be the number of queries asked in the first (second) stage of this algorithm. We show a threshold for q_1 below which E(q_2) = n^{1-o(1)}, while there exist algorithms with q_1 above the threshold and E(q_2) = o(1). The proof involves a relaxation technique which can be used with arbitrary distributions. The best previously known bound is q_1 + E(q_2) = Ω(p log(n)). For general group testing algorithms, our results imply that if the average number of queries over the course of many independent experiments is small, then with high probability non-singleton subsets are queried. This settles a conjecture of Bill Bruno and David Torney and has important consequences for the use of group testing in screening DNA libraries and other applications where it is more cost-effective to use non-adaptive algorithms and/or too expensive to prepare a subset for its first test.
Comment: 9 pages
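The two-stage procedure analyzed above can be sketched as follows (an illustrative simulation; the pool-inclusion probability 0.2 is an ad-hoc choice, not taken from the paper's analysis). Stage one asks q_1 pooled queries in parallel and clears every object that lands in a negative pool; stage two tests the surviving candidates individually, so q_2 is simply the number of survivors.

```python
import random

def two_stage(n, p, q1, pool_p=0.2, seed=0):
    """Stage 1: q1 predetermined pooled queries asked in parallel.
    Stage 2: individually test each object not cleared in stage 1.
    Returns (recovered set, true set P, q2 = number of stage-2 tests)."""
    rng = random.Random(seed)
    P = {x for x in range(n) if rng.random() < p}        # random defective set
    cleared = set()
    for _ in range(q1):
        pool = {x for x in range(n) if rng.random() < pool_p}
        if not (pool & P):                               # negative query:
            cleared |= pool                              # its members are not in P
    candidates = set(range(n)) - cleared
    q2 = len(candidates)                                 # one individual test each
    recovered = candidates & P                           # stage-2 outcomes
    return recovered, P, q2

recovered, P, q2 = two_stage(n=500, p=0.01, q1=60)
print(recovered == P, q2)   # stage 2 always recovers P exactly
```

Since a member of P can never appear in a negative pool, stage two always recovers P exactly; the paper's lower bounds concern how small q_1 can be before E(q_2) blows up.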
GROTESQUE: Noisy Group Testing (Quick and Efficient)
Group-testing refers to the problem of identifying (with high probability) a (small) subset of D defectives from a (large) set of N items via a "small" number of "pooled" tests. For ease of presentation in this work we focus on the regime when D = O(N^{1-δ}) for some δ > 0. The tests may be noiseless or noisy, and the testing procedure may be adaptive (the pool defining a test may depend on the outcome of a previous test) or non-adaptive (each test is performed independently of the outcomes of other tests). A rich body of literature demonstrates that Θ(D log(N)) tests are information-theoretically necessary and sufficient for the group-testing problem, and provides algorithms that achieve this performance. However, it is only recently that reconstruction algorithms with computational complexity that is sub-linear in N have started being investigated (recent work by [GurI:04, IndN:10, NgoP:11] gave some of the first such algorithms). In the scenario with adaptive tests with noisy outcomes, we present the first scheme that is simultaneously order-optimal (up to small constant factors) in both the number of tests and the decoding complexity (O(D log(N)) in both performance metrics). The total number of stages of our adaptive algorithm is "small" (O(log(D))). Similarly, in the scenario with non-adaptive tests
with noisy outcomes, we present the first scheme that is simultaneously near-optimal in both the number of tests and the decoding complexity (via an algorithm that requires O(D log(D) log(N)) tests and has a decoding complexity of O(D(log(N) + log²(D)))). Finally, we present an adaptive algorithm that only requires 2 stages, and for which both the number of tests and the decoding complexity scale as O(D(log(N) + log²(D))). For all three settings the probability of error of our algorithms scales as O(1/poly(D)).
Comment: 26 pages, 5 figures
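The role of noisy outcomes can be illustrated with the standard repetition trick (a hedged sketch only; GROTESQUE itself uses more efficient coding-based machinery): repeat each noisy test and take a majority vote, driving the per-test error probability down exponentially in the number of repetitions.

```python
import random

def noisy_test(pool, defectives, flip_p, rng):
    """A pooled test whose binary outcome is flipped w.p. flip_p."""
    truth = bool(pool & defectives)
    return truth != (rng.random() < flip_p)

def majority_test(pool, defectives, flip_p, reps, rng):
    """Repeat the noisy test `reps` times and take a majority vote;
    by a Chernoff bound the error decays exponentially in reps."""
    votes = sum(noisy_test(pool, defectives, flip_p, rng) for _ in range(reps))
    return 2 * votes > reps

rng = random.Random(1)
defectives = {5}
negative_pool = {1, 2, 3}               # contains no defective item
wrong = sum(majority_test(negative_pool, defectives, flip_p=0.2, reps=15, rng=rng)
            for _ in range(200))
print(wrong)                            # only a small fraction of 200 calls err
```

Plain repetition multiplies the test count by the vote length, which is why schemes that amortize the noise-correction cost across tests are needed to stay near the O(D log(N)) budget.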