14 research outputs found
Non-adaptive probabilistic group testing with noisy measurements: Near-optimal bounds with efficient algorithms
We consider the problem of detecting a small subset of defective items from a
large set via non-adaptive "random pooling" group tests. We consider both the
case when the measurements are noiseless, and the case when the measurements
are noisy (the outcome of each group test may be independently faulty with
probability q). Order-optimal results for these scenarios are known in the
literature. We give information-theoretic lower bounds on the query complexity
of these problems, and provide corresponding computationally efficient
algorithms that match the lower bounds up to a constant factor. To the best of
our knowledge, this work is the first to explicitly estimate such a constant
that characterizes the gap between the upper and lower bounds for these
problems.
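The random-pooling model in this abstract is easy to simulate end to end. The sketch below pairs such a simulation with a thresholded COMP-style decoder; the decoder and all parameter choices are illustrative stand-ins, not the paper's near-optimal algorithms.

```python
import random

def simulate_noisy_group_testing(n, defective, num_tests, p, q, seed=0):
    """Non-adaptive random pooling: each of num_tests pools contains every
    item independently with probability p.  A pool's true outcome is
    positive iff it contains a defective item, and each reported outcome
    is flipped independently with probability q, as in the noise model
    described above."""
    rng = random.Random(seed)
    defective = set(defective)
    pools, outcomes = [], []
    for _ in range(num_tests):
        pool = {i for i in range(n) if rng.random() < p}
        truth = bool(pool & defective)
        pools.append(pool)
        outcomes.append(truth ^ (rng.random() < q))  # noisy readout
    return pools, outcomes

def ncomp_decode(n, pools, outcomes, threshold):
    """Thresholded COMP-style decoder: declare item i defective when the
    fraction of pools containing i that tested positive exceeds the
    threshold.  With q = 0 and a threshold near 1 this reduces to the
    classical COMP rule (an item is cleared by any negative test)."""
    declared = set()
    for i in range(n):
        hits = [out for pool, out in zip(pools, outcomes) if i in pool]
        if hits and sum(hits) / len(hits) > threshold:
            declared.add(i)
    return declared
```

Defective items see a positive fraction near 1 - q, while non-defective items see roughly the noise rate plus the chance of sharing a pool with a defective, so a mid-range threshold separates the two groups once enough tests are taken.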
Efficiently Decodable Non-Adaptive Threshold Group Testing
We consider non-adaptive threshold group testing for identification of up to
$d$ defective items in a set of $n$ items, where a test is positive if it
contains at least $u$ defective items, and negative otherwise.
The defective items can be identified either with probability at least
$1 - \epsilon$, for any $\epsilon > 0$, or with probability 1. In both regimes,
the number of tests and the decoding time significantly improve on the
best known results for decoding non-adaptive threshold group testing,
for probabilistic as well as deterministic decoding.
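The threshold test rule itself is a one-liner; a minimal sketch (function and argument names are illustrative):

```python
def threshold_test(pool, defectives, u):
    """Threshold group test: positive iff the pool contains at least u
    defective items.  Classical group testing is the special case u = 1."""
    return len(set(pool) & set(defectives)) >= u
```

For example, with defectives {2, 3, 9}, the pool {1, 2, 3} is positive for u = 2 but negative for u = 3.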
A framework for generalized group testing with inhibitors and its potential application in neuroscience
The main goal of group testing with inhibitors (GTI) is to efficiently
identify a small number of defective items and inhibitor items in a large set
of items. A test on a subset of items is positive if the subset satisfies some
specific properties. Inhibitor items cancel the effects of defective items,
which often makes the outcome of a test containing defective items negative.
Different GTI models can be formulated by considering how specific properties
have different cancellation effects. This work introduces generalized GTI
(GGTI), in which a new type of item is added: hybrid items. A hybrid item
plays the roles of both defective items and inhibitor items. Since the number
of instances of GGTI is large (more than 7 million), we introduce a framework
for classifying all types of items non-adaptively, i.e., all tests are designed
in advance. We then explain how GGTI can be used to classify neurons in
neuroscience. Finally, we show how to realize our proposed scheme in practice.
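To make the cancellation idea concrete, here is one of the many possible GGTI instances, under assumed cancellation rules (not the paper's canonical model): hybrids count on both sides, and any inhibitor-like item in the pool forces a negative outcome.

```python
def ggti_test(pool, defectives, inhibitors, hybrids):
    """One illustrative GGTI instance.  Hybrids act as both defective and
    inhibitor, so under these rules a hybrid self-cancels: a pool holding
    only a hybrid tests negative."""
    pool = set(pool)
    defective_like = len(pool & set(defectives)) + len(pool & set(hybrids))
    inhibitor_like = len(pool & set(inhibitors)) + len(pool & set(hybrids))
    # Positive iff something defective-like is present and nothing cancels it.
    return defective_like >= 1 and inhibitor_like == 0
```

Other instances arise by varying how many inhibitor-like items are needed to cancel, or how many defective-like items survive cancellation, which is what makes the space of models so large.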
Asymptotics of Fingerprinting and Group Testing: Tight Bounds from Channel Capacities
In this work we consider the large-coalition asymptotics of various
fingerprinting and group testing games, and derive explicit expressions for the
capacities for each of these models. We do this both for simple decoders (fast
but suboptimal) and for joint decoders (slow but optimal).
For fingerprinting, we show that if the pirate strategy is known, the
capacity often decreases linearly with the number of colluders, instead of
quadratically as in the uninformed fingerprinting game. For many attacks the
joint capacity is further shown to be strictly higher than the simple capacity.
For group testing, we improve upon known results about the joint capacities,
and derive new explicit asymptotics for the simple capacities. These show that
existing simple group testing algorithms are suboptimal, and that simple
decoders cannot asymptotically be as efficient as joint decoders. For the
traditional group testing model, we show that the gap between the simple and
joint capacities is a factor 1.44 for large numbers of defectives.
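The capacities in question are per-test information limits. As a back-of-the-envelope companion: a single noiseless pooled test carries at most one bit, attained when the pool design makes the outcome unbiased. A minimal sketch (the function and the design choice p = 1 - 2^(-1/d) are standard textbook facts, not taken from this abstract):

```python
import math

def pooled_test_entropy_bits(p, d):
    """Entropy (bits) of one noiseless pooled test when each item joins the
    pool independently with probability p and d items are defective, so the
    test is positive with probability 1 - (1 - p) ** d."""
    pos = 1 - (1 - p) ** d
    if pos in (0.0, 1.0):
        return 0.0  # a deterministic outcome conveys no information
    return -(pos * math.log2(pos) + (1 - pos) * math.log2(1 - pos))
```

Choosing p = 1 - 2 ** (-1 / d) makes a test positive with probability exactly 1/2, reaching the one-bit ceiling; capacity analyses then quantify how much of that bit simple versus joint decoders actually extract.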
Generalized Group Testing
In the problem of classical group testing one aims to identify a small subset
(of size $d$) of diseased individuals/defective items in a large population (of
size $n$). This process is based on a minimal number of suitably-designed group
tests on subsets of items, where the test outcome is positive iff the given
test contains at least one defective item. Motivated by physical
considerations, we consider a generalized setting that subsumes as special
cases a variety of noiseless and noisy group-testing models in the literature.
In our setting, the test outcome is positive with probability $f(x)$, where $x$
is the number of defectives tested in a pool, and $f$ is an arbitrary
monotonically increasing (stochastic) test function.
Our main contributions are as follows.
1. We present a non-adaptive scheme that identifies all defective items with
high probability. The number of tests it requires is governed by a suitably
defined "sensitivity parameter" of $f$, which may be substantially smaller for
many $f$ than its worst-case value.
2. We argue that any testing scheme (including adaptive schemes) needs a
number of tests governed by a suitably defined "concentration parameter" of
$f$ to ensure reliable recovery.
3. We prove that, for a variety of sparse-recovery group-testing models in the
literature, and for any other test function…
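The generalized (stochastic) test model described above can be written directly in code. A minimal sketch (names are illustrative), showing how the familiar models fall out as special cases of the test function f:

```python
import random

def generalized_test(pool, defectives, f, rng=random):
    """Generalized group test: the outcome is positive with probability
    f(x), where x is the number of defective items in the pool and f is a
    monotonically increasing (possibly stochastic) test function."""
    x = len(set(pool) & set(defectives))
    return rng.random() < f(x)
```

Classical testing is f(x) = 1 if x >= 1 else 0; symmetric noise with flip probability q is f(0) = q and f(x) = 1 - q for x >= 1; threshold testing is f(x) = 1 if x >= u else 0.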
Concomitant Group Testing
In this paper, we introduce a variation of the group testing problem
capturing the idea that a positive test requires a combination of multiple
``types'' of item. Specifically, we assume that there are multiple disjoint
\emph{semi-defective sets}, and a test is positive if and only if it contains
at least one item from each of these sets. The goal is to reliably identify all
of the semi-defective sets using as few tests as possible, and we refer to this
problem as \textit{Concomitant Group Testing} (ConcGT). We derive a variety of
algorithms for this task, focusing primarily on the case that there are two
semi-defective sets. Our algorithms are distinguished by (i) whether they are
deterministic (zero-error) or randomized (small-error), and (ii) whether they
are non-adaptive, fully adaptive, or have limited adaptivity (e.g., 2 or 3
stages). Both our deterministic adaptive algorithm and our randomized
algorithms (non-adaptive or limited adaptivity) are order-optimal in broad
scaling regimes of interest, and improve significantly over baseline results
that are based on solving a more general problem as an intermediate step (e.g.,
hypergraph learning).
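The ConcGT outcome rule itself is a one-liner; a minimal sketch (names illustrative), written for any number of semi-defective sets:

```python
def concgt_test(pool, semi_defective_sets):
    """Concomitant group test: positive iff the pool contains at least one
    item from every semi-defective set."""
    pool = set(pool)
    return all(pool & set(s) for s in semi_defective_sets)
```

With semi-defective sets {1, 2} and {5}, the pool {1, 5} is positive, while {1, 2} is negative because it misses the second set entirely; this "all types must co-occur" rule is what distinguishes ConcGT from classical group testing.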