Efficient Probabilistic Group Testing Based on Traitor Tracing
Inspired by recent results from collusion-resistant traitor tracing, we
provide a framework for constructing efficient probabilistic group testing
schemes. In the traditional group testing model, our scheme asymptotically
requires T ~ 2 K ln N tests to find (with high probability) the correct set of
K defectives out of N items. The framework is also applied to several noisy
group testing and threshold group testing models, often leading to improvements
over previously known results, but we emphasize that this framework can be
applied to other variants of the classical model as well, both in adaptive and
in non-adaptive settings.
Comment: 8 pages, 3 figures, 1 table.
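The T ~ 2 K ln N scaling can be made concrete with a small simulation. The sketch below uses a generic random pooling design and a simple elimination (COMP-style) decoder, which are standard baseline choices for illustration and not the traitor-tracing construction of the paper; the function name and parameters are hypothetical.

```python
import math
import random

def simulate_group_testing(N=500, K=5, seed=0):
    """Random non-adaptive design with T ~ 2 K ln N pooled tests, decoded by
    eliminating every item that appears in a negative pool (COMP)."""
    rng = random.Random(seed)
    defectives = set(rng.sample(range(N), K))
    T = math.ceil(2 * K * math.log(N))     # the T ~ 2 K ln N scaling
    p = 1.0 / K                            # per-test inclusion probability
    candidates = set(range(N))
    for _ in range(T):
        pool = {i for i in range(N) if rng.random() < p}
        if not (pool & defectives):        # negative test: whole pool is clean
            candidates -= pool
    return defectives, candidates, T

defectives, candidates, T = simulate_group_testing()
print(T, defectives <= candidates)
```

Since a negative pool can never contain a defective, the true defective set always survives elimination; with T on the order of 2 K ln N tests, the surviving candidate set is the defective set with high probability.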
Non-adaptive Group Testing on Graphs
Grebinski and Kucherov (1998) and Alon et al. (2004-2005) studied the problem of learning a hidden graph in several special cases, such as Hamiltonian cycles, cliques, stars, and matchings. This problem is motivated by applications in chemical reactions, molecular biology, and genome sequencing.
In this paper, we present a generalization of this problem. Precisely, we
consider a graph G and a subgraph H of G and we assume that G contains exactly
one defective subgraph isomorphic to H. The goal is to find the defective
subgraph by testing whether an induced subgraph contains an edge of the
defective subgraph, with the minimum number of tests. Using the symmetric and high-probability variations of the Lovász Local Lemma, we present an upper bound on the number of tests needed to find the defective subgraph.
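The query model above can be illustrated with a toy sketch: a test on a vertex set S is positive iff the induced subgraph on S contains an edge of the hidden defective subgraph (here, a 2-edge matching), and any edge fully contained in a negative test is ruled out. This naive random elimination is for intuition only, not the Lovász-Local-Lemma-based scheme; all names and parameters are illustrative.

```python
import random
from itertools import combinations

def find_defective_edges(n=12, defective_edges=((0, 1), (2, 3)), tests=200, seed=1):
    """A hidden 2-edge matching sits inside the complete graph K_n.
    A test on a vertex set S is positive iff the induced subgraph on S
    contains a defective edge; edges contained in a negative test are
    ruled out of the candidate set."""
    rng = random.Random(seed)
    hidden = {frozenset(e) for e in defective_edges}
    candidates = {frozenset(e) for e in combinations(range(n), 2)}
    for _ in range(tests):
        S = {v for v in range(n) if rng.random() < 0.5}
        positive = any(e <= S for e in hidden)
        if not positive:
            candidates = {e for e in candidates if not e <= S}
    return hidden, candidates

hidden, candidates = find_defective_edges()
print(hidden <= candidates)   # prints True: defective edges are never ruled out
```

Any test containing both endpoints of a defective edge is positive, so the hidden edges always survive; with enough random tests, almost every other edge is eliminated.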
On Finding a Subset of Healthy Individuals from a Large Population
In this paper, we derive mutual information based upper and lower bounds on
the number of nonadaptive group tests required to identify a given number of
"non defective" items from a large population containing a small number of
"defective" items. We show that a reduction in the number of tests is
achievable compared to the approach of first identifying all the defective
items and then picking the required number of non-defective items from the
complement set. In the asymptotic regime with the population size N → ∞, to identify L non-defective items out of a population containing K defective items, when the tests are reliable, our results show that T ~ C_s f(K, L) measurements are sufficient, where C_s is a constant independent of K and L, and f is a bounded function of K and L. Further, in the nonadaptive group
testing setup, we obtain rigorous upper and lower bounds on the number of tests
under both dilution and additive noise models. Our results are derived using a
general sparse signal model, by virtue of which, they are also applicable to
other important sparse signal based applications such as compressive sensing.
Comment: 32 pages, 2 figures, 3 tables, revised version of a paper submitted to IEEE Trans. Inf. Theory.
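The non-defective identification task has a simple noiseless illustration: every member of a pool that tests negative is provably non-defective, so pools can be drawn until L items are certified. This greedy, sequential sketch is for intuition only, not the paper's non-adaptive information-theoretic scheme; names and pool sizes are arbitrary choices.

```python
import random

def certify_non_defectives(N=1000, K=10, L=50, seed=2):
    """Certify L non-defective items out of N: under reliable tests, a
    negative pool certifies all of its members at once, so far fewer tests
    are needed than for full defective-set recovery."""
    rng = random.Random(seed)
    defectives = set(rng.sample(range(N), K))
    certified, tests = set(), 0
    while len(certified) < L:
        pool = set(rng.sample(range(N), N // K))   # pools of ~N/K items
        tests += 1
        if not (pool & defectives):                # negative => all members clean
            certified |= pool
    return certified, defectives, tests

certified, defectives, tests = certify_non_defectives()
print(len(certified), tests)
```

A single negative pool of size N/K certifies many items in one shot, which is the intuition behind the test-count savings over first recovering the whole defective set.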
Computationally Tractable Algorithms for Finding a Subset of Non-defective Items from a Large Population
In the classical non-adaptive group testing setup, pools of items are tested
together, and the main goal of a recovery algorithm is to identify the
"complete defective set" given the outcomes of different group tests. In
contrast, the main goal of a "non-defective subset recovery" algorithm is to
identify a "subset" of non-defective items given the test outcomes. In this
paper, we present a suite of computationally efficient and analytically
tractable non-defective subset recovery algorithms. By analyzing the
probability of error of the algorithms, we obtain bounds on the number of tests
required for non-defective subset recovery with arbitrarily small probability
of error. Our analysis accounts for the impact of both the additive noise
(false positives) and dilution noise (false negatives). By comparing with the
information theoretic lower bounds, we show that the upper bounds on the number
of tests are order-wise tight up to a log(K) factor, where K is the number
of defective items. We also provide simulation results that compare the
relative performance of the different algorithms and provide further insights
into their practical utility. The proposed algorithms significantly outperform
the straightforward approaches of testing items one-by-one, and of first
identifying the defective set and then choosing the non-defective items from
the complement set, in terms of the number of measurements required to ensure a
given success rate.
Comment: In this revision: Unified some proofs and reorganized the paper, corrected a small mistake in one of the proofs, added more references.
Optimal Nested Test Plan for Combinatorial Quantitative Group Testing
We consider the quantitative group testing problem where the objective is to
identify defective items in a given population based on results of tests
performed on subsets of the population. Under the quantitative group testing
model, the result of each test reveals the number of defective items in the
tested group. The minimum number of tests achievable by nested test plans was
established by Aigner and Schughart in 1985 within a minimax framework. The
optimal nested test plan offering this performance, however, was not obtained.
In this work, we establish the optimal nested test plan in closed form. This
optimal nested test plan is also order optimal among all test plans as the
population size approaches infinity. Using heavy-hitter detection as a case study, we show via simulation examples that the group testing approach offers orders-of-magnitude improvements over two prevailing sampling-based approaches in detection accuracy and counter consumption. Other applications include anomaly detection and wideband spectrum sensing in cognitive radio systems.
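The quantitative model can be illustrated with a simple divide-and-conquer sketch: each query reveals how many defectives a sub-pool contains, and halves with a zero count are discarded without further tests. This recursive halving is for intuition only and is not the closed-form optimal nested plan established in the paper; all names are illustrative.

```python
def find_defectives(items, defectives):
    """Recursive halving with quantitative tests: count() plays the role of
    a test that reveals HOW MANY defectives a pool contains."""
    tests = 0
    def count(pool):                     # the quantitative test oracle
        nonlocal tests
        tests += 1
        return len(set(pool) & defectives)
    def solve(seg, d):
        if d == 0:
            return set()                 # zero count: discard without testing
        if len(seg) == d:
            return set(seg)              # everything left is defective
        mid = len(seg) // 2
        d_left = count(seg[:mid])        # one counting test on the left half
        return solve(seg[:mid], d_left) | solve(seg[mid:], d - d_left)
    items = list(items)
    found = solve(items, count(items))
    return found, tests

found, tests = find_defectives(range(16), {3, 9, 12})
print(found, tests)   # finds {3, 9, 12} with 10 counting tests
```

Knowing the count on one half determines it on the other for free, which is why counting tests are markedly more informative than binary positive/negative outcomes.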
The Capacity of Adaptive Group Testing
We define capacity for group testing problems and deduce bounds for the
capacity of a variety of noisy models, based on the capacity of equivalent
noisy communication channels. For noiseless adaptive group testing we prove an
information-theoretic lower bound which tightens a bound of Chan et al. This
can be combined with a performance analysis of a version of Hwang's adaptive
group testing algorithm, in order to deduce the capacity of noiseless and
erasure group testing models.
Comment: 5 pages.
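For intuition, here is plain binary splitting, the textbook ancestor of Hwang's algorithm: repeatedly test the remaining population and, on a positive outcome, halve down to a single defective and remove it. This simplified sketch (all names illustrative) does not implement the exact variant analyzed in the paper.

```python
def binary_split_find(pool, is_positive):
    """Isolate one defective in a pool known to test positive by repeatedly
    testing the left half (classic binary splitting)."""
    tests = 0
    while len(pool) > 1:
        left = pool[:len(pool) // 2]
        tests += 1
        if is_positive(left):
            pool = left
        else:
            pool = pool[len(pool) // 2:]   # left clean => right half positive
    return pool[0], tests

def adaptive_group_test(N=256, defectives=(17, 90, 200)):
    """Test the remaining population; while it is positive, binary-split to
    isolate one defective and remove it."""
    defect = set(defectives)
    is_positive = lambda pool: bool(set(pool) & defect)
    remaining = list(range(N))
    found, tests = set(), 0
    while True:
        tests += 1                          # test the whole remaining pool
        if not is_positive(remaining):
            break                           # remaining items are all clean
        item, t = binary_split_find(remaining, is_positive)
        tests += t
        found.add(item)
        remaining.remove(item)
    return found, tests

found, tests = adaptive_group_test()
print(found, tests)
```

Each defective costs roughly log2(N) tests to isolate, so for K defectives the total is on the order of K log2(N/K), the scaling that underlies the adaptive capacity results.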
Asymptotics of Fingerprinting and Group Testing: Tight Bounds from Channel Capacities
In this work we consider the large-coalition asymptotics of various
fingerprinting and group testing games, and derive explicit expressions for the
capacities for each of these models. We do this both for simple decoders (fast
but suboptimal) and for joint decoders (slow but optimal).
For fingerprinting, we show that if the pirate strategy is known, the
capacity often decreases linearly with the number of colluders, instead of
quadratically as in the uninformed fingerprinting game. For many attacks the
joint capacity is further shown to be strictly higher than the simple capacity.
For group testing, we improve upon known results about the joint capacities,
and derive new explicit asymptotics for the simple capacities. These show that
existing simple group testing algorithms are suboptimal, and that simple
decoders cannot asymptotically be as efficient as joint decoders. For the
traditional group testing model, we show that the gap between the simple and
joint capacities is a factor 1.44 for large numbers of defectives.
Comment: 14 pages, 6 figures.
Boolean Compressed Sensing and Noisy Group Testing
The fundamental task of group testing is to recover a small distinguished
subset of items from a large population while efficiently reducing the total
number of tests (measurements). The key contribution of this paper is in
adopting a new information-theoretic perspective on group testing problems. We
formulate the group testing problem as a channel coding/decoding problem and
derive a single-letter characterization for the total number of tests used to
identify the defective set. Although the focus of this paper is primarily on
group testing, our main result is generally applicable to other compressive
sensing models.
The single-letter characterization is shown to be order-wise tight for many interesting noisy group testing scenarios. Specifically, we consider an additive Bernoulli(q) noise model where we show that, for N items and K defectives, the number of tests is O(K log N / (1-q)) for arbitrarily small average error probability and O(K^2 log N / (1-q)) for a worst-case error criterion. We also consider dilution effects whereby a defective item in a positive pool might get diluted with probability u and potentially missed. In this case, it is shown that the number of tests T is O(K log N / (1-u)^2) and O(K^2 log N / (1-u)^2) for the average and the worst-case error criteria, respectively. Furthermore, our bounds allow us to verify existing
known bounds for noiseless group testing including the deterministic noise-free
case and approximate reconstruction with bounded distortion. Our proof of
achievability is based on random coding and the analysis of a Maximum
Likelihood Detector, and our information theoretic lower bound is based on
Fano's inequality.
Comment: In this revision: reorganized the paper, added citations to related work, and fixed some bugs.
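The additive noise model is easy to simulate: a truly negative pool reads positive with probability q, while positive pools are unaffected. The sketch below pairs that channel with a simple COMP-style elimination decoder for illustration, not the maximum-likelihood detector the paper analyzes; parameter choices are arbitrary.

```python
import math
import random

def noisy_group_testing(N=300, K=4, q=0.1, seed=3):
    """Additive Bernoulli(q) noise on random pools: the decoder keeps every
    item that never appears in a negative-reading test."""
    rng = random.Random(seed)
    defectives = set(rng.sample(range(N), K))
    T = int(6 * K * math.log(N) / (1 - q))         # O(K log N / (1-q)) tests
    p = 1.0 / K                                    # inclusion probability
    negatives = [0] * N                            # negative-test appearances
    for _ in range(T):
        pool = [i for i in range(N) if rng.random() < p]
        truly_positive = any(i in defectives for i in pool)
        outcome = truly_positive or rng.random() < q   # noise flips negatives
        if not outcome:
            for i in pool:
                negatives[i] += 1
    estimate = {i for i in range(N) if negatives[i] == 0}
    return defectives, estimate

defectives, estimate = noisy_group_testing()
print(defectives <= estimate)   # prints True: defectives are never eliminated
```

Under purely additive noise, pools containing a defective always read positive, so defectives can never be eliminated; only the false positives injected into negative pools must be overcome, which is where the 1/(1-q) factor in the test count enters.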