14 research outputs found

    Non-adaptive probabilistic group testing with noisy measurements: Near-optimal bounds with efficient algorithms

    We consider the problem of detecting a small subset of defective items from a large set via non-adaptive "random pooling" group tests. We consider both the case when the measurements are noiseless and the case when they are noisy (the outcome of each group test may be independently faulty with probability $q$). Order-optimal results for these scenarios are known in the literature. We give information-theoretic lower bounds on the query complexity of these problems, and provide corresponding computationally efficient algorithms that match the lower bounds up to a constant factor. To the best of our knowledge, this work is the first to explicitly estimate such a constant characterizing the gap between the upper and lower bounds for these problems.
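    As a concrete illustration of the measurement model described above, the sketch below simulates noisy non-adaptive random-pooling tests. The Bernoulli pool design and all parameter names are assumptions for illustration, not the paper's construction.

    ```python
    import random

    def noisy_pool_tests(n, defectives, num_tests, pool_prob, q, rng):
        """Simulate non-adaptive "random pooling" group tests.

        Each of the n items joins each pool independently with probability
        pool_prob (an assumed Bernoulli design). A noiseless outcome is
        positive iff the pool contains a defective; each outcome is then
        flipped independently with probability q.
        """
        defective_set = set(defectives)
        pools, outcomes = [], []
        for _ in range(num_tests):
            pool = {i for i in range(n) if rng.random() < pool_prob}
            outcome = bool(pool & defective_set)   # noiseless OR outcome
            if rng.random() < q:                   # independent test noise
                outcome = not outcome
            pools.append(pool)
            outcomes.append(outcome)
        return pools, outcomes
    ```

    With $q = 0$ this reduces to the noiseless case, where every outcome is exactly the OR of the pool's defectiveness indicators.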

    Efficiently Decodable Non-Adaptive Threshold Group Testing

    We consider non-adaptive threshold group testing for identification of up to $d$ defective items in a set of $n$ items, where a test is positive if it contains at least $u$ defective items, for some threshold $2 \leq u \leq d$, and negative otherwise. The defective items can be identified using $t = O\left( \left(\frac{d}{u}\right)^u \left(\frac{d}{d-u}\right)^{d-u} \left(u \log{\frac{d}{u}} + \log{\frac{1}{\epsilon}} \right) \cdot d^2 \log{n} \right)$ tests with probability at least $1 - \epsilon$ for any $\epsilon > 0$, or $t = O\left( \left(\frac{d}{u}\right)^u \left(\frac{d}{d-u}\right)^{d-u} d^3 \log{n} \cdot \log{\frac{n}{d}} \right)$ tests with probability 1. The decoding time is $t \times \mathrm{poly}(d^2 \log{n})$. This result significantly improves the best known results for decoding non-adaptive threshold group testing: $O(n \log{n} + n \log{\frac{1}{\epsilon}})$ for probabilistic decoding, where $\epsilon > 0$, and $O(n^u \log{n})$ for deterministic decoding.
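    The threshold test rule defined above is simple to state in code; this minimal sketch (names are illustrative) returns the outcome of a single test:

    ```python
    def threshold_test(pool, defectives, u):
        """Threshold group test: positive iff the pool contains at least u
        defective items, negative otherwise (the model described above)."""
        return len(set(pool) & set(defectives)) >= u
    ```

    With $u = 1$ this degenerates to the classical OR test of standard group testing.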

    A framework for generalized group testing with inhibitors and its potential application in neuroscience

    The main goal of group testing with inhibitors (GTI) is to efficiently identify a small number of defective items and inhibitor items in a large set of items. A test on a subset of items is positive if the subset satisfies some specific properties. Inhibitor items cancel the effects of defective items, often making the outcome of a test containing defective items negative. Different GTI models can be formulated by considering how specific properties have different cancellation effects. This work introduces generalized GTI (GGTI), in which a new type of item is added: hybrid items. A hybrid item plays the roles of both a defective item and an inhibitor item. Since the number of instances of GGTI is large (more than 7 million), we introduce a framework for classifying all types of items non-adaptively, i.e., all tests are designed in advance. We then explain how GGTI can be used to classify neurons in neuroscience. Finally, we show how to realize our proposed scheme in practice.
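    To make the cancellation idea concrete, here is one common inhibitor rule as a sketch. This particular rule (positive iff at least one defective and no inhibitor) is an assumption chosen for illustration; the abstract above notes that many different GTI cancellation rules exist, and this is only one of them.

    ```python
    def gti_test(pool, defectives, inhibitors):
        """One common GTI rule (an illustrative assumption, not necessarily
        the paper's exact model): a test is positive iff the pool contains
        at least one defective item and no inhibitor item."""
        pool = set(pool)
        return bool(pool & set(defectives)) and not (pool & set(inhibitors))
    ```

    Under this rule a single inhibitor in the pool forces a negative outcome even when defectives are present, which is the cancellation effect described above.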

    Asymptotics of Fingerprinting and Group Testing: Tight Bounds from Channel Capacities

    In this work we consider the large-coalition asymptotics of various fingerprinting and group testing games, and derive explicit expressions for the capacities for each of these models. We do this both for simple decoders (fast but suboptimal) and for joint decoders (slow but optimal). For fingerprinting, we show that if the pirate strategy is known, the capacity often decreases linearly with the number of colluders, instead of quadratically as in the uninformed fingerprinting game. For many attacks the joint capacity is further shown to be strictly higher than the simple capacity. For group testing, we improve upon known results about the joint capacities, and derive new explicit asymptotics for the simple capacities. These show that existing simple group testing algorithms are suboptimal, and that simple decoders cannot asymptotically be as efficient as joint decoders. For the traditional group testing model, we show that the gap between the simple and joint capacities is a factor 1.44 for large numbers of defectives. (Comment: 14 pages, 6 figures)

    Generalized Group Testing

    In the problem of classical group testing one aims to identify a small subset (of size $d$) of diseased individuals/defective items in a large population (of size $n$). This process is based on a minimal number of suitably designed group tests on subsets of items, where the test outcome is positive iff the given test contains at least one defective item. Motivated by physical considerations, we consider a generalized setting that subsumes as special cases a variety of noiseless and noisy group-testing models in the literature: the test outcome is positive with probability $f(x)$, where $x$ is the number of defectives tested in a pool, and $f(\cdot)$ is an arbitrary monotonically increasing (stochastic) test function. Our main contributions are as follows. 1. We present a non-adaptive scheme that with probability $1-\varepsilon$ identifies all defective items. Our scheme requires at most ${\cal O}\left(H(f)\, d \log\left(\frac{n}{\varepsilon}\right)\right)$ tests, where $H(f)$ is a suitably defined "sensitivity parameter" of $f(\cdot)$, and is never larger than ${\cal O}\left(d^{1+o(1)}\right)$, but may be substantially smaller for many $f(\cdot)$. 2. We argue that any testing scheme (including adaptive schemes) needs at least $\Omega\left((1-\varepsilon)\, h(f)\, d \log\left(\frac{n}{d}\right)\right)$ tests to ensure reliable recovery. Here $h(f) \geq 1$ is a suitably defined "concentration parameter" of $f(\cdot)$. 3. We prove that $\frac{H(f)}{h(f)} \in \Theta(1)$ for a variety of sparse-recovery group-testing models in the literature, and $\frac{H(f)}{h(f)} \in {\cal O}\left(d^{1+o(1)}\right)$ for any other test function.
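    The generalized test model above (positive with probability $f(x)$, where $x$ counts defectives in the pool) can be sketched directly, along with a few of the special cases it subsumes. The specific example functions and their parameters are illustrative assumptions, not the paper's definitions.

    ```python
    import random

    def generalized_test(pool, defectives, f, rng):
        """Generalized group test: the outcome is positive with probability
        f(x), where x is the number of defectives in the pool."""
        x = len(set(pool) & set(defectives))
        return rng.random() < f(x)

    # Special cases recovered by choosing f (illustrative examples):
    def classical_or(x):           # noiseless classical group testing
        return 1.0 if x >= 1 else 0.0

    def noisy_or(x, q=0.1):        # classical test with symmetric noise q
        return 1.0 - q if x >= 1 else q

    def threshold_f(x, u=2):       # threshold group testing, threshold u
        return 1.0 if x >= u else 0.0
    ```

    Choosing a monotonically increasing $f$ interpolates between these models, which is what the sensitivity parameter $H(f)$ and concentration parameter $h(f)$ quantify.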

    Concomitant Group Testing

    In this paper, we introduce a variation of the group testing problem capturing the idea that a positive test requires a combination of multiple ``types'' of item. Specifically, we assume that there are multiple disjoint \emph{semi-defective sets}, and a test is positive if and only if it contains at least one item from each of these sets. The goal is to reliably identify all of the semi-defective sets using as few tests as possible, and we refer to this problem as \textit{Concomitant Group Testing} (ConcGT). We derive a variety of algorithms for this task, focusing primarily on the case that there are two semi-defective sets. Our algorithms are distinguished by (i) whether they are deterministic (zero-error) or randomized (small-error), and (ii) whether they are non-adaptive, fully adaptive, or have limited adaptivity (e.g., 2 or 3 stages). Both our deterministic adaptive algorithm and our randomized algorithms (non-adaptive or limited adaptivity) are order-optimal in broad scaling regimes of interest, and improve significantly over baseline results that are based on solving a more general problem as an intermediate step (e.g., hypergraph learning). (Comment: 15 pages, 3 figures, 1 table)
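    The ConcGT test rule stated above (positive iff the pool meets every semi-defective set) admits a one-line sketch; the function name is illustrative.

    ```python
    def concgt_test(pool, semi_defective_sets):
        """Concomitant group test: positive iff the pool contains at least
        one item from every semi-defective set (the rule defined above)."""
        pool = set(pool)
        return all(pool & set(s) for s in semi_defective_sets)
    ```

    With a single semi-defective set this reduces to the classical OR test; the difficulty the paper addresses comes from requiring all sets to be represented simultaneously.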