
    Nearly Optimal Sparse Group Testing

    Group testing is the process of pooling arbitrary subsets from a set of $n$ items so as to identify, with a minimal number of tests, a "small" subset of $d$ defective items. In "classical" non-adaptive group testing, it is known that when $d$ is substantially smaller than $n$, $\Theta(d \log n)$ tests are both information-theoretically necessary and sufficient to guarantee recovery with high probability. Group testing schemes in the literature meeting this bound require most items to be tested $\Omega(\log n)$ times, and most tests to incorporate $\Omega(n/d)$ items. Motivated by physical considerations, we study group testing models in which the testing procedure is constrained to be "sparse". Specifically, we consider (separately) scenarios in which (a) items are finitely divisible and hence may participate in at most $\gamma \in o(\log n)$ tests; or (b) tests are size-constrained to pool no more than $\rho \in o(n/d)$ items per test. For both scenarios we provide information-theoretic lower bounds on the number of tests required to guarantee high-probability recovery. In both scenarios we provide both randomized constructions (under both $\epsilon$-error and zero-error reconstruction guarantees) and explicit constructions of designs with computationally efficient reconstruction algorithms, requiring a number of tests that is optimal up to constant or small polynomial factors in some regimes of $n$, $d$, $\gamma$, and $\rho$. The randomized design/reconstruction algorithm in the $\rho$-sized test scenario is universal -- independent of the value of $d$, as long as $\rho \in o(n/d)$. We also investigate the effect of unreliability/noise in test outcomes. For the full abstract, please see the full text PDF.
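    As a concrete illustration of scenario (a), here is a minimal sketch (not the paper's construction) of a $\gamma$-divisible design: each item is placed in exactly $\gamma$ randomly chosen tests, and decoding uses the standard COMP rule, which clears every item appearing in at least one negative test. All names and the test budget below are illustrative assumptions.

        import random

        def sparse_design(n, t, gamma, seed=0):
            """Each of n items joins exactly gamma of the t tests, chosen at random."""
            rng = random.Random(seed)
            pools = [set() for _ in range(t)]
            for item in range(n):
                for test in rng.sample(range(t), gamma):
                    pools[test].add(item)
            return pools

        def comp_decode(pools, outcomes, n):
            """COMP: any item in a negative test is cleared; the rest are flagged."""
            cleared = set()
            for pool, positive in zip(pools, outcomes):
                if not positive:
                    cleared |= pool
            return set(range(n)) - cleared

        n, d, gamma, t = 1000, 5, 4, 120   # illustrative values, not the paper's bounds
        defectives = set(random.Random(1).sample(range(n), d))
        pools = sparse_design(n, t, gamma)
        outcomes = [bool(pool & defectives) for pool in pools]
        estimate = comp_decode(pools, outcomes, n)
        print(defectives <= estimate)      # COMP never misses a defective; it may over-report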

    Optimal group testing designs for estimating prevalence with uncertain testing errors

    We construct optimal designs for group testing experiments where the goal is to estimate the prevalence of a trait by using a test with uncertain sensitivity and specificity. Using optimal design theory for approximate designs, we show that the most efficient design for simultaneously estimating the prevalence, sensitivity and specificity requires three different group sizes with equal frequencies. However, if estimating the prevalence as accurately as possible is the only focus, the optimal strategy is to have three group sizes with unequal frequencies. On the basis of a chlamydia study in the U.S.A., we compare the performance of competing designs and provide insights into how the unknown sensitivity and specificity of the test affect the performance of the prevalence estimator. We demonstrate that the locally D- and Ds-optimal designs proposed have high efficiencies even when the prespecified values of the parameters are moderately misspecified.
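    The design comparison described here can be explored numerically. The sketch below (using hypothetical prior guesses for prevalence, sensitivity and specificity, not the chlamydia-study values) computes the per-test Fisher information of a candidate design and evaluates the D-criterion (determinant) alongside the asymptotic variance of the prevalence estimator, which is what a Ds-criterion for prevalence targets.

        import numpy as np

        def fisher_info(design, p, se, sp):
            """Per-test Fisher information for theta = (p, se, sp).

            design: list of (group_size, weight) pairs with weights summing to 1.
            A pool of size s is positive with prob pi = se*(1-q) + (1-sp)*q,
            where q = (1-p)**s is the chance the pool is truly negative."""
            info = np.zeros((3, 3))
            for s, w in design:
                q = (1 - p) ** s
                pi = se * (1 - q) + (1 - sp) * q
                # gradient of pi with respect to (p, se, sp)
                dpi = np.array([(se + sp - 1) * s * (1 - p) ** (s - 1),
                                1 - q,
                                -q])
                info += w * np.outer(dpi, dpi) / (pi * (1 - pi))
            return info

        p0, se0, sp0 = 0.05, 0.95, 0.98               # hypothetical prior guesses
        equal = [(1, 1/3), (10, 1/3), (40, 1/3)]      # three sizes, equal frequencies
        unequal = [(1, 0.2), (10, 0.5), (40, 0.3)]    # three sizes, unequal frequencies
        for name, design in [("equal weights", equal), ("unequal weights", unequal)]:
            I = fisher_info(design, p0, se0, sp0)
            var_p = np.linalg.inv(I)[0, 0]            # asymptotic variance of p-hat
            print(name, "log det(I) =", round(np.log(np.linalg.det(I)), 3),
                  "var(p-hat) prop. to", round(var_p, 5))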

    Optimal Nested Test Plan for Combinatorial Quantitative Group Testing

    We consider the quantitative group testing problem, where the objective is to identify defective items in a given population based on results of tests performed on subsets of the population. Under the quantitative group testing model, the result of each test reveals the number of defective items in the tested group. The minimum number of tests achievable by nested test plans was established by Aigner and Schughart in 1985 within a minimax framework. The optimal nested test plan offering this performance, however, was not obtained. In this work, we establish the optimal nested test plan in closed form. This optimal nested test plan is also order optimal among all test plans as the population size approaches infinity. Using heavy-hitter detection as a case study, we show via simulation examples that the group testing approach offers orders-of-magnitude improvement over two prevailing sampling-based approaches in detection accuracy and counter consumption. Other applications include anomaly detection and wideband spectrum sensing in cognitive radio systems.
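    The closed-form optimal plan is given in the paper; as a simpler illustration of the adaptive, nested flavor of quantitative group testing, the sketch below uses binary splitting and exploits the counting outcome: testing one half of a group whose count is known also reveals the other half's count for free. All names and values are illustrative.

        def count_defectives(pool, defectives):
            """Oracle for quantitative group testing: returns |pool ∩ defectives|."""
            return len(set(pool) & defectives)

        def identify(pool, k, defectives, tests):
            """Recursively localize the k defectives known to lie in `pool`."""
            if k == 0:
                return set()
            if k == len(pool):          # everything in the pool is defective
                return set(pool)
            mid = len(pool) // 2
            left, right = pool[:mid], pool[mid:]
            tests[0] += 1               # one test on the left half suffices:
            k_left = count_defectives(left, defectives)
            return identify(left, k_left, defectives, tests) | \
                   identify(right, k - k_left, defectives, tests)

        items = list(range(64))
        defectives = {3, 17, 40}
        tests = [1]                     # one initial test on the whole population
        found = identify(items, count_defectives(items, defectives), defectives, tests)
        print(found == defectives, "tests used:", tests[0])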

    Derandomization and Group Testing

    The rapid development of derandomization theory, which is a fundamental area in theoretical computer science, has recently led to many surprising applications outside its initial intention. We will review some recent such developments related to combinatorial group testing. In its most basic setting, the aim of group testing is to identify a set of "positive" individuals in a population of items by taking groups of items and asking whether there is a positive in each group. In particular, we will discuss explicit constructions of optimal or nearly optimal group testing schemes using "randomness-conducting" functions. Among such developments are constructions of error-correcting group testing schemes using randomness extractors and condensers, as well as threshold group testing schemes from lossless condensers. Comment: Invited paper in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing, 2010.
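    The extractor- and condenser-based schemes surveyed in this paper are involved; as a simpler, classical example of a fully explicit (derandomized) pooling design, the sketch below implements the Kautz-Singleton construction from Reed-Solomon codes. This is not the paper's construction, but it illustrates the goal: a deterministic design with a worst-case recovery guarantee. Parameters are illustrative.

        from itertools import product

        def kautz_singleton(q, k):
            """Explicit pooling design from Reed-Solomon codes over GF(q), q prime.

            Items are the q**k polynomials of degree < k over GF(q); item i joins
            test (row, val) iff its codeword equals val at evaluation point row.
            Distinct codewords agree on at most k-1 of the q rows, which makes
            the design d-disjunct for d = (q - 1) // (k - 1)."""
            items = list(product(range(q), repeat=k))
            pools = {(row, val): set() for row in range(q) for val in range(q)}
            for idx, coeffs in enumerate(items):
                for row in range(q):
                    val = sum(c * pow(row, e, q) for e, c in enumerate(coeffs)) % q
                    pools[(row, val)].add(idx)
            return list(pools.values())

        pools = kautz_singleton(q=11, k=3)            # 121 tests for 1331 items
        defectives = {5, 700, 1200}                   # any set of size <= 5 works
        outcomes = [bool(p & defectives) for p in pools]
        cleared = set()
        for pool, positive in zip(pools, outcomes):
            if not positive:                          # negative test clears its items
                cleared |= pool
        print(set(range(11 ** 3)) - cleared == defectives)   # exact recovery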

    Group Testing with Random Pools: optimal two-stage algorithms

    We study Probabilistic Group Testing of a set of $N$ items, each of which is defective with probability $p$. We focus on the double limit of small defect probability, $p \ll 1$, and a large number of items, $N \gg 1$, taking either $p \to 0$ after $N \to \infty$, or $p = 1/N^{\beta}$ with $\beta \in (0, 1/2)$. In both settings the optimal number of tests required to identify the defectives with certainty via a two-stage procedure, $\bar{T}(N,p)$, is known to scale as $Np|\log p|$. Here we determine the sharp asymptotic value of $\bar{T}(N,p)/(Np|\log p|)$ and construct a class of two-stage algorithms over which this optimal value is attained. This is done by choosing a proper bipartite regular graph (of tests and variable nodes) for the first stage of the detection. Furthermore, we prove that this optimal value is also attained on average over a random bipartite graph where all variables have the same degree, while the tests have Poisson-distributed degrees. Finally, we improve the existing upper and lower bounds for the optimal number of tests in the case $p = 1/N^{\beta}$ with $\beta \in [1/2, 1)$. Comment: 12 pages.
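    A minimal sketch of the two-stage idea follows (with arbitrary constants; the paper determines the sharp constant and the optimal degree choice). Stage 1 pools items through a random left-regular bipartite graph and clears every item that lands in a negative pool; stage 2 tests each remaining ambiguous item individually, giving identification with certainty. The parameter choices below are assumptions for illustration only.

        import math
        import random

        def two_stage(n, p, degree, t1, seed=0):
            """Stage 1: random left-regular pool design plus clearing via negative
            tests. Stage 2: one individual test per still-ambiguous item."""
            rng = random.Random(seed)
            defectives = {i for i in range(n) if rng.random() < p}
            pools = [set() for _ in range(t1)]
            for item in range(n):                    # every item joins `degree` pools
                for t in rng.sample(range(t1), degree):
                    pools[t].add(item)
            cleared = set()
            for pool in pools:
                if not pool & defectives:            # a negative test clears its items
                    cleared |= pool
            ambiguous = n - len(cleared)             # stage 2 budget
            return t1 + ambiguous, len(defectives)

        n, p, degree = 100_000, 0.001, 3
        scale = n * p * abs(math.log(p))             # the benchmark N p |log p|
        total, k = two_stage(n, p, degree, t1=int(3 * scale))
        print(f"{k} defectives; {total} total tests; N p|log p| ~ {scale:.0f}")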