
    Near-Optimal Noisy Group Testing via Separate Decoding of Items

    The group testing problem consists of determining a small set of defective items from a larger set of items based on a number of tests, and is relevant in applications such as medical testing, communication protocols, pattern matching, and more. In this paper, we revisit an efficient algorithm for noisy group testing in which each item is decoded separately (Malyutov and Mateev, 1980), and develop novel performance guarantees via an information-theoretic framework for general noise models. For the special cases of no noise and symmetric noise, we find that the asymptotic number of tests required for vanishing error probability is within a factor $\log 2 \approx 0.7$ of the information-theoretic optimum at low sparsity levels, and that with a small fraction of allowed incorrectly decoded items, this guarantee extends to all sublinear sparsity levels. In addition, we provide a converse bound showing that if one tries to move slightly beyond our low-sparsity achievability threshold using separate decoding of items and i.i.d. randomized testing, the average number of items decoded incorrectly approaches that of a trivial decoder. Comment: Submitted to IEEE Journal of Selected Topics in Signal Processing.
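
    To make the "separate decoding" idea concrete, here is a minimal simulation sketch, assuming a Bernoulli random test design and symmetric noise. The per-item threshold rule and all parameter values below (n, d, T, p, rho) are illustrative assumptions, not the optimized information-theoretic statistic analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, T = 1000, 10, 400       # items, defectives, tests (illustrative sizes)
p = 1.0 / d                   # Bernoulli test design parameter, a common choice
rho = 0.05                    # symmetric noise: each outcome flips w.p. rho

# Random i.i.d. Bernoulli(p) test design: X[t, j] = 1 if item j is in test t.
X = rng.random((T, n)) < p
defective = np.zeros(n, dtype=bool)
defective[rng.choice(n, size=d, replace=False)] = True

# Noiseless OR outcomes, then pass each through a binary symmetric channel.
y_clean = X[:, defective].any(axis=1)
y = y_clean ^ (rng.random(T) < rho)

# Separate decoding: each item j is decoded on its own, using only the
# outcomes of the tests it participates in.  A defective item should see
# mostly positive outcomes there (negatives arise only from noise), so we
# declare "defective" when the fraction of negative outcomes among item j's
# tests falls below a threshold.  This threshold is a heuristic placeholder,
# not the optimized one from the paper.
threshold = 2 * rho + 0.1
estimate = np.zeros(n, dtype=bool)
for j in range(n):
    in_tests = X[:, j]
    if in_tests.sum() == 0:
        continue
    neg_frac = np.mean(~y[in_tests])
    estimate[j] = neg_frac < threshold

print("false negatives:", int((defective & ~estimate).sum()))
print("false positives:", int((~defective & estimate).sum()))
```

    Because each item is decided using only the outcomes of its own tests, the decoder's work grows only linearly in the size of the test matrix, which is the source of the algorithm's efficiency.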

    Noisy Non-Adaptive Group Testing: A (Near-)Definite Defectives Approach

    The group testing problem consists of determining a small set of defective items from a larger set of items based on a number of possibly-noisy tests, and is relevant in applications such as medical testing, communication protocols, pattern matching, and many more. We study the noisy version of the problem, where the output of each standard noiseless group test is subject to independent noise, corresponding to passing the noiseless result through a binary channel. We introduce a class of algorithms that we refer to as Near-Definite Defectives (NDD), and study bounds on the required number of tests for vanishing error probability under Bernoulli random test designs. In addition, we study algorithm-independent converse results, giving lower bounds on the required number of tests under Bernoulli test designs. Under reverse $Z$-channel noise, the achievable rates and converse results match in a broad range of sparsity regimes, and under $Z$-channel noise, the two match in a narrower range of dense/low-noise regimes. We observe that although these two channels have the same Shannon capacity when viewed as communication channels, they can behave quite differently when it comes to group testing. Finally, we extend our analysis of these noise models to the symmetric noise model, and show improvements over the best known existing bounds in broad scaling regimes. Comment: Submitted to IEEE Transactions on Information Theory.
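
    For context, the noiseless Definite Defectives (DD) rule that NDD relaxes can be sketched in a few lines. This is the standard two-step DD decoder from the earlier group testing literature, not the paper's noisy NDD algorithm, whose thresholds depend on the channel; it is included only to show the structure being generalized.

```python
import numpy as np

def definite_defectives(X, y):
    """Noiseless Definite Defectives (DD) decoder.

    X : (T, n) boolean test design; y : (T,) boolean outcomes.
    Returns a boolean estimate of the defective set.  The paper's NDD
    algorithm is a noise-tolerant relaxation of the two steps below.
    """
    T, n = X.shape
    # Step 1: every item appearing in a negative test is non-defective;
    # the survivors are the "possible defectives" (PD).
    pd = ~(X[~y].any(axis=0))
    # Step 2: an item is a definite defective if some positive test
    # contains it and no other possible defective.
    dd = np.zeros(n, dtype=bool)
    for t in np.flatnonzero(y):
        pd_in_test = np.flatnonzero(X[t] & pd)
        if len(pd_in_test) == 1:
            dd[pd_in_test[0]] = True
    return dd

# With no noise, DD never produces false positives; the items it misses
# are exactly those never isolated in a positive test.
```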

    Nearly Optimal Sparse Group Testing

    Group testing is the process of pooling arbitrary subsets from a set of $n$ items so as to identify, with a minimal number of tests, a "small" subset of $d$ defective items. In "classical" non-adaptive group testing, it is known that when $d$ is substantially smaller than $n$, $\Theta(d\log(n))$ tests are both information-theoretically necessary and sufficient to guarantee recovery with high probability. Group testing schemes in the literature meeting this bound require most items to be tested $\Omega(\log(n))$ times, and most tests to incorporate $\Omega(n/d)$ items. Motivated by physical considerations, we study group testing models in which the testing procedure is constrained to be "sparse". Specifically, we consider (separately) scenarios in which (a) items are finitely divisible and hence may participate in at most $\gamma \in o(\log(n))$ tests; or (b) tests are size-constrained to pool no more than $\rho \in o(n/d)$ items per test. For both scenarios we provide information-theoretic lower bounds on the number of tests required to guarantee high-probability recovery. In both scenarios we provide both randomized constructions (under both $\epsilon$-error and zero-error reconstruction guarantees) and explicit constructions of designs with computationally efficient reconstruction algorithms that require a number of tests that is optimal up to constant or small polynomial factors in some regimes of $n$, $d$, $\gamma$, and $\rho$. The randomized design/reconstruction algorithm in the $\rho$-sized test scenario is universal -- independent of the value of $d$, as long as $\rho \in o(n/d)$. We also investigate the effect of unreliability/noise in test outcomes. For the full abstract, please see the full text PDF.
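
    As a concrete example of scenario (a), here is one natural randomized construction in which each item participates in exactly $\gamma$ tests; the function and parameter choices are illustrative assumptions, and the paper's constructions and analysis may differ in detail.

```python
import numpy as np

def gamma_divisible_design(n, T, gamma, rng=None):
    """Random test design in which each item participates in exactly
    gamma tests (hence at most gamma, satisfying the finite-divisibility
    constraint).  Each item picks its gamma tests uniformly at random
    without replacement -- one natural randomized construction, not
    necessarily the paper's.
    """
    rng = rng or np.random.default_rng()
    X = np.zeros((T, n), dtype=bool)
    for j in range(n):
        X[rng.choice(T, size=gamma, replace=False), j] = True
    return X

# A rho-sized-test constraint (scenario (b), at most rho items per test)
# would instead cap the row sums of X, e.g. by assigning items to tests
# round-robin.
X = gamma_divisible_design(n=1000, T=200, gamma=3)
assert X.sum(axis=0).max() <= 3   # each item tested at most gamma times
```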

    Group testing: an information theory perspective

    The group testing problem concerns discovering a small number of defective items within a large population by performing tests on pools of items. A test is positive if the pool contains at least one defective, and negative if it contains no defectives. This is a sparse inference problem with a combinatorial flavour, with applications in medical testing, biology, telecommunications, information technology, data science, and more. In this monograph, we survey recent developments in the group testing problem from an information-theoretic perspective. We cover several related developments: efficient algorithms with practical storage and computation requirements, achievability bounds for optimal decoding methods, and algorithm-independent converse bounds. We assess the theoretical guarantees not only in terms of scaling laws, but also in terms of the constant factors, leading to the notion of the "rate" of group testing, indicating the amount of information learned per test. Considering both noiseless and noisy settings, we identify several regimes where existing algorithms are provably optimal or near-optimal, as well as regimes where there remains greater potential for improvement. In addition, we survey results concerning a number of variations on the standard group testing problem, including partial recovery criteria, adaptive algorithms with a limited number of stages, constrained test designs, and sublinear-time algorithms. Comment: Survey paper, 140 pages, 19 figures. To be published in Foundations and Trends in Communications and Information Theory.
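
    For reference, the rate mentioned here has a standard definition in this line of work, stated as a short worked equation: with $n$ items, $d$ defectives, and $T$ tests, the rate is the number of bits learned per test, and the counting bound caps it at one bit per test.

```latex
\[
  \text{rate} \;=\; \frac{\log_2 \binom{n}{d}}{T},
  \qquad
  T \;\ge\; \log_2 \binom{n}{d}
  \;\;\Longrightarrow\;\;
  \text{rate} \;\le\; 1 .
\]
```

    Algorithms are then compared by the largest rate at which they achieve vanishing error probability in a given sparsity regime.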

    Non-adaptive Group Testing: Explicit bounds and novel algorithms

    We consider some computationally efficient and provably correct algorithms with near-optimal sample complexity for the problem of noisy non-adaptive group testing. Group testing involves grouping arbitrary subsets of items into pools. Each pool is then tested to identify the defective items, which are usually assumed to be "sparse". We consider non-adaptive randomly pooled measurements, where pools are selected randomly and independently of the test outcomes. We also consider a model where noisy measurements allow for both some false negative and some false positive test outcomes (and also allow for asymmetric noise and activation noise). We consider three classes of algorithms for the group testing problem, which we call the "Coupon Collector Algorithm", the "Column Matching Algorithms", and the "LP Decoding Algorithms"; the last two classes (versions of some of which had been considered before in the literature) were inspired by corresponding algorithms in the compressive sensing literature. The second and third of these algorithms have several flavours, dealing separately with the noiseless and noisy measurement scenarios. Our contribution is a novel analysis deriving explicit sample-complexity bounds -- with all constants expressly computed -- for these algorithms as a function of the desired error probability, the noise parameters, the number of items, and the size of the defective set (or an upper bound on it). We also compare these bounds to information-theoretic lower bounds on the sample complexity based on Fano's inequality, and show that the upper and lower bounds are equal up to an explicitly computable universal constant factor (independent of the problem parameters). Comment: Accepted for publication in the IEEE Transactions on Information Theory; current version, Oct. 9, 2012. Main change from v4 to v5: fixed some typos, corrected details of the LP decoding algorithm.
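
    As an illustration of the second class, here is a hedged sketch of a column-matching decoder, assuming the noiseless-OR-plus-noise model described above; the mismatch threshold is a simple placeholder, whereas the paper computes explicit constants as functions of the noise parameters.

```python
import numpy as np

def column_matching(X, y, max_mismatch_frac=0.0):
    """Column-matching decoder sketch.

    An item's column in the test matrix X (shape (T, n), boolean) is
    matched against the outcome vector y (shape (T,), boolean).  In the
    noiseless case (max_mismatch_frac = 0), every test containing a
    defective item must be positive, so any negative test containing the
    item rules it out.  Allowing a small fraction of mismatches gives a
    simple noisy variant; the thresholds analyzed in the paper are chosen
    more carefully as functions of the noise parameters.
    """
    T, n = X.shape
    estimate = np.zeros(n, dtype=bool)
    for j in range(n):
        tests_j = X[:, j]
        if tests_j.sum() == 0:
            continue
        mismatches = np.sum(tests_j & ~y)   # negative tests containing j
        estimate[j] = mismatches <= max_mismatch_frac * tests_j.sum()
    return estimate
```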