
    Information-Theoretic and Algorithmic Thresholds for Group Testing

    In the group testing problem we aim to identify a small number of infected individuals within a large population. We avail ourselves of a procedure that can test a group of multiple individuals, with the test result coming out positive iff at least one individual in the group is infected. With all tests conducted in parallel, what is the least number of tests required to identify the status of all individuals? In a recent test design [Aldridge et al. 2016] the individuals are assigned to test groups randomly, with every individual joining an equal number of groups. We pinpoint the sharp threshold for the number of tests required in this randomised design so that it is information-theoretically possible to infer the infection status of every individual. Moreover, we analyse two efficient inference algorithms. These results settle conjectures from [Aldridge et al. 2014, Johnson et al. 2019].
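    As a minimal illustration of the setting described in this abstract, the sketch below simulates a non-adaptive design in which every individual joins the same number of randomly chosen test groups, and decodes with COMP (declare healthy anyone who appears in a negative test). COMP is a standard baseline decoder, not necessarily one of the two algorithms analysed in the paper; all names and parameter values are illustrative.

```python
# Sketch: constant-column-weight random design + COMP decoding (illustrative only).
import random

def random_design(n, t, groups_per_individual):
    """Assign each of n individuals to `groups_per_individual` of the t tests."""
    membership = [set() for _ in range(t)]        # membership[j] = individuals in test j
    for i in range(n):
        for j in random.sample(range(t), groups_per_individual):
            membership[j].add(i)
    return membership

def run_tests(membership, infected):
    """A test is positive iff its group contains at least one infected individual."""
    return [bool(group & infected) for group in membership]

def comp_decode(membership, results, n):
    """COMP: anyone appearing in a negative test is declared healthy; the rest are flagged."""
    definitely_healthy = set()
    for group, positive in zip(membership, results):
        if not positive:
            definitely_healthy |= group
    return set(range(n)) - definitely_healthy

# Toy example: 1000 individuals, 10 infected, 200 parallel tests.
n, k, t = 1000, 10, 200
infected = set(random.sample(range(n), k))
design = random_design(n, t, groups_per_individual=14)
estimate = comp_decode(design, run_tests(design, infected), n)
print(infected <= estimate, len(estimate))        # COMP never misses an infected individual
```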

    A pooled testing strategy for identifying SARS-CoV-2 at low prevalence

    Suppressing infections of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) will probably require the rapid identification and isolation of individuals infected with the virus on an ongoing basis. Reverse-transcription polymerase chain reaction (RT-PCR) tests are accurate but costly, which makes the regular testing of every individual expensive. These costs are a challenge for all countries around the world, but particularly for low-to-middle-income countries. Cost reductions can be achieved by pooling (or combining) subsamples and testing them in groups [1-7]. A balance must be struck between increasing the group size and retaining test sensitivity, as sample dilution increases the likelihood of false-negative test results for individuals with a low viral load in the sampled region at the time of the test [8]. Similarly, minimizing the number of tests to reduce costs must be balanced against minimizing the time that testing takes, to reduce the spread of the infection. Here we propose an algorithm for pooling subsamples based on the geometry of a hypercube that, at low prevalence, accurately identifies individuals infected with SARS-CoV-2 in a small number of tests and few rounds of testing. We discuss the optimal group size and explain why, given the highly infectious nature of the disease, largely parallel searches are preferred. We report proof-of-concept experiments in which a positive subsample was detected even when diluted 100-fold with negative subsamples (compared with 30-48-fold dilutions described in previous studies [9-11]). We quantify the loss of sensitivity due to dilution and discuss how it may be mitigated by the frequent re-testing of groups, for example. With the use of these methods, the cost of mass testing could be reduced by a large factor. At low prevalence, the costs decrease in rough proportion to the prevalence. Field trials of our approach are under way in Rwanda and South Africa. The use of group testing on a massive scale to monitor infection rates closely and continually in a population, along with the rapid and effective isolation of people with SARS-CoV-2 infections, provides a promising pathway towards the long-term control of coronavirus disease 2019 (COVID-19).
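    The sketch below illustrates one round of the hypercube idea described in this abstract, under the simplifying assumption of exactly one infected subsample: samples sit on a D-dimensional grid of side L, each of the D*L axis-aligned slices is pooled into one test, and the positive slice along each axis reads off one coordinate of the infected sample. Real deployments need further rounds when several slices per axis test positive; function and variable names here are illustrative only.

```python
# Sketch: one round of hypercube pooling with a single infected subsample (illustrative only).
from itertools import product

def hypercube_round(infected_points, L, D):
    """Return, for each axis, the set of coordinate values whose pooled slice tests positive."""
    positives = [set() for _ in range(D)]
    for point in infected_points:                 # a pool is positive iff it contains an infected sample
        for axis, coord in enumerate(point):
            positives[axis].add(coord)
    return positives

L, D = 3, 4                                       # 81 subsamples, 12 pools in this round
infected = {(2, 0, 1, 2)}                         # one infected subsample (low prevalence)
positives = hypercube_round(infected, L, D)

# With exactly one infected subsample, each axis has exactly one positive pool,
# so its coordinates can be read off directly.
candidates = set(product(*positives))
print(candidates)                                 # {(2, 0, 1, 2)}
```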

    Group testing: an information theory perspective

    The group testing problem concerns discovering a small number of defective items within a large population by performing tests on pools of items. A test is positive if the pool contains at least one defective, and negative if it contains no defectives. This is a sparse inference problem with a combinatorial flavour, with applications in medical testing, biology, telecommunications, information technology, data science, and more. In this monograph, we survey recent developments in the group testing problem from an information-theoretic perspective. We cover several related developments: efficient algorithms with practical storage and computation requirements, achievability bounds for optimal decoding methods, and algorithm-independent converse bounds. We assess the theoretical guarantees not only in terms of scaling laws, but also in terms of the constant factors, leading to the notion of the "rate" of group testing, indicating the amount of information learned per test. Considering both noiseless and noisy settings, we identify several regimes where existing algorithms are provably optimal or near-optimal, as well as regimes where there remains greater potential for improvement. In addition, we survey results concerning a number of variations on the standard group testing problem, including partial recovery criteria, adaptive algorithms with a limited number of stages, constrained test designs, and sublinear-time algorithms. Comment: Survey paper, 140 pages, 19 figures. To be published in Foundations and Trends in Communications and Information Theory.
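    As a small worked illustration of the "rate" notion mentioned in this abstract (bits of information learned per test), the snippet below computes log2(C(n, k)) / T and compares T against the counting bound T >= log2(C(n, k)); the parameter values are arbitrary.

```python
# Sketch: the rate of a group testing scheme and the counting lower bound (illustrative values).
from math import comb, log2, ceil

def group_testing_rate(n, k, T):
    """Bits of information identified per test when k defectives among n items are found with T tests."""
    return log2(comb(n, k)) / T

n, k = 10_000, 50
T_counting = ceil(log2(comb(n, k)))              # counting bound: no scheme can use fewer tests
print(T_counting)                                # minimum conceivable number of tests
print(group_testing_rate(n, k, T_counting))      # rate close to 1 at the counting bound
print(group_testing_rate(n, k, 2 * T_counting))  # using twice as many tests halves the rate
```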