
    Constraining the Number of Positive Responses in Adaptive, Non-Adaptive, and Two-Stage Group Testing

    Group testing is a well-known search problem that consists in detecting the defective members of a set of objects $O$ by performing tests on properly chosen subsets (pools) of $O$. In classical group testing the goal is to find all defectives using as few tests as possible. We consider a variant of classical group testing in which one is concerned not only with minimizing the total number of tests but also with reducing the number of tests involving defective elements. The rationale behind this search model is that in many practical applications the devices used for the tests deteriorate through exposure to, or interaction with, the defective elements. In this paper we consider adaptive, non-adaptive, and two-stage group testing. For all three scenarios, we derive upper and lower bounds on the number of "yes" responses that must be admitted by any strategy performing at most a given number $t$ of tests. In particular, for the adaptive case we provide an algorithm that uses a number of "yes" responses exceeding the given lower bound by only a small constant. Interestingly, this bound can also be attained asymptotically by our two-stage algorithm, a phenomenon analogous to one occurring in classical group testing. For the non-adaptive scenario we give almost matching upper and lower bounds on the number of "yes" responses; in particular, we give two constructions achieving the same asymptotic bound, one of which is explicit. The bounds for the non-adaptive and two-stage cases follow from bounds on the optimal sizes of new variants of $d$-cover-free families and $(p,d)$-cover-free families introduced in this paper, which we believe may also be of interest in other contexts.
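    To make the quantity being bounded concrete, here is a toy sketch of adaptive group testing by binary splitting (a generic textbook strategy, not the paper's algorithm), instrumented to count both the total number of tests and the number of "yes" responses. All names and parameters are illustrative.

```python
# A minimal sketch, assuming a noiseless test oracle; every "yes" answer is
# a test whose pool contained at least one defective (the costly event the
# abstract seeks to limit).

def pool_test(pool, defectives):
    """A test answers 'yes' iff the pool contains at least one defective."""
    return any(x in defectives for x in pool)

def find_defectives(items, defectives, stats):
    """Binary splitting: only pools that answer 'yes' are split further."""
    stats["tests"] += 1
    if not pool_test(items, defectives):
        return []                      # 'no': the whole pool is clean
    stats["yes"] += 1                  # a positive response occurred
    if len(items) == 1:
        return items                   # isolated a single defective
    mid = len(items) // 2
    return (find_defectives(items[:mid], defectives, stats)
            + find_defectives(items[mid:], defectives, stats))

stats = {"tests": 0, "yes": 0}
found = find_defectives(list(range(32)), {3, 17}, stats)
print(found, stats)                    # e.g. [3, 17] {'tests': ..., 'yes': ...}
```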

    Non-adaptive probabilistic group testing with noisy measurements: Near-optimal bounds with efficient algorithms

    We consider the problem of detecting a small subset of defective items from a large set via non-adaptive "random pooling" group tests. We consider both the case where the measurements are noiseless and the case where they are noisy (the outcome of each group test may be independently faulty with probability $q$). Order-optimal results for these scenarios are known in the literature. We give information-theoretic lower bounds on the query complexity of these problems and provide corresponding computationally efficient algorithms that match the lower bounds up to a constant factor. To the best of our knowledge, this work is the first to explicitly estimate the constant that characterizes the gap between the upper and lower bounds for these problems.
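    The sketch below illustrates the setting, not the paper's algorithm or constants: it draws a Bernoulli random pooling design, flips each outcome independently with probability $q$, and decodes with a simple threshold rule in the spirit of noisy COMP. Every parameter value is made up for the demo.

```python
# A minimal sketch of noisy non-adaptive random pooling with a threshold
# decoder; parameters and the 0.8 threshold are illustrative assumptions.
import math, random

N, D, q = 200, 3, 0.05        # items, defectives, flip probability (made up)
T = 150                       # number of non-adaptive pooled tests (made up)
defectives = set(random.sample(range(N), D))

# Bernoulli design: each item joins each pool independently with prob ln(2)/D.
p = math.log(2) / D
pools = [{i for i in range(N) if random.random() < p} for _ in range(T)]

# Noisy outcome: the true OR of the pool, flipped independently with prob q.
outcomes = [(len(defectives & pool) > 0) ^ (random.random() < q)
            for pool in pools]

# Threshold decoder: declare item i defective when almost all of the tests
# containing i came back positive.
estimate = set()
for i in range(N):
    hits = [outcomes[t] for t in range(T) if i in pools[t]]
    if hits and sum(hits) / len(hits) >= 0.8:
        estimate.add(i)

print(sorted(defectives), sorted(estimate))  # usually equal; decoding can err
```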

    GROTESQUE: Noisy Group Testing (Quick and Efficient)

    Group testing refers to the problem of identifying (with high probability) a (small) subset of $D$ defectives from a (large) set of $N$ items via a "small" number of "pooled" tests. For ease of presentation, this work focuses on the regime $D = O(N^{1-\delta})$ for some $\delta > 0$. The tests may be noiseless or noisy, and the testing procedure may be adaptive (the pool defining a test may depend on the outcomes of previous tests) or non-adaptive (each test is performed independently of the outcomes of other tests). A rich body of literature demonstrates that $\Theta(D \log N)$ tests are information-theoretically necessary and sufficient for the group-testing problem, and provides algorithms that achieve this performance. However, it is only recently that reconstruction algorithms with computational complexity sub-linear in $N$ have started being investigated (recent work by [GurI:04, IndN:10, NgoP:11] gave some of the first such algorithms). In the scenario of adaptive tests with noisy outcomes, we present the first scheme that is simultaneously order-optimal (up to small constant factors) in both the number of tests and the decoding complexity ($O(D \log N)$ in both performance metrics). The total number of stages of our adaptive algorithm is small, $O(\log D)$. Similarly, in the scenario of non-adaptive tests with noisy outcomes, we present the first scheme that is simultaneously near-optimal in both the number of tests and the decoding complexity (via an algorithm that requires $O(D \log(D) \log(N))$ tests and has decoding complexity $O(D(\log N + \log^2 D))$). Finally, we present an adaptive algorithm that requires only two stages and for which both the number of tests and the decoding complexity scale as $O(D(\log N + \log^2 D))$. For all three settings the probability of error of our algorithms scales as $O(1/\mathrm{poly}(D))$.
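    One building block that noisy group-testing schemes commonly rely on is sketched below: repeating a noisy pooled test and taking a majority vote, which drives the error probability down exponentially in the number of repetitions. This is standard repetition de-noising under illustrative assumptions, not GROTESQUE itself.

```python
# A minimal sketch of repetition-plus-majority de-noising for a single
# pooled test; `reps` and `q` are illustrative values.
import random

def noisy_test(pool, defectives, q):
    """True OR of the pool, flipped independently with probability q."""
    truth = len(defectives & set(pool)) > 0
    return truth ^ (random.random() < q)

def robust_test(pool, defectives, q, reps=15):
    """Majority vote over independent repetitions of the same noisy test;
    by a Chernoff bound the error probability decays exponentially in reps."""
    votes = sum(noisy_test(pool, defectives, q) for _ in range(reps))
    return votes > reps // 2

print(robust_test(range(10), {4}, q=0.1))    # almost surely True
print(robust_test(range(10), {42}, q=0.1))   # almost surely False
```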

    Lower bounds for identifying subset members with subset queries

    An instance of a group testing problem is a set of objects $O$ and an unknown subset $P$ of $O$. The task is to determine $P$ by using queries of the type "does $P$ intersect $Q$?", where $Q$ is a subset of $O$. This problem occurs in areas such as fault detection, multiaccess communications, optimal search, blood testing, and chromosome mapping. Consider the two-stage algorithm for solving a group testing problem: in the first stage a predetermined set of queries is asked in parallel, and in the second stage $P$ is determined by testing individual objects. Let $n = |O|$. Suppose that $P$ is generated by independently adding each $x \in O$ to $P$ with probability $p/n$. Let $q_1$ ($q_2$) be the number of queries asked in the first (second) stage of this algorithm. We show that if $q_1 = o(\log^2(n)/\log\log(n))$, then $E(q_2) = n^{1-o(1)}$, while there exist algorithms with $q_1 = O(\log^2(n)/\log\log(n))$ and $E(q_2) = o(1)$. The proof involves a relaxation technique which can be used with arbitrary distributions. The best previously known bound is $q_1 + E(q_2) = \Omega(p \log(n))$. For general group testing algorithms, our results imply that if the average number of queries over the course of $n^\gamma$ ($\gamma > 0$) independent experiments is $O(n^{1-\epsilon})$, then with high probability $\Omega(\log^2(n)/\log\log(n))$ non-singleton subsets are queried. This settles a conjecture of Bill Bruno and David Torney and has important consequences for the use of group testing in screening DNA libraries and other applications where it is more cost-effective to use non-adaptive algorithms and/or too expensive to prepare a subset $Q$ for its first test.
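    A minimal sketch of the two-stage scheme analyzed above, with plain random pools in stage one (the pool construction and all sizes are illustrative, not the paper's): a "no" answer to a stage-one query clears every object in that pool, and stage two tests the surviving candidates individually, so $q_2$ is simply the number of survivors.

```python
# A minimal two-stage group testing sketch; q1, the pool sizes, and N are
# illustrative assumptions.
import random

def intersects(Q, P):
    """The query from the abstract: 'does P intersect Q?'."""
    return len(P & Q) > 0

N = 500
P = set(random.sample(range(N), 4))          # unknown positive subset

# Stage 1: q1 predetermined pools, asked in parallel.
q1 = 60
pools = [set(random.sample(range(N), N // 8)) for _ in range(q1)]
answers = [intersects(Q, P) for Q in pools]

# Any object contained in a negatively answered pool cannot be in P.
candidates = set(range(N))
for Q, ans in zip(pools, answers):
    if not ans:
        candidates -= Q

# Stage 2: individual tests on the surviving candidates; q2 is their number.
q2 = len(candidates)
recovered = {x for x in candidates if intersects({x}, P)}
print(q2, recovered == P)                    # recovery is always exact here
```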

    Group Sequential and Adaptive Designs for Three-Arm 'Gold Standard' Non-Inferiority Trials

    This thesis deals with the application of group sequential and adaptive methodology in three-arm non-inferiority trials for the case of normally distributed outcomes. Whenever feasible, use of the three-arm design, which includes a test treatment, an active control, and a placebo, is recommended by health authorities. Nevertheless, especially from an ethical point of view, it is desirable to keep the placebo group size as small as possible. After giving a short introduction to two-arm non-inferiority trials, we investigate a hierarchical single-stage testing procedure for three-arm trials which starts by assessing the superiority comparison between test and placebo and then proceeds to the test versus control non-inferiority comparison. Based on formulas for the overall power, we derive optimal sample size allocations that minimise the overall sample size. Interestingly, the placebo group size turns out to be very low under the optimal allocation. The optimal fixed sample size designs then serve both as a starting point and as a benchmark for the designs determined later. Subsequently, a general group sequential design for three-arm non-inferiority trials is presented that aims at further reducing the required sample sizes. By choosing different rejection boundaries for the two comparisons we obtain designs with quite different properties. The influence of the boundaries on the operating characteristics, such as the expected sample sizes, is investigated by means of a comprehensive comparison to the optimal fixed design. Moreover, approximately optimal boundaries are derived for different optimisation criteria, such as minimising the placebo group size. It turns out that the implementation of group sequential methodology can further improve the optimal fixed designs, where the potential early termination of the placebo arm is a key advantage that can make the trial more acceptable for patients. After this, the group sequential testing procedure is extended to adaptive designs that allow data-dependent design changes at the interim analysis. In this context, we discuss optimal mid-trial decision-making based on the observed interim data, with a special focus on sample size re-calculation. In doing so, we make use of the conditional power and the Bayesian predictive power. Our investigations show the advantages of the proposed adaptive designs over the optimal fixed designs. In particular, the possibility to adapt the sample sizes at interim can help to deal with uncertainties regarding the treatment effects that often exist in the planning stage of three-arm non-inferiority trials. We conclude with a discussion of the results and an outlook on possible future work.
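    As a reference point for the designs discussed, here is a minimal sketch of the standard normal-approximation sample size formula for a single two-arm non-inferiority comparison with known variance, the fixed-design building block such trials start from. All numbers are illustrative and not taken from the thesis.

```python
# A minimal sketch of per-group sample size for a one-sided non-inferiority
# Z-test with margin `delta`; the example values (delta=0.4, theta=0) are
# illustrative assumptions.
from statistics import NormalDist

def n_per_group(delta, theta=0.0, sigma=1.0, alpha=0.025, power=0.8):
    """Per-group size for the one-sided Z-test of
    H0: mu_T - mu_C <= -delta  vs.  H1: mu_T - mu_C > -delta,
    with known sigma and true difference theta (normal approximation)."""
    z = NormalDist().inv_cdf
    return 2 * sigma**2 * (z(1 - alpha) + z(power))**2 / (theta + delta)**2

# Margin of 0.4 standard deviations, assuming the treatments are truly equal:
print(round(n_per_group(delta=0.4)))   # about 98 patients per group
```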