
    Lower bounds for identifying subset members with subset queries

    An instance of a group testing problem is a set of objects $\mathcal{O}$ and an unknown subset $P$ of $\mathcal{O}$. The task is to determine $P$ by using queries of the type "does $P$ intersect $Q$?", where $Q$ is a subset of $\mathcal{O}$. This problem occurs in areas such as fault detection, multiaccess communications, optimal search, blood testing and chromosome mapping. Consider the following two-stage algorithm for solving a group testing problem. In the first stage a predetermined set of queries is asked in parallel, and in the second stage $P$ is determined by testing individual objects. Let $n = |\mathcal{O}|$. Suppose that $P$ is generated by independently adding each $x \in \mathcal{O}$ to $P$ with probability $p/n$. Let $q_1$ ($q_2$) be the number of queries asked in the first (second) stage of this algorithm. We show that if $q_1 = o(\log(n)\log(n)/\log\log(n))$, then $\mathbb{E}(q_2) = n^{1-o(1)}$, while there exist algorithms with $q_1 = O(\log(n)\log(n)/\log\log(n))$ and $\mathbb{E}(q_2) = o(1)$. The proof involves a relaxation technique which can be used with arbitrary distributions. The best previously known bound is $q_1 + \mathbb{E}(q_2) = \Omega(p\log(n))$. For general group testing algorithms, our results imply that if the average number of queries over the course of $n^{\gamma}$ ($\gamma > 0$) independent experiments is $O(n^{1-\epsilon})$, then with high probability $\Omega(\log(n)\log(n)/\log\log(n))$ non-singleton subsets are queried. This settles a conjecture of Bill Bruno and David Torney and has important consequences for the use of group testing in screening DNA libraries and other applications where it is more cost effective to use non-adaptive algorithms and/or too expensive to prepare a subset $Q$ for its first test. Comment: 9 pages.
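
    To make the two-stage protocol described above concrete, here is a minimal Python sketch: stage one asks a fixed batch of random pooled queries in parallel, and stage two individually tests every object that is not cleared by some negative pool. The random design, the pool density 1/(p+1), and all function names are illustrative assumptions, not the paper's construction.

        import random

        def two_stage_group_test(n, p, q1, seed=0):
            # Illustrative simulation of a two-stage strategy; the pool density
            # 1/(p+1) is a heuristic choice, not taken from the paper.
            rng = random.Random(seed)

            # Unknown positive set P: each object joins independently with probability p/n.
            positives = {x for x in range(n) if rng.random() < p / n}

            # Stage 1: q1 pooled queries fixed in advance and asked in parallel.
            density = 1.0 / (p + 1)
            pools = [{x for x in range(n) if rng.random() < density} for _ in range(q1)]
            answers = [bool(positives & pool) for pool in pools]   # "does P intersect Q?"

            # Any object appearing in a pool that answered "no" cannot be in P.
            cleared = set()
            for pool, ans in zip(pools, answers):
                if not ans:
                    cleared |= pool

            # Stage 2: test individually every object not cleared by stage 1.
            candidates = [x for x in range(n) if x not in cleared]
            q2 = len(candidates)
            recovered = {x for x in candidates if x in positives}  # individual tests
            return recovered, positives, q2

        # Example: n = 10000 objects, about p = 5 positives, 200 first-stage queries.
        recovered, truth, q2 = two_stage_group_test(10000, 5, 200)
        assert recovered == truth   # every positive survives stage 1, so recovery is exact

    The quantity of interest in the abstract is the trade-off between q1 and the expected value of q2 returned by such a simulation.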

    Constraining the Number of Positive Responses in Adaptive, Non-Adaptive, and Two-Stage Group Testing

    Group testing is a well-known search problem that consists of detecting the defective members of a set of objects $O$ by performing tests on properly chosen subsets (pools) of $O$. In classical group testing the goal is to find all defectives using as few tests as possible. We consider a variant of classical group testing in which one aims not only at minimizing the total number of tests but also at reducing the number of tests involving defective elements. The rationale behind this search model is that in many practical applications the devices used for the tests deteriorate through exposure to, or interaction with, the defective elements. In this paper we consider adaptive, non-adaptive and two-stage group testing. For all three scenarios, we derive upper and lower bounds on the number of "yes" responses that must be admitted by any strategy performing at most a given number $t$ of tests. In particular, for the adaptive case we provide an algorithm whose number of "yes" responses exceeds the given lower bound by only a small constant. Interestingly, this bound can also be attained asymptotically by our two-stage algorithm, a phenomenon analogous to the one occurring in classical group testing. For the non-adaptive scenario we give almost matching upper and lower bounds on the number of "yes" responses. In particular, we give two constructions that achieve the same asymptotic bound, one of which is explicit. The bounds for the non-adaptive and two-stage cases follow from bounds on the optimal sizes of new variants of $d$-cover-free families and $(p,d)$-cover-free families introduced in this paper, which we believe may be of interest in other contexts as well.
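
    As a concrete illustration of the quantity being bounded, the following Python sketch runs a generic adaptive binary-splitting strategy and reports both the total number of tests and how many of them returned "yes". The splitting rule and all names are illustrative assumptions; this is not the near-optimal algorithm of the paper.

        def find_defectives(items, defective_set):
            # Generic adaptive binary splitting; returns the defectives found
            # together with the test count and the count of "yes" responses.
            tests = 0
            yes_responses = 0

            def pool_test(pool):
                nonlocal tests, yes_responses
                tests += 1
                positive = any(x in defective_set for x in pool)
                yes_responses += int(positive)
                return positive

            defectives = []

            def search(pool):
                if not pool_test(pool):
                    return                      # one test spent, no "yes" response
                if len(pool) == 1:
                    defectives.append(pool[0])  # a defective has been isolated
                    return
                mid = len(pool) // 2
                search(pool[:mid])
                search(pool[mid:])

            if items:
                search(list(items))
            return defectives, tests, yes_responses

        # Example: 2 defectives among 64 items.
        found, num_tests, num_yes = find_defectives(range(64), {10, 47})
        assert sorted(found) == [10, 47]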

    Improved Combinatorial Group Testing Algorithms for Real-World Problem Sizes

    We study practically efficient methods for performing combinatorial group testing. We present efficient non-adaptive and two-stage combinatorial group testing algorithms that identify the at most $d$ defective items out of a given set of $n$ items, using fewer tests than previous methods for all practical set sizes. For example, our two-stage algorithm matches the information-theoretic lower bound on the number of tests in a combinatorial group testing regimen. Comment: 18 pages; an abbreviated version of this paper is to appear at the 9th Workshop on Algorithms and Data Structures.
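
    For orientation, the sketch below shows a generic non-adaptive baseline, assuming a random Bernoulli test design with heuristic density 1/d and the standard "clear every item that appears in a negative test" decoder. It is not the improved or explicit construction of the paper; it only illustrates the non-adaptive regimen such algorithms improve upon.

        import random

        def nonadaptive_candidates(n, d, num_tests, defective_set, seed=1):
            # Random Bernoulli test matrix, chosen before any outcome is observed.
            rng = random.Random(seed)
            density = 1.0 / max(d, 1)           # common heuristic for <= d defectives
            pools = [{x for x in range(n) if rng.random() < density}
                     for _ in range(num_tests)]
            outcomes = [bool(pool & defective_set) for pool in pools]

            # Decoding: items seen in a negative pool cannot be defective.
            cleared = set()
            for pool, positive in zip(pools, outcomes):
                if not positive:
                    cleared |= pool
            return [x for x in range(n) if x not in cleared]

        # The output always contains every true defective; with enough tests it
        # contains little or nothing else.
        candidates = nonadaptive_candidates(1000, 3, 120, {7, 400, 999})
        assert {7, 400, 999}.issubset(candidates)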

    GROTESQUE: Noisy Group Testing (Quick and Efficient)

    Group testing refers to the problem of identifying (with high probability) a (small) subset of $D$ defectives from a (large) set of $N$ items via a "small" number of "pooled" tests. For ease of presentation, in this work we focus on the regime where $D = \mathcal{O}(N^{1-\delta})$ for some $\delta > 0$. The tests may be noiseless or noisy, and the testing procedure may be adaptive (the pool defining a test may depend on the outcomes of previous tests) or non-adaptive (each test is performed independently of the outcomes of other tests). A rich body of literature demonstrates that $\Theta(D\log(N))$ tests are information-theoretically necessary and sufficient for the group testing problem, and provides algorithms that achieve this performance. However, it is only recently that reconstruction algorithms with computational complexity sub-linear in $N$ have started being investigated (recent work by [GurI:04, IndN:10, NgoP:11] gave some of the first such algorithms). In the scenario of adaptive tests with noisy outcomes, we present the first scheme that is simultaneously order-optimal (up to small constant factors) in both the number of tests and the decoding complexity ($\mathcal{O}(D\log(N))$ in both performance metrics). The total number of stages of our adaptive algorithm is "small" ($\mathcal{O}(\log(D))$). Similarly, in the scenario of non-adaptive tests with noisy outcomes, we present the first scheme that is simultaneously near-optimal in both the number of tests and the decoding complexity, via an algorithm that requires $\mathcal{O}(D\log(D)\log(N))$ tests and has decoding complexity $\mathcal{O}(D(\log N + \log^{2} D))$. Finally, we present an adaptive algorithm that requires only 2 stages, and for which both the number of tests and the decoding complexity scale as $\mathcal{O}(D(\log N + \log^{2} D))$. For all three settings the probability of error of our algorithms scales as $\mathcal{O}(1/\mathrm{poly}(D))$. Comment: 26 pages, 5 figures.
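
    As a minimal illustration of coping with noisy outcomes, the sketch below repeats each pooled test and takes a majority vote, then uses the de-noised tests in a binary search for a single defective. The repetition scheme, noise rate, and function names are assumptions chosen for clarity; GROTESQUE itself achieves much better test and decoding complexity than simple repetition.

        import random

        def noisy_pool_test(pool, defective_set, flip_prob, rng):
            # Each pooled test outcome is flipped independently with probability flip_prob.
            truth = bool(set(pool) & defective_set)
            return truth ^ (rng.random() < flip_prob)

        def robust_test(pool, defective_set, flip_prob, repeats, rng):
            # Suppress noise by repeating the same pooled test and taking a majority vote.
            votes = sum(noisy_pool_test(pool, defective_set, flip_prob, rng)
                        for _ in range(repeats))
            return 2 * votes > repeats

        def locate_single_defective(items, defective_set, flip_prob=0.1, repeats=15, seed=2):
            # Noisy binary search, assuming exactly one defective lies in `items`.
            rng = random.Random(seed)
            lo, hi = 0, len(items)
            while hi - lo > 1:
                mid = (lo + hi) // 2
                if robust_test(items[lo:mid], defective_set, flip_prob, repeats, rng):
                    hi = mid            # the defective is in the left half
                else:
                    lo = mid            # otherwise it must be in the right half
            return items[lo]

        # Example: find the single defective item 321 among 1024 items despite 10% noise.
        print(locate_single_defective(list(range(1024)), {321}))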