
    Almost Cover-Free Codes and Designs

    An $s$-subset of codewords of a binary code $X$ is said to be {\em $(s,\ell)$-bad} in $X$ if the code $X$ contains a subset of $\ell$ other codewords such that the conjunction of the $\ell$ codewords is covered by the disjunctive sum of the $s$ codewords. Otherwise, the $s$-subset of codewords of $X$ is said to be {\em $(s,\ell)$-good} in $X$. A binary code $X$ is said to be a cover-free $(s,\ell)$-code if the code $X$ does not contain $(s,\ell)$-bad subsets. In this paper, we introduce a natural {\em probabilistic} generalization of cover-free $(s,\ell)$-codes, namely: a binary code is said to be an almost cover-free $(s,\ell)$-code if {\em almost all} $s$-subsets of its codewords are $(s,\ell)$-good. We discuss the concept of almost cover-free $(s,\ell)$-codes arising in combinatorial group testing problems connected with the nonadaptive search of defective supersets (complexes). We develop a random coding method based on the ensemble of binary constant-weight codes to obtain lower bounds on the capacity of such codes. Comment: 18 pages, conference paper.
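
    The definition above is straightforward to check directly on small codes. Below is a minimal brute-force sketch (not from the paper) that tests whether a given $s$-subset of codewords is $(s,\ell)$-bad, i.e. whether the conjunction of some $\ell$ other codewords is covered by the disjunctive sum of the $s$ chosen ones; the function names and the NumPy representation of the code are illustrative assumptions.

```python
from itertools import combinations

import numpy as np


def is_bad_subset(code, s_subset, ell):
    """Return True if the s-subset (row indices into `code`) is (s, ell)-bad.

    `code` is a 0/1 array of shape (num_codewords, n).  The subset is bad if
    the conjunction (coordinate-wise AND) of some `ell` *other* codewords is
    covered by the disjunctive sum (coordinate-wise OR) of the chosen ones.
    """
    s_subset = set(s_subset)
    disjunction = np.bitwise_or.reduce(code[list(s_subset)], axis=0)
    others = [i for i in range(len(code)) if i not in s_subset]
    for group in combinations(others, ell):
        conjunction = np.bitwise_and.reduce(code[list(group)], axis=0)
        # "Covered" means every 1 of the conjunction is also a 1 of the disjunction.
        if np.all(conjunction <= disjunction):
            return True
    return False


def is_cover_free(code, s, ell):
    """Brute-force check that `code` is a cover-free (s, ell)-code."""
    return not any(
        is_bad_subset(code, subset, ell)
        for subset in combinations(range(len(code)), s)
    )
```

    Enumerating all $s$-subsets this way is exponential in the code size, so the sketch only illustrates the definition rather than certifying large codes.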

    Near-Optimal Noisy Group Testing via Separate Decoding of Items

    The group testing problem consists of determining a small set of defective items from a larger set of items based on a number of tests, and is relevant in applications such as medical testing, communication protocols, pattern matching, and more. In this paper, we revisit an efficient algorithm for noisy group testing in which each item is decoded separately (Malyutov and Mateev, 1980), and develop novel performance guarantees via an information-theoretic framework for general noise models. For the special cases of no noise and symmetric noise, we find that the asymptotic number of tests required for vanishing error probability is within a factor of $\log 2 \approx 0.7$ of the information-theoretic optimum at low sparsity levels, and that with a small fraction of allowed incorrectly decoded items, this guarantee extends to all sublinear sparsity levels. In addition, we provide a converse bound showing that if one tries to move slightly beyond our low-sparsity achievability threshold using separate decoding of items and i.i.d. randomized testing, the average number of items decoded incorrectly approaches that of a trivial decoder. Comment: Submitted to the IEEE Journal of Selected Topics in Signal Processing.
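
    As a rough illustration of separate decoding of items, the sketch below scores each item independently by a sum of log-ratios of outcome probabilities over the tests and thresholds the score. The model distributions `p_y_given_x`, `p_y` and the threshold `gamma` are hypothetical inputs supplied by the caller; this is a simplified reading of the general idea, not the paper's exact decoder.

```python
import numpy as np


def separate_decode(tests, outcomes, p_y_given_x, p_y, gamma):
    """Score and threshold each item independently of all others.

    tests       : 0/1 array of shape (T, N); tests[t, j] = 1 if item j is in test t.
    outcomes    : length-T 0/1 array of observed (possibly noisy) test results.
    p_y_given_x : 2x2 array; p_y_given_x[x, y] is the assumed probability of
                  outcome y for a test in which the item has placement bit x,
                  under the hypothesis that the item is defective.
    p_y         : length-2 array of assumed marginal outcome probabilities.
    gamma       : decision threshold on the per-item score.
    """
    tests = np.asarray(tests)
    outcomes = np.asarray(outcomes)
    p_y_given_x = np.asarray(p_y_given_x, dtype=float)
    p_y = np.asarray(p_y, dtype=float)

    declared = set()
    for j in range(tests.shape[1]):
        x = tests[:, j]
        # Information-density style score: sum of log P(y | x_j) / P(y) over tests.
        score = np.sum(np.log(p_y_given_x[x, outcomes] / p_y[outcomes]))
        if score > gamma:
            declared.add(j)
    return declared
```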

    How little does non-exact recovery help in group testing?

    We consider the group testing problem, in which one seeks to identify a subset of defective items within a larger set of items based on a number of tests. We characterize the information-theoretic performance limits in the presence of list decoding, in which the decoder may output a list containing more elements than the number of defectives, and the only requirement is that the true defective set is a subset of the list, or more generally, that their overlap exceeds a given threshold. We show that even under this highly relaxed criterion, in several scaling regimes the asymptotic number of tests is no smaller than in the exact recovery setting. However, we also provide examples where a reduction is provably attained. We support our theoretical findings with numerical experiments.
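
    The relaxed recovery criterion can be stated as a one-line check. The sketch below is illustrative (names chosen here, not from the paper): success means the true defective set is contained in the decoded list, or, under the more general criterion, that the overlap reaches a caller-chosen threshold.

```python
def list_recovery_success(defectives, decoded_list, min_overlap=None):
    """Relaxed recovery criteria: containment, or overlap above a threshold."""
    defectives, decoded = set(defectives), set(decoded_list)
    if min_overlap is None:
        # List decoding with no overlap threshold: the list may be larger than
        # the defective set, but must contain all of it.
        return defectives <= decoded
    return len(defectives & decoded) >= min_overlap
```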

    Performance of Group Testing Algorithms With Near-Constant Tests-per-Item

    We consider nonadaptive group testing with $N$ items, of which $K = \Theta(N^\theta)$ are defective. We study a test design in which each item appears in nearly the same number of tests. For each item, we independently pick $L$ tests uniformly at random with replacement, and place the item in those tests. We analyse the performance of these designs with simple and practical decoding algorithms in a range of sparsity regimes, and show that the performance is consistently improved in comparison with standard Bernoulli designs. We show that our new design requires roughly 23% fewer tests than a Bernoulli design when paired with the simple decoding algorithms known as COMP and DD. This gives the best known nonadaptive group testing performance for $\theta > 0.43$, and the best proven performance with a practical decoding algorithm for all $\theta \in (0, 1)$. We also give a converse result showing that the DD algorithm is optimal with respect to our randomised design when $\theta > 1/2$. We complement our theoretical results with simulations that show a notable improvement over Bernoulli designs in both sparse and dense regimes.
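
    The design and the COMP decoder mentioned above are simple to prototype. The following sketch is an illustration under stated assumptions, not the authors' code: it builds a test matrix by placing each item in $L$ tests drawn uniformly at random with replacement, then applies COMP, which declares an item defective unless it appears in at least one negative test.

```python
import numpy as np


def near_constant_design(num_items, num_tests, L, seed=None):
    """Place each item in L tests chosen uniformly at random *with replacement*,
    so every item lies in at most L (i.e. nearly constant) distinct tests."""
    rng = np.random.default_rng(seed)
    A = np.zeros((num_tests, num_items), dtype=int)
    for j in range(num_items):
        A[rng.integers(0, num_tests, size=L), j] = 1
    return A


def comp_decode(A, outcomes):
    """COMP: an item is declared defective unless it appears in a negative test."""
    outcomes = np.asarray(outcomes)
    in_negative_test = (A[outcomes == 0] == 1).any(axis=0)
    return set(np.flatnonzero(~in_negative_test))
```

    For noiseless outcomes one could simulate `outcomes = (A @ defects > 0).astype(int)` from a 0/1 defectivity vector `defects`; the DD algorithm would further prune the COMP output, but is omitted here.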