
    A construction of pooling designs with surprisingly high degree of error correction

    It is well known that many famous pooling designs are constructed from mathematical structures by the "containment matrix" method. In this paper, we propose another method and obtain a family of pooling designs with a surprisingly high degree of error correction based on a finite set. Given the numbers of items and pools, the error-tolerant property of our designs is much better than that of Macula's designs when the size of the set is large enough.
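    To make the comparison concrete, the baseline here is Macula's classical containment-matrix design M(d, k, n), whose rows (pools) are the d-subsets and whose columns (items) are the k-subsets of an n-set. The sketch below is a minimal Python illustration of that baseline construction, not of the paper's new method; the function name and example parameters are mine.

        from itertools import combinations

        def macula_matrix(d, k, n):
            """Macula's containment matrix M(d, k, n): rows are d-subsets,
            columns are k-subsets of {1..n}; an entry is 1 iff the row
            subset is contained in the column subset."""
            rows = list(combinations(range(1, n + 1), d))
            cols = list(combinations(range(1, n + 1), k))
            return [[1 if set(r) <= set(c) else 0 for c in cols] for r in rows]

        M = macula_matrix(2, 3, 5)
        print(len(M), "pools x", len(M[0]), "items")  # 10 pools x 10 items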

    Efficient Two-Stage Group Testing Algorithms for Genetic Screening

    Efficient two-stage group testing algorithms that are particularly suited for rapid and less expensive DNA library screening and other large-scale biological group testing efforts are investigated in this paper. The main focus is on novel combinatorial constructions that minimize the number of individual tests at the second stage of a two-stage disjunctive testing procedure. Building on recent work by Levenshtein (2003) and Tonchev (2008), several new infinite classes of such combinatorial designs are presented. Comment: 14 pages; to appear in "Algorithmica". Part of this work was presented at the ICALP 2011 Group Testing Workshop; arXiv:1106.368
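    The generic two-stage disjunctive procedure the abstract refers to works as follows: a fixed set of pools is tested in parallel, every item that appears in some negative pool is ruled out, and the surviving candidates are tested individually. Below is a minimal Python sketch of that generic procedure, not of the paper's specific designs; the random pools and all parameters are illustrative.

        import random

        def two_stage_screen(design, is_positive, n):
            """Generic two-stage disjunctive testing. design: list of pools
            (sets of item indices); is_positive: oracle mapping a set of
            items to a boolean test outcome."""
            # Stage 1: test all pools in parallel.
            outcomes = [is_positive(pool) for pool in design]
            # Items appearing in any negative pool are definitely negative.
            cleared = set()
            for pool, out in zip(design, outcomes):
                if not out:
                    cleared |= pool
            candidates = [i for i in range(n) if i not in cleared]
            # Stage 2: resolve the remaining candidates individually.
            return [i for i in candidates if is_positive({i})]

        # Toy run: 2 hidden positives among 100 items, 40 random pools.
        n, positives = 100, {3, 42}
        design = [set(random.sample(range(n), 10)) for _ in range(40)]
        print(sorted(two_stage_screen(design, lambda s: bool(s & positives), n)))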

    Pooling designs with surprisingly high degree of error correction in a finite vector space

    Pooling designs are standard experimental tools in many biotechnical applications. It is well known that all famous pooling designs are constructed from mathematical structures by the "containment matrix" method. In particular, Macula's designs (resp. Ngo and Du's designs) are constructed from the containment relation of subsets (resp. subspaces) of a finite set (resp. vector space). Recently, we generalized Macula's designs and obtained a family of pooling designs with a higher degree of error correction from subsets of a finite set. In this paper, as a generalization of Ngo and Du's designs, we study the corresponding problems in a finite vector space and obtain a family of pooling designs with a surprisingly high degree of error correction. Our designs and Ngo and Du's designs have the same numbers of items and pools, respectively, but the error-tolerant property of ours is much better than that of Ngo and Du's designs (which was determined by D'yachkov et al. \cite{DF}) when the dimension of the space is large enough.
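    The q-analogue construction referenced here replaces subsets with subspaces: rows of the matrix are indexed by d-dimensional subspaces and columns by e-dimensional subspaces of a vector space over a finite field, with containment deciding the entries. Below is a brute-force Python sketch over GF(2) for very small parameters, in the spirit of Ngo and Du's construction; the enumeration strategy, function names, and parameters are mine and purely illustrative.

        from itertools import combinations, product

        def span(vectors, n):
            """All GF(2) linear combinations of the given length-n 0/1 vectors."""
            vecs = {tuple([0] * n)}
            for v in vectors:
                vecs |= {tuple(a ^ b for a, b in zip(u, v)) for u in vecs}
            return frozenset(vecs)

        def subspaces(dim, n):
            """All dim-dimensional subspaces of GF(2)^n (brute force; tiny n only)."""
            nonzero = [v for v in product((0, 1), repeat=n) if any(v)]
            return {s for c in combinations(nonzero, dim)
                    if len(s := span(c, n)) == 2 ** dim}

        def containment_matrix(d, e, n):
            """Rows: d-subspaces (pools); columns: e-subspaces (items);
            an entry is 1 iff the row subspace lies in the column subspace."""
            rows, cols = list(subspaces(d, n)), list(subspaces(e, n))
            return [[1 if r <= c else 0 for c in cols] for r in rows]

        M = containment_matrix(1, 2, 3)
        print(len(M), "pools x", len(M[0]), "items")  # 7 pools x 7 items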

    Lower bounds for identifying subset members with subset queries

    An instance of a group testing problem is a set of objects $\mathcal{O}$ and an unknown subset $P$ of $\mathcal{O}$. The task is to determine $P$ by using queries of the type "does $P$ intersect $Q$?", where $Q$ is a subset of $\mathcal{O}$. This problem occurs in areas such as fault detection, multiaccess communications, optimal search, blood testing and chromosome mapping. Consider the two-stage algorithm for solving a group testing problem: in the first stage a predetermined set of queries is asked in parallel, and in the second stage $P$ is determined by testing individual objects. Let $n = |\mathcal{O}|$. Suppose that $P$ is generated by independently adding each $x \in \mathcal{O}$ to $P$ with probability $p/n$. Let $q_1$ (resp. $q_2$) be the number of queries asked in the first (resp. second) stage of this algorithm. We show that if $q_1 = o(\log(n)\log(n)/\log\log(n))$, then $\mathbb{E}(q_2) = n^{1-o(1)}$, while there exist algorithms with $q_1 = O(\log(n)\log(n)/\log\log(n))$ and $\mathbb{E}(q_2) = o(1)$. The proof involves a relaxation technique which can be used with arbitrary distributions. The best previously known bound is $q_1 + \mathbb{E}(q_2) = \Omega(p\log(n))$. For general group testing algorithms, our results imply that if the average number of queries over the course of $n^{\gamma}$ ($\gamma > 0$) independent experiments is $O(n^{1-\epsilon})$, then with high probability $\Omega(\log(n)\log(n)/\log\log(n))$ non-singleton subsets are queried. This settles a conjecture of Bill Bruno and David Torney and has important consequences for the use of group testing in screening DNA libraries and other applications where it is more cost-effective to use non-adaptive algorithms and/or too expensive to prepare a subset $Q$ for its first test. Comment: 9 pages
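    The random model in this abstract is easy to simulate: draw $P$ by including each object independently with probability $p/n$, ask $q_1$ fixed pooled queries, rule out every object that lies in some negative pool, and charge one second-stage query per surviving object. A toy Monte Carlo sketch in Python follows; the pool size and all parameters are illustrative choices, not from the paper.

        import random

        def expected_q2(n, p, q1, pool_size, trials=200):
            """Monte Carlo estimate of E(q2): objects not ruled out by the
            q1 parallel first-stage queries must be tested individually."""
            total = 0
            for _ in range(trials):
                P = {x for x in range(n) if random.random() < p / n}
                queries = [set(random.sample(range(n), pool_size))
                           for _ in range(q1)]
                cleared = set()
                for Q in queries:
                    if not (Q & P):        # negative query: all members clean
                        cleared |= Q
                total += n - len(cleared)  # survivors need individual tests
            return total / trials

        print(expected_q2(n=1000, p=5, q1=30, pool_size=200))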

    Noise-Resilient Group Testing: Limitations and Constructions

    We study combinatorial group testing schemes for learning $d$-sparse Boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately reconstruct the sparse vector. On the positive side, we take this barrier to our advantage and show that approximate reconstruction (within a satisfactory degree of approximation) allows us to break the information-theoretic lower bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of $d$-sparse vectors of length $n$ via non-adaptive measurements, by a multiplicative factor $\tilde{\Omega}(d)$. Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with $m = O(d \log n)$ measurements, that allow efficient reconstruction of $d$-sparse vectors up to $O(d)$ false positives even in the presence of $\delta m$ false positives and $O(m/d)$ false negatives within the measurement outcomes, for any constant $\delta < 1$. We show that, information-theoretically, none of these parameters can be substantially improved without dramatically affecting the others. Furthermore, we obtain several explicit constructions, in particular one matching the randomized trade-off but using $m = O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions that allow fast reconstruction in time $\mathrm{poly}(m)$, which would be sublinear in $n$ for sufficiently sparse vectors. The main tool used in our construction is the list-decoding view of randomness condensers and extractors. Comment: Full version. A preliminary summary of this work appears (under the same title) in proceedings of the 17th International Symposium on Fundamentals of Computation Theory (FCT 2009).
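    In the spirit of the randomized construction described here, the sketch below builds a Bernoulli measurement matrix with roughly $O(d \log n)$ rows and decodes with a noise-tolerant threshold rule, accepting some false positives in exchange for resilience to flipped outcomes. All parameters, thresholds, and names are illustrative assumptions, not the paper's construction.

        import random

        def noisy_group_test(n=500, d=5, m=None, flip=0.05):
            """Toy Bernoulli design plus threshold decoding: tolerate noisy
            outcomes at the price of a few false positives."""
            m = m or 10 * d * max(1, n.bit_length())  # roughly O(d log n) tests
            defectives = set(random.sample(range(n), d))
            tests = [{i for i in range(n) if random.random() < 1 / d}
                     for _ in range(m)]
            # Disjunctive outcomes, each flipped independently w.p. `flip`.
            outcomes = [bool(t & defectives) ^ (random.random() < flip)
                        for t in tests]
            decoded = set()
            for i in range(n):
                seen = [o for t, o in zip(tests, outcomes) if i in t]
                # Keep i unless it appears in "too many" negative tests;
                # the 0.2 threshold is an arbitrary tuning knob.
                if seen and sum(not o for o in seen) <= 0.2 * len(seen):
                    decoded.add(i)
            return defectives, decoded

        truth, guess = noisy_group_test()
        print(len(guess - truth), "false positives;", len(truth - guess), "missed")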

    A single-photon sampling architecture for solid-state imaging

    Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low-photon-flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization result: a 120x120 photodiode sensor on a 30um x 30um pitch with a 40ps time resolution and an estimated fill factor of approximately 70%, using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography, the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings. Comment: 24 pages, 3 figures, 5 tables
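    The multiplexing idea is that each pixel is hard-wired to a small subset of TDC lines, and a set of simultaneous arrivals is recovered from the union of lines that fired; unique decoding of up to t arrivals requires that no two such unions of at most t pixel subsets coincide (a superimposed-code condition). A toy Python sketch with a brute-force decoder follows; the example wiring, names, and t are illustrative, whereas the paper uses optimized matrices and a fast decoder.

        from itertools import combinations

        def fired_lines(wiring, pixels):
            """Union of TDC lines triggered by a set of firing pixels;
            wiring[j] is the set of lines connected to pixel j."""
            lines = set()
            for p in pixels:
                lines |= wiring[p]
            return frozenset(lines)

        def decode(wiring, lines, t):
            """Brute force: find a pixel set of size <= t whose fired
            lines match the observation."""
            for size in range(t + 1):
                for cand in combinations(range(len(wiring)), size):
                    if fired_lines(wiring, cand) == lines:
                        return set(cand)
            return None

        # 6 pixels wired onto 4 TDC lines via distinct 2-subsets; this
        # wiring uniquely decodes any single arrival (t = 1).
        wiring = [{0, 1}, {0, 2}, {0, 3}, {1, 2}, {1, 3}, {2, 3}]
        print(decode(wiring, fired_lines(wiring, {4}), t=1))  # {4}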

    Learning Immune-Defectives Graph through Group Tests

    This paper deals with an abstraction of a unified problem of drug discovery and pathogen identification. Pathogen identification involves identifying disease-causing biomolecules. Drug discovery involves finding chemical compounds, called lead compounds, that bind to pathogenic proteins and eventually inhibit the function of the protein. In this paper, the lead compounds are abstracted as inhibitors, pathogenic proteins as defectives, and the mixture of "ineffective" chemical compounds and non-pathogenic proteins as normal items. A defective could be immune to the presence of an inhibitor in a test, so a test containing a defective is positive iff it does not contain its "associated" inhibitor. The goal of this paper is to identify the defectives, inhibitors, and their "associations" with high probability, or in other words, to learn the Immune Defectives Graph (IDG) efficiently through group tests. We propose a probabilistic non-adaptive pooling design, a probabilistic two-stage adaptive pooling design, and decoding algorithms for learning the IDG. For the two-stage adaptive pooling design, we show that the sample complexity of the number of tests required to guarantee recovery of the inhibitors, defectives, and their associations with high probability, i.e., the upper bound, exceeds the proposed lower bound by a logarithmic multiplicative factor in the number of items. For the non-adaptive pooling design too, we show that the upper bound exceeds the proposed lower bound by at most a logarithmic multiplicative factor in the number of items. Comment: Double column, 17 pages. Updated with tighter lower bounds and other minor edits
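    The outcome model described here is simple to state in code: a pool tests positive exactly when it contains some defective with none of its associated inhibitors present. A minimal Python sketch of that outcome function follows; the dictionary encoding of the associations is my own illustrative choice.

        def idg_outcome(pool, defectives, associations):
            """A test is positive iff some defective in the pool has none of
            its associated inhibitors in the pool. associations maps each
            defective to the set of inhibitors that suppress it."""
            return any(d in pool and not (associations.get(d, set()) & pool)
                       for d in defectives)

        # Defective 7 is suppressed by inhibitor 3; defective 9 by nothing.
        assoc = {7: {3}}
        print(idg_outcome({7, 3}, {7, 9}, assoc))  # False: 3 masks 7
        print(idg_outcome({7, 1}, {7, 9}, assoc))  # True
        print(idg_outcome({9, 3}, {7, 9}, assoc))  # True: 9 is immune to 3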

    An Epitome of Multi Secret Sharing Schemes for General Access Structure

    Secret sharing schemes are now widely used in various applications that need more security, trust and reliability. In a secret sharing scheme, the secret is divided among the participants, and only an authorized set of participants can recover the secret by combining their shares. The authorized sets of participants are called the access structure of the scheme. In a Multi-Secret Sharing Scheme (MSSS), k different secrets are distributed among the participants, each one according to an access structure. Multi-secret sharing schemes have been studied extensively by the cryptographic community. A number of schemes have been proposed for threshold multi-secret sharing and for multi-secret sharing according to a generalized access structure, with various features. In this survey we explore the important constructions of multi-secret sharing for the generalized access structure, with their merits and demerits. Features such as whether shares can be reused, whether participants can be enrolled or dis-enrolled efficiently, and whether shares have to be modified in the renewal phase are considered for the evaluation.
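    Many of the surveyed constructions build on a standard threshold primitive. As a point of reference, here is a minimal Python sketch of Shamir's (k, n) threshold sharing, which multi-secret schemes commonly extend; it is not any particular scheme from the survey, and the prime modulus is an arbitrary public choice.

        import random

        P = 2**61 - 1  # public prime modulus (arbitrary choice)

        def share(secret, k, n):
            """Shamir (k, n) sharing: a random degree-(k-1) polynomial with
            constant term `secret`, evaluated at the points 1..n."""
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            def f(x):
                return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            return [(x, f(x)) for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0: any k shares recover the
            secret; fewer than k reveal nothing about it."""
            secret = 0
            for xi, yi in shares:
                num = den = 1
                for xj, _ in shares:
                    if xj != xi:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        shares = share(123456789, k=3, n=5)
        print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice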