
    Adaptive group testing as channel coding with feedback

    Group testing is the combinatorial problem of identifying the defective items in a population by grouping items into test pools. Recently, nonadaptive group testing - where all the test pools must be decided on at the start - has been studied from an information-theoretic point of view. Using techniques from channel coding, upper and lower bounds have been given on the number of tests required to accurately recover the defective set, even when the test outcomes can be noisy. In this paper, we give the first information-theoretic result on adaptive group testing - where the outcomes of previous tests can influence the makeup of future tests. We show that adaptive testing does not help much, as the number of tests required obeys the same lower bound as nonadaptive testing. Our proof uses similar techniques to the proof that feedback does not improve channel capacity. Comment: 4 pages, 1 figure.
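
    The lower-bound argument rests on a simple counting fact: any testing strategy, adaptive or not, must distinguish between all possible defective sets. A minimal sketch of that counting bound follows; the values of n and d are illustrative and not taken from the paper.

```python
# Counting (information-theoretic) lower bound for group testing: a strategy
# with binary outcomes must separate all C(n, d) candidate defective sets,
# so it needs at least log2(C(n, d)) tests -- adaptive or non-adaptive alike.
from math import ceil, comb, log2

def counting_lower_bound(n: int, d: int) -> int:
    """Minimum number of yes/no tests needed to pin down d defectives among n items."""
    return ceil(log2(comb(n, d)))

if __name__ == "__main__":
    n, d = 10_000, 10          # illustrative sizes, not from the paper
    print(f"n={n}, d={d}: at least {counting_lower_bound(n, d)} tests required")
```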

    Noise-Resilient Group Testing: Limitations and Constructions

    We study combinatorial group testing schemes for learning $d$-sparse Boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately reconstruct the sparse vector. On the positive side, we take this barrier to our advantage and show that approximate reconstruction (within a satisfactory degree of approximation) allows us to break the information-theoretic lower bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of $d$-sparse vectors of length $n$ via non-adaptive measurements, by a multiplicative factor $\tilde{\Omega}(d)$. Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with $m = O(d \log n)$ measurements, that allow efficient reconstruction of $d$-sparse vectors up to $O(d)$ false positives even in the presence of $\delta m$ false positives and $O(m/d)$ false negatives within the measurement outcomes, for any constant $\delta < 1$. We show that, information theoretically, none of these parameters can be substantially improved without dramatically affecting the others. Furthermore, we obtain several explicit constructions, in particular one matching the randomized trade-off but using $m = O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions that allow fast reconstruction in time $\mathrm{poly}(m)$, which would be sublinear in $n$ for sufficiently sparse vectors. The main tool used in our construction is the list-decoding view of randomness condensers and extractors. Comment: Full version. A preliminary summary of this work appears (under the same title) in proceedings of the 17th International Symposium on Fundamentals of Computation Theory (FCT 2009).
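
    As a rough illustration of the measurement model (not the paper's condenser-based constructions), the sketch below draws a random non-adaptive disjunctive design, flips a fraction of the test outcomes, and applies a naive counting decoder that tolerates a few reconstruction errors; the sizes, noise rate, and slack threshold are assumptions made for the example.

```python
# Noisy disjunctive (OR) measurements of a d-sparse Boolean vector, decoded
# approximately with a simple counting rule. Purely illustrative parameters.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 5, 250                       # vector length, sparsity, number of tests

x = np.zeros(n, dtype=bool)
x[rng.choice(n, size=d, replace=False)] = True

A = rng.random((m, n)) < 1.0 / d             # random Bernoulli design: item joins a pool w.p. ~1/d
y = (A & x).any(axis=1)                      # noiseless disjunctive (OR) outcomes
y_noisy = y ^ (rng.random(m) < 0.02)         # flip ~2% of outcomes to model unreliable tests

# Counting decoder: an item contained in many negative pools is unlikely to be defective.
negative_hits = (A & ~y_noisy[:, None]).sum(axis=0)
estimate = negative_hits <= 2                # small slack absorbs false negatives in the outcomes

print("false positives:", int((estimate & ~x).sum()),
      "false negatives:", int((~estimate & x).sum()))
```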

    Nearly Optimal Sparse Group Testing

    Group testing is the process of pooling arbitrary subsets from a set of $n$ items so as to identify, with a minimal number of tests, a "small" subset of $d$ defective items. In "classical" non-adaptive group testing, it is known that when $d$ is substantially smaller than $n$, $\Theta(d\log(n))$ tests are both information-theoretically necessary and sufficient to guarantee recovery with high probability. Group testing schemes in the literature meeting this bound require most items to be tested $\Omega(\log(n))$ times, and most tests to incorporate $\Omega(n/d)$ items. Motivated by physical considerations, we study group testing models in which the testing procedure is constrained to be "sparse". Specifically, we consider (separately) scenarios in which (a) items are finitely divisible and hence may participate in at most $\gamma \in o(\log(n))$ tests; or (b) tests are size-constrained to pool no more than $\rho \in o(n/d)$ items per test. For both scenarios we provide information-theoretic lower bounds on the number of tests required to guarantee high-probability recovery. In both scenarios we provide both randomized constructions (under both $\epsilon$-error and zero-error reconstruction guarantees) and explicit constructions of designs with computationally efficient reconstruction algorithms that require a number of tests that is optimal up to constant or small polynomial factors in some regimes of $n$, $d$, $\gamma$, and $\rho$. The randomized design/reconstruction algorithm in the $\rho$-sized test scenario is universal -- independent of the value of $d$, as long as $\rho \in o(n/d)$. We also investigate the effect of unreliability/noise in test outcomes. For the full abstract, please see the full text PDF.
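
    As a toy illustration of constraint (a), where each item may join at most $\gamma$ tests, the following sketch assigns every item to $\gamma$ random pools and decodes with the standard COMP rule; the parameter values are placeholders, and this is not one of the paper's near-optimal designs.

```python
# Sparse (gamma-divisible) group testing: each item participates in exactly
# gamma pools; decoding uses COMP (discard any item seen in a negative pool).
import numpy as np

rng = np.random.default_rng(1)
n, d, gamma, t = 1000, 4, 6, 120             # items, defectives, tests per item, tests

defectives = set(rng.choice(n, size=d, replace=False).tolist())

pools = [set() for _ in range(t)]
for item in range(n):
    for test in rng.choice(t, size=gamma, replace=False):
        pools[test].add(item)                # item joins exactly gamma pools

outcomes = [bool(pool & defectives) for pool in pools]   # standard OR-type test outcomes

candidates = set(range(n))
for pool, positive in zip(pools, outcomes):
    if not positive:
        candidates -= pool                   # COMP: negative pools clear their members

print("true defectives:", sorted(defectives))
print("declared defective:", sorted(candidates))
```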

    A single-photon sampling architecture for solid-state imaging

    Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low photon flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we present a typical digitization design for a 120x120 photodiode sensor on a 30um x 30um pitch with a 40ps time resolution and an estimated fill factor of approximately 70%, using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography, the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings. Comment: 24 pages, 3 figures, 5 tables.
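
    The multiplexing idea can be pictured with a small sketch: pixel firings are OR-combined onto a handful of TDC lines through a binary interconnection matrix, and a set of simultaneous arrivals is decodable only if no other small set of pixels produces the same TDC pattern. The matrix below is random and the sizes are toy values, not the optimized instances reported in the paper.

```python
# OR-multiplexing of pixel firings onto TDC lines via a binary interconnection
# matrix. Toy sizes; the paper's example uses a 120x120 sensor and 161 TDCs.
import numpy as np

rng = np.random.default_rng(2)
num_pixels, num_tdcs = 64, 12

M = rng.random((num_tdcs, num_pixels)) < 0.3     # M[j, i] = True if pixel i is wired to TDC j

def tdc_pattern(fired_pixels):
    """TDC lines that register a hit when the given pixels fire simultaneously."""
    hit = np.zeros(num_pixels, dtype=bool)
    hit[fired_pixels] = True
    return (M & hit).any(axis=1)

# Unique recovery of up to k simultaneous arrivals requires that no two distinct
# pixel subsets of size <= k map to the same TDC pattern.
p1, p2 = tdc_pattern([3, 41]), tdc_pattern([3, 42])
print("patterns differ:", not np.array_equal(p1, p2))
```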

    Efficiently Decodable Non-Adaptive Threshold Group Testing

    We consider non-adaptive threshold group testing for identification of up to $d$ defective items in a set of $n$ items, where a test is positive if it contains at least $u$ defective items, for some fixed threshold $2 \leq u \leq d$, and negative otherwise. The defective items can be identified using $t = O\left(\left(\frac{d}{u}\right)^u \left(\frac{d}{d-u}\right)^{d-u} \left(u \log{\frac{d}{u}} + \log{\frac{1}{\epsilon}}\right) \cdot d^2 \log{n}\right)$ tests with probability at least $1 - \epsilon$ for any $\epsilon > 0$, or $t = O\left(\left(\frac{d}{u}\right)^u \left(\frac{d}{d-u}\right)^{d-u} d^3 \log{n} \cdot \log{\frac{n}{d}}\right)$ tests with probability 1. The decoding time is $t \times \mathrm{poly}(d^2 \log{n})$. This result significantly improves the best known results for decoding non-adaptive threshold group testing: $O(n\log{n} + n \log{\frac{1}{\epsilon}})$ for probabilistic decoding, where $\epsilon > 0$, and $O(n^u \log{n})$ for deterministic decoding.
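
    To make the test model concrete, the sketch below simulates threshold tests in which a pool is positive only when it contains at least $u$ defectives (with no gap between thresholds); the parameters and random pools are assumptions for illustration and do not reproduce the paper's efficiently decodable construction.

```python
# Threshold group testing outcome model: a pool reads positive iff it contains
# at least u defective items. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(3)
n, d, u, t = 500, 6, 2, 80                   # items, defectives, threshold (2 <= u <= d), tests

defectives = set(rng.choice(n, size=d, replace=False).tolist())

def threshold_outcome(pool):
    """Positive iff the pool holds at least u defectives."""
    return len(pool & defectives) >= u

pools = [set(rng.choice(n, size=n // d, replace=False).tolist()) for _ in range(t)]
outcomes = [threshold_outcome(pool) for pool in pools]
print(sum(outcomes), "of", t, "pools are positive under the threshold rule")
```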