Non-adaptive probabilistic group testing with noisy measurements: Near-optimal bounds with efficient algorithms
We consider the problem of detecting a small subset of defective items from a
large set via non-adaptive "random pooling" group tests. We consider both the
case when the measurements are noiseless, and the case when the measurements
are noisy (the outcome of each group test may be independently faulty with
probability q). Order-optimal results for these scenarios are known in the
literature. We give information-theoretic lower bounds on the query complexity
of these problems, and provide corresponding computationally efficient
algorithms that match the lower bounds up to a constant factor. To the best of
our knowledge, this work is the first to explicitly estimate such a constant,
which characterizes the gap between the upper and lower bounds for these
problems.
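The random-pooling setup described above can be sketched with a COMP-style decoder, a simplified stand-in for the paper's algorithms (all names and parameter values below are illustrative, not taken from the paper):

```python
import random

def comp_decode(pools, outcomes, n):
    """COMP-style decoding: every item appearing in a negative pool is
    declared non-defective; whatever survives is reported as defective."""
    survivors = set(range(n))
    for pool, positive in zip(pools, outcomes):
        if not positive:                 # a negative test clears its whole pool
            survivors -= pool
    return survivors

def random_pooling_trial(n=100, defectives=frozenset({3, 41, 77}),
                         tests=120, p=0.2, seed=0):
    """Non-adaptive design: each item joins each pool independently."""
    rng = random.Random(seed)
    pools = [{i for i in range(n) if rng.random() < p} for _ in range(tests)]
    outcomes = [bool(pool & defectives) for pool in pools]   # noiseless OR
    return comp_decode(pools, outcomes, n)
```

In the noiseless case this decoder can never miss a true defective (a defective item never sits in a negative pool), and with enough random tests the surviving false positives vanish with high probability.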
A single-photon sampling architecture for solid-state imaging
Advances in solid-state technology have enabled the development of silicon
photomultiplier sensor arrays capable of sensing individual photons. Combined
with high-frequency time-to-digital converters (TDCs), this technology opens up
the prospect of sensors capable of recording with high accuracy both the time
and location of each detected photon. Such a capability could lead to
significant improvements in imaging accuracy, especially for applications
operating with low photon fluxes such as LiDAR and positron emission
tomography.
The demands placed on on-chip readout circuitry impose stringent trade-offs
between fill factor and spatio-temporal resolution, causing many contemporary
designs to severely underutilize the technology's full potential. Concentrating
on the low photon flux setting, this paper leverages results from group testing
and proposes an architecture for a highly efficient readout of pixels using
only a small number of TDCs, thereby also reducing both cost and power
consumption. The design relies on a multiplexing technique based on binary
interconnection matrices. We provide optimized instances of these matrices for
various sensor parameters and give explicit upper and lower bounds on the
number of TDCs required to uniquely decode a given maximum number of
simultaneous photon arrivals.
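The multiplexing idea can be illustrated with a toy decoder: wire each pixel to a subset of TDCs (a column of the binary interconnection matrix), observe the union of the struck pixels' columns, and invert that union by table lookup. The small matrix below is a made-up example, not one of the paper's optimized instances:

```python
from itertools import combinations

def build_decoder(cols, k):
    """Precompute a firing-pattern -> pixel-set lookup for up to k simultaneous
    arrivals; patterns reachable from two different pixel sets map to None."""
    table = {}
    for r in range(1, k + 1):
        for pixels in combinations(range(len(cols)), r):
            pattern = frozenset().union(*(cols[p] for p in pixels))
            if pattern in table and table[pattern] != set(pixels):
                table[pattern] = None       # ambiguous: not uniquely decodable
            elif pattern not in table:
                table[pattern] = set(pixels)
    return table

# Toy instance: 6 pixels, 4 TDCs; pixel p is wired to the TDC pair cols[p]
cols = [{0, 1}, {0, 2}, {0, 3}, {1, 2}, {1, 3}, {2, 3}]
```

With this matrix a single arrival is always uniquely decodable, but two simultaneous arrivals are not (pixels {0, 5} and pixels {1, 4} both light all four TDCs), which is exactly why the paper optimizes matrices that guarantee recovery up to a target number of simultaneous arrivals.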
To illustrate the strength of the proposed architecture, we describe a typical
digitization design: a 120x120 photodiode sensor on a 30um x 30um pitch with
a 40ps time resolution and an estimated fill factor of approximately 70%, using
only 161 TDCs. The design guarantees registration and unique recovery of up to
4 simultaneous photon arrivals using a fast decoding algorithm. In a series of
realistic simulations of scintillation events in clinical positron emission
tomography the design was able to recover the spatio-temporal location of 98.6%
of all photons that caused pixel firings.
Comment: 24 pages, 3 figures, 5 tables
Near-Optimal Noisy Group Testing via Separate Decoding of Items
The group testing problem consists of determining a small set of defective
items from a larger set of items based on a number of tests, and is relevant in
applications such as medical testing, communication protocols, pattern
matching, and more. In this paper, we revisit an efficient algorithm for noisy
group testing in which each item is decoded separately (Malyutov and Mateev,
1980), and develop novel performance guarantees via an information-theoretic
framework for general noise models. For the special cases of no noise and
symmetric noise, we find that the asymptotic number of tests required for
vanishing error probability is within a factor of log 2 ≈ 0.7 of the
information-theoretic optimum at low sparsity levels, and that with a small
fraction of allowed incorrectly decoded items, this guarantee extends to all
sublinear sparsity levels. In addition, we provide a converse bound showing
that if one tries to move slightly beyond our low-sparsity achievability
threshold using separate decoding of items and i.i.d. randomized testing, the
average number of items decoded incorrectly approaches that of a trivial
decoder.
Comment: Submitted to the IEEE Journal of Selected Topics in Signal Processing
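The separate-decoding idea can be sketched with a simple thresholding rule; note that the paper's actual analysis uses an information-density statistic rather than the simplified positive-rate test below:

```python
def separate_decode(pools, outcomes, n, threshold=0.5):
    """Decode each item on its own: declare it defective when the fraction of
    positive outcomes among the tests that include it exceeds the threshold."""
    declared = set()
    for item in range(n):
        hits = [out for pool, out in zip(pools, outcomes) if item in pool]
        if hits and sum(hits) / len(hits) > threshold:
            declared.add(item)
    return declared
```

Because each item is judged in isolation, the decoder runs in time linear in the number of items times the number of tests and parallelizes trivially, which is the main practical appeal over joint decoding.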
Noise-Resilient Group Testing: Limitations and Constructions
We study combinatorial group testing schemes for learning d-sparse Boolean
vectors using highly unreliable disjunctive measurements. We consider an
adversarial noise model that only limits the number of false observations, and
show that any noise-resilient scheme in this model can only approximately
reconstruct the sparse vector. On the positive side, we take this barrier to
our advantage and show that approximate reconstruction (within a satisfactory
degree of approximation) allows us to break the information-theoretic lower
bound of Ω̃(d² log n) that is known for exact reconstruction of
d-sparse vectors of length n via non-adaptive measurements, by a
multiplicative factor of Ω̃(d).
Specifically, we give simple randomized constructions of non-adaptive
measurement schemes, with m = O(d log n) measurements, that allow efficient
reconstruction of d-sparse vectors up to O(d) false positives even in the
presence of δm false positives and O(m/d) false negatives within the
measurement outcomes, for any constant δ < 1. We show that, information
theoretically, none of these parameters can be substantially improved without
dramatically affecting the others. Furthermore, we obtain several explicit
constructions, in particular one matching the randomized trade-off but using
m = O(d^(1+o(1)) log n) measurements. We also obtain explicit constructions
that allow fast reconstruction in time poly(m), which would be sublinear in n
for sufficiently sparse vectors. The main tool used in our construction is
the list-decoding view of randomness condensers and extractors.
Comment: Full version. A preliminary summary of this work appears (under the
same title) in the proceedings of the 17th International Symposium on
Fundamentals of Computation Theory (FCT 2009)
Cross-Sender Bit-Mixing Coding
Scheduling to avoid packet collisions is a long-standing challenge in
networking, and has become even trickier in wireless networks with multiple
senders and multiple receivers. In fact, researchers have proved that even {\em
perfect} scheduling can only achieve R = O(1/ln N). Here N
is the number of nodes in the network, and R is the {\em medium
utilization rate}. Ideally, one would hope to achieve R = Θ(1),
while avoiding all the complexities in scheduling. To this end, this paper
proposes {\em cross-sender bit-mixing coding} ({\em BMC}), which does not rely
on scheduling. Instead, users transmit simultaneously on suitably-chosen slots,
and the amount of overlap in different user's slots is controlled via coding.
We prove that in all possible network topologies, using BMC enables us to
achieve R = Θ(1). We also prove that the space and time
complexities of BMC encoding/decoding are all low-order polynomials.
Comment: Published in the International Conference on Information Processing
in Sensor Networks (IPSN), 2019
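The collision-tolerant idea behind BMC can be caricatured in a few lines: each sender transmits on a fixed slot set, a slot is busy whenever any active sender uses it (an OR channel), and the receiver declares a sender active when all of that sender's slots are busy. The slot assignments below are invented for illustration, not drawn from the paper's code constructions:

```python
def or_channel(slot_sets, active):
    """A slot is busy iff some active sender transmits on it."""
    busy = set()
    for sender in active:
        busy |= slot_sets[sender]
    return busy

def cover_decode(slot_sets, busy):
    """Declare a sender active when all of its slots are busy."""
    return {s for s, slots in slot_sets.items() if slots <= busy}

# Hypothetical 7-slot assignment for four senders
slot_sets = {"a": {0, 1, 2}, "b": {3, 4, 5}, "c": {0, 3, 6}, "d": {1, 4, 6}}
```

With slot sets drawn from a suitable superimposed code, cover decoding recovers exactly the active senders as long as not too many transmit at once; BMC's contribution is achieving this with provably constant-order utilization and low encoding/decoding complexity.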
Noisy Non-Adaptive Group Testing: A (Near-)Definite Defectives Approach
The group testing problem consists of determining a small set of defective
items from a larger set of items based on a number of possibly-noisy tests, and
is relevant in applications such as medical testing, communication protocols,
pattern matching, and many more. We study the noisy version of the problem,
where the output of each standard noiseless group test is subject to
independent noise, corresponding to passing the noiseless result through a
binary channel. We introduce a class of algorithms that we refer to as
Near-Definite Defectives (NDD), and study bounds on the required number of
tests for vanishing error probability under Bernoulli random test designs. In
addition, we study algorithm-independent converse results, giving lower bounds
on the required number of tests under Bernoulli test designs. Under reverse
Z-channel noise, the achievable rates and converse results match in a broad
range of sparsity regimes, and under Z-channel noise, the two match in a
narrower range of dense/low-noise regimes. We observe that although these two
channels have the same Shannon capacity when viewed as a communication channel,
they can behave quite differently when it comes to group testing. Finally, we
extend our analysis of these noise models to the symmetric noise model, and
show improvements over the best known existing bounds in broad scaling regimes.
Comment: Submitted to the IEEE Transactions on Information Theory
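The noiseless ancestor of the NDD class, the Definite Defectives (DD) algorithm, conveys the two-stage idea; the NDD algorithms studied above replace these hard rules with noise-tolerant thresholds. A minimal noiseless sketch:

```python
def definite_defectives(pools, outcomes, n):
    """Stage 1: items in any negative test are ruled out, leaving the
    'possible defectives'. Stage 2: a positive test whose only remaining
    member is a single possible defective pins that item down for certain."""
    possible = set(range(n))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            possible -= pool
    definite = set()
    for pool, positive in zip(pools, outcomes):
        remaining = pool & possible
        if positive and len(remaining) == 1:
            definite |= remaining
    return definite
```

In the noiseless setting DD never produces false positives (every reported item is provably defective); the price is possible false negatives when no test isolates a defective among the surviving candidates.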
Non-adaptive Group Testing: Explicit bounds and novel algorithms
We consider some computationally efficient and provably correct algorithms
with near-optimal sample-complexity for the problem of noisy non-adaptive group
testing. Group testing involves grouping arbitrary subsets of items into pools.
Each pool is then tested to identify the defective items, which are usually
assumed to be "sparse". We consider non-adaptive randomly pooling measurements,
where pools are selected randomly and independently of the test outcomes. We
also consider a model where noisy measurements allow for both some false
negative and some false positive test outcomes (and also allow for asymmetric
noise, and activation noise). We consider three classes of algorithms for the
group testing problem, which we call the "Coupon Collector Algorithm", the
"Column Matching Algorithms", and the "LP Decoding Algorithms"; the last two
classes (versions of some of which had been considered before in the
literature) were inspired by corresponding algorithms in the Compressive
Sensing literature. The second and third of these algorithms
have several flavours, dealing separately with the noiseless and noisy
measurement scenarios. Our contribution is novel analysis to derive explicit
sample-complexity bounds -- with all constants expressly computed -- for these
algorithms as a function of the desired error probability; the noise
parameters; the number of items; and the size of the defective set (or an upper
bound on it). We also compare the bounds to information-theoretic lower bounds
for sample complexity based on Fano's inequality and show that the upper and
lower bounds are equal up to an explicitly computable universal constant factor
(independent of problem parameters).
Comment: Accepted for publication in the IEEE Transactions on Information
Theory; current version, Oct. 9, 2012. Main change from v4 to v5: fixed some
typos, corrected details of the LP decoding algorithm
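A column-matching rule in the noisy setting can be sketched as a mismatch count with a tolerance; the sketch below is illustrative, whereas the paper derives the tolerance explicitly from the noise parameters:

```python
def noisy_column_match(pools, outcomes, n, tolerance):
    """Keep item i iff its column 'matches' the outcome vector: i may appear
    in at most `tolerance` negative tests, the excess being blamed on noise."""
    kept = set()
    for item in range(n):
        mismatches = sum(1 for pool, out in zip(pools, outcomes)
                         if item in pool and not out)
        if mismatches <= tolerance:
            kept.add(item)
    return kept
```

With tolerance set to zero this collapses to the noiseless rule that a defective item can never appear in a negative test; a positive tolerance trades a few false positives for robustness against flipped test outcomes.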