A single-photon sampling architecture for solid-state imaging
Advances in solid-state technology have enabled the development of silicon
photomultiplier sensor arrays capable of sensing individual photons. Combined
with high-frequency time-to-digital converters (TDCs), this technology opens up
the prospect of sensors capable of recording with high accuracy both the time
and location of each detected photon. Such a capability could lead to
significant improvements in imaging accuracy, especially for applications
operating with low photon fluxes such as LiDAR and positron emission
tomography.
The demands placed on on-chip readout circuitry impose stringent trade-offs
between fill factor and spatio-temporal resolution, causing many contemporary
designs to severely underutilize the technology's full potential. Concentrating
on the low photon flux setting, this paper leverages results from group testing
and proposes an architecture for a highly efficient readout of pixels using
only a small number of TDCs, thereby also reducing both cost and power
consumption. The design relies on a multiplexing technique based on binary
interconnection matrices. We provide optimized instances of these matrices for
various sensor parameters and give explicit upper and lower bounds on the
number of TDCs required to uniquely decode a given maximum number of
simultaneous photon arrivals.
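To make the multiplexing idea concrete, here is a toy sketch (ours, not the paper's optimized design): each pixel drives a small set of shared TDC lines according to a binary 2-disjunct interconnection matrix, and a naive cover decoder recovers which pixels fired. The 12 columns below are the lines of the 3x3 affine plane, chosen purely because their 2-disjunctness is easy to verify by brute force.

```python
import itertools

# 12 pixels wired to 9 TDC lines; column j lists the TDC lines pixel j drives.
# The columns are the lines of the 3x3 affine plane: any two share at most one
# row, and with column weight 3 that makes the matrix 2-disjunct.
COLUMNS = [
    {0, 1, 2}, {3, 4, 5}, {6, 7, 8},   # grid rows
    {0, 3, 6}, {1, 4, 7}, {2, 5, 8},   # grid columns
    {0, 4, 8}, {1, 5, 6}, {2, 3, 7},   # diagonals
    {0, 5, 7}, {1, 3, 8}, {2, 4, 6},   # anti-diagonals
]

def is_d_disjunct(cols, d):
    """Brute-force check: no union of d columns covers another column."""
    n = len(cols)
    for group in itertools.combinations(range(n), d):
        union = set().union(*(cols[j] for j in group))
        if any(cols[j] <= union for j in range(n) if j not in group):
            return False
    return True

def decode(cols, fired_tdcs):
    """Declare pixel j fired iff every TDC line it drives has fired."""
    return {j for j, c in enumerate(cols) if c <= fired_tdcs}

assert is_d_disjunct(COLUMNS, 2)
fired_pixels = {1, 10}                 # up to 2 simultaneous photon arrivals
fired_tdcs = set().union(*(COLUMNS[j] for j in fired_pixels))
assert decode(COLUMNS, fired_tdcs) == fired_pixels
print("recovered:", decode(COLUMNS, fired_tdcs))
```

Since any two columns intersect in at most one row, the union of two fired columns always misses at least one TDC line of every other column; that is exactly why the cover decoder cannot over-report when at most two pixels fire simultaneously.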
To illustrate the strength of the proposed architecture, we note a typical
design point: a 120x120 photodiode sensor on a 30 um x 30 um pitch, digitized
with 40 ps time resolution and an estimated fill factor of approximately 70%,
using only 161 TDCs. The design guarantees registration and unique recovery of up to
4 simultaneous photon arrivals using a fast decoding algorithm. In a series of
realistic simulations of scintillation events in clinical positron emission
tomography the design was able to recover the spatio-temporal location of 98.6%
of all photons that caused pixel firings.
Comment: 24 pages, 3 figures, 5 tables
Superselectors: Efficient Constructions and Applications
We introduce a new combinatorial structure: the superselector. We show that
superselectors subsume several important combinatorial structures used in the
past few years to solve problems in group testing, compressed sensing,
multi-channel conflict resolution and data security. We prove close upper and
lower bounds on the size of superselectors and we provide efficient algorithms
for their construction. Although our bounds are very general, when they are
instantiated on the combinatorial structures that are particular cases of
superselectors (e.g., (p,k,n)-selectors, (d,\ell)-list-disjunct matrices,
MUT_k(r)-families, FUT(k, a)-families, etc.) they match the best known bounds
in terms of size of the structures (the relevant parameter in the
applications). For appropriate values of parameters, our results also provide
the first efficient deterministic algorithms for the construction of such
structures.
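For reference, here is a brute-force checker for one of the structures named above; the definition of (d,\ell)-list-disjunctness is standard, while the helper names and toy example are our own. A 0/1 matrix is (d,\ell)-list-disjunct if for any disjoint column sets S and T with |S| = d and |T| = \ell, some row has a 1 in a column of T and 0s throughout S; equivalently, no set of d columns can "hide" \ell or more other columns inside its union. The case \ell = 1 is the classical d-disjunct property.

```python
import itertools

# Columns are given as sets of row indices.  Purely illustrative:
# the check is exponential in d, so toy sizes only.

def hidden(cols, S):
    """Columns outside S whose support lies inside the union of S's supports."""
    union = set().union(*(cols[j] for j in S))
    return [j for j in range(len(cols)) if j not in S and cols[j] <= union]

def is_list_disjunct(cols, d, ell):
    """True iff every size-d set S hides fewer than ell other columns."""
    return all(len(hidden(cols, S)) < ell
               for S in itertools.combinations(range(len(cols)), d))

cols = [{0}, {1}, {2}, {3}]
assert is_list_disjunct(cols, 2, 1)      # singletons never hide one another
cols.append({0, 1})
assert not is_list_disjunct(cols, 2, 1)  # {0} and {1} together hide {0,1}
assert is_list_disjunct(cols, 2, 3)      # but no pair hides 3 columns at once
```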
Group Testing with Probabilistic Tests: Theory, Design and Application
Identification of defective members of large populations has been widely
studied in the statistics community under the name of group testing. It
involves grouping subsets of items into different pools and detecting defective
members based on the set of test results obtained for each pool.
In a classical noiseless group testing setup, it is assumed that the sampling
procedure is fully known to the reconstruction algorithm, in the sense that the
existence of a defective member in a pool results in the test outcome of that
pool being positive. However, this may not always be a valid assumption in some
cases of interest. In particular, we consider the case where the defective
items in a pool can become independently inactive with a certain probability.
Hence, one may obtain a negative test result in a pool despite containing some
defective items. As a result, any sampling and reconstruction method should be
able to cope with two different types of uncertainty, i.e., the unknown set of
defective items and the partially unknown, probabilistic testing procedure.
In this work, motivated by the application of detecting infected people in
viral epidemics, we design non-adaptive sampling procedures that allow
successful identification of the defective items through a set of probabilistic
tests. Our design requires only a small number of tests to single out the
defective items. In particular, for a population of size n with at most k
defective items, each becoming active in a test with probability p, we give an
explicit bound on the number of tests that suffices when the sampling procedure
must work for all possible sets of defective items, and a smaller bound that
suffices for any single set of defective items.
Moreover, we show that the defective members can be recovered using a simple,
low-complexity reconstruction algorithm.
Comment: Full version of the conference paper "Compressed Sensing with
Probabilistic Measurements: A Group Testing Solution" appearing in the
proceedings of the 47th Annual Allerton Conference on Communication, Control,
and Computing, 2009 (arXiv:0909.3508). To appear in IEEE Transactions on
Information Theory.
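A minimal Monte-Carlo sketch of this test model may help; the pool design, parameters, and threshold-style decoder below are illustrative assumptions rather than the paper's construction. The point is that a negative pool no longer certifies its members as non-defective, so the classical elimination step must give way to a scoring rule.

```python
import random

# Each defective item in a pool activates independently with probability p,
# so a pool holding defectives can still come back negative.

def run_tests(defectives, p, pools, rng):
    """One noisy outcome per pool: positive iff some defective activates."""
    return [any(j in defectives and rng.random() < p for j in pool)
            for pool in pools]

def score_decode(n, pools, results, k):
    """Rank items by the fraction of their pools that tested positive and
    output the k top-scoring items (a simple threshold-style decoder)."""
    pos, tot = [0] * n, [0] * n
    for pool, r in zip(pools, results):
        for j in pool:
            tot[j] += 1
            pos[j] += int(r)
    score = lambda j: pos[j] / tot[j] if tot[j] else 0.0
    return set(sorted(range(n), key=score, reverse=True)[:k])

rng = random.Random(1)
n, k, p = 200, 3, 0.7
pools = [rng.sample(range(n), 40) for _ in range(120)]   # 120 pools of 40
defectives = set(rng.sample(range(n), k))
decoded = score_decode(n, pools, run_tests(defectives, p, pools, rng), k)
print("true:", sorted(defectives), "decoded:", sorted(decoded))
```

The intuition is that a defective item's pools are positive with probability at least p, while a non-defective item's pools are positive only when they happen to contain an activating defective, so the two score distributions separate; a rigorous design must guarantee this separation for the stated number of tests.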
Construction of Almost Disjunct Matrices for Group Testing
In a \emph{group testing} scheme, a set of tests is designed to identify a
small number of defective items among a large set (of size n) of items.
In the non-adaptive scenario the set of tests has to be designed in one shot.
In this setting, designing a testing scheme is equivalent to the construction
of a \emph{disjunct matrix}: a binary matrix with n columns in which the union
of supports of any d columns does not contain the support of any other column.
In principle, one wants such a matrix to have the minimum possible number of
rows (tests). One of the main ways of constructing disjunct matrices relies on
\emph{constant weight error-correcting codes} and their \emph{minimum
distance}. In this paper, we consider a relaxed definition of a disjunct matrix
known as \emph{almost disjunct matrix}. This concept is also studied under the
name of \emph{weakly separated design} in the literature. The relaxed
definition allows one to come up with group testing schemes where a
close-to-one fraction of all possible sets of defective items is identifiable.
Our main contribution is twofold. First, we go beyond the minimum distance
analysis and connect the \emph{average distance} of a constant weight code to
the parameters of an almost disjunct matrix constructed from it. Our second
contribution is to explicitly construct almost disjunct matrices based on our
average distance analysis, that have a much smaller number of rows than any
previous explicit construction of disjunct matrices. The parameters of our
construction can be varied to cover a large range of relations between d and n.
Comment: 15 pages
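For orientation, the sketch below shows the classical minimum-distance route from constant-weight codes to disjunct matrices (a Kautz-Singleton-style construction; the field size and degree bound are illustrative choices). The paper's contribution is precisely to go beyond this minimum-distance argument via average distance, which is not reproduced here.

```python
import itertools

q, m = 7, 2              # prime field GF(7), polynomials of degree < 2

def rs_codeword(msg):
    """Evaluate the message polynomial at every field element (Reed-Solomon)."""
    return [sum(c * pow(x, i, q) for i, c in enumerate(msg)) % q
            for x in range(q)]

def column(msg):
    """One-hot encode each symbol: a weight-q column in a (q*q)-row matrix."""
    return {block * q + sym for block, sym in enumerate(rs_codeword(msg))}

cols = [column(msg) for msg in itertools.product(range(q), repeat=m)]

# Distinct degree-<2 polynomials agree on at most one point, so any two
# columns intersect in at most one row; a union of d columns then covers at
# most d rows of another weight-7 column, giving d-disjunctness for d <= 6.
inter = max(len(a & b) for a, b in itertools.combinations(cols, 2))
print(f"{q*q} tests, {len(cols)} items, weight {q}, max intersection {inter}")
```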
Lectures on Designing Screening Experiments
Designing Screening Experiments (DSE) is a class of information-theoretic
models for multiple-access channels (MACs). We discuss the combinatorial model
of DSE called a disjunct channel model. This model is the most important for
applications and closely connected with the superimposed code concept. We give
a detailed survey of lower and upper bounds on the rate of superimposed codes.
The best known constructions of superimposed codes are considered in the paper. We
also discuss the development of these codes (non-adaptive pooling designs)
intended for the clone-library screening problem. We obtain lower and upper
bounds on the rate of binary codes for the combinatorial model of DSE called an
adder channel model. We also consider the concept of universal decoding for the
probabilistic DSE model called a symmetric model of DSE.
Comment: 66 pages
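To make the two channel models concrete, the toy sketch below contrasts the disjunct (OR) channel with the adder channel on the same active users; the codewords are arbitrary illustrative values, not an actual superimposed code.

```python
codewords = {
    "A": [1, 0, 1, 0, 0],
    "B": [1, 1, 0, 0, 1],
    "C": [0, 1, 0, 1, 0],
}

def disjunct_channel(active):
    """OR channel of the disjunct model: only presence per position is seen."""
    return [int(any(codewords[u][i] for u in active)) for i in range(5)]

def adder_channel(active):
    """Adder model: the channel outputs the integer sum per position."""
    return [sum(codewords[u][i] for u in active) for i in range(5)]

print("OR   :", disjunct_channel(["A", "B"]))   # [1, 1, 1, 0, 1]
print("adder:", adder_channel(["A", "B"]))      # [2, 1, 1, 0, 1]
```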