A single-photon sampling architecture for solid-state imaging
Advances in solid-state technology have enabled the development of silicon
photomultiplier sensor arrays capable of sensing individual photons. Combined
with high-frequency time-to-digital converters (TDCs), this technology opens up
the prospect of sensors capable of recording with high accuracy both the time
and location of each detected photon. Such a capability could lead to
significant improvements in imaging accuracy, especially for applications
operating with low photon fluxes such as LiDAR and positron emission
tomography.
The demands placed on on-chip readout circuitry impose stringent trade-offs
between fill factor and spatio-temporal resolution, causing many contemporary
designs to severely underutilize the technology's full potential. Concentrating
on the low photon flux setting, this paper leverages results from group testing
and proposes an architecture for a highly efficient readout of pixels using
only a small number of TDCs, thereby also reducing both cost and power
consumption. The design relies on a multiplexing technique based on binary
interconnection matrices. We provide optimized instances of these matrices for
various sensor parameters and give explicit upper and lower bounds on the
number of TDCs required to uniquely decode a given maximum number of
simultaneous photon arrivals.
To illustrate the strength of the proposed architecture, we note a typical
digitization result of a 120x120 photodiode sensor on a 30um x 30um pitch with
a 40ps time resolution and an estimated fill factor of approximately 70%, using
only 161 TDCs. The design guarantees registration and unique recovery of up to
4 simultaneous photon arrivals using a fast decoding algorithm. In a series of
realistic simulations of scintillation events in clinical positron emission
tomography, the design was able to recover the spatio-temporal location of 98.6%
of all photons that caused pixel firings.
Comment: 24 pages, 3 figures, 5 tables
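The readout principle described above, OR-multiplexing many pixels onto a few TDC lines through a binary interconnection matrix and then decoding firing positions as in group testing, can be sketched as follows. This is a toy illustration under assumed parameters (a random Bernoulli wiring and a naive cover decoder), not the paper's optimized matrices or its fast decoding algorithm.

```python
import random

def make_wiring(t, n, p, seed=0):
    """Random binary interconnection matrix: TDC j is wired to pixel i with
    probability p.  (A stand-in for the paper's optimized matrices.)"""
    rng = random.Random(seed)
    return [[1 if rng.random() < p else 0 for _ in range(n)] for _ in range(t)]

def readout(M, fired):
    """OR-multiplexing: TDC j records a timestamp iff it is wired to at
    least one fired pixel."""
    return [int(any(row[i] for i in fired)) for row in M]

def decode(M, y):
    """Naive cover decoder: declare pixel i fired iff every TDC wired to i
    triggered.  Exact for up to d firings when M is d-disjunct."""
    t, n = len(M), len(M[0])
    est = set()
    for i in range(n):
        wired = [j for j in range(t) if M[j][i]]
        if wired and all(y[j] for j in wired):
            est.add(i)
    return est

n_pixels, n_tdcs, d = 64, 40, 2
M = make_wiring(n_tdcs, n_pixels, p=1.0 / (d + 1))
fired = {5, 41}
estimate = decode(M, readout(M, fired))
# Fired pixels are always recovered; a random wiring may occasionally
# add false positives, which a d-disjunct matrix rules out.
assert fired <= estimate
```

The interesting design question, which the paper addresses with optimized matrices and explicit bounds, is how few rows (TDCs) such a matrix can have while still guaranteeing unique decoding of a given number of simultaneous arrivals.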
Noise-Resilient Group Testing: Limitations and Constructions
We study combinatorial group testing schemes for learning d-sparse Boolean
vectors using highly unreliable disjunctive measurements. We consider an
adversarial noise model that only limits the number of false observations, and
show that any noise-resilient scheme in this model can only approximately
reconstruct the sparse vector. On the positive side, we take this barrier to
our advantage and show that approximate reconstruction (within a satisfactory
degree of approximation) allows us to break the information-theoretic lower
bound of Ω(d² log n / log d) that is known for exact reconstruction of
d-sparse vectors of length n via non-adaptive measurements, by a
multiplicative factor of Ω̃(d).
Specifically, we give simple randomized constructions of non-adaptive
measurement schemes, with m = O(d log n) measurements, that allow efficient
reconstruction of d-sparse vectors up to O(d) false positives even in the
presence of δm false positives and O(d) false negatives within the
measurement outcomes, for any constant δ < 1. We show that, information
theoretically, none of these parameters can be substantially improved without
dramatically affecting the others. Furthermore, we obtain several explicit
constructions, in particular one matching the randomized trade-off but using
slightly more measurements. We also obtain explicit constructions
that allow fast reconstruction in time poly(m), which would be sublinear in n
for sufficiently sparse vectors. The main tool used in our construction is
the list-decoding view of randomness condensers and extractors.
Comment: Full version. A preliminary summary of this work appears (under the
same title) in proceedings of the 17th International Symposium on
Fundamentals of Computation Theory (FCT 2009).
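The kind of threshold-based approximate reconstruction the abstract alludes to can be illustrated with a toy randomized scheme. Everything below (the Bernoulli design, the 0.75 threshold, the flip count) is an illustrative assumption, not the paper's construction.

```python
import random

def bernoulli_design(t, n, p, seed=1):
    """Random measurement matrix: item i participates in test j w.p. p."""
    rng = random.Random(seed)
    return [[1 if rng.random() < p else 0 for _ in range(n)] for _ in range(t)]

def measure(M, support, flips, rng):
    """Disjunctive (OR) tests, then flip the outcomes of `flips` random tests
    to model adversarially unreliable measurements."""
    y = [int(any(row[i] for i in support)) for row in M]
    for j in rng.sample(range(len(y)), flips):
        y[j] ^= 1
    return y

def approx_decode(M, y, theta=0.75):
    """Threshold decoder: keep item i iff at least a `theta` fraction of the
    tests containing i came out positive.  Robust to a bounded number of
    false test outcomes, at the price of possible false positives among the
    reconstructed items (i.e., only approximate reconstruction)."""
    kept = set()
    for i in range(len(M[0])):
        tests = [j for j in range(len(M)) if M[j][i]]
        if tests and sum(y[j] for j in tests) >= theta * len(tests):
            kept.add(i)
    return kept

rng = random.Random(7)
n, t, d = 200, 120, 3
M = bernoulli_design(t, n, p=1.0 / d)
support = {10, 77, 150}
y = measure(M, support, flips=6, rng=rng)
approx = approx_decode(M, y)
assert support <= approx   # true items survive the noise
```

A plain cover decoder (require all of an item's tests positive) would already fail here: a single flipped test can erase a true item, which is exactly the barrier that motivates approximate reconstruction.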
Cross-Sender Bit-Mixing Coding
Scheduling to avoid packet collisions is a long-standing challenge in
networking, and has become even trickier in wireless networks with multiple
senders and multiple receivers. In fact, researchers have proved that even {\em
perfect} scheduling can only achieve R = O(1/\ln N). Here N
is the number of nodes in the network, and R is the {\em medium
utilization rate}. Ideally, one would hope to achieve R = \Theta(1),
while avoiding all the complexities in scheduling. To this end, this paper
proposes {\em cross-sender bit-mixing coding} ({\em BMC}), which does not rely
on scheduling. Instead, users transmit simultaneously on suitably-chosen slots,
and the amount of overlap in different users' slots is controlled via coding.
We prove that in all possible network topologies, using BMC enables us to
achieve R = \Theta(1). We also prove that the space and time
complexities of BMC encoding/decoding are all low-order polynomials.
Comment: Published in the International Conference on Information Processing
in Sensor Networks (IPSN), 2019.
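At its core, decoding "who transmitted" from the superposition of slot patterns is a superimposed-code problem, which can be sketched as follows. The pseudo-random slot assignment here is a hypothetical stand-in for the paper's actual codes.

```python
import random

def slot_set(sender, n_slots, weight):
    """Hypothetical codeword: a pseudo-random set of `weight` transmission
    slots per sender (a stand-in for the paper's coding scheme)."""
    rng = random.Random(f"bmc-{sender}")
    return frozenset(rng.sample(range(n_slots), weight))

def channel(active, n_slots, weight):
    """The receiver only observes which slots carried energy: the OR
    (superposition) of all active senders' slot sets."""
    busy = set()
    for s in active:
        busy |= slot_set(s, n_slots, weight)
    return busy

def decode(busy, n_senders, n_slots, weight):
    """Declare a sender active iff all of its slots are busy.  This is
    correct exactly when the slot sets form a superimposed code for the
    number of simultaneously active senders."""
    return {s for s in range(n_senders)
            if slot_set(s, n_slots, weight) <= busy}

active = {3, 17, 29}
busy = channel(active, n_slots=400, weight=10)
decoded = decode(busy, n_senders=60, n_slots=400, weight=10)
assert active <= decoded   # active senders are never missed
```

No scheduling is needed: senders transmit simultaneously, and the code, not a schedule, keeps the overlap between different senders' slot sets small enough to decode.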
Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning
Distributed representations have often been criticized as inappropriate for encoding data with a complex structure. However, Plate's Holographic Reduced Representations and Kanerva's Binary Spatter Codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality.
In this paper we consider procedures of the Context-Dependent Thinning which were developed for representation of complex hierarchical items in the architecture of Associative-Projective Neural Networks. These procedures provide binding of items represented by sparse binary codevectors (with low probability of 1s). Such an encoding is biologically plausible and allows a high storage capacity of distributed associative memory where the codevectors may be stored.
In contrast to known binding procedures, Context-Dependent Thinning preserves the same low density (or sparseness) of the bound codevector for a varied number of component codevectors. Moreover, a bound codevector is not only similar to another one built from similar component codevectors (as in other schemes), but it is also similar to the component codevectors themselves. This allows the similarity of structures to be estimated simply from the overlap of their codevectors, without retrieval of the component codevectors. It also allows easy retrieval of the component codevectors.
Examples of algorithmic and neural-network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role-filler and predicate-argument representation schemes, trees, directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional AI, as well as to the localist and microfeature-based connectionist representations.
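A simplified sketch of one (additive) thinning variant conveys the key properties: the bound vector keeps roughly the density of a single component, stays inside the superposition, and overlaps each component. The fixed random permutations here stand in for the architecture's "wiring"; the published procedures differ in detail.

```python
import numpy as np

N, M_ONES = 10_000, 100          # dimensionality and ones per codevector
rng = np.random.default_rng(0)

def sparse_vec():
    """Random sparse binary codevector with exactly M_ONES ones."""
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, size=M_ONES, replace=False)] = 1
    return v

def cdt_bind(vectors, target_ones=M_ONES):
    """Additive Context-Dependent Thinning (simplified sketch): superimpose
    the components by OR, then keep only those 1s that survive conjunction
    with permuted copies of the superposition, adding permutations until
    roughly `target_ones` ones remain.  The permutation stream is reset to a
    fixed seed, so equal inputs always thin identically."""
    perm_rng = np.random.default_rng(12345)      # fixed "wiring"
    z = np.bitwise_or.reduce(np.array(vectors))  # superposition (OR)
    out = np.zeros_like(z)
    while out.sum() < target_ones:
        out |= z & z[perm_rng.permutation(N)]
    return out

a, b = sparse_vec(), sparse_vec()
bound = cdt_bind([a, b])
assert int(bound.sum()) >= M_ONES        # density preserved, not doubled
assert np.all(bound <= (a | b))          # thinned subset of the superposition
assert int((bound & a).sum()) > 0        # still similar to each component
assert int((bound & b).sum()) > 0
```

Because which 1s survive depends on the whole superposition, the same component is thinned differently in different contexts, which is what makes the operation a binding rather than a plain union.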
Construction of Almost Disjunct Matrices for Group Testing
In a \emph{group testing} scheme, a set of tests is designed to identify a
small number (at most d) of defective items among a large set (of size n) of items.
In the non-adaptive scenario the set of tests has to be designed in one-shot.
In this setting, designing a testing scheme is equivalent to the construction
of a \emph{disjunct matrix}, a t × n matrix where the union of supports
of any d columns does not contain the support of any other column. In
principle, one wants to have such a matrix with minimum possible number of
rows (tests). One of the main ways of constructing disjunct matrices relies on
\emph{constant weight error-correcting codes} and their \emph{minimum
distance}. In this paper, we consider a relaxed definition of a disjunct matrix
known as \emph{almost disjunct matrix}. This concept is also studied under the
name of \emph{weakly separated design} in the literature. The relaxed
definition allows one to come up with group testing schemes where a
close-to-one fraction of all possible sets of defective items are identifiable.
Our main contribution is twofold. First, we go beyond the minimum distance
analysis and connect the \emph{average distance} of a constant weight code to
the parameters of an almost disjunct matrix constructed from it. Our second
contribution is to explicitly construct almost disjunct matrices, based on our
average distance analysis, that have a much smaller number of rows than any
previous explicit construction of disjunct matrices. The parameters of our
construction can be varied to cover a large range of relations for d and n.
Comment: 15 pages.
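For contrast with the construction described above, the classical route from a constant-weight code to a disjunct matrix (the Kautz–Singleton construction from Reed–Solomon codes, which relies only on minimum distance) can be sketched and verified by brute force. The parameters below are a small illustrative instance.

```python
from itertools import combinations, product

def kautz_singleton(q, k):
    """Kautz–Singleton construction: columns are indexed by degree-<k
    polynomials over GF(q), q prime.  Column f occupies row x*q + f(x) for
    each evaluation point x, giving a (q*q) x (q**k) matrix of constant
    column weight q.  Two distinct columns share at most k-1 rows, so the
    matrix is d-disjunct for d <= (q - 1) // (k - 1)."""
    cols = []
    for coeffs in product(range(q), repeat=k):
        rows = set()
        for x in range(q):
            fx = sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
            rows.add(x * q + fx)
        cols.append(frozenset(rows))
    return cols

def is_d_disjunct(cols, d):
    """Brute-force check: no union of d columns covers another column."""
    n = len(cols)
    for S in combinations(range(n), d):
        union = set().union(*(cols[j] for j in S))
        for i in range(n):
            if i not in S and cols[i] <= union:
                return False
    return True

cols = kautz_singleton(q=5, k=2)   # 25 tests, 25 items, column weight 5
assert is_d_disjunct(cols, d=4)    # (5 - 1) // (2 - 1) = 4
```

This is exactly the minimum-distance style of argument (pairwise intersections bounded via the code's distance) that the abstract's average-distance analysis goes beyond.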
Generalised Pattern Matching Revisited
In the problem of Generalised Pattern Matching
[STOC'94, Muthukrishnan and Palem], we are given a text T of length n over
an alphabet Σ_T, a pattern P of length m over an alphabet
Σ_P, and a matching relationship ⊆ Σ_T × Σ_P,
and must return all substrings of T that match P (reporting) or the number
of mismatches between each substring of T of length m and P (counting).
In this work, we improve over all previously known algorithms for this problem
for various parameters describing the input instance:
* D, the maximum number of characters that match a fixed
character,
* S, the number of pairs of matching characters,
* I, the total number of disjoint intervals of characters
that match the characters of the pattern P.
At the heart of our new deterministic upper bounds for the parameters D and
S lies a faster construction of superimposed codes, which solves
an open problem posed in [FOCS'97, Indyk] and can be of independent interest.
To conclude, we demonstrate the first lower bounds for the problem. We start by
showing an unconditional time lower bound for any deterministic or Monte Carlo
algorithm, and then proceed to show higher lower bounds
for combinatorial algorithms. These bounds show that our algorithms are almost
optimal, unless a radically new approach is developed.
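A naive quadratic-time baseline makes the problem statement concrete. The alphabet, matching relation, and the "?" wildcard below are illustrative choices; the paper's algorithms solve both the reporting and counting variants far faster.

```python
def gpm(text, pattern, matches):
    """Naive generalised pattern matching under a matching relation.
    `matches` is a set of (text_char, pattern_char) pairs.  Returns the
    starting positions of full matches (reporting) and the number of
    mismatches at every alignment (counting).  O(n*m) time, for
    illustration only."""
    n, m = len(text), len(pattern)
    occurrences, mismatch_counts = [], []
    for i in range(n - m + 1):
        mism = sum((text[i + j], pattern[j]) not in matches for j in range(m))
        mismatch_counts.append(mism)
        if mism == 0:
            occurrences.append(i)
    return occurrences, mismatch_counts

# Degenerate-character example: '?' in the pattern matches any text character.
alphabet = "ab"
rel = {(c, c) for c in alphabet} | {(c, "?") for c in alphabet}
occ, counts = gpm("abab", "a?", rel)
assert occ == [0, 2]
assert counts == [0, 1, 0]
```

The parameters the abstract lists fall out of `rel` directly: the maximum number of text characters matching one pattern character, the total number of matching pairs, and the number of disjoint character intervals the relation induces.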