
    A single-photon sampling architecture for solid-state imaging

    Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low-photon-flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization result: a 120x120 photodiode sensor on a 30um x 30um pitch with a 40ps time resolution and an estimated fill factor of approximately 70%, using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography, the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings. Comment: 24 pages, 3 figures, 5 tables
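
    To make the readout idea concrete, here is a minimal Python sketch of a group-testing-style multiplexed readout: pixels are wired to a small set of shared TDC lines through a binary interconnection matrix, and the set of simultaneously firing pixels is recovered from the pattern of triggered lines. The matrix below is random and the sizes are toy values, not the optimized instances or the 120x120 / 161-TDC design from the paper, and the decoder is the standard naive group-testing decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_tdcs, max_firings = 256, 40, 2   # toy sizes, not the paper's design

# Binary interconnection matrix: A[j, i] = 1 if pixel i is wired to TDC line j.
# A random sparse matrix stands in for the optimized instances from the paper.
A = (rng.random((n_tdcs, n_pixels)) < 0.1).astype(np.uint8)

def readout(fired_pixels):
    """OR-superposition: a TDC line triggers if any connected pixel fired."""
    lines = np.zeros(n_tdcs, dtype=np.uint8)
    for i in fired_pixels:
        lines |= A[:, i]
    return lines

def decode(lines):
    """Naive group-testing decoder: a pixel is kept as a candidate only if
    every TDC line it is wired to has triggered."""
    return [i for i in range(n_pixels)
            if np.all(lines[A[:, i] == 1] == 1)]

fired = sorted(rng.choice(n_pixels, size=max_firings, replace=False).tolist())
print("fired pixels      :", fired)
print("decoded candidates:", decode(readout(fired)))
```

    With the optimized matrices and the bounds given in the paper, the candidate set is guaranteed to coincide with the true firing set whenever the number of simultaneous arrivals stays within the design maximum.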

    Noise-Resilient Group Testing: Limitations and Constructions

    We study combinatorial group testing schemes for learning $d$-sparse Boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately reconstruct the sparse vector. On the positive side, we take this barrier to our advantage and show that approximate reconstruction (within a satisfactory degree of approximation) allows us to break the information theoretic lower bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of $d$-sparse vectors of length $n$ via non-adaptive measurements, by a multiplicative factor $\tilde{\Omega}(d)$. Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with $m = O(d \log n)$ measurements, that allow efficient reconstruction of $d$-sparse vectors up to $O(d)$ false positives even in the presence of $\delta m$ false positives and $O(m/d)$ false negatives within the measurement outcomes, for any constant $\delta < 1$. We show that, information theoretically, none of these parameters can be substantially improved without dramatically affecting the others. Furthermore, we obtain several explicit constructions, in particular one matching the randomized trade-off but using $m = O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions that allow fast reconstruction in time $\mathrm{poly}(m)$, which would be sublinear in $n$ for sufficiently sparse vectors. The main tool used in our construction is the list-decoding view of randomness condensers and extractors. Comment: Full version. A preliminary summary of this work appears (under the same title) in proceedings of the 17th International Symposium on Fundamentals of Computation Theory (FCT 2009).
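
    As a toy illustration of the measurement model (not the condenser-based constructions of the paper), the Python sketch below takes $m = O(d \log n)$ random disjunctive measurements of a $d$-sparse vector, corrupts the outcomes with both kinds of errors, and approximately reconstructs the support with a simple counting decoder; the matrix density, noise levels, and threshold are ad hoc choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 5
m = 8 * d * int(np.log(n))           # m = O(d log n) measurements, constant chosen ad hoc

# Random measurement matrix: each entry is 1 with probability about 1/d.
M = (rng.random((m, n)) < 1.0 / d).astype(np.uint8)

x = np.zeros(n, dtype=np.uint8)
support = rng.choice(n, size=d, replace=False)
x[support] = 1

y = (M @ x > 0).astype(np.uint8)     # noiseless disjunctive (OR) outcomes

# Adversarial-style noise on the outcomes: flip some 0s to 1s and some 1s to 0s.
false_pos = rng.choice(np.flatnonzero(y == 0), size=m // 20, replace=False)
false_neg = rng.choice(np.flatnonzero(y == 1), size=max(1, m // (10 * d)), replace=False)
y_noisy = y.copy()
y_noisy[false_pos] = 1
y_noisy[false_neg] = 0

# Counting decoder: keep item i if at least 90% of the tests containing it
# came back positive. This tolerates a few false negatives and may report
# a small number of false positives (approximate reconstruction).
tests_per_item = M.sum(axis=0)
positives_per_item = (M * y_noisy[:, None]).sum(axis=0)
estimate = np.flatnonzero(positives_per_item >= 0.9 * tests_per_item)

print("true support:", sorted(support.tolist()))
print("estimated   :", sorted(estimate.tolist()))
```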

    Cross-Sender Bit-Mixing Coding

    Scheduling to avoid packet collisions is a long-standing challenge in networking, and has become even trickier in wireless networks with multiple senders and multiple receivers. In fact, researchers have proved that even {\em perfect} scheduling can only achieve $\mathbf{R} = O(\frac{1}{\ln N})$. Here $N$ is the number of nodes in the network, and $\mathbf{R}$ is the {\em medium utilization rate}. Ideally, one would hope to achieve $\mathbf{R} = \Theta(1)$, while avoiding all the complexities in scheduling. To this end, this paper proposes {\em cross-sender bit-mixing coding} ({\em BMC}), which does not rely on scheduling. Instead, users transmit simultaneously on suitably chosen slots, and the amount of overlap in different users' slots is controlled via coding. We prove that in all possible network topologies, using BMC enables us to achieve $\mathbf{R} = \Theta(1)$. We also prove that the space and time complexities of BMC encoding/decoding are all low-order polynomials. Comment: Published in the International Conference on Information Processing in Sensor Networks (IPSN), 201
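
    A rough Python sketch of the scheduling-free idea, with invented parameters: each sender is assigned a set of slots, all active senders transmit simultaneously, the receiver observes only which slots were busy (effectively the OR of the transmissions), and it lists the senders whose slot sets are fully covered. The actual BMC construction controls the slot overlap via coding and carries payload bits; this sketch only detects which senders were active.

```python
import numpy as np

rng = np.random.default_rng(2)
n_senders, n_slots, n_active = 50, 200, 3    # invented toy parameters

# Each sender is assigned a random set of slots in which it may transmit.
slot_sets = (rng.random((n_senders, n_slots)) < 0.05).astype(np.uint8)

active = rng.choice(n_senders, size=n_active, replace=False)

# No scheduling: active senders transmit simultaneously; the receiver only
# observes which slots were busy.
busy = np.zeros(n_slots, dtype=np.uint8)
for s in active:
    busy |= slot_sets[s]

# A sender is declared active if all of its slots were busy.
decoded = [s for s in range(n_senders) if np.all(busy[slot_sets[s] == 1] == 1)]

print("active senders :", sorted(active.tolist()))
print("decoded senders:", sorted(decoded))
```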

    Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning

    Distributed representations have often been criticized as inappropriate for encoding data with a complex structure. However, Plate's Holographic Reduced Representations and Kanerva's Binary Spatter Codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality. In this paper we consider the procedures of Context-Dependent Thinning, which were developed for the representation of complex hierarchical items in the architecture of Associative-Projective Neural Networks. These procedures provide binding of items represented by sparse binary codevectors (with low probability of 1s). Such an encoding is biologically plausible and allows a high storage capacity in the distributed associative memory where the codevectors may be stored. In contrast to known binding procedures, Context-Dependent Thinning preserves the same low density (or sparseness) of the bound codevector for a varying number of component codevectors. Moreover, a bound codevector is not only similar to other bound codevectors with similar components (as in other schemes), but it is also similar to the component codevectors themselves. This allows the similarity of structures to be estimated simply from the overlap of their codevectors, without retrieval of the component codevectors. It also allows easy retrieval of the component codevectors. Examples of algorithmic and neural-network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role-filler and predicate-arguments representation schemes, trees, directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional AI, as well as to the localist and microfeature-based connectionist representations.
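
    The following Python sketch illustrates the thinning idea as described in the abstract, with ad hoc parameters: component codevectors are superimposed by bitwise OR, and the superposition is then thinned by ANDing it with the union of several fixed random permutations of itself, chosen so that the density of the bound vector stays close to that of a single component. The density-control rule and the number of permutations below are simplifications, not the procedures from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, p_one = 10_000, 0.01                     # sparse binary codevectors, ~1% ones

# A fixed pool of random permutations shared by all bindings.
perm_pool = [rng.permutation(dim) for _ in range(30)]

def random_codevector():
    return (rng.random(dim) < p_one).astype(np.uint8)

def cdt_bind(components, target_density=p_one):
    """Superimpose components by OR, then thin the superposition by ANDing it
    with the union of enough permuted copies of itself to bring the density
    back to roughly that of a single component."""
    z = np.zeros(dim, dtype=np.uint8)
    for comp in components:
        z |= comp
    pz = z.mean()
    k = max(1, int(round(target_density / pz ** 2)))   # crude density control
    mask = np.zeros(dim, dtype=np.uint8)
    for perm in perm_pool[:k]:
        mask |= z[perm]
    return z & mask

a, b, c = (random_codevector() for _ in range(3))
bound = cdt_bind([a, b, c])

print("density of one component :", a.mean())
print("density of superposition :", (a | b | c).mean())
print("density after thinning   :", bound.mean())
print("bound vector overlaps a in", int((bound & a).sum()), "positions")
```

    Because the thinned vector is a subset of the superposition, it still shares many positions with each component, which is the property the abstract highlights for estimating similarity directly from codevector overlap.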

    Construction of Almost Disjunct Matrices for Group Testing

    In a \emph{group testing} scheme, a set of tests is designed to identify a small number $t$ of defective items among a large set (of size $N$) of items. In the non-adaptive scenario the set of tests has to be designed in one shot. In this setting, designing a testing scheme is equivalent to the construction of a \emph{disjunct matrix}, an $M \times N$ matrix where the union of supports of any $t$ columns does not contain the support of any other column. In principle, one wants to have such a matrix with the minimum possible number $M$ of rows (tests). One of the main ways of constructing disjunct matrices relies on \emph{constant weight error-correcting codes} and their \emph{minimum distance}. In this paper, we consider a relaxed definition of a disjunct matrix known as an \emph{almost disjunct matrix}. This concept is also studied under the name of \emph{weakly separated design} in the literature. The relaxed definition allows one to come up with group testing schemes where a close-to-one fraction of all possible sets of defective items are identifiable. Our main contribution is twofold. First, we go beyond the minimum distance analysis and connect the \emph{average distance} of a constant weight code to the parameters of an almost disjunct matrix constructed from it. Our second contribution is to explicitly construct almost disjunct matrices, based on our average distance analysis, that have a much smaller number of rows than any previous explicit construction of disjunct matrices. The parameters of our construction can be varied to cover a large range of relations for $t$ and $N$. Comment: 15 pages
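
    As a concrete (toy) illustration of the object being constructed, the Python snippet below builds a random constant-weight matrix, treats each column as the support of a codeword, and empirically estimates the fraction of random $t$-subsets of columns whose union covers some other column, i.e. the fraction of defective sets that would not be uniquely identifiable. The parameters are invented and the matrix is random rather than the paper's explicit code-based construction.

```python
import numpy as np

rng = np.random.default_rng(4)
M_rows, N_cols, weight, t = 60, 200, 8, 2     # toy parameters

# Each column is a 0/1 vector of constant weight (stand-in for a codeword support).
cols = np.zeros((M_rows, N_cols), dtype=np.uint8)
for j in range(N_cols):
    cols[rng.choice(M_rows, size=weight, replace=False), j] = 1

def subset_is_bad(subset):
    """A t-subset is 'bad' if the union of its columns covers some other column."""
    union = cols[:, subset].max(axis=1)
    others = [j for j in range(N_cols) if j not in subset]
    return any(np.all(union >= cols[:, j]) for j in others)

trials = 500
bad = sum(subset_is_bad(rng.choice(N_cols, size=t, replace=False).tolist())
          for _ in range(trials))
print(f"estimated fraction of non-identifiable {t}-subsets: {bad / trials:.3f}")
```

    An almost disjunct matrix is one for which this fraction is small; a (strictly) disjunct matrix is one for which it is exactly zero.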

    Generalised Pattern Matching Revisited

    In the problem of \texttt{Generalised Pattern Matching} (\texttt{GPM}) [STOC'94, Muthukrishnan and Palem], we are given a text $T$ of length $n$ over an alphabet $\Sigma_T$, a pattern $P$ of length $m$ over an alphabet $\Sigma_P$, and a matching relationship $\subseteq \Sigma_T \times \Sigma_P$, and must return all substrings of $T$ that match $P$ (reporting) or the number of mismatches between each substring of $T$ of length $m$ and $P$ (counting). In this work, we improve over all previously known algorithms for this problem for various parameters describing the input instance: * $\mathcal{D}$, being the maximum number of characters that match a fixed character, * $\mathcal{S}$, being the number of pairs of matching characters, * $\mathcal{I}$, being the total number of disjoint intervals of characters that match the $m$ characters of the pattern $P$. At the heart of our new deterministic upper bounds for $\mathcal{D}$ and $\mathcal{S}$ lies a faster construction of superimposed codes, which solves an open problem posed in [FOCS'97, Indyk] and can be of independent interest. To conclude, we demonstrate first lower bounds for \texttt{GPM}. We start by showing that any deterministic or Monte Carlo algorithm for \texttt{GPM} must use $\Omega(\mathcal{S})$ time, and then proceed to show higher lower bounds for combinatorial algorithms. These bounds show that our algorithms are almost optimal, unless a radically new approach is developed.
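
    For concreteness, here is a brute-force Python baseline for the counting variant exactly as defined above (running in $O(nm)$ time, unlike the paper's algorithms): the matching relationship is passed in as a set of allowed (text character, pattern character) pairs.

```python
def gpm_count_mismatches(text, pattern, matches):
    """Brute-force counting variant of Generalised Pattern Matching.

    `matches` is a set of (text_char, pattern_char) pairs, i.e. the matching
    relationship. Returns, for every alignment i, the number of positions j
    at which text[i + j] does not match pattern[j]. Runs in O(n * m) time.
    """
    n, m = len(text), len(pattern)
    return [
        sum((text[i + j], pattern[j]) not in matches for j in range(m))
        for i in range(n - m + 1)
    ]

# Example with a small matching relationship: text 'a' matches pattern 'a' or 'A',
# text 'c' matches pattern 'b' or 'c', and so on.
matches = {("a", "a"), ("a", "A"), ("b", "b"), ("c", "b"), ("c", "c")}
print(gpm_count_mismatches("abcabca", "aAb", matches))
```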