
    A single-photon sampling architecture for solid-state imaging

    Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low photon flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization result of a 120 x 120 photodiode sensor on a 30 µm x 30 µm pitch with a 40 ps time resolution and an estimated fill factor of approximately 70%, using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography, the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings.
    Comment: 24 pages, 3 figures, 5 tables
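
    The readout scheme lends itself to a compact illustration. Below is a minimal Python sketch of the superimposed-code idea behind such a design: pixels are wired to a handful of TDC lines through a binary interconnection matrix, a TDC line triggers when any connected pixel fires, and a cover decoder recovers the firing pixels. The random matrix, the sensor and TDC counts, and the wiring density are illustrative assumptions; the paper uses optimized matrix instances with provable guarantees rather than a random design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions), not the paper's optimized 120x120 design.
n_pixels, n_tdcs, d = 400, 64, 2

# Binary interconnection matrix: A[t, p] = 1 if pixel p is wired to TDC line t.
A = (rng.random((n_tdcs, n_pixels)) < 0.2).astype(int)

def readout(fired_pixels):
    """OR-superposition: a TDC line triggers if any connected pixel fires."""
    y = np.zeros(n_tdcs, dtype=int)
    for p in fired_pixels:
        y |= A[:, p]
    return y

def decode(y):
    """Cover decoder: report pixel p iff every TDC line it drives triggered.
    Exact recovery of up to d simultaneous firings is guaranteed only when A
    is d-disjunct; a random matrix is merely likely to behave well."""
    return [p for p in range(n_pixels) if np.all(y[A[:, p] == 1] == 1)]

fired = sorted(rng.choice(n_pixels, size=d, replace=False))
print(fired, decode(readout(fired)))  # the two lists should usually coincide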

    Lower bounds for identifying subset members with subset queries

    An instance of a group testing problem is a set of objects $\mathcal{O}$ and an unknown subset $P$ of $\mathcal{O}$. The task is to determine $P$ by using queries of the type "does $P$ intersect $Q$?", where $Q$ is a subset of $\mathcal{O}$. This problem occurs in areas such as fault detection, multiaccess communications, optimal search, blood testing and chromosome mapping. Consider the two-stage algorithm for solving a group testing problem. In the first stage a predetermined set of queries is asked in parallel, and in the second stage $P$ is determined by testing individual objects. Let $n=|\mathcal{O}|$. Suppose that $P$ is generated by independently adding each $x \in \mathcal{O}$ to $P$ with probability $p/n$. Let $q_1$ ($q_2$) be the number of queries asked in the first (second) stage of this algorithm. We show that if $q_1 = o(\log(n)\log(n)/\log\log(n))$, then $\mathbb{E}(q_2) = n^{1-o(1)}$, while there exist algorithms with $q_1 = O(\log(n)\log(n)/\log\log(n))$ and $\mathbb{E}(q_2) = o(1)$. The proof involves a relaxation technique which can be used with arbitrary distributions. The best previously known bound is $q_1 + \mathbb{E}(q_2) = \Omega(p\log(n))$. For general group testing algorithms, our results imply that if the average number of queries over the course of $n^\gamma$ ($\gamma>0$) independent experiments is $O(n^{1-\epsilon})$, then with high probability $\Omega(\log(n)\log(n)/\log\log(n))$ non-singleton subsets are queried. This settles a conjecture of Bill Bruno and David Torney and has important consequences for the use of group testing in screening DNA libraries and other applications where it is more cost effective to use non-adaptive algorithms and/or too expensive to prepare a subset $Q$ for its first test.
    Comment: 9 pages
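
    The trade-off between the two stages can be sketched in a few lines of Python. The simulation below uses assumed, illustrative parameters ($n$, $p$, $q_1$) and random pools of expected size $n/2$ rather than any optimized design: it draws $P$ with per-object probability $p/n$, asks $q_1$ pooled queries in parallel, and counts how many objects survive to be tested individually in stage two.

```python
import numpy as np

rng = np.random.default_rng(1)

n, p, q1, trials = 10_000, 2.0, 200, 50  # illustrative parameters (assumptions)

def two_stage(q1):
    """One run of the two-stage scheme: q1 parallel pooled queries, then
    individual tests on every object the pools could not rule out."""
    P = rng.random(n) < p / n              # each object joins P with prob p/n
    pools = rng.random((q1, n)) < 0.5      # predetermined random pools Q
    answers = (pools & P).any(axis=1)      # "does P intersect Q?"
    # An object is ruled out iff some pool containing it answered "no".
    ruled_out = (pools & ~answers[:, None]).any(axis=0)
    return (~ruled_out).sum()              # q2: individual tests in stage two

print(np.mean([two_stage(q1) for _ in range(trials)]))  # estimate of E(q2)
```

    Shrinking q1 in this sketch makes the estimated second-stage cost grow quickly, which is the qualitative behavior the theorem quantifies.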

    Compressed Genotyping

    Significant volumes of knowledge have been accumulated in recent years linking subtle genetic variations to a wide variety of medical disorders, from Cystic Fibrosis to mental retardation. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, largely due to the relatively tedious and expensive process of DNA sequencing. Since the genetic polymorphisms that underlie these disorders are relatively rare in the human population, the presence or absence of a disease-linked polymorphism can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol. In particular, we have adapted our scheme to a recently developed class of high-throughput DNA sequencing technologies, and assembled a mathematical framework that has some important distinctions from 'traditional' compressed sensing ideas in order to address different biological and technical constraints.
    Comment: Submitted to IEEE Transactions on Information Theory - Special Issue on Molecular Biology and Neuroscience
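
    As a rough illustration of the pooling idea (not the paper's actual protocol), the sketch below encodes rare carriers as a sparse 0/1 signal, measures per-pool carrier counts through a random pooling matrix, and recovers the signal with a generic nonnegative least-squares decoder. The pool design, the sizes, and the choice of NNLS as the decoder are all assumptions for illustration, and exact recovery is not guaranteed; the paper's framework imposes further sequencing-specific structure.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

n, m, k = 500, 60, 3   # individuals, pools, carriers -- illustrative assumptions

# Pooling design (assumed): each individual's DNA enters each pool with prob 0.3.
A = (rng.random((m, n)) < 0.3).astype(float)

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 1.0   # rare carriers: a sparse 0/1 signal

y = A @ x                                       # pooled measurements: carrier count per pool

# Generic sparse recovery via nonnegative least squares, rounded to 0/1 calls.
x_hat, _ = nnls(A, y)
carriers = np.flatnonzero(x_hat > 0.5)
print(np.flatnonzero(x), carriers)              # true vs. recovered carrier indices
```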

    Design of Geometric Molecular Bonds

    An example of a nonspecific molecular bond is the affinity of any positive charge for any negative charge (like-unlike), or of nonpolar material for itself when in aqueous solution (like-like). This contrasts with specific bonds such as the affinity of the DNA base A for T, but not for C, G, or another A. Recent experimental breakthroughs in DNA nanotechnology demonstrate that a particular nonspecific like-like bond ("blunt-end DNA stacking", which occurs between the ends of any pair of DNA double-helices) can be used to create specific "macrobonds" by careful geometric arrangement of many nonspecific blunt ends, motivating the need for sets of macrobonds that are orthogonal: two macrobonds not intended to bind should have relatively low binding strength, even when misaligned. To address this need, we introduce geometric orthogonal codes that abstractly model the engineered DNA macrobonds as two-dimensional binary codewords. While motivated by completely different applications, geometric orthogonal codes share features similar to those of the optical orthogonal codes studied by Chung, Salehi, and Wei. The main technical difference is the importance of 2D geometry in defining codeword orthogonality.
    Comment: Accepted to appear in IEEE Transactions on Molecular, Biological, and Multi-Scale Communications
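
    Orthogonality in this 2D setting can be made concrete with a small computation: the binding strength of two binary codewords under misalignment is the maximum number of coincident 1s over all translations, which is exactly a peak of their 2D cross-correlation. The toy codewords below are hypothetical examples, not constructions from the paper.

```python
import numpy as np
from scipy.signal import correlate2d

def max_binding_strength(a, b):
    """Largest number of coincident 1s between binary codewords a and b over
    all 2D translations -- the worst-case misaligned binding strength."""
    return int(correlate2d(a, b, mode="full").max())

# Two toy 4x4 "macrobond" codewords (hypothetical placements of blunt ends).
w1 = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0], [0, 1, 0, 1]])
w2 = np.array([[0, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 0]])

print(max_binding_strength(w1, w2))  # low value => nearly orthogonal macrobonds
print(max_binding_strength(w1, w1))  # autocorrelation peak equals the weight of w1
```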

    Code Construction and Decoding Algorithms for Semi-Quantitative Group Testing with Nonuniform Thresholds

    We analyze a new group testing scheme, termed semi-quantitative group testing, which may be viewed as a concatenation of an adder channel and a discrete quantizer. Our focus is on non-uniform quantizers with arbitrary thresholds. For the most general semi-quantitative group testing model, we define three new families of sequences capturing the constraints on the code design imposed by the choice of the thresholds. The sequences represent extensions and generalizations of $B_h$ and certain types of super-increasing and lexicographically ordered sequences, and they lead to code structures amenable to efficient recursive decoding. We describe the decoding methods and provide an accompanying computational complexity and performance analysis.
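
    The channel model itself is easy to state in code. The Python sketch below passes a 0/1 defective vector through an adder channel (per-test defective counts) and then a non-uniform quantizer with arbitrary thresholds. The threshold values and the random test matrix are illustrative assumptions; the paper's sequence-based code constructions and recursive decoder are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(3)

n_items, n_tests, n_defectives = 12, 6, 3
thresholds = [1, 3, 6]   # assumed non-uniform bins: {0}, {1,2}, {3..5}, {6,...}

# Random binary test matrix (assumption): A[t, j] = 1 if item j is in test t.
A = (rng.random((n_tests, n_items)) < 0.5).astype(int)

x = np.zeros(n_items, dtype=int)
x[rng.choice(n_items, size=n_defectives, replace=False)] = 1  # defectives

counts = A @ x   # adder channel: exact number of defectives in each test
y = np.searchsorted(thresholds, counts, side="right")  # discrete quantizer output

print(counts, y)  # the decoder only ever sees y, not the underlying counts
```

    Setting the thresholds to 0, 1, 2, ... recovers quantitative (adder) group testing, while a single threshold at 1 recovers the classical Boolean OR model, which is why the semi-quantitative scheme interpolates between the two.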