
    A single-photon sampling architecture for solid-state imaging

    Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low photon flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization result for a 120×120 photodiode sensor on a 30 µm × 30 µm pitch with a 40 ps time resolution and an estimated fill factor of approximately 70%, using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography, the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings.
    Comment: 24 pages, 3 figures, 5 tables
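    The readout idea above can be pictured with a toy group-testing decoder: pixels share a small pool of TDC lines through a binary interconnection matrix, and the set of fired pixels is recovered from which TDCs registered hits. The sketch below is a minimal Python illustration; the random wiring, the toy sizes, and the cover decoder are assumptions for demonstration only, not the paper's optimized matrices or its fast decoding algorithm.

        # Toy group-testing readout: pixels wired to a few shared TDCs via a
        # binary interconnection matrix; fired pixels recovered with the
        # standard "cover" decoder. Sizes and wiring are illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)

        n_pixels = 64          # toy sensor, far smaller than the 120x120 array above
        n_tdcs = 20            # shared time-to-digital converters
        wires_per_pixel = 5    # each pixel drives a few TDC lines

        # M[i, j] = True if pixel j is wired to TDC i
        M = np.zeros((n_tdcs, n_pixels), dtype=bool)
        for j in range(n_pixels):
            M[rng.choice(n_tdcs, size=wires_per_pixel, replace=False), j] = True

        def readout(fired_pixels):
            """A TDC registers a hit if any pixel wired to it fired."""
            return M[:, fired_pixels].any(axis=1)

        def decode(tdc_hits):
            """Declare pixel j fired iff every TDC wired to j registered a hit.
            Exact whenever M is d-disjunct and at most d pixels fired."""
            return [j for j in range(n_pixels) if np.all(tdc_hits[M[:, j]])]

        fired = [3, 17, 42]    # a few simultaneous photon arrivals
        print(sorted(fired), decode(readout(fired)))

    A d-disjunct matrix is one standard way group-testing designs guarantee that the cover decoder returns exactly the fired set whenever at most d pixels fire at once; the paper's optimized matrices and decoder may achieve its stated guarantee of up to 4 simultaneous arrivals differently.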

    Two-batch liar games on a general bounded channel

    We consider an extension of the 2-person R\'enyi-Ulam liar game in which lies are governed by a channel $C$, a set of allowable lie strings of maximum length $k$. Carole selects $x\in[n]$, and Paul makes $t$-ary queries to uniquely determine $x$. In each of $q$ rounds, Paul weakly partitions $[n]=A_0\cup\dots\cup A_{t-1}$ and asks for $a$ such that $x\in A_a$. Carole responds with some $b$, and if $a\neq b$, then $x$ accumulates a lie $(a,b)$. Carole's string of lies for $x$ must be in the channel $C$. Paul wins if he determines $x$ within $q$ rounds. We further restrict Paul to ask his questions in two off-line batches. We show that for a range of sizes of the second batch, the maximum size of the search space $[n]$ for which Paul can guarantee finding the distinguished element is $\sim t^{q+k}/(E_k(C)\binom{q}{k})$ as $q\to\infty$, where $E_k(C)$ is the number of lie strings in $C$ of maximum length $k$. This generalizes previous work of Dumitriu and Spencer, and of Ahlswede, Cicalese, and Deppe. We extend Paul's strategy to solve also the pathological liar variant, in a unified manner which gives the existence of asymptotically perfect two-batch adaptive codes for the channel $C$.
    Comment: 26 pages
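    As a concrete reading of the game mechanics, the toy sketch below plays a single round under an assumed small channel; the channel, the prefix check, and the question shown are illustrative assumptions, not the paper's two-batch strategy.

        # One round of the liar game sketched above (illustrative assumptions):
        # Paul partitions [n] into t blocks; Carole names a block; a mismatch
        # appends the lie (a, b), which must stay consistent with channel C.
        def play_round(x, partition, carole_answer, lie_string, channel):
            """Return the updated lie string, or None if Carole's reply would
            push x's lie string outside every allowable string in the channel."""
            a = next(i for i, block in enumerate(partition) if x in block)  # true block
            b = carole_answer
            if a == b:
                return lie_string                     # truthful reply, no lie accrued
            new_lies = lie_string + ((a, b),)
            # the partial lie string must be a prefix of some allowable string in C
            return new_lies if any(s[:len(new_lies)] == new_lies for s in channel) else None

        # Toy example: n = 8, binary (t = 2) questions, channel allowing at most
        # one lie of the form (0, 1).
        channel = {(), ((0, 1),)}
        partition = [set(range(0, 4)), set(range(4, 8))]     # Paul's first question
        print(play_round(x=2, partition=partition, carole_answer=1,
                         lie_string=(), channel=channel))    # -> ((0, 1),) : one recorded lie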