13 research outputs found

    Miniature high dynamic range time-resolved CMOS SPAD image sensors

    Since their integration in complementary metal-oxide-semiconductor (CMOS) technology in 2003, single-photon avalanche diodes (SPADs) have inspired a new era of low-cost, highly integrated quantum-level image sensors. Their unique feature of discerning single-photon detections, their ability to retain temporal information on every collected photon, and their amenability to high-speed image sensor architectures make them prime candidates for low-light and time-resolved applications. From the biomedical field of fluorescence lifetime imaging microscopy (FLIM), to extreme physical phenomena such as quantum entanglement, to time-of-flight (ToF) consumer applications such as gesture recognition and, more recently, automotive light detection and ranging (LIDAR), huge steps in detector and sensor architectures have been made to address the design challenges of the pixel sensitivity and functionality trade-off, scalability, and the handling of large data rates. The goal of this research is to explore the hypothesis that, given state-of-the-art CMOS nodes and fabrication technologies, it is possible to design miniature SPAD image sensors for time-resolved applications with a small pixel pitch while maintaining both sensitivity and built-in functionality. Three key approaches are pursued to that purpose: leveraging the innate area reduction of logic gates and the finer design rules of advanced CMOS nodes to balance the pixel's fill factor and processing capability; smarter pixel designs with configurable functionality; and novel system architectures that lift the processing burden off the pixel array and mediate data flow. Two pathfinder SPAD image sensors were designed and fabricated: a 96 × 40 planar front-side-illuminated (FSI) sensor with 66% fill factor at 8.25 μm pixel pitch in an industrialised 40 nm process, and a 128 × 120 3D-stacked backside-illuminated (BSI) sensor with 45% fill factor at 7.83 μm pixel pitch.
Both designs rely on a digital, configurable, 12-bit ripple counter pixel allowing time-gated, shot-noise-limited photon counting. The FSI sensor was operated as a quanta image sensor (QIS), achieving an extended dynamic range in excess of 100 dB by utilising triple exposure windows and in-pixel data compression, which reduces data rates by a factor of 3.75×. The stacked sensor is the first demonstration of a wafer-scale SPAD imaging array with a 1-to-1 hybrid-bond connection. Characterisation results of the detector and sensor performance are presented. Two other time-resolved 3D-stacked BSI SPAD image sensor architectures are proposed. The first is a fully integrated 5-wire-interface system-on-chip (SoC), with built-in power management and off-focal-plane data processing and storage for high dynamic range as well as autonomous video-rate operation. Preliminary images and bring-up results of the fabricated 2 mm² sensor are shown. The second is a highly configurable design capable of simultaneous multi-bit oversampled imaging and programmable region-of-interest (ROI) time-correlated single-photon counting (TCSPC) with on-chip histogram generation. The 6.48 μm pitch array has been submitted for fabrication. In-depth design details of both architectures are discussed.
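The extended-dynamic-range idea sketched in this abstract can be illustrated in a few lines: saturating 12-bit counters from several exposure windows are merged into one photon-rate estimate by keeping, per pixel, the longest unsaturated exposure. This is a minimal illustrative sketch only; the exposure times, merge rule, and function name are assumptions, not details taken from the sensor described above.

```python
import numpy as np

FULL_SCALE = 2**12 - 1  # a 12-bit ripple counter saturates at 4095 counts

def merge_exposures(counts, exposure_times):
    """Combine per-pixel photon counts from several exposure windows into
    one photon-rate estimate, using the longest unsaturated exposure.

    counts: sequence of per-exposure count arrays (same pixel shape each)
    exposure_times: matching sequence of window lengths (arbitrary units)
    """
    counts = np.asarray(counts, dtype=float)
    times = np.asarray(exposure_times, dtype=float)
    rate = np.zeros(counts.shape[1:])
    # Visit exposures from shortest to longest; longer (less noisy)
    # exposures overwrite shorter ones wherever they did not saturate.
    for c, t in sorted(zip(counts, times), key=lambda p: p[1]):
        valid = c < FULL_SCALE
        rate[valid] = c[valid] / t
    return rate
```

Short exposures extend the bright end of the range (they saturate last), while long exposures give shot-noise-limited estimates in the dark end, which is how a multi-exposure QIS scheme can exceed 100 dB.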

    Range-finding SPAD array with smart laser-spot tracking and TDC sharing for background suppression

    We present the design and experimental characterisation of a CMOS sensor based on single-photon avalanche diodes (SPADs) for direct time-of-flight (TOF) single-point distance ranging under high background illumination, for short-range applications. The sensing area has a rectangular shape (40 × 10 SPADs) to deal with the displacement of the backscattered light spot across the detector, which depends on target distance due to the non-confocal optical setup. Since only a few SPADs are illuminated by the laser spot, we implemented smart laser-spot tracking within the active area, so as to define a Region of Interest (ROI) containing only the SPADs hit by signal photons, together with smart sharing of the timing electronics, so as to significantly improve the signal-to-noise ratio (SNR) of TOF measurements and to reduce overall chip area and power consumption. The timing electronics consist of 80 time-to-digital converters (TDCs) shared among the 400 SPADs, with self-reconfigurable routing that dynamically connects the SPADs within the ROI to the available TDCs. The TDCs have 78 ps resolution and 20 ns full-scale range (FSR), i.e., up to 2 m maximum distance range. An on-chip histogram builder accumulates TDC conversions to provide the final TOF histogram. We achieve a precision better than 2.3 mm at 1 m distance and 80% target reflectivity, with 3 klux of halogen-lamp background illumination and a 2 kHz measurement rate. The sensor rejects 10 klux of background light, still with a precision better than 20 mm at 2 m.
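The on-chip histogram step described above can be sketched simply: TDC codes from many laser shots are accumulated into a histogram, the peak bin gives the time of flight, and the distance follows from the round-trip speed of light. The 78 ps bin width is taken from the abstract; the bin count and function name are illustrative assumptions, not the sensor's actual interface.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
BIN_WIDTH = 78e-12  # TDC resolution from the abstract, s
N_BINS = 256        # assumed: 20 ns FSR / 78 ps ≈ 256 bins

def tof_distance(tdc_codes):
    """Histogram raw TDC codes and convert the peak bin to a distance."""
    hist, _ = np.histogram(tdc_codes, bins=N_BINS, range=(0, N_BINS))
    peak = np.argmax(hist)          # bin with the most photon returns
    tof = (peak + 0.5) * BIN_WIDTH  # bin centre -> time of flight
    return C * tof / 2              # halve for the round trip
```

Because background photons arrive uniformly in time while signal photons pile up in one bin, taking the histogram peak is what lets the sensor reject strong ambient light.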

    Robust super-resolution depth imaging via a multi-feature fusion deep network

    Three-dimensional imaging plays an important role in applications where it is necessary to record depth. The number of applications that use depth imaging is increasing rapidly; examples include self-driving autonomous vehicles and auto-focus assist on smartphone cameras. Light detection and ranging (LIDAR) via single-photon avalanche diode (SPAD) arrays is an emerging technology that enables the acquisition of depth images at high frame rates. However, the spatial resolution of this technology is typically low in comparison to the intensity images recorded by conventional cameras. To increase the native resolution of depth images from a SPAD camera, we develop a deep network built specifically to take advantage of the multiple features that can be extracted from the camera's histogram data. The network is designed for a SPAD camera operating in a dual mode, capturing alternate low-resolution depth and high-resolution intensity images at high frame rates, so the system does not require an additional sensor to provide intensity images. The network then uses the intensity images and multiple features extracted from downsampled histograms to guide the upsampling of the depth. Our network provides significant image resolution enhancement and image denoising across a wide range of signal-to-noise ratios and photon levels. We apply the network to a range of 3D data, demonstrating denoising and a four-fold resolution enhancement of depth.
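The starting point of such a pipeline, per-pixel timing histograms that yield both a coarse depth map and an intensity map, can be sketched as below. This is not the paper's network, only an illustrative sketch of the two basic features (peak position and photon count) that a learned upsampler could take as input; the function name is hypothetical.

```python
import numpy as np

def histogram_features(hists):
    """Extract simple depth and intensity maps from SPAD histogram data.

    hists: (H, W, T) array of per-pixel photon-timing histograms.
    Returns (depth, intensity), both of shape (H, W).
    """
    depth = np.argmax(hists, axis=-1)  # peak bin ~ round-trip time per pixel
    intensity = hists.sum(axis=-1)     # total photon count per pixel
    return depth, intensity
```

In the dual-mode scheme described above, the high-resolution intensity image plays the role of a guide: edges in intensity usually coincide with depth discontinuities, which is what makes guided upsampling of the low-resolution depth map effective.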

    Imaging and certifying high-dimensional entanglement with a single-photon avalanche diode camera

    Spatial correlations between two photons are the key resource in realising many quantum imaging schemes. Measurement of the bi-photon correlation map is typically performed using single-point scanning detectors or single-photon cameras based on charge-coupled device (CCD) technology. However, both approaches are limited in speed due to the slow scanning and the low frame rate of CCD-based cameras, resulting in data acquisition times on the order of many hours. Here, we employ a high-frame-rate single-photon avalanche diode (SPAD) camera to measure the spatial joint probability distribution of a bi-photon state produced by spontaneous parametric down-conversion, with statistics taken over 10⁷ frames. Through violation of an Einstein–Podolsky–Rosen criterion by 227 sigmas, we confirm the presence of spatial entanglement between our photon pairs. Furthermore, we certify, in just 140 s, an entanglement dimensionality of 48. Our work demonstrates the potential of SPAD cameras in the rapid characterisation of photonic entanglement, leading the way towards real-time quantum imaging and quantum information processing.
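The core measurement, estimating a joint probability distribution from many binary SPAD frames, reduces to counting coincidences: how often two pixels fire in the same frame. The sketch below shows that estimator in its simplest form; it is an assumption-laden illustration (no accidental-coincidence subtraction or detector corrections), not the paper's analysis code.

```python
import numpy as np

def joint_distribution(frames):
    """Estimate the pixel-pair joint detection probability from SPAD frames.

    frames: (N, P) array of N binary frames over P pixels (1 = photon seen).
    Returns a (P, P) matrix whose (i, j) entry estimates the probability
    that pixels i and j both fire in the same frame.
    """
    frames = np.asarray(frames, dtype=float)
    # frames.T @ frames counts per-pair coincidences across all frames;
    # dividing by N turns counts into an empirical joint probability.
    return frames.T @ frames / len(frames)
```

For SPDC photon pairs the coincidence matrix peaks at spatially correlated pixel pairs, and it is the strength of those peaks, accumulated over 10⁷ frames here, that feeds the EPR and dimensionality certification.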