
    High linearity SPAD and TDC array for TCSPC and 3D ranging applications

    An array of 32×32 Single-Photon Avalanche Diodes (SPADs) and Time-to-Digital Converters (TDCs) has been fabricated in a 0.35 μm automotive-certified CMOS technology. The overall dimension of the chip is 9×9 mm². Each pixel detects photons in the 300-900 nm wavelength range with a 3.14% fill factor and can either count them or time-stamp their arrival. In photon-counting mode an in-pixel 6-bit counter provides photon-number-resolved intensity movies at 100 kfps, whereas in photon-timing mode the 10-bit in-pixel TDC provides time-resolved maps (Time-Correlated Single-Photon Counting measurements) or 3D depth-resolved images and movies (through the direct time-of-flight technique), with 312 ps resolution. The photodetector is a 30 μm diameter SPAD with low Dark Count Rate (120 cps at room temperature, 3% hot pixels) and 55% peak Photon Detection Efficiency (PDE) at 450 nm. The TDC has a 6-bit counter and a 4-bit fine interpolator based on a Delay-Locked Loop (DLL) line, which makes the TDC insensitive to process, voltage, and temperature drifts. The implemented sliding-scale technique improves linearity, giving 2% LSB DNL and 10% LSB INL. The single-shot precision is 260 ps rms, comprising SPAD, TDC, and driving-board jitter. Both optical and electrical crosstalk among SPADs and TDCs are negligible. Fast 2D movies and 3D reconstructions with centimeter resolution are reported.
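    The 10-bit timestamp in such a chip is formed from a 6-bit coarse counter and a 4-bit DLL interpolator. The Python sketch below is purely illustrative: it shows how a two-stage code of this kind maps to picoseconds, assuming the coarse clock period spans the 16 fine LSBs (about 5 ns), which follows from the quoted 312 ps resolution and bit widths but is not spelled out in the abstract.

```python
# Minimal sketch (not the chip's actual logic): reconstructing a photon
# timestamp from a 6-bit coarse counter and a 4-bit DLL fine interpolator.
# Assumed numbers: LSB = 312 ps, so the fine interpolator spans one ~5 ns
# clock period and the 10-bit TDC covers roughly 320 ns full scale.

LSB_PS = 312                                   # fine resolution from the abstract
FINE_BITS = 4                                  # DLL interpolator width
COARSE_BITS = 6                                # coarse counter width
CLOCK_PERIOD_PS = LSB_PS * (1 << FINE_BITS)    # ~4992 ps coarse clock (assumption)

def tdc_timestamp_ps(coarse_code: int, fine_code: int) -> int:
    """Combine coarse and fine codes into an arrival time in picoseconds."""
    assert 0 <= coarse_code < (1 << COARSE_BITS)
    assert 0 <= fine_code < (1 << FINE_BITS)
    return coarse_code * CLOCK_PERIOD_PS + fine_code * LSB_PS

# Example: coarse code 12, fine code 5 -> 12*4992 + 5*312 = 61464 ps
print(tdc_timestamp_ps(12, 5))
```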

    Time-to-digital converters and histogram builders in SPAD arrays for pulsed-LiDAR

    Light Detection and Ranging (LiDAR) is a 3D imaging technique widely used in applications such as augmented reality, automotive, machine vision, and spacecraft navigation and landing. Pulsed-LiDAR is one of the most widespread LiDAR techniques; it relies on measuring the round-trip travel time of an optical pulse back-scattered from a distant target. Besides the light source and the detector, Time-to-Digital Converters (TDCs) are fundamental components in pulsed-LiDAR systems, since they measure the back-scattered photon arrival times, and their performance directly impacts LiDAR system requirements (i.e., range, precision, and measurement rate). In this work, we present a review of recent TDC architectures suitable for integration in SPAD-based CMOS arrays and a review of data-processing solutions for deriving the TOF information. Furthermore, the main TDC parameters and processing techniques are described and analyzed against pulsed-LiDAR requirements.
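    The arithmetic this review builds on is compact: photon timestamps are accumulated into a histogram, the peak identifies the round-trip time t, and distance follows from d = c·t/2. The sketch below is a generic illustration of that chain with an assumed 100 ps bin width; it is not any specific architecture from the paper.

```python
# Generic pulsed-LiDAR processing sketch: histogram TDC timestamps, take the
# peak bin as the round-trip time, convert to distance with d = c*t/2.
# The bin width and peak-finding rule are illustrative assumptions.
import numpy as np

C = 299_792_458.0          # speed of light, m/s
BIN_WIDTH_S = 100e-12      # assumed 100 ps TDC bin

def distance_from_timestamps(timestamps_s, n_bins=1000):
    """Histogram photon arrival times and estimate target distance (m)."""
    hist, edges = np.histogram(timestamps_s, bins=n_bins,
                               range=(0, n_bins * BIN_WIDTH_S))
    peak_bin = np.argmax(hist)                       # strongest return
    tof = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])
    return C * tof / 2.0

# Example: photons clustered around 66.7 ns correspond to a target ~10 m away.
rng = np.random.default_rng(0)
ts = rng.normal(66.7e-9, 0.2e-9, size=500)
print(round(distance_from_timestamps(ts), 2))
```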

    Advanced photon counting techniques for long-range depth imaging

    The Time-Correlated Single-Photon Counting (TCSPC) technique has emerged as a candidate approach for Light Detection and Ranging (LiDAR) and active depth imaging applications. The work of this Thesis concentrates on the development and investigation of functional TCSPC-based long-range scanning time-of-flight (TOF) depth imaging systems. Although these systems have several different configurations and functions, all can facilitate depth profiling of remote targets at low light levels and with good surface-to-surface depth resolution. Firstly, a Superconducting Nanowire Single-Photon Detector (SNSPD) and an InGaAs/InP Single-Photon Avalanche Diode (SPAD) module were employed to develop kilometre-range TOF depth imaging systems at wavelengths of ~1550 nm. Secondly, a TOF depth imaging system at a wavelength of 817 nm was developed that incorporated a Complementary Metal-Oxide-Semiconductor (CMOS) 32×32 Si-SPAD detector array. This system was used with structured illumination to examine the potential for covert, eye-safe and high-speed depth imaging. To improve the light coupling efficiency onto the detectors, the arrayed CMOS Si-SPAD detector chips were integrated with microlens arrays using flip-chip bonding technology, improving the fill factor by up to a factor of 15. Thirdly, a multispectral TCSPC-based full-waveform LiDAR system was developed using a tunable broadband pulsed supercontinuum laser source that provides simultaneous multispectral illumination at wavelengths of 531, 570, 670 and ~780 nm. The multispectral reflectance data acquired on a tree were used to determine physiological parameters relating to biomass and foliage photosynthetic efficiency as a function of the tree depth profile. Fourthly, depth images were estimated using spatial correlation techniques in order to reduce the aggregate number of photons required for depth reconstruction with low error. A depth imaging system was characterised and re-configured to reduce the effects of scintillation due to atmospheric turbulence. In addition, depth images were analysed in terms of spatial and depth resolution.
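    For context on how a depth value is commonly extracted from a TCSPC return, the sketch below cross-correlates the measured histogram with the instrumental response function (IRF) and converts the best-fit shift into depth. This is a generic textbook estimator written for illustration; it is not necessarily the estimator developed in the thesis, and the toy IRF and bin width are invented.

```python
# Illustrative TCSPC depth estimator: find the IRF shift that best matches the
# return histogram and convert that time shift into a round-trip distance.
import numpy as np

def depth_by_cross_correlation(histogram, irf, bin_width_s):
    """Depth (m) from the IRF shift that best matches the return histogram."""
    c = 299_792_458.0
    corr = np.correlate(histogram, irf, mode="full")   # all relative shifts
    shift_bins = np.argmax(corr) - (len(irf) - 1)      # lag of the best match
    return c * shift_bins * bin_width_s / 2.0

# Toy example: the return is a scaled copy of the IRF placed 41 bins into the record.
irf = np.array([0.1, 1.0, 0.3, 0.1])
hist = np.zeros(128)
hist[41:45] = irf * 20
print(depth_by_cross_correlation(hist, irf, 16e-12))   # ~0.098 m with 16 ps bins
```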

    3D LIDAR imaging using Ge-on-Si single-photon avalanche diode detectors

    We present a scanning light detection and ranging (LIDAR) system incorporating an individual Ge-on-Si single-photon avalanche diode (SPAD) detector for depth and intensity imaging in the short-wavelength infrared region. The time-correlated single-photon counting technique was used to determine the return photon time-of-flight for target depth information. In laboratory demonstrations, depth and intensity reconstructions were made of targets at short range, using advanced image processing algorithms tailored for the analysis of single-photon time-of-flight data. These laboratory measurements were used to predict the performance of the single-photon LIDAR system at longer ranges, providing estimates that sub-milliwatt average power levels would be required for kilometer-range depth measurements.
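    The extrapolation from short-range laboratory data to kilometer ranges can be pictured, to first order, as an inverse-square scaling of the photon return. The sketch below is only that crude picture, with invented numbers and no atmospheric loss or full LiDAR link budget; it is not the radiometric model used in the paper.

```python
# Back-of-envelope sketch: scale the laser power needed to keep the same photon
# return rate by (range ratio)^2. Illustrative only; ignores atmospheric
# attenuation, optics, and detector efficiency terms of a real link budget.
def power_for_range(p_lab_w: float, r_lab_m: float, r_target_m: float) -> float:
    """Average power needed at r_target_m to match the lab photon return rate."""
    return p_lab_w * (r_target_m / r_lab_m) ** 2

# Example (invented numbers): if 0.5 uW suffices at 40 m, then roughly
# 0.5e-6 * (1000/40)^2 = 0.31 mW would be needed at 1 km under this model.
print(power_for_range(0.5e-6, 40.0, 1000.0))
```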

    High-speed object detection with a single-photon time-of-flight image sensor

    3D time-of-flight (ToF) imaging is used in a variety of applications such as augmented reality (AR), computer interfaces, robotics and autonomous systems. Single-photon avalanche diodes (SPADs) are one of the enabling technologies providing accurate depth data even over long ranges. By developing SPADs in array format with integrated processing combined with pulsed, flood-type illumination, high-speed 3D capture is possible. However, array sizes tend to be relatively small, limiting the lateral resolution of the resulting depth maps and, consequently, the information that can be extracted from the image for applications such as object detection. In this paper, we demonstrate that these limitations can be overcome through the use of convolutional neural networks (CNNs) for high-performance object detection. We present outdoor results from a portable SPAD camera system that outputs 16-bin photon timing histograms with 64×32 spatial resolution. The results, obtained with exposure times down to 2 ms (equivalent to 500 FPS) and at signal-to-background ratios (SBR) as low as 0.05, point to the advantages of providing the CNN with full histogram data rather than point clouds alone. Alternatively, a combination of point-cloud and active-intensity data may be used as input for a similar level of performance. In either case, the GPU-accelerated processing time is less than 1 ms per frame, leading to an overall latency (image acquisition plus processing) in the millisecond range, making the results relevant for safety-critical computer vision applications which would benefit from faster-than-human reaction times.
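    To make concrete what feeding the CNN "full histogram data" means at the input level, the sketch below (PyTorch, purely illustrative) treats the 16 timing bins as the input channels of a 64×32 image. The layers are a placeholder backbone invented for this example and bear no relation to the detection network used in the paper.

```python
# Illustrative only: a tiny CNN that ingests the 16-bin photon-timing histogram
# of a 64x32 SPAD frame as 16 input channels. Placeholder architecture.
import torch
import torch.nn as nn

class HistogramBackbone(nn.Module):
    def __init__(self, n_bins: int = 16, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bins, 32, kernel_size=3, padding=1),  # histogram bins = channels
            nn.ReLU(),
            nn.MaxPool2d(2),                                  # 64x32 -> 32x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)                  # toy classification head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 16, 64, 32) histogram cube
        return self.head(self.features(x).flatten(1))

frame = torch.rand(1, 16, 64, 32)          # one synthetic SPAD histogram frame
print(HistogramBackbone()(frame).shape)    # torch.Size([1, 3])
```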

    Feasibility of Geiger-mode avalanche photodiodes in CMOS standard technologies for tracker detectors

    The next generation of particle colliders will be characterized by linear lepton colliders, where collisions between electrons and positrons will allow the new particle discovered at CERN in 2012 (presumably the Higgs boson) to be studied in great detail. At present there are two alternative projects underway, namely the ILC (International Linear Collider) and CLIC (Compact LInear Collider). From the detector point of view, the physics aims at these particle colliders impose requirements so extreme that no sensor technology available on the market can fulfill all of them. As a result, several new detector systems are being developed in parallel with the accelerator. This thesis presents the development of a GAPD (Geiger-mode Avalanche PhotoDiode) pixel detector aimed mostly at particle tracking at future linear colliders. GAPDs offer outstanding qualities to meet the challenging requirements of ILC and CLIC, such as extraordinarily high sensitivity, virtually infinite gain and ultra-fast response time, apart from compatibility with standard CMOS technologies. In particular, GAPD detectors enable the direct conversion of a single particle event into a CMOS digital pulse on the sub-nanosecond time scale without the use of either preamplifiers or pulse shapers. As a result, GAPDs can be read out after each single bunch crossing, a unique quality that none of their competitors can offer at the moment. In spite of all these advantages, GAPD detectors suffer from two main problems. On the one hand, noise phenomena inherent to the sensor induce noise pulses that cannot be distinguished from real particle events and also worsen the detector occupancy to unacceptable levels. On the other hand, the fill factor is too low and gives rise to a reduced detection efficiency. Solutions to these two problems that comply with the severe specifications of the next generation of particle colliders have been thoroughly investigated. The design and characterization of several single pixels and small arrays that incorporate elements to reduce the intrinsic noise generated by the sensor are presented. The sensors and the readout circuits have been monolithically integrated in a conventional 0.35 μm HV-CMOS process. Concerning the readout circuits, both voltage-mode and current-mode options have been considered. Moreover, time-gated operation has also been explored as an alternative way to reduce the detected sensor noise. The design and thorough characterization of a prototype GAPD array, also monolithically integrated in a conventional 0.35 μm HV-CMOS process, is presented in the thesis as well. The detector consists of 10 rows × 43 columns of pixels, with a total sensitive area of 1 mm × 1 mm. The array is operated in a time-gated mode and read out sequentially by rows. The efficiency of the proposed technique in reducing the detected noise is shown with a wide variety of measurements. Further improved results are obtained by reducing the working temperature. Finally, the suitability of the proposed detector array for particle detection is shown with the results of a beam-test campaign conducted at the CERN-SPS (European Organization for Nuclear Research - Super Proton Synchrotron). Apart from that, a series of additional approaches to improve the performance of the GAPD technology are proposed. The benefits of integrating a GAPD pixel array in a 3D process to overcome the fill-factor limitation are examined first. The design of a GAPD detector in the Global Foundries 130 nm/Tezzaron 3D process is also presented. Moreover, the possibility of obtaining better results in light detection applications by means of time-gated operation or correction techniques is analyzed as well.

    This thesis presents the development of a GAPD (Geiger-mode Avalanche PhotoDiode) pixel detector intended mainly for particle tracking at future linear colliders. GAPDs offer extraordinary qualities for meeting the extremely demanding requirements of the ILC (International Linear Collider) and CLIC (Compact LInear Collider), the two projects proposed to date for the next generation of colliders. These qualities include extremely high sensitivity, virtually infinite gain and a very fast response, besides compatibility with standard CMOS technologies. In particular, GAPD detectors make it possible to convert an event generated by a single particle directly into a digital CMOS signal in less than a nanosecond. As a result, GAPDs can be read out after each bunch crossing (the collision of the particles), a unique quality that none of their competitors can offer at present. Despite all these advantages, GAPD detectors suffer from two major problems. On the one hand, noise phenomena inherent to the sensor induce noise pulses that cannot be distinguished from real particle events and also degrade the detector occupancy to unacceptable levels. On the other hand, the fill factor (that is, the sensitive area relative to the total area) is very low and reduces the detection efficiency. This thesis investigates solutions to both problems that also comply with the highly demanding specifications of future linear colliders. The GAPD pixel detector, monolithically integrated in a standard 0.35 μm HV-CMOS process, incorporates voltage-mode readout circuits that allow the sensor to be operated in the so-called time-gated mode in order to reduce the detected noise. The efficiency of the proposed technique is demonstrated by the wide variety of experiments carried out. The results of the beam test performed at CERN indicate the ability of the GAPD pixel detector to detect highly energetic particles. In addition, the benefits of integrating a GAPD pixel detector in a 3D process to increase the fill factor have also been studied; this analysis concludes that fill factors above 90% can be achieved.
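    The noise-suppression argument behind the time-gated mode reduces to a duty-cycle calculation: dark counts arrive uniformly in time, whereas genuine hits are synchronous with the bunch crossing, so only the dark counts that fall inside the gate are read out. The numbers in the sketch below are illustrative, not values from the thesis.

```python
# Sketch of why time-gating suppresses sensor noise: the accepted dark-count
# rate is the dark count rate multiplied by the gating duty cycle.
def accepted_dark_rate(dcr_hz: float, gate_s: float, gates_per_s: float) -> float:
    """Dark counts per second that fall inside the gates (DCR x duty cycle)."""
    duty_cycle = gate_s * gates_per_s
    return dcr_hz * duty_cycle

# Example (invented numbers): 100 kHz dark count rate, 50 ns gate at a 1 MHz
# gate rate -> 5% duty cycle, so ~5 kHz of dark counts survive the gating.
print(accepted_dark_rate(100e3, 50e-9, 1e6))
```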

    Development of a miniaturised particle radiation monitor for Earth orbit

    Geometry and algorithm design for a novel highly miniaturised radiation monitor (HMRM) for spacecraft in medium Earth orbit are presented. The HMRM device comprises a telescopic configuration of application-specific active pixel sensors enclosed in a titanium shield, with an estimated total mass of 52 g and volume of 15 cm³. The monitor is intended to provide real-time dosimetry and identification of energetic charged particles in fluxes of up to 10⁸ cm⁻² s⁻¹ (omnidirectional). Achieving this capability with such a small instrument could open new prospects for radiation detection in space. The methodology followed for the design and optimisation of the particle detector geometry is explained, and analysis algorithms - for real-time use within the monitor and for post-processing reconstruction of spectra - are presented. Simulations with the Geant4 toolkit are used to predict operational results in various Earth orbits. Early test results of a prototype monitor, including calibration of the pixel sensors, are also reported.
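    As background on what a "telescopic configuration" buys algorithmically, the sketch below pairs hits in two stacked pixel layers by a coincidence window; the resulting deposited-energy pairs are what a downstream particle-identification step would classify. This is a generic illustration, not the HMRM's on-board algorithm, and the window and energies are invented.

```python
# Generic two-layer telescope coincidence sketch (illustrative only).
def coincident_hits(layer1, layer2, window_s=1e-6):
    """layer1/layer2: lists of (timestamp_s, energy_kev); return matched energy pairs."""
    pairs = []
    for t1, e1 in layer1:
        for t2, e2 in layer2:
            if abs(t1 - t2) <= window_s:
                pairs.append((e1, e2))
    return pairs

# Example: one track crossing both layers plus one stray hit in layer 1.
print(coincident_hits([(0.0, 120.0), (5.0, 40.0)], [(1e-7, 95.0)]))  # [(120.0, 95.0)]
```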

    An overview of depth cameras and range scanners based on time-of-flight technologies

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00138-016-0784-4. This work has received funding from the French Agence Nationale de la Recherche (ANR) under the MIXCAM project ANR-13-BS02-0010-01, and from the European Research Council (ERC) under the Advanced Grant VHIA Project 340113.

    CMOS Sensors for Time-Resolved Active Imaging

    In the past decades, time-resolved imaging such as fluorescence lifetime or time-of-flight depth imaging has been extensively explored in biomedical and industrial fields because of its non-invasive characterization of material properties and its remote sensing capability. Many studies have shown its potential and effectiveness in applications such as cancer detection and tissue diagnosis from fluorescence lifetime imaging, and gesture/motion sensing and geometry sensing from time-of-flight imaging. Nonetheless, time-resolved imaging has not been widely adopted due to the high cost of the systems and performance limits. The research presented in this thesis focuses on the implementation of low-cost real-time time-resolved imaging systems. Two image sensing schemes are proposed and implemented to address the major limitations. First, we propose a single-shot fluorescence lifetime image sensor for high-speed and high-accuracy imaging. To achieve high accuracy, previous approaches repeat the measurement for multiple sampling, resulting in long measurement times. The proposed method, in contrast, achieves both high speed and high accuracy by employing a pixel-level processor that takes and compresses multiple samples within a single measurement time. The pixels in the sensor take multiple samples from the fluorescent optical signal at sub-nanosecond resolution and compute the average photon arrival time of the optical signal. Thanks to the multiple sampling of the signal, the measurement is insensitive to the shape or the pulse width of the excitation, providing better accuracy and pixel uniformity than conventional rapid lifetime determination (RLD) methods. The proposed single-shot image sensor also improves the imaging speed by orders of magnitude compared to other conventional center-of-mass methods (CMM). Second, we propose a 3-D camera with a background light suppression scheme that is adaptable to various lighting conditions. Previous 3-D cameras are not operable in outdoor conditions because they suffer from measurement errors and saturation problems under high background light illumination. We propose a reconfigurable architecture with a column-parallel discrete-time background light cancellation circuit. Implementing the processor at the column level allows an order-of-magnitude reduction in pixel size compared to existing pixel-level processors. The column-level approach also provides reconfigurable operation modes for optimal performance in all lighting conditions. For example, the sensor can operate at the best frame rate and resolution without the presence of background light. If the background light saturates the sensor or increases the shot noise, the sensor can adjust the resolution and frame rate by pixel binning and super-resolution techniques. This effectively enhances the well capacity of the pixel to compensate for the increased shot noise and speeds up the frame processing to handle the excessive background light. A fabricated prototype sensor can suppress background light of more than 100 klx while achieving a very small pixel size of 5.9 μm. PhD thesis, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136950/1/eecho_1.pd
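    The in-pixel computation described here rests on the center-of-mass relation: for a single-exponential decay that starts at the excitation time t0, the mean photon arrival time equals t0 + τ. The sketch below shows only that arithmetic in software; the thesis implements it with in-pixel circuitry, and the decay constants used here are invented.

```python
# Center-of-mass lifetime estimate for a single-exponential decay:
# tau ~= mean(arrival times) - t0.
import numpy as np

def lifetime_from_arrival_times(timestamps_s, t0_s):
    """Estimate a single-exponential fluorescence lifetime (s) from photon arrival times."""
    return np.mean(timestamps_s) - t0_s

# Example: photons drawn from a 3 ns exponential decay starting at t0 = 1 ns.
rng = np.random.default_rng(1)
arrivals = 1e-9 + rng.exponential(3e-9, size=20000)
print(round(lifetime_from_arrival_times(arrivals, 1e-9) * 1e9, 2))  # ~3.0 ns
```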

    The SuperCam Instrument Suite on the Mars 2020 Rover: Science Objectives and Mast-Unit Description

    On the NASA Mars 2020 rover mission to Jezero crater, the remote determination of the texture, mineralogy and chemistry of rocks is essential to quickly and thoroughly characterize an area and to optimize the selection of samples for return to Earth. As part of the Perseverance payload, SuperCam is a suite of five techniques that provide critical and complementary observations via Laser-Induced Breakdown Spectroscopy (LIBS), Time-Resolved Raman and Luminescence (TRR/L), visible and near-infrared spectroscopy (VISIR), high-resolution color imaging (RMI), and acoustic recording (MIC). SuperCam operates at remote distances, primarily 2-7 m, while providing data at sub-mm to mm scales. We report on SuperCam's science objectives in the context of the Mars 2020 mission goals and the ways the different techniques can address these questions. The instrument is made up of three separate subsystems: the Mast Unit is designed and built in France; the Body Unit is provided by the United States; the calibration target holder is contributed by Spain, and the targets themselves by the entire science team. This publication focuses on the design, development, and tests of the Mast Unit; companion papers describe the other units. The goal of this work is to provide an understanding of the technical choices made, the constraints that were imposed, and ultimately the validated performance of the flight model as it leaves Earth, and it will serve as the foundation for Mars operations and future processing of the data. Funding in France was provided by the Centre National d'Etudes Spatiales (CNES). Human resources were provided in part by the Centre National de la Recherche Scientifique (CNRS) and universities. Funding was provided in the US by NASA's Mars Exploration Program. Some funding of data analyses at Los Alamos National Laboratory (LANL) was provided by laboratory-directed research and development funds.