    A 180-nm CMOS Time-of-Flight 3-D Image Sensor

    We report on the design and experimental characterization of a new 3-D image sensor, based on a new 120-nm CMOS-compatible photo-detector, which features an internal demodulation mechanism effective up to high frequencies. The distance range covered by our proof-of-concept device spans from 1 m to a few meters, and the resolution is about 1 cm.
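
    As background for how such a demodulating detector yields distance: in continuous-wave indirect ToF, depth follows from the phase shift between the emitted and received modulated light. A minimal sketch in Python of that standard relation (the 20 MHz modulation and 90° phase values are illustrative, not taken from this abstract):

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def cw_tof_distance(phase_rad: float, f_mod_hz: float) -> float:
        """Distance from the demodulated phase shift of a continuous-wave
        ToF signal: d = c * phi / (4 * pi * f_mod)."""
        return C * phase_rad / (4.0 * math.pi * f_mod_hz)

    # Illustrative values: 20 MHz modulation, 90-degree phase shift
    print(cw_tof_distance(math.pi / 2.0, 20e6))  # about 1.87 m
    ```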

    A CMOS Indirect Time-of-Flight Sensor with Pull-and-Split Charge Transfer Pixel Structure for High Depth Resolution

    In recent years, as demand for various vision applications including robot vision and VR/AR systems has increased, many Time-of-Flight (TOF) sensors have been developed and commercialized that measure depth not only with cm- to mm-scale depth resolution over a range of a few meters, but also with a maximum target range beyond 100 m. As the history of the color image sensor has shown, TOF sensor development will continue towards pixel scaling, gaining spatial resolution while keeping the depth resolution and the cost.

    What kinds of applications become possible through 3D imaging? First, an automotive application: imagine driving a car in a dark environment where the people around are almost invisible. With 3D imaging, depth perception is possible in the dark, as well as object detection. The next is the user interface: many science-fiction films show an actor controlling a device or computer with gestures rather than a conventional keyboard or mouse, and this becomes possible with gesture recognition through 3D imaging. The last is robot vision: with sensors mounted on a robot, depth perception helps the machine interact with its surroundings, perceiving obstacles such as bookshelves, tables, or people moving around, and reaching its destination without collisions.

    To realize these applications, there are many 3D imaging techniques, for example triangulation, time of flight, and interferometry. Triangulation falls roughly into two categories: structured light and stereo vision. Structured light projects a light pattern, the reflected light is received by two or more sensors, and the pattern difference among them yields the depth. Stereo vision works the same way as human eyes: it detects depth from the difference among the sensors' outputs through trigonometric calculation, except that it does not use active light. Triangulation has two big disadvantages: it needs at least two sensors, and its computational cost is higher than that of other methods. The time-of-flight technique measures the round-trip time of the illuminating light until it returns to the sensor. As the names state, DTOF measures the time directly, while ITOF measures the phase delay. DTOF uses a single-photon avalanche diode (SPAD) as its photonic device, while ITOF can use a photogate [1], a pinned photodiode, or a current-assisted photonic demodulator (CAPD). The strength of the TOF technique is that it needs only one sensor for depth detection; for this reason, this research focuses on the ITOF method with a pinned photodiode structure. Interferometry detects distance using the optical coherence of the illuminated and reflected light, so micrometer- to nanometer-scale depth detection is possible, but the maximum distance is too short and the system too bulky, so it is reserved for scientific use.

    Various factors affect an Indirect Time-of-Flight (I-TOF) sensor's depth resolution. From the pixel's perspective, Quantum Efficiency (QE) and modulation frequency are the most critical among them. Since most TOF systems use the near-infrared (NIR) region as the light source for the end user's eye safety, silicon's low QE at this wavelength yields a low photo-generated signal relative to the emitted light power [2], resulting in low depth resolution. A thick epi-layer can solve this problem; however, there is a trade-off between epi-layer thickness and the maximum modulation frequency [3]. Depth resolution is known to improve as the modulation frequency increases, since the I-TOF sensor derives depth from the phase difference between the emitted and received light, and the speed of light is fixed. However, epi thickness limits the maximum modulation frequency, because the electrons must travel a longer distance to reach the designated FD within the limited time interval available to detect the phase difference caused by the target object's actual distance from the sensor.

    In this research, the proposed pixel uses pull-and-split charge transfer with a non-uniform n-doping profile inside the pinned photodiode (PPD), which generates a lateral electric field to push the photo-generated electrons towards the designated FD. The unit PPD is composed of two regions: one lightly doped with arsenic, which has a low pinning potential, and one highly doped near the FD with a higher pinning potential, so that a lateral electric field is generated. The lightly doped area is designed to be wide for a high fill factor, and the highly doped area is designed for minimal distance between the two TX gates; this distance is carefully chosen through TCAD simulation to produce an adequate lateral e-field. Through this structure, the electrons are transferred towards the highly doped area via the pinning-potential gradient, and the TX gates then transfer the electrons within the highly doped area. The unit pixel pitch is 14.4 µm with a fill factor of about 48% without a microlens, and each pixel is composed of 8 PPDs to reduce the electrons' lateral travel distance. The chip is fabricated in a 0.11 µm CIS process with minimal change from conventional CMOS Image Sensor technology. In addition, the implementation and measurement of the I-TOF sensor are reported with different parameters, such as epi-layer thickness, PPD structure, and PPD n-dose, for performance comparison.
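
    The phase-delay measurement described above is commonly implemented with a four-tap correlation scheme; the sketch below assumes that common scheme (the thesis's exact tap arrangement is not specified in the abstract). With correlation samples c0..c3 taken at 0°, 90°, 180°, and 270° of the modulation period:

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def four_tap_depth(c0, c1, c2, c3, f_mod_hz):
        """Depth from four correlation samples at 0/90/180/270 degrees:
        phi = atan2(c3 - c1, c0 - c2), then d = c * phi / (4 * pi * f_mod).
        Constant background light cancels in both differences."""
        phi = math.atan2(c3 - c1, c0 - c2) % (2.0 * math.pi)
        return C * phi / (4.0 * math.pi * f_mod_hz)

    # Higher f_mod shrinks the distance spanned by one phase turn, which
    # is why depth resolution improves with modulation frequency at a
    # fixed signal-to-noise ratio.
    print(four_tap_depth(100, 60, 20, 60, 100e6))  # phi = 0 -> 0 m
    print(four_tap_depth(60, 20, 60, 100, 100e6))  # phi = pi/2 -> ~0.37 m
    ```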

    CMOS SPAD-based image sensor for single photon counting and time of flight imaging

    The facility to capture the arrival of a single photon is the fundamental limit to the detection of quantised electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and temporal precision is the pinnacle of photo-sensing. The creation of high-spatial-resolution, single-photon-sensitive, time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology offers numerous benefits in a wide field of applications. These CMOS devices are suitable to replace high-sensitivity charge-coupled device (CCD) technology (electron-multiplied or electron-bombarded) at significantly lower cost and with comparable performance in low-light or high-speed scenarios. For example, with temporal resolution on the order of nanoseconds and picoseconds, detailed three-dimensional (3D) pictures can be formed by measuring the time of flight (TOF) of a light pulse. High-frame-rate imaging of single photons can yield new capabilities in super-resolution microscopy, and the imaging of quantum effects such as the entanglement of photons may be realised.

    The goal of this research project is the development of such an image sensor by exploiting single photon avalanche diodes (SPADs) in an advanced imaging-specific 130 nm front-side-illuminated (FSI) CMOS technology. SPADs have three key combined advantages over other imaging technologies: single photon sensitivity, picosecond temporal resolution, and the facility to be integrated in standard CMOS technology. Analogue techniques are employed to create an efficient and compact imager that is scalable to mega-pixel arrays.

    A SPAD-based image sensor is described with 320 by 240 pixels at a pitch of 8 μm and an optical efficiency, or fill factor, of 26.8%. Each pixel comprises a SPAD with a hybrid analogue counting and memory circuit that makes novel use of a low-power charge transfer amplifier. Global-shutter single photon counting images are captured, exhibiting photon-shot-noise-limited statistics with unprecedentedly low input-referred noise at an equivalent of 0.06 electrons.

    The CMOS image sensor (CIS) trends of shrinking pixels, increasing array sizes, decreasing read noise, fast readout, and oversampled image formation are projected towards the formation of binary single photon imagers, or quanta image sensors (QIS). In a binary digital image capture mode, the image sensor offers a look-ahead to the properties and performance of future QISs, with 20,000 binary frames per second readout at a bit error rate of 1.7 × 10⁻³. The bit density, or cumulative binary intensity, against exposure of this image sensor follows the shape of the famous Hurter and Driffield densitometry curves of photographic film. Oversampled time-gated binary image capture is demonstrated, capturing 3D TOF images with 3.8 cm precision over a 60 cm range.
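
    The Hurter and Driffield-shaped bit-density curve mentioned above has a simple statistical origin: for Poisson-distributed photon arrivals, the probability that a binary pixel records at least one photon in a frame is 1 − e^(−λ). A small Python sketch of that relation (illustrative, not the sensor's measured data):

    ```python
    import math

    def qis_bit_density(mean_photons_per_frame: float) -> float:
        """Expected bit density of a binary single-photon pixel under
        Poisson statistics: P(at least one photon) = 1 - exp(-lambda)."""
        return 1.0 - math.exp(-mean_photons_per_frame)

    # Sweeping exposure over several decades traces the S-shaped
    # density-versus-log-exposure curve noted in the abstract.
    for exp10 in range(-3, 3):
        lam = 10.0 ** exp10
        print(f"lambda = 1e{exp10:+d}: bit density = {qis_bit_density(lam):.4f}")
    ```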

    Miniature high dynamic range time-resolved CMOS SPAD image sensors

    Since their integration in complementary metal oxide semiconductor (CMOS) technology in 2003, single photon avalanche diodes (SPADs) have inspired a new era of low-cost, highly integrated quantum-level image sensors. Their unique feature of discerning single photon detections, their ability to retain temporal information on every collected photon, and their amenability to high-speed image sensor architectures make them prime candidates for low-light and time-resolved applications. From the biomedical field of fluorescence lifetime imaging microscopy (FLIM), to extreme physical phenomena such as quantum entanglement, all the way to time-of-flight (ToF) consumer applications such as gesture recognition and, more recently, automotive light detection and ranging (LIDAR), huge steps in detector and sensor architectures have been made to address the design challenges of the pixel sensitivity and functionality trade-off, scalability, and the handling of large data rates.

    The goal of this research is to explore the hypothesis that, given state-of-the-art CMOS nodes and fabrication technologies, it is possible to design miniature SPAD image sensors for time-resolved applications with a small pixel pitch while maintaining both sensitivity and built-in functionality. Three key approaches are pursued to that purpose: leveraging the innate area reduction of logic gates and the finer design rules of advanced CMOS nodes to balance the pixel's fill factor and processing capability; smarter pixel designs with configurable functionality; and novel system architectures that lift the processing burden off the pixel array and mediate data flow.

    Two pathfinder SPAD image sensors were designed and fabricated: a 96 × 40 planar front-side-illuminated (FSI) sensor with 66% fill factor at 8.25 μm pixel pitch in an industrialised 40 nm process, and a 128 × 120 3D-stacked backside-illuminated (BSI) sensor with 45% fill factor at 7.83 μm pixel pitch. Both designs rely on a digital, configurable, 12-bit ripple-counter pixel allowing for time-gated, shot-noise-limited photon counting. The FSI sensor was operated as a quanta image sensor (QIS), achieving an extended dynamic range in excess of 100 dB by utilising triple exposure windows and in-pixel data compression, which reduces data rates by a factor of 3.75×. The stacked sensor is the first demonstration of a wafer-scale SPAD imaging array with a 1-to-1 hybrid bond connection. Characterisation results of the detector and sensor performance are presented.

    Two other time-resolved 3D-stacked BSI SPAD image sensor architectures are proposed. The first is a fully integrated five-wire-interface system on chip (SoC), with built-in power management and off-focal-plane data processing and storage for high dynamic range as well as autonomous video-rate operation. Preliminary images and bring-up results of the fabricated 2 mm² sensor are shown. The second is a highly configurable design capable of simultaneous multi-bit oversampled imaging and programmable region-of-interest (ROI) time-correlated single photon counting (TCSPC) with on-chip histogram generation. The 6.48 μm pitch array has been submitted for fabrication. In-depth design details of both architectures are discussed.
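
    As a rough illustration of how triple exposure windows can extend dynamic range with a 12-bit counting pixel: a longer window gives a better photon-rate estimate until its counter saturates, at which point a shorter one takes over. The sensor's actual in-pixel compression scheme is not detailed in the abstract, so this is only a generic sketch under that assumption:

    ```python
    def fuse_exposures(counts, window_lengths, full_scale=4095):
        """Estimate a photon rate from several exposure windows (ordered
        short to long) by using the longest window whose 12-bit counter
        (full scale 4095) has not saturated."""
        for c, t in zip(reversed(counts), reversed(window_lengths)):
            if c < full_scale:
                return c / t  # counts per unit exposure time
        return counts[0] / window_lengths[0]  # everything saturated

    # Dim pixel: the long window is used for a low-noise estimate.
    print(fuse_exposures([3, 36, 3600], [1, 10, 100]))      # -> 36.0
    # Bright pixel: long and mid windows clip; fall back to the short one.
    print(fuse_exposures([400, 4095, 4095], [1, 10, 100]))  # -> 400.0
    ```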

    Design and Characterization of a Current Assisted Photo Mixing Demodulator for ToF-Based 3D CMOS Image Sensor

    Due to the increasing demand for 3D vision systems, many recent efforts have concentrated on achieving complete 3D information analogous to human vision. Scannerless optical range imaging systems are emerging as an interesting alternative to conventional intensity imaging in a variety of applications, including pedestrian safety, biomedical applications, robotics, and industrial control. Several studies have accordingly aimed at producing 3D images, via stereo vision and structured light sources among other means, with high frame rate, accuracy, wide dynamic range, low power consumption, and low cost. Several types of optical techniques for 3D range measurement are available in the literature; among them, one of the most important and intensively investigated is the time-of-flight (TOF) principle. The third dimension, i.e. depth information, can be determined by correlating the reflected modulated light signal from the scene with a reference signal synchronous with the light source's modulation signal.

    CMOS image sensors are capable of integrating the image processing circuitry on the same chip as the light-sensitive elements. Compared to other imaging technologies, they have the advantages of lower power consumption and potentially lower price, merits that make this technology a strong candidate for next-generation solid-state imaging applications, even though CMOS process technologies are developed primarily for high-performance digital circuits. Different types of photodetectors have been proposed for three-dimensional imaging. A major performance improvement has come from the adoption of inherently mixing detectors that combine detection and demodulation in a single device. Basically, these devices use a modulated electric field to guide the photo-generated charge carriers to different collection sites in phase with a modulation signal. One very promising CMOS photonic demodulator based on substrate current modulation has recently been proposed; in this device the electric field penetrates deeper into the substrate, enhancing the charge separation and collection mechanism, so very good sensitivity and high demodulation efficiency can be achieved.

    The objective of this thesis has been the design and characterization of a Current Assisted Photo mixing Demodulator (CAPD) to be applied in a TOF-based 3D CMOS sensing system. First, an experimental investigation of the CAPD device was carried out. As a test vehicle, 10×10-pixel arrays were fabricated in 0.18 µm CMOS technology with a 10×10 µm² pixel size. The main properties of CAPD devices, such as the charge transfer characteristic, modulation contrast, noise performance, and non-linearity, have been simulated and experimentally evaluated. Experimental results demonstrate good DC charge separation efficiency and good dynamic demodulation capability up to 45 MHz. The influence of parameters such as wavelength, modulation frequency, and voltage on the device is also discussed. This test device is the first step towards a high-resolution TOF-based 3D CMOS image sensor. The demodulator structure, featuring a remarkably small 10 × 10 µm² pixel, is then used to realize a 120 × 160-pixel ranging sensor array fabricated in standard 0.18 µm CMOS technology. Initial results demonstrate that the demodulator structure is suitable for a real-time 3D image sensor.

    The prototype camera system is capable of providing real-time distance measurements of a scene through modulated-wave TOF measurements at a modulation frequency of 20 MHz. In the distance measurements, the sensor array provides a linear range from 1.2 m to 3.7 m with a maximum accuracy error of 3.3% and maximum pixel noise of 8.5% at 3.7 m. Extensive testing of the device and the prototype camera system has been carried out to gain insight into its characteristics, making it a good candidate for integration in large arrays for time-of-flight-based 3D CMOS image sensors in the near future.
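
    Two standard quantities are useful for interpreting these figures: the demodulation contrast between the two collection taps, and the unambiguous range set by the modulation frequency. A short sketch using the textbook definitions (not values or code from the thesis):

    ```python
    C = 299_792_458.0  # speed of light, m/s

    def demodulation_contrast(i_in_phase: float, i_out_of_phase: float) -> float:
        """How completely photocharge is steered to the in-phase tap:
        (Ia - Ib) / (Ia + Ib), where 1.0 means perfect separation."""
        return (i_in_phase - i_out_of_phase) / (i_in_phase + i_out_of_phase)

    def unambiguous_range(f_mod_hz: float) -> float:
        """Largest distance measurable before the phase wraps: c / (2 * f_mod)."""
        return C / (2.0 * f_mod_hz)

    # ~7.5 m at the prototype's 20 MHz; the reported 1.2-3.7 m linear
    # range sits comfortably inside this limit.
    print(unambiguous_range(20e6))
    ```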