10 research outputs found

    Bio-Inspired Multi-Spectral Image Sensor and Augmented Reality Display for Near-Infrared Fluorescence Image-Guided Surgery

Background: Cancer remains a major public health problem worldwide and poses a huge economic burden. Near-infrared (NIR) fluorescence image-guided surgery (IGS) uses molecular markers and imaging instruments to identify and locate tumors during surgical resection. Unfortunately, current state-of-the-art NIR fluorescence imaging systems are bulky and costly, and lack both fluorescence sensitivity under surgical illumination and co-registration accuracy between multimodal images. Additionally, the monitor-based display units are disruptive to the surgical workflow and are suboptimal at indicating the 3-dimensional position of labeled tumors. These obstacles have prevented the wide acceptance of NIR fluorescence imaging as the standard of care for cancer surgery. The goal of this dissertation is to enhance cancer treatment by developing novel image sensors and presenting the information to the physician with a holographic augmented reality (AR) display in intraoperative settings. Method: By mimicking the visual system of the Morpho butterfly, several single-chip color-NIR fluorescence image sensors and systems were developed with CMOS technologies and pixelated interference filters. Using a holographic AR goggle platform, an NIR fluorescence IGS display system was developed. Optoelectronic evaluation was performed on the prototypes to assess each component, and small and large animal models were used to verify the overall effectiveness of the integrated systems at cancer detection. Result: The single-chip bio-inspired multispectral logarithmic image sensor I developed outperforms state-of-the-art NIR fluorescence imaging instruments on the main performance indicators. The image sensors achieve up to 140 dB dynamic range. The sensitivity under surgical illumination reaches 6108 V/(mW/cm²), up to 25 times higher than that of existing instruments, and the signal-to-noise ratio reaches 56 dB, an 11 dB improvement.
Together these enable high-sensitivity fluorescence imaging under surgical illumination. The pixelated interference filters enable temperature-independent co-registration accuracy between multimodal images. Pre-clinical trials with small animal models demonstrate that the sensor can achieve up to 95% sensitivity and 94% specificity with tumor-targeted NIR molecular probes. The holographic AR goggle provides the physician with a non-disruptive 3-dimensional display in the clinical setup; it is the first display system that co-registers the virtual image with the wearer's eyes while allowing video-rate image transmission. The imaging system was tested in a veterinary operating room on canine patients with naturally occurring cancers. In addition, a time-domain pulse-width-modulation address-event-representation multispectral image sensor and a handheld multispectral camera prototype were developed. Conclusion: The major problems of current state-of-the-art NIR fluorescence imaging systems are successfully addressed. With the enhanced performance and user experience, the bio-inspired sensors and AR display system give medical care providers much-needed technology to enable more accurate, value-based healthcare.
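For context, the 140 dB dynamic-range figure quoted above is a logarithmic ratio of the largest to smallest resolvable signal. A minimal sketch of that standard relation (the helper name is my own, not from the dissertation):

```python
import math

def dynamic_range_db(largest_signal, noise_floor):
    """Dynamic range in decibels: 20*log10 of the ratio between the
    largest detectable signal and the noise floor."""
    return 20.0 * math.log10(largest_signal / noise_floor)

# A 140 dB dynamic range corresponds to a 10^7 : 1 signal ratio.
print(dynamic_range_db(10**7, 1))  # 140.0
```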

    CMOS SPAD-based image sensor for single photon counting and time of flight imaging

The ability to capture the arrival of a single photon is the fundamental limit to the detection of quantised electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and temporal precision is the pinnacle of photo-sensing. The creation of high-spatial-resolution, single-photon-sensitive, time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology offers numerous benefits across a wide field of applications. These CMOS devices will be suitable to replace high-sensitivity charge-coupled device (CCD) technology (electron-multiplied or electron-bombarded) at significantly lower cost and with comparable performance in low-light or high-speed scenarios. For example, with temporal resolution on the order of nano- and picoseconds, detailed three-dimensional (3D) pictures can be formed by measuring the time of flight (TOF) of a light pulse. High-frame-rate imaging of single photons can yield new capabilities in super-resolution microscopy, and the imaging of quantum effects such as the entanglement of photons may be realised. The goal of this research project is the development of such an image sensor by exploiting single photon avalanche diodes (SPADs) in an advanced imaging-specific 130 nm front-side-illuminated (FSI) CMOS technology. SPADs combine three key advantages over other imaging technologies: single photon sensitivity, picosecond temporal resolution, and the facility to be integrated in standard CMOS technology. Analogue techniques are employed to create an efficient and compact imager that is scalable to mega-pixel arrays. A SPAD-based image sensor is described with 320 × 240 pixels at a pitch of 8 μm and an optical efficiency, or fill factor, of 26.8%. Each pixel comprises a SPAD with a hybrid analogue counting and memory circuit that makes novel use of a low-power charge transfer amplifier. Global-shutter single photon counting images are captured.
These exhibit photon-shot-noise-limited statistics with unprecedentedly low input-referred noise at an equivalent of 0.06 electrons. The CMOS image sensor (CIS) trends of shrinking pixels, increasing array sizes, decreasing read noise, fast readout, and oversampled image formation are projected towards the formation of binary single photon imagers, or quanta image sensors (QIS). In a binary digital image capture mode, the sensor offers a look-ahead to the properties and performance of future QISs, with 20,000 binary frames per second readout at a bit error rate of 1.7 × 10⁻³. The bit density, or cumulative binary intensity, versus exposure follows the shape of the famous Hurter and Driffield densitometry curves of photographic film. Oversampled time-gated binary image capture is demonstrated, capturing 3D TOF images with 3.8 cm precision over a 60 cm range.

    3D Imaging based on Single Photon Detectors

This paper introduces a mathematical model to evaluate fast and cost-effective 3D image sensors based on single photon detectors. The model helps engineers evaluate design parameters based on operating conditions and system performance. Ranging is based on the time-of-flight principle using time-correlated single-photon counting (TCSPC) techniques. Two scenarios are discussed: (i) short-distance ranging indoors and (ii) medium-distance ranging outdoors. In the short-range scenario, the model predicts an accuracy of 0.25 cm at 5 m with 0.5 W of illumination and 1 klux of background light. In the medium-range scenario, a precision of 0.5 cm is predicted at 50 m with 20 W of illumination and 20 klux of background light.
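The time-of-flight principle behind both scenarios: range is half the round-trip path of a light pulse, so timing precision maps directly to range precision. A minimal sketch (the names are my own, not from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_m(round_trip_time_s):
    """Time-of-flight ranging: the pulse travels to the target and back,
    so the one-way distance is c*t/2."""
    return C * round_trip_time_s / 2.0

def range_precision_m(timing_jitter_s):
    """Range precision scales directly with the system's timing jitter."""
    return C * timing_jitter_s / 2.0

# The 0.25 cm figure corresponds to roughly 17 ps of effective timing
# resolution after TCSPC histogram processing.
```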

    CMOS Sensors for Time-Resolved Active Imaging

In the past decades, time-resolved imaging such as fluorescence lifetime or time-of-flight depth imaging has been extensively explored in biomedical and industrial fields because of its non-invasive characterization of material properties and remote sensing capability. Many studies have shown its potential and effectiveness in applications such as cancer detection and tissue diagnosis from fluorescence lifetime imaging, and gesture/motion sensing and geometry sensing from time-of-flight imaging. Nonetheless, time-resolved imaging has not been widely adopted because of the high cost of the systems and their performance limits. The research presented in this thesis focuses on the implementation of low-cost, real-time time-resolved imaging systems. Two image sensing schemes are proposed and implemented to address the major limitations. First, we propose a single-shot fluorescence lifetime image sensor for high-speed and high-accuracy imaging. To achieve high accuracy, previous approaches repeat the measurement to obtain multiple samples, resulting in long measurement times. The proposed method instead achieves both high speed and high accuracy by employing a pixel-level processor that takes and compresses the multiple samples within a single measurement. The pixels take multiple samples of the fluorescent optical signal at sub-nanosecond resolution and compute the average photon arrival time. Thanks to this multiple sampling, the measurement is insensitive to the shape or pulse width of the excitation, providing better accuracy and pixel uniformity than conventional rapid lifetime determination (RLD) methods, and improving imaging speed by orders of magnitude compared to conventional center-of-mass methods (CMM). Second, we propose a 3-D camera with a background light suppression scheme that adapts to various lighting conditions.
Previous 3-D cameras are not operable outdoors because they suffer from measurement errors and saturation under high background illumination. We propose a reconfigurable architecture with a column-parallel, discrete-time background light cancellation circuit. Implementing the processor at the column level allows an order-of-magnitude reduction in pixel size compared to existing pixel-level processors. The column-level approach also provides reconfigurable operation modes for optimal performance in all lighting conditions. For example, the sensor can operate at the best frame rate and resolution in the absence of background light; if background light saturates the sensor or increases the shot noise, the sensor can adjust resolution and frame rate through pixel binning and super-resolution techniques. This effectively enhances the well capacity of the pixel to compensate for the increased shot noise, and speeds up frame processing to handle the excessive background light. A fabricated prototype sensor can suppress background light of more than 100 klx while achieving a very small pixel size of 5.9 μm.
PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136950/1/eecho_1.pd
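The binning argument above can be made quantitative for a shot-noise-limited pixel: summing n pixels scales both signal and background charge by n, which improves SNR by sqrt(n) (about 6 dB for 2×2 binning). A sketch of that standard relation (not the chip's actual circuit model):

```python
import math

def shot_noise_snr_db(signal_e, background_e):
    """SNR of a shot-noise-limited measurement: signal electrons over
    the square root of all collected charge (signal + background)."""
    return 20.0 * math.log10(signal_e / math.sqrt(signal_e + background_e))

def binned_snr_db(signal_e, background_e, n_pixels):
    """Binning (e.g. n_pixels = 4 for 2x2) effectively enlarges the well:
    both signal and background charge scale with the number of binned
    pixels, at the cost of spatial resolution."""
    return shot_noise_snr_db(signal_e * n_pixels, background_e * n_pixels)
```

The sqrt(n) improvement is exactly the resolution-for-well-capacity trade the abstract describes.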

    High Performance CMOS Range Imaging

This work is dedicated to CMOS-based imaging, with an emphasis on noise modeling, characterization, and optimization, in order to contribute to the design of high-performance imagers in general and range imagers in particular. CMOS is known to be superior to CCD in terms of integration flexibility, but typically has to be enhanced to compete on parameters such as noise, dynamic range, or spectral response. Temporal noise is an important topic, since it is one of the most crucial parameters that ultimately limits the achievable performance and cannot be corrected. This thesis gathers the widespread theory on noise and extends it with a non-rigorous but potentially computationally efficient algorithm for estimating noise in time-sampled systems. The available devices of the 0.35 µm 2P4M CMOS process were characterized for their low-frequency noise performance and mutually compared through heuristic observations and comparison with the state of research.
These investigations set the foundation for a more rigorous treatment of noise behavior and are thus believed to improve the predictability of the noise performance of, e.g., image sensors. Many noise sources of CMOS active pixel sensors (APS) have been investigated in the past, and most of them can be minimized by using a pinned photodiode (PPD) as the photodetector. The remaining dominant noise sources are typically the reset noise and the noise of the readout circuitry. To improve the latter, an alternative JFET-based readout structure was designed, manufactured, and measured, demonstrating low-frequency noise performance approximately a factor of 100 better than that of standard MOSFETs. ToF is a key technology enabling new applications in, e.g., machine vision, automotive, surveillance, and entertainment. The competing continuous-wave (CW) principle is known to be prone to errors introduced by, e.g., high ambient illuminance levels. The pulse-modulation (PM) ToF principle is considered a promising method to supply the need for depth-map perception in harsh environmental conditions, but it requires a high-speed photodetector. This work contributed to two generations of LDPD-based ToF range image sensors and proposed a new approach to implementing the MSI PM ToF principle, verified to yield significantly faster charge transfer as well as better linearity, dark current, and matching performance. A non-linear, time-variant model of the realized sensor principle is provided that takes into account undesired phenomena such as the finite charge transfer speed and a parasitic sensitivity to light when the shutters should remain off, allowing investigation of large-signal characteristics, sensitivity, and precision. It was demonstrated that the model converges to a standard photodetector model and properly reproduces the measurements. Finally, the impact of these undesired phenomena on the range measurement performance is demonstrated.
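Reset (kTC) noise, named above as one of the two remaining dominant sources, has a closed-form magnitude set only by temperature and the sense-node capacitance. A sketch of that textbook relation (illustrative helpers, not code from the thesis):

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def ktc_noise_volts(cap_f, temp_k=300.0):
    """RMS reset noise voltage left on a capacitance after sampling:
    sqrt(kT/C)."""
    return math.sqrt(K_B * temp_k / cap_f)

def ktc_noise_electrons(cap_f, temp_k=300.0):
    """The same noise expressed as equivalent input-referred electrons:
    sqrt(kTC)/q. Note it grows with C, while the voltage noise shrinks."""
    return math.sqrt(K_B * temp_k * cap_f) / Q_E

# A 1 fF sense node at 300 K: about 2 mV rms, or roughly 13 electrons.
```

The opposite scaling of the two expressions with C is why correlated double sampling, enabled by the PPD, is the usual route around reset noise rather than capacitance sizing alone.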

    Miniature high dynamic range time-resolved CMOS SPAD image sensors

Since their integration in complementary metal oxide semiconductor (CMOS) technology in 2003, single photon avalanche diodes (SPADs) have inspired a new era of low-cost, highly integrated quantum-level image sensors. Their unique feature of discerning single photon detections, their ability to retain temporal information on every collected photon, and their amenability to high-speed image sensor architectures make them prime candidates for low-light and time-resolved applications. From the biomedical field of fluorescence lifetime imaging microscopy (FLIM) to extreme physical phenomena such as quantum entanglement, all the way to time-of-flight (ToF) consumer applications such as gesture recognition and, more recently, automotive light detection and ranging (LIDAR), huge steps in detector and sensor architectures have been made to address the design challenges of the pixel sensitivity/functionality trade-off, scalability, and the handling of large data rates. The goal of this research is to explore the hypothesis that, given state-of-the-art CMOS nodes and fabrication technologies, it is possible to design miniature SPAD image sensors for time-resolved applications with a small pixel pitch while maintaining both sensitivity and built-in functionality. Three key approaches are pursued: leveraging the innate area reduction of logic gates and the finer design rules of advanced CMOS nodes to balance the pixel's fill factor and processing capability; smarter pixel designs with configurable functionality; and novel system architectures that lift the processing burden off the pixel array and mediate data flow. Two pathfinder SPAD image sensors were designed and fabricated: a 96 × 40 planar front-side-illuminated (FSI) sensor with 66% fill factor at 8.25 μm pixel pitch in an industrialised 40 nm process, and a 128 × 120 3D-stacked backside-illuminated (BSI) sensor with 45% fill factor at 7.83 μm pixel pitch.
Both designs rely on a digital, configurable, 12-bit ripple counter pixel allowing for time-gated, shot-noise-limited photon counting. The FSI sensor was operated as a quanta image sensor (QIS), achieving an extended dynamic range in excess of 100 dB by utilising triple exposure windows and in-pixel data compression, which reduces data rates by a factor of 3.75×. The stacked sensor is the first demonstration of a wafer-scale SPAD imaging array with a 1-to-1 hybrid bond connection. Characterisation results of the detector and sensor performance are presented. Two further time-resolved 3D-stacked BSI SPAD image sensor architectures are proposed. The first is a fully integrated 5-wire interface system on chip (SoC), with built-in power management and off-focal-plane data processing and storage for high dynamic range as well as autonomous video-rate operation. Preliminary images and bring-up results of the fabricated 2 mm² sensor are shown. The second is a highly configurable design capable of simultaneous multi-bit oversampled imaging and programmable region-of-interest (ROI) time-correlated single photon counting (TCSPC) with on-chip histogram generation. The 6.48 μm pitch array has been submitted for fabrication. In-depth design details of both architectures are discussed.
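Multi-exposure dynamic range extension of the kind used here works because the shortest window sets the saturation point and the longest sets the noise floor; to first order, the extension equals the exposure ratio expressed in dB. A sketch of that general relation (the chip's actual triple-window scheme may differ in detail):

```python
import math

def extended_dr_db(single_window_dr_db, t_longest, t_shortest):
    """First-order HDR model for bracketed exposure windows: combined
    dynamic range grows by 20*log10 of the longest-to-shortest exposure
    ratio on top of the single-window dynamic range."""
    return single_window_dr_db + 20.0 * math.log10(t_longest / t_shortest)

# e.g. windows spanning a 64:1 exposure ratio add about 36 dB
```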

    A low-voltage CMOS-compatible time-domain photodetector, device & front end electronics

During the last decades, the usage of silicon photodetectors, both as stand-alone sensors and integrated in arrays, has grown tremendously. They are now found in almost any application and any market range, from leisure products to high-end scientific apparatus, including, among others, industrial, automotive, and medical equipment. The impressive growth in photodetector applications is closely linked to the development of CMOS technology, which now offers inexpensive and efficient analog and digital signal processing capabilities. Detectors are often integrated with their respective front end and application-specific digital circuit on the same silicon die, forming complete systems on chip; in some cases the detector itself is not on the same chip but is part of the same package. However, this trend of co-integrating the analog front end with digital circuits complicates the design of the analog part. The ever-decreasing supply voltage and the smaller transistors of advanced processes (both driven by the development of digital circuits) negatively impact the performance of analog structures and complicate their design. For photodetector systems, this most importantly translates into a degradation of dynamic range and signal-to-noise ratio. One way to circumvent the problem of low supply voltages is to shift the operation from the voltage domain to the time domain. The signal is then no longer constrained by the supply rails and analog amplification is avoided; it takes the form of a time-based modulation, such as pulse-width modulation or pulse-frequency modulation. Another advantage is that the output of a time-domain photodetection system can be interfaced directly with digital circuits. In this work, a new type of CMOS-compatible photodetector displaying intrinsic light-to-time conversion is proposed. Its physical structure consists of a MOS gate interleaved with a PN junction, with the MOS structure acting as a photogate.
The depletion region shrinks as photogenerated carriers fill the potential well. At some point, the anode of the PN structure is de-isolated from the rest of the detector, triggering a positive-feedback effect that leads to a very steep current increase through the PN junction. This produces a signal of very high amplitude, independent of light intensity, that can be interfaced almost directly with digital circuits, simplifying the front end compared to photodiode-based systems. The physical behavior of the device is analyzed with the help of TCAD simulations, and simple behavioral and shot-noise models are proposed. The device has been co-integrated with its driver and front end circuit in a standard CMOS process, and its characteristics have been measured with a custom-made measurement system. The effect of bias parameters on the performance of the sensor is also analyzed. The limitations of the device are discussed, the most important being dark current and linearity. Technological solutions, such as implementing the detector in silicon-on-insulator technology, are proposed to overcome these limitations. Finally, some application demonstrators have been realized, and other applications that could benefit from the detector are suggested, such as digital applications taking advantage of the latching behavior of the device and a photoplethysmography (PPG) system that uses a PLL-based control loop to minimize the emitting LED current.
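The intensity-to-time encoding described above can be captured in a first-order model: the detector fires once photogenerated charge fills the well, so the trigger time is inversely proportional to light intensity. A sketch under that assumption (the parameter names are my own, not from the thesis):

```python
def time_to_trigger_s(well_charge_c, photocurrent_a, dark_current_a=0.0):
    """First-order light-to-time model: the positive-feedback trigger
    fires when the integrated charge reaches the well capacity, so a
    brighter scene (larger photocurrent) gives an earlier trigger.
    Dark current adds a light-independent term, one of the limitations
    the thesis discusses."""
    return well_charge_c / (photocurrent_a + dark_current_a)
```

Doubling the photocurrent halves the trigger time, which is exactly the pulse-frequency/pulse-width encoding that lets the output drive digital logic directly.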

    Development of a Full-Field Time-of-Flight Range Imaging System

A full-field, time-of-flight, image ranging system, or 3D camera, has been developed from proof-of-principle to a working prototype, capable of determining the intensity and range for every pixel in a scene. The system can be adapted to the requirements of various applications, producing high-precision range measurements with sub-millimetre resolution, or high-speed measurements at video frame rates. Parallel data acquisition at each pixel provides high spatial resolution independent of operating speed. The range imaging system uses a heterodyne technique to measure time of flight indirectly. Laser diodes with highly diverging beams are intensity-modulated at radio frequencies and used to illuminate the scene. Reflected light is focused onto an image intensifier used as a high-speed optical shutter, which is modulated at a slightly different frequency from that of the laser source. The output of the shutter is a low-frequency beat signal, which is sampled by a digital video camera. Optical propagation delay is encoded in the phase of the beat signal, so from a captured time-variant intensity sequence, the beat signal phase can be measured to determine the range for every pixel in the scene. A direct digital synthesiser (DDS) was designed and constructed, capable of generating up to three outputs at frequencies beyond 100 MHz, with the relative frequency stability in excess of nine orders of magnitude required to control the laser and shutter modulation. Driver circuits were also designed to modulate the image intensifier photocathode at 50 Vpp, and four laser diodes with a combined power output of 320 mW, both over a frequency range of 10-100 MHz. The DDS, laser, and image intensifier responses are characterised. A unique method of measuring the image intensifier's optical modulation response is developed, requiring the construction of a picosecond pulsed laser source.
This characterisation revealed deficiencies in the measured responses, which were mitigated through hardware modifications where possible. The effects of remaining imperfections, such as modulation waveform harmonics and image intensifier irising, can be calibrated and removed from the range measurements during software processing using the characterisation data. Finally, a digital method of generating the high-frequency modulation signals using an FPGA to replace the analogue DDS is developed, providing a highly integrated solution, reducing complexity, and enhancing flexibility. In addition, a novel modulation coding technique is developed to remove the undesirable influence of waveform harmonics from the range measurement without extending the acquisition time. Combined with a proposed modification to the laser illumination source, the digital system can enhance range measurement precision and linearity. From this work, a flexible full-field image ranging system is successfully realised. The system is demonstrated operating in a high-precision mode with sub-millimetre depth resolution, and in a high-speed mode at video update rates (25 fps), in both cases providing high (512 × 512) spatial resolution over distances of several metres.
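The phase-to-range step of the heterodyne scheme can be sketched in a few lines: the fundamental DFT bin of the sampled beat signal recovers its phase, and the phase maps to range through the modulation frequency. Illustrative helpers under those assumptions (not the system's actual processing code):

```python
import math

def phase_from_samples(samples):
    """Recover the phase of a beat signal s[k] ~ cos(2*pi*k/n + phi)
    from n equally spaced samples spanning one beat period, by
    correlating against the fundamental DFT bin."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    return math.atan2(-im, re)

def range_m(phase_rad, mod_freq_hz, c=299_792_458.0):
    """The optical propagation delay is phi/(2*pi*f_mod); halving for
    the round trip gives range = c*phi / (4*pi*f_mod)."""
    return c * phase_rad / (4.0 * math.pi * mod_freq_hz)
```

Because phase wraps at 2*pi, the unambiguous range is c/(2*f_mod), which is why modulation frequency trades range precision against maximum distance.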

    A QVGA-size CMOS time-of-flight range image sensor with background light charge draining structure

    No full text