84 research outputs found

    Time-of-Flight Sensors in standard CMOS technologies

    The goal of this PhD thesis is the design of time-of-flight sensors in standard CMOS technologies. For this, device-level and circuit-level tasks will be addressed. In the first case, we will model and characterize the sensory structure. In the second case, we will design the circuitry needed to read the information captured by the sensors. The thesis will begin with the study of non-conventional photosensor structures in standard CMOS technologies and will continue with the design of specific circuitry in this technology. Finally, the selected design will be fabricated and tested.

    Distance Measurement Error in Time-of-Flight Sensors Due to Shot Noise

    Unlike other noise sources, which can be reduced or eliminated by different signal processing techniques, shot noise is an ever-present noise component in any imaging system. In this paper, we present an in-depth study of the impact of shot noise on time-of-flight sensors in terms of the error introduced in the distance estimation. The paper addresses the effect of parameters such as the size of the photosensor, the background and signal power, and the integration time, and the resulting design trade-offs. The study is demonstrated with different numerical examples, which show that, in general, the phase-shift determination technique with two background measurements is the most suitable for pixel arrays of large resolution. This work has been partially funded by the Spanish government project TEC2012-38921-C02-02 MINECO (FEDER) and by the Xunta de Galicia through EM2013/038, EM2014/012, AE CITIUS (CN2012/151, FEDER) and GPC2013/040 (FEDER).
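
    As a rough illustration of how shot noise propagates into the distance estimate of a phase-shift ToF pixel, the Monte Carlo sketch below draws Poisson-distributed counts for a four-phase measurement; the modulation frequency and the signal/background levels are illustrative assumptions, not values from the paper.

        import numpy as np

        C = 3e8        # speed of light [m/s]
        F_MOD = 20e6   # modulation frequency [Hz] (assumed)
        D_TRUE = 2.0   # true distance [m]
        A_SIG = 500.0  # mean signal electrons per phase bucket (assumed)
        A_BG = 2000.0  # mean background electrons per bucket (assumed)

        rng = np.random.default_rng(0)
        phi_true = 4 * np.pi * F_MOD * D_TRUE / C

        # Mean counts of the four phase samples; background adds a common offset.
        offsets = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
        means = A_BG + A_SIG * (1 + np.cos(phi_true + offsets)) / 2

        # Shot noise: each bucket is an independent Poisson draw around its mean.
        a = rng.poisson(means, size=(100_000, 4))

        # Phase estimate; the common background offset cancels in the differences.
        phi_hat = np.arctan2(a[:, 3] - a[:, 1], a[:, 0] - a[:, 2]) % (2 * np.pi)
        d_hat = C * phi_hat / (4 * np.pi * F_MOD)
        print(f"shot-noise distance std: {d_hat.std() * 100:.2f} cm")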

    Optimized Distance Measurement with 3D-CMOS Image Sensor and Real Time Processing of the 3D Data for Applications in Automotive and Safety Engineering

    This thesis describes and characterizes an advanced range camera for the distance range from 2 m to 25 m, together with novel real-time 3D image processing algorithms for object detection, tracking and classification based on the three-dimensional features of the camera output data. The technology is based on a 64x8 pixel CMOS image sensor capable of capturing three-dimensional images. This is accomplished by indirect time-of-flight measurement of NIR laser pulses emitted by the camera and reflected by the objects in its field of view. An analytic description of the measurement signals and a derivation of the distance measuring algorithms are given, as well as a comparative examination of the algorithms by calculation, simulation and experiment; the MDSI3 algorithm showed the best results over the whole measurement range and was therefore chosen as the standard method of the distance measuring system. A camera prototype was developed with a measurement accuracy in the centimeter range at an image repetition rate of up to 100 Hz; a detailed evaluation of the components and of the overall system is presented. The main aspects are the characterization of the time-critical measurement signals, of the system noise, and of the distance measuring capabilities. Furthermore, this thesis introduces novel real-time image processing of the camera's output data stream, aiming at the detection of objects located in the observed area and the derivation of reliable position, speed and acceleration estimates. The segmentation algorithm utilizes all three spatial dimensions of the position information as well as the intensity values, and thus yields a significant improvement over segmentation in conventional 2D images. Position, velocity and acceleration of the segmented objects are estimated by means of Kalman filtering in 3D space. The filter is dynamically adapted to the measurement properties of the corresponding object to account for changes in the data properties. The performance of the image processing algorithms is demonstrated on example scenes.
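
    A simplified illustration of the gated indirect time-of-flight principle behind such a sensor (the thesis's MDSI3 algorithm is a more elaborate variant): distance is recovered from the charge ratio of two shutter windows. The pulse width below is an assumed value chosen to roughly cover the 25 m range.

        C = 3e8       # speed of light [m/s]
        T_P = 170e-9  # laser pulse width [s] (assumed)

        def distance_from_gates(q1: float, q2: float, t_p: float = T_P) -> float:
            """Recover target distance from two shutter charges.

            Gate 1 opens with the emitted pulse and integrates the part of the
            echo arriving within t_p; gate 2 integrates the remainder. The echo
            delay t_d = t_p * q2 / (q1 + q2) is independent of target
            reflectivity, and d = c * t_d / 2.
            """
            t_d = t_p * q2 / (q1 + q2)
            return 0.5 * C * t_d

        # Equal charge in both gates puts the echo delay at T_P / 2,
        # i.e. a target at c * T_P / 4 = 12.75 m for these values.
        print(distance_from_gates(q1=1000.0, q2=1000.0))  # -> 12.75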

    Large-scale single-photon imaging

    Benefiting from its single-photon sensitivity, the single-photon avalanche diode (SPAD) array has been widely applied in fields such as fluorescence lifetime imaging and quantum computing. However, large-scale high-fidelity single-photon imaging remains a major challenge, due to the complex hardware manufacturing of SPAD arrays and their heavy noise disturbance. In this work, we introduce deep learning into SPAD imaging, enabling super-resolution single-photon imaging over an order of magnitude, with significant enhancement of bit depth and imaging quality. We first studied the complex photon flow model of SPAD electronics to accurately characterize multiple physical noise sources, and collected a real SPAD image dataset (64 × 32 pixels, 90 scenes, 10 different bit depths, 3 different illumination fluxes, 2790 images in total) to calibrate the noise model parameters. With this real-world physical noise model, we for the first time synthesized a large-scale realistic single-photon image dataset (image pairs at 5 different resolutions up to megapixel size, 17250 scenes, 10 different bit depths, 3 different illumination fluxes, 2.6 million images in total) for subsequent network training. To tackle the severe super-resolution challenge of SPAD inputs with low bit depth, low resolution, and heavy noise, we further built a deep transformer network with a content-adaptive self-attention mechanism and gated fusion modules, which can mine global contextual features to remove multi-source noise and extract full-frequency details. We applied the technique to a series of experiments including macroscopic and microscopic imaging, microfluidic inspection, and Fourier ptychography. The experiments validate the technique's state-of-the-art super-resolution SPAD imaging performance, with more than 5 dB improvement in PSNR over existing methods.
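
    A hedged sketch of the kind of physics-based measurement model such a pipeline calibrates: binary SPAD frames drawn from Poisson photon arrivals with a photon detection probability (PDP) and a dark-count term. All parameter values below are illustrative assumptions, not the paper's calibrated figures.

        import numpy as np

        def synth_spad_stack(radiance, n_frames=256, pdp=0.3, dark_rate=0.05,
                             rng=None):
            """Simulate a stack of binary SPAD frames for a normalized radiance map.

            radiance: 2D array in [0, 1], interpreted as mean photons/pixel/frame.
            Returns per-pixel counts over n_frames (bit depth log2(n_frames)).
            """
            rng = rng or np.random.default_rng(0)
            lam = radiance + dark_rate              # photon rate + dark counts
            # A frame fires if >= 1 detected event: p = 1 - exp(-pdp * lam).
            p_fire = 1.0 - np.exp(-pdp * lam)
            frames = rng.random((n_frames, *radiance.shape)) < p_fire
            return frames.sum(axis=0)               # noisy low-bit-depth image

        counts = synth_spad_stack(np.linspace(0, 1, 64 * 32).reshape(32, 64))
        print(counts.shape, counts.max())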

    Advanced Image Acquisition, Processing Techniques and Applications

    "Advanced Image Acquisition, Processing Techniques and Applications" is the first book of a series that provides image processing principles and practical software implementation on a broad range of applications. The book integrates material from leading researchers on Applied Digital Image Acquisition and Processing. An important feature of the book is its emphasis on software tools and scientific computing in order to enhance results and arrive at problem solution

    CMOS SPAD-based image sensor for single photon counting and time of flight imaging

    The ability to capture the arrival of a single photon is the fundamental limit of detection of quantised electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and temporal precision is the pinnacle of photo-sensing. The creation of high-spatial-resolution, single-photon-sensitive, time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology offers numerous benefits in a wide field of applications. These CMOS devices are suitable to replace high-sensitivity charge-coupled device (CCD) technology (electron-multiplied or electron-bombarded) at significantly lower cost and with comparable performance in low-light or high-speed scenarios. For example, with temporal resolution on the order of nano- and picoseconds, detailed three-dimensional (3D) pictures can be formed by measuring the time of flight (TOF) of a light pulse. High-frame-rate imaging of single photons can yield new capabilities in super-resolution microscopy, and the imaging of quantum effects such as the entanglement of photons may be realised. The goal of this research project is the development of such an image sensor by exploiting single-photon avalanche diodes (SPAD) in an advanced imaging-specific 130 nm front side illuminated (FSI) CMOS technology. SPADs have three key combined advantages over other imaging technologies: single-photon sensitivity, picosecond temporal resolution and the facility to be integrated in standard CMOS technology. Analogue techniques are employed to create an efficient and compact imager that is scalable to mega-pixel arrays. A SPAD-based image sensor is described with 320 by 240 pixels at a pitch of 8 μm and an optical efficiency, or fill-factor, of 26.8%. Each pixel comprises a SPAD with a hybrid analogue counting and memory circuit that makes novel use of a low-power charge transfer amplifier. Global-shutter single-photon-counting images are captured. These exhibit photon-shot-noise-limited statistics with unprecedentedly low input-referred noise, equivalent to 0.06 electrons. The CMOS image sensor (CIS) trends of shrinking pixels, increasing array sizes, decreasing read noise, fast readout and oversampled image formation are projected towards the formation of binary single-photon imagers, or quanta image sensors (QIS). In a binary digital image capture mode, the image sensor offers a look-ahead to the properties and performance of future QISs, with 20,000 binary frames per second readout at a bit error rate of 1.7 × 10⁻³. The bit density, or cumulative binary intensity, against exposure of this image sensor follows the shape of the famous Hurter and Driffield densitometry curves of photographic film. Oversampled time-gated binary image capture is demonstrated, capturing 3D TOF images with 3.8 cm precision over a 60 cm range.
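
    The film-like response mentioned above follows directly from Poisson statistics: at quanta exposure H (mean photoelectrons per binary frame), a binary pixel fires with probability D = 1 − e^(−H), which traces an S-shaped curve on a log-exposure axis, much like a Hurter–Driffield curve. A minimal numerical sketch:

        import numpy as np

        H = np.logspace(-2, 1, 7)   # quanta exposure (mean e- per binary frame)
        D = 1.0 - np.exp(-H)        # expected binary bit density
        for h, d in zip(H, D):
            print(f"H = {h:6.2f}  ->  bit density = {d:.3f}")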

    A vector light sensor for 3D proximity applications: Designs, materials, and applications

    In this thesis, a three-dimensional design of a vector light sensor for angular proximity detection applications is realized. 3D-printed mesa pyramid designs, along with commercial photodiodes, were used as a prototype for the experimental verification of single-pixel and two-pixel systems. The operation principles, microfabrication details, and experimental verification of micro-sized mesa and CMOS-compatible inverse vector light pixels in silicon are presented, where p-n junctions are created on the pyramid's facets as photodiodes. The one-pixel system allows for angular estimation, providing the spatial proximity of incident light in 2D and 3D. A two-pixel system was further demonstrated to provide wider-angle detection. Multilayered carbon nanotube, graphene, and vanadium oxide thin films, as well as carbon nanoparticle-based composites, were studied along with cost-effective deposition processes to incorporate these films onto 3D mesa structures. Combining such design and material optimizations produces sensors with a unique design, a simple fabrication process, and readout integrated circuit compatibility. Finally, an approach to utilizing such sensors in smart energy system applications as solar trackers, for automated power generation optimization, is explored. However, integration optimizations in silicon PV solar modules were first required. In this multi-step approach, custom composite materials are utilized to significantly enhance the reliability of bifacial silicon PV solar modules. Thermal measurements and process optimizations in the development of imec's novel interconnection technology for solar applications are discussed. The interconnection technology is used to improve solar modules' performance and enhance the connectivity between a module's cells and components. This essential precursor allows for the effective powering and consistent operation of standalone module-associated components, such as the solar tracker and Internet of Things sensing devices typically used in remote monitoring of module performance or in smart energy systems. Such integrations and optimizations in the interconnection technology improve solar module performance and reliability while further reducing material and production costs. These advantages further promote solar (Si) PV as a continuously evolving renewable energy source that is compatible with new waves of smart city technologies and systems.
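
    The angular estimation of a single pyramid pixel can be illustrated with a simple Lambertian model: each facet photodiode current scales with the cosine between the incident light direction and that facet's normal, so the direction follows from the current ratios. A hedged sketch under an assumed facet geometry (a 54.7° anisotropically etched silicon pyramid), not the thesis's actual design parameters:

        import numpy as np

        TILT = np.deg2rad(54.7)  # assumed facet tilt of an etched Si pyramid
        s, c = np.sin(TILT), np.cos(TILT)
        NORMALS = np.array([[ s, 0, c], [-s, 0, c], [0,  s, c], [0, -s, c]])

        def facet_currents(light_dir):
            """Photocurrents proportional to the illuminated cosine per facet."""
            return np.clip(NORMALS @ light_dir, 0.0, None)

        def estimate_direction(currents):
            """Least-squares inversion of the cosine model (valid when all
            four facets are lit, i.e. near-normal incidence)."""
            d, *_ = np.linalg.lstsq(NORMALS, currents, rcond=None)
            return d / np.linalg.norm(d)

        true_dir = np.array([0.2, -0.1, 1.0])
        true_dir /= np.linalg.norm(true_dir)
        print(estimate_direction(facet_currents(true_dir)))  # ~ true_dir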

    CMOS Design of an On-Chip Vision System for Very High Speed Applications

    This thesis presents architectures, circuits and chips for the implementation of CMOS vision sensors with embedded parallel processing. The thesis reports two chips, namely the Q-Eye chip and the Eye-RIS_VSoC chip, and two vision systems built from these chips together with additional off-chip circuitry, such as FPGAs: the Eye-RIS_v1 system and the Eye-RIS_v2 system. The chips and systems are conceived to perform vision tasks at very high speed and with moderate power consumption.
    The resulting vision systems are also compact, and therefore advantageous in terms of SWaP factors when compared with conventional architectures consisting of a standard image sensor followed by digital processors. The key to these advantages in SWaP and speed lies in the use of sensor-processors, rather than mere sensors, at the front-end of the vision systems. These sensor-processors embed mixed-signal programmable processors inside the pixel and are able both to acquire images and to pre-process them to extract features, remove redundant information and reduce the data throughput for later processing. The core of the thesis is the Q-Eye sensor-processor, used as the front-end of the Eye-RIS systems. It embeds a processing architecture composed of mixed-signal processors distributed per pixel. Its pixels are therefore complex multi-functional structures: they are programmable, incorporate memories and interact with their neighbors to carry out a variety of operations, including linear convolutions with programmable masks; time- and signal-controlled diffusions, by means of a resistive grid embedded in the focal plane; image arithmetic; signal-dependent program flow; conversion between gray-scale and binary image domains; logic operations on binary images; and morphological operations on binary images. Compared with previous multi-function pixels and sensor-processors, the Q-Eye brings, among others, the following advantages: higher image quality and better performance of the functionalities embedded on chip; higher operating speed and better management of the energy budget; and more versatility for integration in industrial vision systems. In fact, the Eye-RIS systems are the first industrial vision systems featuring parallel, distributed and progressive processing; reliable, robust mixed-signal processors with controlled errors; and distributed programmability. The thesis includes detailed descriptions of the architecture and circuits used in the Q-Eye pixel, in the Q-Eye chip itself and in the vision systems built around this chip, together with examples of the different chips in operation.
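
    A software analogue of the per-pixel operations listed above may help fix ideas; the real Q-Eye executes them as mixed-signal circuits in the focal plane, whereas this sketch uses an assumed kernel and threshold:

        import numpy as np
        from scipy import ndimage

        img = np.random.default_rng(0).random((64, 64))   # stand-in frame

        # Linear convolution with a programmable 3x3 mask (here: a Laplacian).
        mask = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
        edges = ndimage.convolve(img, mask, mode="nearest")

        # Time-controlled diffusion ~ repeated local averaging (Gaussian blur).
        diffused = ndimage.gaussian_filter(img, sigma=2.0)

        # Gray-scale to binary conversion, then binary morphology.
        binary = edges > 0.15
        opened = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
        print(opened.sum(), "pixels survive the opening")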

    NASA Space Engineering Research Center Symposium on VLSI Design

    The NASA Space Engineering Research Center (SERC) is proud to offer, at its second symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories and the electronics industry. These featured speakers share insights into next-generation advances that will serve as a basis for future VLSI design, addressing questions of reliability in the space environment along with new directions in CAD and design.

    JellyNet: The convolutional neural network jellyfish bloom detector

    Coastal industries face disruption on a global scale due to the threat of large blooms of jellyfish, which can decimate coastal fisheries and clog the water intake systems of desalination and nuclear power plants, leading to losses of revenue and power output. This paper presents JellyNet: a convolutional neural network (CNN) jellyfish bloom detection model trained on high-resolution remote sensing imagery collected by unmanned aerial vehicles (UAVs). JellyNet provides the detection capability for an early (6–8 h) bloom warning system. 1539 images were collected from flights at two locations: Croabh Haven, UK, and Pruth Bay, Canada. The training/test dataset was manually labelled and split into two classes: 'Bloom present' and 'No bloom present'. 500 × 500 pixel images were used to improve fine-grained pattern detection of the jellyfish blooms. Model testing was completed using a 75%/25% training/test split, with hyperparameters selected prior to model training using a held-out validation dataset. Transfer learning using the VGG-16 architecture and a jellyfish-bloom-specific binary classifier surpassed an accuracy of 90%; test performance peaked at 97.5% accuracy. This paper exhibits the first example of a high-resolution, multi-sensor jellyfish bloom detection capability, with robustness demonstrated across two oceans to tackle real-world detection challenges.
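
    A minimal sketch of the transfer-learning setup named above: a frozen VGG-16 backbone with a new binary bloom/no-bloom head. The 500 × 500 input size matches the paper; the head width, dropout and optimizer are assumptions, not the paper's exact configuration.

        import tensorflow as tf

        # ImageNet-pretrained VGG-16 backbone, frozen for transfer learning.
        base = tf.keras.applications.VGG16(
            weights="imagenet", include_top=False, input_shape=(500, 500, 3))
        base.trainable = False

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(256, activation="relu"),   # assumed head width
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # bloom present?
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        # model.fit(train_ds, validation_data=val_ds, epochs=10)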