
    CMOS IMAGE SENSORS FOR LAB-ON-A-CHIP MICROSYSTEM DESIGN

    The work described herein serves as a foundation for the development of CMOS imaging in lab-on-a-chip microsystems. Lab-on-a-chip (LOC) systems attempt to emulate the functionality of a cell biology lab by incorporating multiple sensing modalities into a single microscale system. LOC systems are applicable to drug development, implantable sensors, cell-based biochemical detectors and radiation detectors. The common theme across these systems is achieving performance under severe resource constraints, including noise, bandwidth, power and size. The contributions of this work are in the areas of two core lab-on-a-chip imaging functions: object detection and optical measurements.

    Applications of Silicon Retinas: from Neuroscience to Computer Vision

    Traditional visual sensor technology is firmly rooted in the concept of sequences of image frames. The sequence of stroboscopic images in these "frame cameras" is very different from the information running from the retina to the visual cortex. While conventional cameras have improved in the direction of smaller pixels and higher frame rates, the basics of image acquisition have remained the same. Event-based vision sensors were originally known as "silicon retinas" but are now widely called "event cameras." They are a new type of vision sensor that takes inspiration from the mechanisms developed by nature for the mammalian retina and suggests a different way of perceiving the world. As in the neural system, the sensed information is encoded in a train of spikes, or so-called events, comparable to the action potentials generated in the nerve. Event-based sensors produce sparse and asynchronous output that represents informative changes in the scene. These sensors have advantages in terms of fast response, low latency, high dynamic range, and sparse output. All these characteristics are appealing for computer vision and robotic applications, increasing the interest in this kind of sensor. However, since the sensor's output is very different, algorithms designed for frames need to be rethought and re-adapted. This thesis focuses on several applications of event cameras in scientific scenarios. It aims to identify where they can make a difference compared to frame cameras. The presented applications use the Dynamic Vision Sensor (DVS), the event camera developed by the Sensors Group of the Institute of Neuroinformatics, University of Zurich and ETH. To explore some applications in more extreme situations, the first chapters of the thesis focus on the characterization of several advanced versions of the standard DVS. Low light represents a challenging condition for every vision sensor. Taking inspiration from standard Complementary Metal Oxide Semiconductor (CMOS) technology, DVS pixel performance in low-light scenarios can be improved, increasing sensitivity and quantum efficiency, by using back-side illumination. This thesis characterizes the so-called Back Side Illumination DAVIS (BSI DAVIS) camera and shows results from its application to calcium imaging of neural activity. The BSI DAVIS has shown better performance in low-light scenes due to its high quantum efficiency (QE) of 93% and proved to be the most suitable technology for microscopy applications. The BSI DAVIS allows detection of fast dynamic changes in neural fluorescence imaging using the green fluorescent calcium indicator GCaMP6f. Event camera advances have pushed the exploration of event-based cameras in computer vision tasks. Chapters of this thesis focus on two of the most active research areas in computer vision: human pose estimation and hand gesture classification. Both chapters report the datasets collected to achieve the task, fulfilling the continuous need for data for this kind of new technology. The Dynamic Vision Sensor Human Pose dataset (DHP19) is an extensive collection of 33 whole-body human actions from 17 subjects. The chapter presents the first benchmark neural network model for 3D pose estimation using DHP19. The network achieves a mean error of less than 8 mm in 3D space, which is comparable with frame-based Human Pose Estimation (HPE) methods.
The gesture classification chapter reports an application running on a mobile device and explores future developments in the direction of embedded portable low-power devices for online processing. The sparse output from the sensor suggests using a small model with a reduced number of parameters and low power consumption. The thesis also describes pilot results from two other scientific imaging applications, raindrop size measurement and laser speckle analysis, presented in the appendices.
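As a rough illustration of the event-generation principle summarised above, the sketch below simulates DVS-style events from a stack of conventional frames: a pixel fires an ON or OFF event whenever its log intensity has changed by more than a contrast threshold since its last event. The function name, threshold value and frame-based input are our own assumptions for illustration, not the thesis code.

```python
# Illustrative sketch (not the thesis code): the basic DVS event-generation
# principle -- a pixel emits an ON/OFF event whenever the log intensity
# changes by more than a contrast threshold since its last event.
import numpy as np

def dvs_events_from_frames(frames, timestamps, threshold=0.2):
    """Simulate DVS-style events from a stack of intensity frames.

    frames: (N, H, W) array of linear intensities; timestamps: length-N array.
    Returns a list of (t, x, y, polarity) tuples.
    """
    log_ref = np.log(frames[0] + 1e-6)        # per-pixel reference log intensity
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame + 1e-6)
        diff = log_i - log_ref
        fired = np.abs(diff) >= threshold      # pixels whose change crossed the threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1   # ON (+1) or OFF (-1) event
            events.append((t, x, y, polarity))
        log_ref[fired] = log_i[fired]          # reset the reference only where events fired
    return events
```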

    CMOS pixel sensor development: a fast read-out architecture with integrated zero suppression

    CMOS Monolithic Active Pixel Sensors (MAPS) have demonstrated their strong potential for tracking devices, particularly for flavour tagging. They are foreseen to equip several vertex detectors and beam telescopes. Most applications require high read-out speed, which requires sensors to feature digital output with integrated zero suppression. The most recent developments of MAPS at IPHC and IRFU addressing this issue will be reviewed. The design architecture, combining a pixel array, column-level discriminators and zero-suppression circuits, will be presented. Each pixel features a preamplifier and a correlated double sampling (CDS) micro-circuit reducing the temporal and fixed pattern noise. The sensor is fully programmable and can be monitored. It will equip experimental apparatus starting data taking in 2009/2010.
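To make the two operations named in the abstract concrete, here is a minimal software sketch, under our own assumptions rather than the IPHC/IRFU circuit details, of correlated double sampling (subtracting a reset sample from a signal sample) followed by zero suppression (keeping only pixels whose CDS value exceeds a discriminator threshold).

```python
# Minimal sketch (assumed, not the actual sensor design): correlated double
# sampling followed by zero suppression, the two operations named above.
import numpy as np

def cds_and_zero_suppress(reset_samples, signal_samples, threshold):
    """reset_samples, signal_samples: (H, W) arrays from the same pixels.
    Returns a sparse list of (row, col, value) for pixels above threshold."""
    cds = signal_samples - reset_samples       # CDS removes offset and reset-level components
    rows, cols = np.nonzero(cds > threshold)   # discriminator: keep only significant pixels
    return [(int(r), int(c), float(cds[r, c])) for r, c in zip(rows, cols)]
```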

    Low-power CMOS digital-pixel Imagers for high-speed uncooled PbSe IR applications

    This PhD dissertation describes the research and development of a new low-cost medium-wavelength infrared (MWIR) monolithic imager technology for high-speed uncooled industrial applications. It builds on the latest technological advances in the field of vapour phase deposition (VPD) PbSe-based MWIR detection accomplished by the industrial partner NIT S.L., adding fundamental knowledge through the investigation of novel VLSI analog and mixed-signal design techniques at circuit and system level for the development of the read-out integrated device attached to the detector. The work rests on the hypothesis that, with these design techniques, current standard inexpensive CMOS technologies fulfil all operational requirements of the VPD PbSe detector in terms of connectivity, reliability, functionality and scalability to integrate the device. The resulting monolithic PbSe-CMOS camera must consume very low power, operate at kHz frequencies, exhibit good uniformity and fit the CMOS read-out active pixels in the compact pitch of the focal plane, all while addressing the particular characteristics of the MWIR detector: high dark-to-signal ratios, large input parasitic capacitance values and remarkable mismatch in PbSe integration. In order to meet these demands, this thesis proposes null inter-pixel crosstalk vision sensor architectures based on a digital-only focal plane array (FPA) of configurable pixel sensors. Each digital pixel sensor (DPS) cell is equipped with fast communication modules, self-biasing, offset cancellation, an analog-to-digital converter (ADC) and fixed pattern noise (FPN) correction. In-pixel power consumption is minimized by comprehensive use of MOSFET subthreshold operation. The main aim is to promote the integration of PbSe-based infrared (IR) image-sensing technologies so as to widen their use, not only in distinct scenarios, but also at different stages of PbSe-CMOS integration maturity. For this purpose, we propose to investigate a comprehensive set of functional blocks distributed across two parallel approaches:
• Frame-based "Smart" MWIR imaging based on new DPS circuit topologies with gain and offset FPN correction capabilities. This research line exploits the detector pitch to offer fully digital programmability at pixel level and complete functionality with input parasitic capacitance compensation and internal frame memory.
• Frame-free "Compact"-pitch MWIR vision based on a novel DPS lossless analog integrator and configurable temporal difference, combined with asynchronous communication protocols inside the focal plane. This strategy is conceived to allow extensive pitch compaction and an increase in readout speed through the suppression of in-pixel digital filtering and the use of dynamic bandwidth allocation in each pixel of the FPA.
In order to make the electrical validation of the first prototypes independent of the expensive PbSe deposition processes at wafer level, the investigation also extends to the development of affordable sensor emulation strategies and integrated test platforms specifically oriented to image read-out integrated circuits. DPS cells, imagers and test chips have been fabricated and characterized in standard 0.15 μm 1P6M, 0.35 μm 2P4M and 2.5 μm 2P1M CMOS technologies, all as part of research projects with industrial partnership.
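The gain and offset FPN correction mentioned above can be illustrated with a generic two-point calibration, sketched below under our own assumptions (a dark frame for offsets and a uniform flat-field frame for gains); it is not the thesis' in-pixel circuitry, only the arithmetic such a correction performs.

```python
# Hedged illustration (generic two-point correction, not the thesis circuits):
# per-pixel gain and offset fixed-pattern-noise correction.
import numpy as np

def calibrate_fpn(dark_frame, flat_frame, flat_target):
    """Estimate per-pixel offset and gain from a dark frame and a uniform
    (flat-field) frame; flat_target is the desired uniform output level."""
    offset = dark_frame
    gain = flat_target / (flat_frame - dark_frame + 1e-12)
    return offset, gain

def correct_fpn(raw_frame, offset, gain):
    # Remove the per-pixel offset, then equalize the per-pixel gain.
    return (raw_frame - offset) * gain
```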
The research has led to the first high-speed uncooled frame-based IR quantum imager monolithically fabricated in a standard VLSI CMOS technology, and has given rise to the Tachyon series [1], a new line of commercial IR cameras used in real-time industrial, environmental and transportation control systems. The frame-free architectures investigated in this work represent a firm step forward in pushing pixel pitch and system bandwidth up to the limits imposed by the evolving PbSe detector in future generations of the device.

    CMOS Sensors for Time-Resolved Active Imaging

    In the past decades, time-resolved imaging such as fluorescence lifetime or time-of-flight depth imaging has been extensively explored in biomedical and industrial fields because of its non-invasive characterization of material properties and remote sensing capability. Many studies have shown its potential and effectiveness in applications such as cancer detection and tissue diagnosis from fluorescence lifetime imaging, and gesture/motion sensing and geometry sensing from time-of-flight imaging. Nonetheless, time-resolved imaging has not been widely adopted due to the high cost of the systems and their performance limits. The research presented in this thesis focuses on the implementation of low-cost real-time time-resolved imaging systems. Two image sensing schemes are proposed and implemented to address the major limitations. First, we propose a single-shot fluorescence lifetime image sensor for high-speed and high-accuracy imaging. To achieve high accuracy, previous approaches repeat the measurement for multiple sampling, resulting in long measurement times. The proposed method, in contrast, achieves both high speed and accuracy at the same time by employing a pixel-level processor that takes and compresses the multiple samples within a single measurement time. The pixels in the sensor take multiple samples of the fluorescent optical signal at sub-nanosecond resolution and compute the average photon arrival time of the optical signal. Thanks to the multiple sampling of the signal, the measurement is insensitive to the shape or the pulse width of the excitation, providing better accuracy and pixel uniformity than conventional rapid lifetime determination (RLD) methods. The proposed single-shot image sensor also improves the imaging speed by orders of magnitude compared to other conventional center-of-mass methods (CMM). Second, we propose a 3-D camera with a background light suppression scheme that is adaptable to various lighting conditions. Previous 3-D cameras are not operable in outdoor conditions because they suffer from measurement errors and saturation problems under high background light illumination. We propose a reconfigurable architecture with a column-parallel discrete-time background light cancellation circuit. Implementing the processor at the column level allows an order-of-magnitude reduction in pixel size compared to existing pixel-level processors. The column-level approach also provides reconfigurable operation modes for optimal performance in all lighting conditions. For example, the sensor can operate at the best frame rate and resolution in the absence of background light. If the background light saturates the sensor or increases the shot noise, the sensor can adjust the resolution and frame rate by pixel binning and super-resolution techniques. This effectively enhances the well capacity of the pixel to compensate for the increased shot noise, and speeds up the frame processing to handle the excessive background light. A fabricated prototype sensor can suppress background light of more than 100 klx while achieving a very small pixel size of 5.9 μm.
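A simplified software analogue of the "average photon arrival time" (center-of-mass) computation described above is sketched below; it assumes an ideal mono-exponential decay, ignores the finite measurement window and background, and uses names of our own choosing rather than anything from the thesis.

```python
# Illustrative center-of-mass (CMM) lifetime estimate, a simplified version of
# the "average photon arrival time" idea described above (finite-window bias
# and background are ignored here; names are ours, not the sensor's).
import numpy as np

def cmm_lifetime(gate_times, gate_counts, t0=0.0):
    """gate_times: centers of the time gates (s); gate_counts: photons per gate.
    For a mono-exponential decay starting at t0, the mean arrival time is
    approximately t0 + tau, so tau ~= <t> - t0."""
    gate_times = np.asarray(gate_times, dtype=float)
    gate_counts = np.asarray(gate_counts, dtype=float)
    mean_arrival = np.sum(gate_times * gate_counts) / np.sum(gate_counts)
    return mean_arrival - t0
```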

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
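As a concrete picture of the event stream described above, the following sketch (ours, not from the survey) accumulates events of the form (t, x, y, polarity) into a signed 2D histogram over a time window, one simple way event data are often converted into a frame-like representation for conventional algorithms.

```python
# A minimal sketch: accumulate a stream of (t, x, y, polarity) events into a
# 2D "event frame" over a time window, so frame-based algorithms can consume it.
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, polarity in events:
        if t_start <= t < t_end:
            frame[y, x] += polarity        # signed accumulation of ON/OFF events
    return frame
```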

    Proposal of architecture and circuits for improving the dynamic range of vision systems on chip designed in deep-submicron CMOS technologies

    The work presented in this thesis proposes new techniques for extending the dynamic range of electronic image sensors. In particular, we have directed our studies toward providing this functionality on a single chip, that is, without any external hardware or software support, forming the kind of system known as a Vision System on Chip (VSoC). The dynamic range of an electronic image sensor is defined as the ratio between the maximum and the minimum measurable illumination. Two options arise to improve this figure: the first is to reduce the minimum measurable light by lowering the noise in the image sensor; the second is to increase the maximum measurable light by extending the sensor's saturation limit. Chronologically, our first approach to improving the dynamic range was based on reducing the noise. Several options can be taken to improve the noise figure of merit of the system: reducing the noise by using a CIS technology, or using dedicated circuits such as calibration or auto-zeroing. However, the use of circuit techniques implies limitations that can only be overcome with non-standard technologies specifically designed for this purpose. The CIS technology used is aimed at improving the quality and capabilities of the photosensing process, such as sensitivity, noise, and colour imaging. To study the characteristics of the technology in more detail, a test chip was designed, which allows the best options for future pixels to be identified. Nevertheless, despite satisfactory overall behaviour, the dynamic range measurements indicated that the improvement achievable with CIS technology alone is very limited; that is, improving the sensor's dark current is not sufficient for our purpose. A larger improvement of the dynamic range requires circuits inside the pixel. However, CIS technologies usually allow nothing more than NMOS transistors next to the photosensor, which is a serious restriction on the circuits that can be used. As a result, the design of a dynamic-range-enhanced image sensor in CIS technology was discarded in favour of a standard technology, which gives more flexibility to the pixel design. In standard technologies it is possible to introduce substantial functionality using in-pixel circuits, which enables advanced techniques for extending the saturation limit of image sensors. Two options arise for this purpose: linear or compressive acquisition. Linear acquisition generates a large amount of data per pixel; for example, if the dynamic range of the scene is 120 dB, at least 20 bits/pixel, log2(10^(120/20)) ≈ 19.93, would be needed for a binary representation of that dynamic range. This would require large resources to process such an amount of data and a large bandwidth to move it to the processing circuitry. To avoid these problems, high-dynamic-range image sensors usually opt for a compressive acquisition of the light. This implies two tasks: capturing and compressing the image. Image capture takes place at the pixel level, in the photosensing device, whereas image compression can be performed at the pixel level, at the system level, or by external post-processing.
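The 20 bits/pixel figure quoted above can be checked directly; the short snippet below simply reproduces that arithmetic.

```python
# Quick check of the bit-depth figure quoted above: a 120 dB scene spans an
# intensity ratio of 10**(120/20) = 1e6, which needs ~20 bits per pixel to
# represent linearly.
import math

dynamic_range_db = 120
ratio = 10 ** (dynamic_range_db / 20)   # = 1_000_000
bits_needed = math.log2(ratio)          # ~= 19.93 -> round up to 20 bits
print(ratio, bits_needed)
```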
On the post-processing side, there is a field of research that studies the compression of high-dynamic-range scenes while preserving detail, producing a result suited to human perception on conventional low-dynamic-range displays. This is called tone mapping and usually employs only 8 bits/pixel for image representation, since this is the standard for low-dynamic-range images. Compressive-acquisition pixels, for their part, apply a compression that does not depend on the high-dynamic-range scene being captured, which implies either weak compression or a loss of detail and contrast. To avoid these drawbacks, this work presents a compressive-acquisition pixel that applies a tone-mapping technique, allowing the capture of already-compressed images in a way optimised to preserve detail and contrast while producing a very small amount of data. Tone-mapping techniques are normally executed as software post-processing on a computer, over images captured without compression, which contain a large amount of data; they have traditionally belonged to the field of computer graphics because of the large computational effort they require. We have, however, developed a new tone-mapping algorithm specifically adapted to exploit in-pixel circuits and requiring little computation outside the pixel array, which enables a vision system on a single chip. The new tone-mapping algorithm, which as a mathematical concept can be simulated in software, has also been implemented on a chip. This hardware implementation requires a number of adaptations and advanced design techniques, which in themselves constitute another contribution of this work. Furthermore, because of the new functionality, modifications of the usual methods for image characterisation and capture have been developed.
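Since the thesis' own tone-mapping algorithm is not detailed in this abstract, the sketch below uses a generic global operator (a Reinhard-style x/(1+x) curve) purely to illustrate how a high-dynamic-range image can be compressed to 8 bits/pixel; it is not the algorithm developed in this work.

```python
# Generic illustration only -- NOT the thesis' in-pixel algorithm. A simple
# global tone-mapping operator (Reinhard-style x / (1 + x)) showing how an HDR
# image can be compressed to 8 bits per pixel while preserving relative contrast.
import numpy as np

def tone_map_global(hdr, key=0.18):
    """hdr: linear-radiance image (float, arbitrary scale). Returns uint8."""
    log_mean = np.exp(np.mean(np.log(hdr + 1e-6)))   # log-average luminance
    scaled = key * hdr / log_mean                    # scale the scene to a mid-grey "key"
    ldr = scaled / (1.0 + scaled)                    # compress [0, inf) into [0, 1)
    return (255.0 * ldr).astype(np.uint8)
```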

    A 1,000 Frames/s Programmable Vision Chip with Variable Resolution and Row-Pixel-Mixed Parallel Image Processors

    A programmable vision chip with variable resolution and row-pixel-mixed parallel image processors is presented. The chip consists of a CMOS sensor array with row-parallel 6-bit algorithmic ADCs, row-parallel gray-scale image processors, a pixel-parallel SIMD Processing Element (PE) array, and an instruction controller. The resolution of the image in the chip is variable: high resolution for a focused area and low resolution for the general view. It implements gray-scale and binary mathematical morphology algorithms in series to carry out low-level and mid-level image processing and sends out image features for various applications. It can perform image processing at over 1,000 frames/s (fps). A prototype chip with 64 × 64 pixel resolution and 6-bit gray-scale output was fabricated in a 0.18 μm standard CMOS process. The chip area is 1.5 mm × 3.5 mm; each pixel measures 9.5 μm × 9.5 μm and each processing element 23 μm × 29 μm. The experimental results demonstrate that the chip can perform low-level and mid-level image processing and can be applied in real-time vision applications such as high-speed target tracking.
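As an illustration of the binary mathematical morphology the chip implements, the following sketch gives a generic software analogue of 3x3 dilation and erosion; it mirrors the operations themselves, not the chip's SIMD instruction set or PE architecture.

```python
# Hedged sketch of the kind of low-level operation the chip implements in its
# PE array: binary erosion and dilation with a 3x3 structuring element
# (a generic software analogue, not the chip's SIMD instruction set).
import numpy as np

def binary_dilate(img):
    """img: 2D boolean array. 3x3 dilation via shifted ORs (outside treated as False)."""
    out = img.copy()
    padded = np.pad(img, 1, mode='constant', constant_values=False)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out

def binary_erode(img):
    """Erosion via duality: erode(A) = NOT dilate(NOT A) for a symmetric 3x3 element."""
    return ~binary_dilate(~img)
```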

    Pixellated radiation detectors for scientific applications

    The work in this thesis is focused on the characterisation and evaluation of two classes of science-grade imaging radiation detectors. The first class is Monolithic Active Pixel Sensors (MAPS). The advances in CMOS fabrication technologies over the last four decades have allowed MAPS to compete with Charge-Coupled Devices (CCD) in many applications. The technology also provides relatively inexpensive ways to tailor a design to suit specific application needs. It is important to understand the performance capabilities of new sensor designs through characterisation and optimisation of readout parameters. In this work three MAPS devices were characterised. The first, HEPAPS4, was designed for charged particle detection, with a potential application in the vertex detector for the International Linear Collider. The noise of the sensor was measured to be 35±5 e, which agrees well with simulated data. The dark current was found to be 175 pA/cm2. The SNR for minimum ionising particle detection was demonstrated to be 40. The sensor was also evaluated for indirect detection of thermal and fast neutrons using lithium and polyethylene converters. The technology performed well in this application, with an estimated fast neutron detection efficiency of ~0.01%. The second sensor characterised, Vanilla MAPS, was designed to evaluate new techniques for fast readout, low noise and reduced image lag. The system was capable of reading out 150 full frames (520x520 pixels) per second; the sensor showed 14±4 e noise and decreased image lag. The dark current was found to be ~50 pA/cm2. The back-thinned version of the sensor demonstrated a dramatic improvement in quantum efficiency from 0% to 20% at 220 nm. The third device is the parametric sensor eLeNA. It features 14 test structures designed to evaluate noise reduction architectures. The most promising structures showed temporal noise values as low as 6 e and 20 e fixed pattern noise. Medipix, as an example of the second class of imaging detectors, hybrid pixel detectors, was evaluated in two applications. It was used as the core element of the ATLAS radiation background monitoring system. The sensors were covered with neutron converters, which extended the range of radiation types that can be detected. X-ray calibration was performed, showing excellent tolerances across all 18 devices characterised. Detection efficiencies were estimated to be ~1% for thermal and ~0.1% for fast neutrons. The second application of Medipix was mass spectrometry. The detector was placed in the focal plane of a prototype mass spectrometer. The 2D representation of the data allowed focus correction of the ion beam. The system was capable of detecting ions in the range of 5-25 keV. Detector characterisation with a broad range of ions (from Cu to Pb) showed very good abundance agreement with tabulated data.
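For orientation, the quoted HEPAPS4 noise and SNR figures can be related as follows, assuming the conventional definition of SNR as signal charge over RMS noise (our assumption, not stated in the abstract):

```latex
% Consistency check of the HEPAPS4 figures quoted above, assuming
% SNR = signal charge / RMS noise:
\mathrm{SNR} = \frac{Q_{\mathrm{signal}}}{\sigma_{\mathrm{noise}}}
\quad\Rightarrow\quad
Q_{\mathrm{signal}} \approx 40 \times 35\ e^- \approx 1.4 \times 10^{3}\ e^-
```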