131 research outputs found

    Ultra-low noise, high-frame rate readout design for a 3D-stacked CMOS image sensor

    Get PDF
    With the switch from CCD to CMOS technology, CMOS-based image sensors have become smaller, cheaper, and faster, and have recently surpassed CCDs in image quality. Beyond the extensive set of applications that already require image sensors, the next technological breakthrough in imaging is to shift conventional CMOS image sensor technology entirely to 3D-stacked technology. Stacking is a recent and innovative technology in the imaging field that allows multiple silicon tiers with different functions to be stacked on top of each other, enabling extreme parallelism of the pixel readout circuitry. Because the readout is placed underneath the pixel array in a 3D-stacked image sensor, the degree of readout parallelism can remain constant at any spatial resolution, allowing extremely low noise and high frame rates at virtually any array size. The objective of this work is the design of ultra-low-noise readout circuits intended for 3D-stacked image sensors with highly parallel readout structures. The readout circuit's key requirements are low noise, speed, low area (for higher parallelism), and low power. A review of CMOS imaging is presented through a short historical background, followed by the motivation, research goals, and contributions of the work. The fundamentals of CMOS image sensors are then addressed, highlighting typical image sensor features, the essential building blocks, modes of operation, physical characteristics, and evaluation metrics. The document then focuses on readout-circuit noise theory and column-converter theory to identify possible pitfalls in achieving sub-electron-noise imagers. Finally, the measured performance of the fabricated test CIS device is reported along with conjectures and conclusions, and the thesis ends with open issues in 3D stacking and future work. Part of the developed research work is located in the Appendices.
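
The claim above, that stacking keeps readout parallelism, and hence frame rate, independent of array resolution, can be illustrated with a rough back-of-the-envelope sketch. The conversion time and channel counts below are illustrative assumptions, not values from the thesis:

```python
def max_frame_rate(rows: int, cols: int, readout_channels: int,
                   conversion_time_s: float) -> float:
    """Rough frame-rate ceiling for a rolling readout.

    rows, cols        -- pixel array dimensions
    readout_channels  -- number of parallel readout/conversion channels
    conversion_time_s -- time for one sample conversion (CDS + ADC)
    """
    pixels_per_frame = rows * cols
    # Channels convert in parallel; each handles its share of the pixels.
    conversions_per_channel = pixels_per_frame / readout_channels
    return 1.0 / (conversions_per_channel * conversion_time_s)

# Illustrative numbers: a 1 Mpixel array with a 1 us conversion time.
planar = max_frame_rate(1024, 1024, readout_channels=1024, conversion_time_s=1e-6)
stacked = max_frame_rate(1024, 1024, readout_channels=16 * 1024, conversion_time_s=1e-6)
print(f"column-parallel: {planar:.0f} fps, 16x more stacked channels: {stacked:.0f} fps")
```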

    CMOS SPAD-based image sensor for single photon counting and time of flight imaging

    Get PDF
    The ability to capture the arrival of a single photon is the fundamental limit to the detection of quantised electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and temporal precision is the pinnacle of photo-sensing. The creation of high-spatial-resolution, single-photon-sensitive, time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology offers numerous benefits across a wide field of applications. These CMOS devices are suitable replacements for high-sensitivity charge-coupled device (CCD) technology (electron-multiplied or electron-bombarded) at significantly lower cost and with comparable performance in low-light or high-speed scenarios. For example, with temporal resolution on the order of nanoseconds to picoseconds, detailed three-dimensional (3D) pictures can be formed by measuring the time of flight (TOF) of a light pulse. High-frame-rate imaging of single photons can yield new capabilities in super-resolution microscopy, and the imaging of quantum effects such as the entanglement of photons may be realised. The goal of this research project is the development of such an image sensor by exploiting single photon avalanche diodes (SPAD) in an advanced imaging-specific 130nm front side illuminated (FSI) CMOS technology. SPADs combine three key advantages over other imaging technologies: single-photon sensitivity, picosecond temporal resolution, and the ability to be integrated in standard CMOS technology. Analogue techniques are employed to create an efficient and compact imager that is scalable to mega-pixel arrays. A SPAD-based image sensor is described with 320 by 240 pixels at a pitch of 8ÎŒm and an optical efficiency or fill-factor of 26.8%. Each pixel comprises a SPAD with a hybrid analogue counting and memory circuit that makes novel use of a low-power charge transfer amplifier. Global-shutter single photon counting images are captured. These exhibit photon-shot-noise-limited statistics with unprecedentedly low input-referred noise equivalent to 0.06 electrons. The CMOS image sensor (CIS) trends of shrinking pixels, increasing array sizes, decreasing read noise, fast readout and oversampled image formation are projected towards the formation of binary single photon imagers or quanta image sensors (QIS). In a binary digital image capture mode, the image sensor offers a look-ahead to the properties and performance of future QISs, with 20,000 binary frames per second readout at a bit error rate of 1.7 x 10^-3. The bit density, or cumulative binary intensity, against exposure follows the shape of the classic Hurter and Driffield densitometry curves of photographic film. Oversampled time-gated binary image capture is demonstrated, capturing 3D TOF images with 3.8cm precision over a 60cm range.
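
The depth and precision figures quoted above follow from the basic direct time-of-flight relation d = c·t/2; a minimal sketch of that arithmetic, independent of the particular sensor described here:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Depth from a direct time-of-flight measurement (out-and-back path)."""
    return C * round_trip_time_s / 2.0

def timing_resolution_for(depth_precision_m: float) -> float:
    """Timing resolution needed to resolve a given depth precision."""
    return 2.0 * depth_precision_m / C

# A 4 ns round trip corresponds to ~0.6 m of depth, the range quoted above.
print(f"{tof_depth(4e-9):.3f} m")
# The ~3.8 cm precision implies roughly 250 ps of effective timing resolution.
print(f"{timing_resolution_for(0.038) * 1e12:.0f} ps")
```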

    Smart Sensor Networks For Sensor-Neural Interface

    Get PDF
    One in every fifty Americans suffers from paralysis, and approximately 23% of paralysis cases are caused by spinal cord injury. To help people with spinal cord injury regain the functionality of their paralyzed or lost body parts, a sensor-neural-actuator system is commonly used. The system includes: 1) sensor nodes, 2) a central control unit, 3) a neural-computer interface, and 4) actuators. This thesis focuses on the sensor-neural interface and presents research on circuits for it. In Chapter 2, three sensor designs are discussed: a compressive sampling image sensor, an optical force sensor, and a passive scattering force sensor. Chapter 3 discusses the design of the analog front-end circuitry for the wireless sensor network system: a low-noise, low-power analog front-end circuit in 0.5ÎŒm CMOS technology, a 12-bit 1MS/s successive approximation register (SAR) analog-to-digital converter (ADC) in a 0.18ÎŒm CMOS process, and a 6-bit asynchronous level-crossing ADC realized in a 0.18ÎŒm CMOS process are presented. Chapter 4 presents the design of a low-power impulse-radio ultra-wide-band (IR-UWB) transceiver (TRx) that operates at data rates of up to 10Mbps, with an energy consumption of 4.9pJ per transmitted bit and 1.12nJ per received bit. In Chapter 5, a wireless, fully event-driven electrogoniometer is presented, implemented using a pair of ultra-wide-band (UWB) wireless smart sensor nodes interfacing with low-power 3-axis accelerometers; the two smart sensor nodes are configured as a master node and a slave node, respectively. Analysis of an experimental scenario shows a greater than 90% reduction in total data throughput when the proposed fully event-driven electrogoniometer is used to measure joint-angle movements, compared with a synchronous Nyquist-rate sampling system. The main contributions of this thesis are: 1) sensor designs that emphasize power efficiency and data-throughput efficiency; and 2) a fully event-driven wireless sensor network system design that minimizes data throughput and optimizes power consumption.
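
The >90% throughput reduction reported for the event-driven electrogoniometer comes from transmitting a sample only when the signal has moved by a set amount, rather than at a fixed Nyquist rate. A minimal level-crossing sketch of that idea, with an illustrative synthetic joint-angle signal and threshold:

```python
import math

def level_crossing_events(samples, delta):
    """Emit (index, value) only when the signal has moved by at least `delta`
    since the last transmitted value -- the principle behind level-crossing,
    event-driven sampling."""
    events = []
    last_sent = samples[0]
    for i, x in enumerate(samples):
        if abs(x - last_sent) >= delta:
            last_sent = x
            events.append((i, x))
    return events

# Slowly varying joint angle (degrees) sampled at a fixed Nyquist rate.
signal = [30.0 + 10.0 * math.sin(2.0 * math.pi * t / 500.0) for t in range(1000)]
events = level_crossing_events(signal, delta=1.0)
reduction = 100.0 * (1.0 - len(events) / len(signal))
print(f"{len(signal)} Nyquist samples -> {len(events)} events ({reduction:.0f}% fewer)")
```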

    CMOS Sensors for Time-Resolved Active Imaging

    Full text link
    In the past decades, time-resolved imaging such as fluorescence lifetime imaging or time-of-flight depth imaging has been extensively explored in biomedical and industrial fields because of its non-invasive characterization of material properties and its remote sensing capability. Many studies have shown its potential and effectiveness in applications such as cancer detection and tissue diagnosis from fluorescence lifetime imaging, and gesture/motion sensing and geometry sensing from time-of-flight imaging. Nonetheless, time-resolved imaging has not been widely adopted due to the high cost of the systems and their performance limits. The research presented in this thesis focuses on the implementation of low-cost, real-time time-resolved imaging systems. Two image sensing schemes are proposed and implemented to address the major limitations. First, we propose a single-shot fluorescence lifetime image sensor for high-speed and high-accuracy imaging. To achieve high accuracy, previous approaches repeat the measurement to obtain multiple samples, resulting in long measurement times. The proposed method achieves both high speed and high accuracy at the same time by employing a pixel-level processor that takes and compresses multiple samples within a single measurement time. The pixels in the sensor take multiple samples of the fluorescent optical signal at sub-nanosecond resolution and compute the average photon arrival time of the optical signal. Thanks to the multiple sampling of the signal, the measurement is insensitive to the shape or pulse width of the excitation, providing better accuracy and pixel uniformity than conventional rapid lifetime determination (RLD) methods. The proposed single-shot image sensor also improves imaging speed by orders of magnitude compared with other conventional center-of-mass methods (CMM). Second, we propose a 3-D camera with a background light suppression scheme that adapts to various lighting conditions. Previous 3-D cameras are not operable outdoors because they suffer from measurement errors and saturation under strong background illumination. We propose a reconfigurable architecture with a column-parallel, discrete-time background light cancellation circuit. Implementing the processor at the column level allows an order-of-magnitude reduction in pixel size compared with existing pixel-level processors. The column-level approach also provides reconfigurable operation modes for optimal performance in all lighting conditions. For example, the sensor can operate at the best frame rate and resolution in the absence of background light; if the background light saturates the sensor or increases the shot noise, the sensor can adjust the resolution and frame rate through pixel binning and super-resolution techniques. This effectively enhances the well capacity of the pixel to compensate for the increased shot noise and speeds up frame processing to handle the excessive background light. A fabricated prototype sensor can suppress more than 100 klx of background light while achieving a very small pixel size of 5.9ÎŒm.
    PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136950/1/eecho_1.pd
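
The average-photon-arrival-time approach mentioned above relies on a simple property of mono-exponential decay: the mean arrival time of emitted photons, measured from the excitation instant, equals the lifetime. A minimal center-of-mass estimator over synthetic arrival times (the data are simulated, not from the sensor):

```python
import random

def cmm_lifetime(arrival_times_ns):
    """Center-of-mass lifetime estimate: for a mono-exponential decay,
    the mean photon arrival time equals the lifetime tau."""
    return sum(arrival_times_ns) / len(arrival_times_ns)

# Synthetic photon arrivals from a fluorophore with a 3.0 ns lifetime.
random.seed(0)
true_tau_ns = 3.0
arrivals = [random.expovariate(1.0 / true_tau_ns) for _ in range(10_000)]
print(f"estimated lifetime: {cmm_lifetime(arrivals):.2f} ns (true value {true_tau_ns} ns)")
```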

    High-Speed Radhard Mega-Pixel CIS Camera for High-Energy Physics

    Full text link
    This dissertation describes the schematic design, physical layout implementation, system-level hardware and FPGA firmware design, and testing of a camera-on-a-chip with a novel high-speed CMOS image sensor (CIS) architecture developed for a mega-pixel array. The novel features of the design include an innovative quadruple column-parallel readout (QCPRO) scheme with a rolling shutter that increases the pixel rate, a programmable frame rate, and tolerance to total ionizing dose (TID) effects. Two versions of the architecture, a small one (128 x 1,024 pixels) and a large one (768 x 1,024 pixels), were designed and fabricated with a custom layout that does not include library parts. The designs achieve 20 to 4,000 frames per second (fps) and tolerate up to 125 krad of radiation exposure. The QCPRO scheme reaches a maximum pixel rate of 10.485 gigapixels/s. It consists of four readout blocks per column, completing the readout of four rows of pixels in one line time. Each column-level readout block includes an analog time-interleaving (ATI) sampling circuit, a switched-capacitor programmable gain amplifier (SC-PGA), a 10-bit successive-approximation register (SAR) ADC, and two 10-bit memory banks. The column-parallel SAR ADC is area-efficient enough to be laid out within half a pixel pitch (10 ÎŒm). The analog ATI sampling circuit has two sample-and-hold circuits, each of which can independently perform correlated double sampling (CDS). Furthermore, to deliver more than 10^10 pixel values per second, a high-speed differential Scalable Low-Voltage Signaling (SLVS) transmitter is provided for every 16 columns, running at 1 Gbps/channel at 0.4 V. The two memory banks operate in ping-pong fashion: one connects to the ADC to store digital data while the other feeds the SLVS transmitter to deliver data to the off-chip FPGA. The proposed CIS architecture can therefore achieve 10,000 frames per second for a 1,024 x 1,024 pixel array. The floor plan is symmetrical, with one half of the pixel rows read out at the top and the other half at the bottom of the pixel array. The rolling shutter with multi-line parallel readout and an oversampling technique reduces image artifacts when capturing fast-moving objects. The CIS camera provides fully digital input control and digital pixel data output. Many other components are designed and integrated into the proposed CMOS imager, including a Serial Peripheral Interface (SPI), a bandgap reference, serializers, phase-locked loops (PLLs), and sequencers with configuration registers. The frame rate can be programmed for a wider range of applications by modifying three parameters: the input clock frequency, the region of interest, and the counter size in the sequencer. Radiation hardening is achieved by combining enclosed-geometry transistors with P-type guard rings in the 0.18 ÎŒm CMOS technology. The peripheral circuits use P-type guard rings to cut the TID-induced leakage paths between devices, and each pixel cell is made radiation tolerant with enclosed-layout transistors. A pinned photodiode is used to obtain low dark current, and correlated double sampling suppresses pixel-level fixed-pattern noise and reset noise. The final pixel cell is laid out in 20 x 20 ÎŒm^2. The total area of the pixel array is 2.56 x 20.28 mm^2 for the low-resolution prototype and 15.36 x 20.28 mm^2 for the high-resolution prototype. The entire CIS camera system is developed by implementing the hardware and FPGA firmware for the small-format prototype, which has 128 x 1,024 pixels and 754 pads in a 4.24 x 25.125 mm^2 die area. Different testing methods are briefly described for different test purposes. Measurement results validate the functionality of the readout path, sequencer, on-chip PLLs, and SLVS transmitters. The programmable frame rate feature is demonstrated by checking the digital control outputs from the sequencer at different frame rates. Furthermore, TID radiation tests prove that the pixels work under 125 krad of radiation exposure.
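
The headline pixel rate above is simply the product of array size and frame rate; a small sketch of that arithmetic and of the line-time budget it implies (the eight-rows-per-line-time decomposition is an illustrative reading of the top/bottom split with four blocks per column, not a figure from the dissertation):

```python
rows, cols, fps = 1024, 1024, 10_000

pixel_rate = rows * cols * fps
print(f"pixel rate: {pixel_rate / 1e9:.3f} Gpixel/s")  # ~10.49 Gpixel/s

# If the top and bottom halves are read out in parallel with four
# column-parallel blocks per side, eight rows complete per line time.
rows_per_line_time = 8
line_times_per_frame = rows / rows_per_line_time
line_time_s = 1.0 / (fps * line_times_per_frame)
print(f"line time: {line_time_s * 1e9:.0f} ns for {rows_per_line_time} rows")
```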

    Time interleaved counter analog to digital converters

    Get PDF
    This work explores extending time interleaving in A/D converters by applying a high level of parallelism to one of the slowest and simplest types of data converter, the counter ADC. The motivation is to realise high-performance, re-configurable A/D converters for use in multi-standard and multi-PHY communication receivers with signal bandwidths in the tens to hundreds of MHz. The counter ADC requires only a comparator, a ramp signal, and a digital counter, where the comparator compares the sampled input against all possible quantisation levels sequentially. This work explores arranging counter ADCs in large time-interleaved arrays, building a Time Interleaved Counter (TIC) ADC. The key to realising a TIC ADC is distributed sampling and a global multi-phase ramp generator realised with a novel figure-of-8 rotating resistor ring. Furthermore, counter ADCs allow re-configurability between effective sampling rate and resolution because of their sequential comparison of reference levels during conversion. A prototype 128-channel TIC ADC was fabricated and measured in 0.13ÎŒm CMOS technology; the same block can be configured to operate as a 7-bit 1GS/s, 8-bit 500MS/s, or 9-bit 250MS/s data converter. The ADC achieves a sub-400fJ/step FOM in all modes of configuration.
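
A counter ADC converts by stepping a ramp through the quantisation levels and counting until the comparator trips, which is why each bit of resolution dropped roughly doubles the achievable sample rate for a fixed comparator clock, matching the 7-bit/1GS/s to 9-bit/250MS/s configurations above. A behavioural sketch of a single conversion (the principle only, not the chip's circuit):

```python
def counter_adc_convert(sample: float, n_bits: int, v_ref: float = 1.0) -> int:
    """Behavioural counter ADC: step a ramp through all 2**n_bits levels and
    count until it passes the held sample. Worst case takes 2**n_bits cycles,
    which is what ties resolution to conversion time."""
    lsb = v_ref / (1 << n_bits)
    code, ramp = 0, 0.0
    while ramp < sample and code < (1 << n_bits) - 1:
        ramp += lsb
        code += 1
    return code

# Same input, different configurations: more bits, more cycles per conversion.
for bits in (7, 8, 9):
    code = counter_adc_convert(0.37, n_bits=bits)
    print(f"{bits}-bit: code {code}, worst-case {1 << bits} clock cycles")
```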

    Pixels for focal-plane scale space generation and for high dynamic range imaging

    Get PDF
    Focal-plane processing allows for parallel processing throughout the entire pixel matrix, which can help increase the speed of vision systems. However, fabricating circuits inside the pixel matrix increases the pixel pitch and reduces the fill factor, which degrades image quality. To take advantage of focal-plane processing capabilities while minimizing the loss of image quality, we first consider the inclusion of only two extra transistors in the pixel, allowing scale-space generation at the focal plane. We assess the conditions under which the proposed circuitry is advantageous, performing a time and energy analysis of this approach in comparison with a digital solution. Assuming one SAR ADC per column and a clock frequency of 5.6 MHz, the analysis shows that the focal-plane approach is 26 times faster than a digital solution using 10 processing elements, and 49 times more energy-efficient. Another way of taking advantage of focal-plane signal processing is to use it to increase image quality itself, as in the case of high-dynamic-range imaging pixels. This work also presents the design and study of a pixel that captures high-dynamic-range images by sensing the average luminance of the matrix and then adjusting the integration time of each pixel according to the global average and to the local value of the pixel. The pixel was implemented with small structural variations, such as different photodiode sizes for the global average luminance measurement. Schematic and post-layout simulations were performed with the implemented pixel using an input image with a 76 dB dynamic range, producing results with detail in both dark and bright image areas.
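
The HDR pixel described above scales each pixel's integration time using the global average luminance together with the pixel's own value; a small numerical sketch of that idea (the blending and inverse scaling below are plain illustrative choices, not the behaviour of the thesis's circuit):

```python
def hdr_integration_times(image, t_nominal, global_weight=0.5):
    """Assign shorter integration times to bright pixels (to avoid saturation)
    and longer ones to dark pixels, blending global and local luminance."""
    flat = [p for row in image for p in row]
    global_avg = sum(flat) / len(flat)
    times = []
    for row in image:
        times.append([])
        for local in row:
            luminance = global_weight * global_avg + (1.0 - global_weight) * local
            times[-1].append(t_nominal / max(luminance, 1e-6))
    return times

# Bright pixels (0.9) end up with noticeably shorter integration than dark ones (0.1).
scene = [[0.1, 0.1, 0.9],
         [0.1, 0.5, 0.9]]
for row in hdr_integration_times(scene, t_nominal=1.0):
    print([f"{t:.2f}" for t in row])
```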

    ?????? ?????? ???????????? ?????? ???????????? ??????????????? ?????????????????? ??? ???????????????

    Get PDF
    Department of Electrical Engineering
    Sensor systems advance as sensor technologies develop. The performance of a sensor system can be improved by combining Internet of Things (IoT) connectivity with an artificial neural network (ANN) for data processing and computation: sensors and systems exchange data over wireless links, which enables a wide range of systems and applications, and the collected data are processed by the ANN to improve overall system efficiency. Gas monitoring is needed everywhere from daily life to hazardous workplaces. Harmful gases can cause respiratory disease, some contain carcinogenic components, and some can even cause explosions. There are many kinds of hazardous gas, and their effects on the human body differ, so each gas has its own criteria for permissible concentration and exposure time; the gas monitoring system must therefore be designed around these differing requirements. This thesis describes the configuration, operation, and limitations of conventional sensor systems and proposes a gas monitoring system with wireless connectivity and a neural network to improve overall efficiency. As noted above, dangerous concentrations and permissible exposure times depend on the gas type. During monitoring, the gas concentration is below the permissible level most of the time, so low-resolution monitoring is sufficient in the common case and saves power, while high resolution is required for accurate concentration measurement once a gas is detected. If the gas type also varies, the amount of computation grows rapidly. In conventional systems, the target specifications are therefore set by the most demanding situation, which increases the cost and complexity of the readout integrated circuit (ROIC) and of the system. To optimize the specifications, an ANN and an adaptive ROIC are used to handle the complex situations and the large volume of data, and a gas monitoring system with a learning-based algorithm is proposed to improve efficiency. To adapt the operation to the situation, a dual-mode ROIC with a monitoring mode and a precision mode is implemented. When the current gas concentration is judged safe, the monitoring mode operates with minimal detection accuracy to save power; the system switches to precision mode when high resolution is needed or a hazardous situation is detected. High resolution normally requires additional calibration circuits, which add power consumption and design complexity, and a high-resolution analog-to-digital converter (ADC) is challenging to design efficiently. Therefore, to reduce the required effective resolution of the ADC and the power consumption, a zooming correlated double sampling (CDS) circuit and a prediction-based successive approximation register (SAR) ADC are proposed to optimize performance in precision mode. A microelectromechanical systems (MEMS) based gas sensor offers high integration and high sensitivity, but calibration is needed to compensate for its low selectivity. Conventionally, principal component analysis (PCA) is used to classify gas types, but this method is less accurate in some cases and is hard to run in real time. An ANN, by contrast, is a powerful algorithm for accurate sensing through data collection and training, and it can identify the gas type and concentration in real time. The ROIC was fabricated in a 180-nm complementary metal-oxide-semiconductor (CMOS) process, and the efficiency of the system with the adaptive ROIC and the ANN algorithm was experimentally verified in a gas monitoring system prototype. Bluetooth provides wireless connectivity to a PC and a mobile device, and the pattern recognition and the prediction code for the SAR ADC run in MATLAB. Real-time gas information is monitored by an Android-based smartphone application, and the dual-mode operation, performance optimization, and prediction code are coordinated by a microcontroller unit (MCU). The monitoring mode improves the figure of merit (FoM) by a factor of 2.6 compared with a previous resistive interface.
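
The dual-mode operation described above amounts to a hysteretic policy: stay in the coarse, low-power monitoring mode while readings are clearly safe, and switch to the precision mode when a threshold is approached. A minimal sketch of such a controller (the thresholds and class names are illustrative assumptions, not the thesis's values):

```python
from dataclasses import dataclass

@dataclass
class DualModeController:
    """Switch between a coarse monitoring mode and a high-resolution precision
    mode based on measured gas concentration; hysteresis avoids mode chatter."""
    alarm_ppm: float    # concentration that demands precise measurement
    release_ppm: float  # lower threshold for falling back to monitoring mode
    mode: str = "monitoring"

    def update(self, concentration_ppm: float) -> str:
        if self.mode == "monitoring" and concentration_ppm >= self.alarm_ppm:
            self.mode = "precision"   # enable the high-resolution ADC path
        elif self.mode == "precision" and concentration_ppm < self.release_ppm:
            self.mode = "monitoring"  # return to the low-power coarse readout
        return self.mode

# Illustrative thresholds only.
ctrl = DualModeController(alarm_ppm=25.0, release_ppm=15.0)
for reading_ppm in [2.0, 10.0, 30.0, 28.0, 12.0, 5.0]:
    print(reading_ppm, "->", ctrl.update(reading_ppm))
```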
    • 

    corecore