
    Indirect test of M-S circuits using multiple specification band guarding

    Testing analog and mixed-signal circuits is a costly task due to the required test times and the high-end technical resources involved. Indirect testing methods partially address these issues by providing an efficient solution that uses easy-to-measure circuit-under-test (CUT) information correlated with circuit performances. In this work, a multiple specification band guarding technique is proposed as a method to achieve a target rate of misclassified circuits. The acceptance/rejection test regions are encoded using octrees in the measurement space, where the band guarding factors precisely tune the test decision boundary according to the required test yield targets. The generated octree data structure serves to cluster the forthcoming circuits in the production testing phase by relying solely on indirect measurements. The combined use of octree-based encoding and multiple specification band guarding makes the testing procedure fast, efficient and highly tunable. The proposed band guarding methodology has been applied to test a band-pass Butterworth filter under parametric variations. Promising simulation results are reported, showing remarkable improvements when the multiple specification band guarding criterion is used.
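    To make the octree idea concrete, here is a minimal Python sketch (not the authors' implementation; the 3-D measurement space, depth limit, and guard-band factor k are illustrative assumptions): an octree is built over the indirect-measurement space, its leaves are labelled pass/fail from training circuits checked against guard-banded specification limits, and incoming circuits are classified by a simple tree lookup.

```python
# Illustrative sketch only: octree over a 3-D indirect-measurement space whose
# leaves store a pass/fail label derived from guard-banded specification limits.
import numpy as np

class OctreeNode:
    def __init__(self, lo, hi, depth=0, max_depth=5):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
        self.depth, self.max_depth = depth, max_depth
        self.children = None        # 8 sub-cells once split
        self.label = None           # pass/fail at leaves

    def fit(self, X, y):
        """X: (n, 3) indirect measurements; y: guard-banded pass/fail booleans."""
        if self.depth == self.max_depth or len(np.unique(y)) <= 1:
            self.label = bool(y.mean() >= 0.5) if len(y) else False
            return
        mid = (self.lo + self.hi) / 2
        self.children = []
        for corner in range(8):                       # split into 8 octants
            sel = np.ones(len(X), bool)
            lo, hi = self.lo.copy(), self.hi.copy()
            for axis in range(3):
                if (corner >> axis) & 1:
                    lo[axis] = mid[axis]
                    sel &= X[:, axis] >= mid[axis]
                else:
                    hi[axis] = mid[axis]
                    sel &= X[:, axis] < mid[axis]
            child = OctreeNode(lo, hi, self.depth + 1, self.max_depth)
            child.fit(X[sel], y[sel])
            self.children.append(child)

    def classify(self, x):
        if self.children is None:
            return self.label
        mid = (self.lo + self.hi) / 2
        corner = sum(((x[a] >= mid[a]) << a) for a in range(3))
        return self.children[corner].classify(x)

def guard_banded_pass(perfs, spec_lo, spec_hi, k):
    """Tighten each specification band by a guard factor k (0 <= k < 0.5)."""
    lo = spec_lo + k * (spec_hi - spec_lo)
    hi = spec_hi - k * (spec_hi - spec_lo)
    return np.all((perfs >= lo) & (perfs <= hi), axis=1)
```

    In this sketch the guard factor k shifts the pass boundary inward, which is the tunable trade-off between test yield and misclassified circuits that the abstract describes.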

    Pixel-Based Conversion Gain Selection for Quad-Cell CMOS Image Sensors with Image Calibration and Extended Dynamic Range

    A Quad Bayer pixel structure is the current driving technology for developing high-resolution image sensors with smaller pixel pitch. However, reducing the pixel size compromises the signal-to-noise ratio (SNR) and the dynamic range (DR) of an image sensor. One means of improving the SNR and DR for a quad-cell image sensor, or any other complementary metal-oxide semiconductor (CMOS) image sensor with tiny pixels, is a dual conversion gain (DCG) pixel structure in which the conversion gain (CG) selection adapts to scene illumination. However, current DCG image sensors use the same CG for all pixels within one frame (or one image) to obviate nonuniformity pattern artifacts that would otherwise appear within an image (because the pixel dark offset values and the light response may change with the CG). To address this challenge, techniques are presented herein that support two-pixel readout mechanisms and image calibration for a full-resolution (i.e., remosaicing) mode of quad-cell image sensors, enabling each individual pixel's gain to be selected independently of the remaining pixels. These readout mechanisms sense the exposure level of each pixel, set the conversion gain of each pixel individually in real time, and extend analog-to-digital converter (ADC) bit depth by adding gain bits to achieve the highest DR and SNR. Further, each pixel may be characterized by a unique dark and flat-field calibration for each CG setting. Still further, dark and flat-field calibrations may be applied to correct any nonuniformity pattern artifacts that result from the use of multiple CGs within a single image. Aspects of the presented techniques support image calibration and correction activities. Importantly, the presented techniques may be extended to all types of image sensors.
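    As a rough illustration of the per-pixel conversion-gain idea (a behavioural NumPy sketch under assumed names, thresholds, and gain ratio, not the disclosed readout circuit), the snippet below picks high or low CG per pixel from its exposure level, records the selection as an extra gain bit, and applies per-CG dark and flat-field corrections so that mixing gains within one frame does not leave a nonuniformity pattern.

```python
# Behavioural model only: per-pixel conversion-gain selection with per-CG
# dark and flat-field correction. Thresholds and maps are placeholders.
import numpy as np

def select_gain_and_correct(raw_hcg, raw_lcg, dark, flat, threshold):
    """
    raw_hcg, raw_lcg  : per-pixel readouts at high / low conversion gain
    dark[cg], flat[cg]: per-pixel dark offset and flat-field maps per CG
    threshold         : exposure level above which a pixel switches to low CG
    Returns the corrected image plus a per-pixel gain-bit map that extends
    the effective ADC bit depth by one bit.
    """
    use_lcg = raw_hcg >= threshold            # bright pixels -> low CG
    gain_bit = use_lcg.astype(np.uint8)       # extra "gain" bit per pixel

    corr_h = (raw_hcg - dark["hcg"]) / flat["hcg"]   # per-CG calibration
    corr_l = (raw_lcg - dark["lcg"]) / flat["lcg"]

    # Scale low-CG pixels onto the high-CG signal scale so that mixing the
    # two gains inside one frame does not create a nonuniformity pattern.
    cg_ratio = 4.0                            # assumed HCG/LCG ratio
    corrected = np.where(use_lcg, corr_l * cg_ratio, corr_h)
    return corrected, gain_bit
```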

    CMOS-3D smart imager architectures for feature detection

    This paper reports a multi-layered smart image sensor architecture for feature extraction based on the detection of interest points. The architecture is conceived for 3-D integrated circuit technologies consisting of two layers (tiers) plus memory. The top tier includes sensing and processing circuitry that performs Gaussian filtering and generates Gaussian pyramids in a fully concurrent way. The circuitry in this tier operates in the mixed-signal domain. It embeds in-pixel correlated double sampling, a switched-capacitor network for Gaussian pyramid generation, analog memories and a comparator for in-pixel analog-to-digital conversion. This tier can be further split into two for improved resolution: one containing the sensors and another containing a capacitor per sensor plus the mixed-signal processing circuitry. The bottom tier embeds digital circuitry dedicated to the calculation of Harris, Hessian, and difference-of-Gaussian detectors. The overall system can hence be configured by the user to detect interest points using whichever of these three algorithms is best suited to the application. The paper describes the different kinds of algorithms featured and the circuitry employed at the top and bottom tiers. The Gaussian pyramid is implemented with a switched-capacitor network in less than 50 μs, outperforming more conventional solutions. Funding: Xunta de Galicia 10PXIB206037PR; Ministerio de Ciencia e Innovación TEC2009-12686, IPT-2011-1625-430000; Office of Naval Research N00014111031.
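    The following software model sketches what the two tiers compute (a behavioural approximation in Python, not the mixed-signal switched-capacitor circuit): the top tier's Gaussian pyramid is emulated with repeated Gaussian blurring, and the bottom tier's difference-of-Gaussian detector is reduced to a simple threshold on DoG responses; the Harris and Hessian options are omitted.

```python
# Behavioural sketch: Gaussian pyramid plus a simplified DoG interest-point map.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(img, levels=4, sigma=1.6):
    """Repeatedly blur the image, mimicking the switched-capacitor pyramid."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(gaussian_filter(pyr[-1], sigma))
    return pyr

def dog_keypoints(pyr, thresh=0.02):
    """Mark pixels whose difference-of-Gaussian response exceeds a threshold."""
    keypoints = []
    for a, b in zip(pyr[:-1], pyr[1:]):
        dog = b - a
        keypoints.append(np.abs(dog) > thresh * np.abs(dog).max())
    return keypoints
```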

    Advances on CMOS image sensors

    This paper offers an introduction to the technological advances of image sensors designed using complementary metal-oxide-semiconductor (CMOS) processes over the last decades. We review some of those technological advances, examine potential disruptive growth directions for CMOS image sensors, and propose ways to achieve them. Those advances include breakthroughs in image quality, such as resolution, capture speed, light sensitivity and color detection, as well as advances in computational imaging. The current trend is to push the innovation efforts even further, as the market requires higher resolution, higher speed, lower power consumption and, mainly, lower cost sensors. Although CMOS image sensors are currently used in several different applications, from consumer to defense to medical diagnosis, product differentiation is becoming both a requirement and a difficult goal for any image sensor manufacturer. The unique properties of the CMOS process allow the integration of several signal processing techniques and are driving the impressive advancement of computational imaging. With this paper, we offer a comprehensive review of methods, techniques, designs and fabrication of CMOS image sensors that have impacted or might impact image sensor applications and markets.

    Semi-Supervised Deep Learning for Microcontroller Performance Screening

    In safety-critical applications, microcontrollers must satisfy strict quality and performance constraints in terms of F_max (the maximum operating frequency). Data extracted from on-chip ring oscillators (ROs) can model the F_max of integrated circuits using machine learning models, and those models are suitable for the performance screening process. Acquiring data from the ROs is a fast process that yields a large amount of unlabeled data. In contrast, the labeling phase (i.e., acquiring F_max) is a time-consuming and costly task that yields only a small set of labeled data. This paper presents deep-learning-based methodologies to cope with the low number of labeled data in microcontroller performance screening. We propose a method that takes advantage of the large number of unlabeled samples in a semi-supervised learning fashion. We derive deep feature extractor models that project data into higher-dimensional spaces and use the resulting feature embedding to address the performance prediction problem with simple linear regression. Experiments showed that the proposed models outperform state-of-the-art methodologies in terms of prediction error and allow a significantly smaller number of devices to be characterized, thus reducing the time needed to build ML models by a factor of six with respect to baseline approaches.
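    A hedged sketch of the general recipe follows (the paper's exact network and training procedure are not reproduced; an autoencoder is used here as one possible semi-supervised feature extractor, and all layer sizes are placeholders): learn an embedding of RO measurements from the large unlabeled set, then fit a plain linear regressor from embeddings to F_max on the small labeled set.

```python
# Sketch under stated assumptions: autoencoder embedding + linear regression.
import torch
import torch.nn as nn
from sklearn.linear_model import LinearRegression

class Encoder(nn.Module):
    def __init__(self, n_ros, emb_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_ros, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))
        self.dec = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_ros))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def train_extractor(x_unlabeled, n_ros, epochs=100):
    """Train on the cheap, plentiful unlabeled RO measurements."""
    model = Encoder(n_ros)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        _, recon = model(x_unlabeled)
        loss = loss_fn(recon, x_unlabeled)    # reconstruction loss only
        loss.backward()
        opt.step()
    return model

def fit_fmax_regressor(model, x_labeled, fmax):
    """Fit a simple linear regressor on the small labeled set's embeddings."""
    with torch.no_grad():
        z, _ = model(x_labeled)
    return LinearRegression().fit(z.numpy(), fmax)
```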

    Low Power Depth Estimation of Rigid Objects for Time-of-Flight Imaging

    Depth sensing is useful in a variety of applications that range from augmented reality to robotics. Time-of-flight (TOF) cameras are appealing because they obtain dense depth measurements with minimal latency. However, for many battery-powered devices, the illumination source of a TOF camera is power hungry and can limit the battery life of the device. To address this issue, we present an algorithm that lowers the power for depth sensing by reducing the usage of the TOF camera and estimating depth maps from concurrently collected images. Our technique also adaptively controls the TOF camera and enables it when an accurate depth map cannot be estimated. To ensure that the overall system power for depth sensing is reduced, we design our algorithm to run on a low-power embedded platform, where it outputs 640x480 depth maps at 30 frames per second. We evaluate our approach on several RGB-D datasets, where it produces depth maps with an overall mean relative error of 0.96% and reduces the usage of the TOF camera by 85%. When used with commercial TOF cameras, we estimate that our algorithm can lower the total power for depth sensing by up to 73%.
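    The control loop can be pictured as follows (an illustrative sketch, not the paper's exact pipeline; estimate_depth and confidence are hypothetical stand-ins for the image-based depth propagation and its reliability estimate): the TOF camera is triggered only when the propagated estimate looks unreliable, which is what reduces illumination power.

```python
# Sketch of an adaptive TOF duty-cycling loop for low-power depth sensing.
def depth_stream(frames, tof_camera, estimate_depth, confidence, conf_min=0.8):
    depth = tof_camera.capture()              # initial keyframe from the TOF camera
    tof_frames = 1
    for rgb in frames:
        est = estimate_depth(depth, rgb)      # propagate depth using the new image
        if confidence(est, rgb) < conf_min:
            depth = tof_camera.capture()      # fall back to the power-hungry TOF
            tof_frames += 1
        else:
            depth = est                       # reuse the estimate, keep TOF off
        yield depth, tof_frames
```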

    Neural Network Methods for Radiation Detectors and Imaging

    Recent advances in image data processing through machine learning, and especially deep neural networks (DNNs), allow for new optimization and performance-enhancement schemes for radiation detectors and imaging hardware through data-endowed artificial intelligence. We give an overview of data generation at photon sources, deep learning-based methods for image processing tasks, and hardware solutions for deep learning acceleration. Most existing deep learning approaches are trained offline, typically using large amounts of computational resources. However, once trained, DNNs can achieve fast inference speeds and can be deployed to edge devices. A new trend is edge computing with lower energy consumption (hundreds of watts or less) and real-time analysis potential. Although popularly used for edge computing, electronic hardware accelerators, ranging from general-purpose processors such as central processing units (CPUs) to application-specific integrated circuits (ASICs), are constantly reaching performance limits in latency, energy consumption, and other physical constraints. These limits give rise to next-generation analog neuromorphic hardware platforms, such as optical neural networks (ONNs), for highly parallel, low-latency, and low-energy computing to boost deep learning acceleration.

    A software controlled voltage tuning system using multi-purpose ring oscillators

    This paper presents a novel software-driven voltage tuning method that utilises multi-purpose Ring Oscillators (ROs) to provide process-variation- and environment-sensitive energy reductions. The proposed technique enables voltage tuning based on the observed frequency of the ROs, taken as a representation of the device speed and used to estimate a safe minimum operating voltage at a given core frequency. A conservative linear relationship between RO frequency and silicon speed is used to approximate the critical path of the processor. Using a multi-purpose RO not specifically implemented for critical path characterisation is a unique approach to voltage tuning. The parameters governing the relationship between RO frequency and silicon speed are obtained by testing a sample of processors from different wafer regions. These parameters can then be used on all devices of that model. The tuning method and software control framework are demonstrated on a sample of XMOS XS1-U8A-64 embedded microprocessors, yielding a dynamic power saving of up to 25% with no performance reduction and no negative impact on the real-time constraints of the embedded software running on the processor.
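    The voltage selection step might look like the following sketch (all coefficients are placeholders; in the paper the parameters come from characterising sample devices across wafer regions, not from these values): the observed RO frequency is mapped conservatively to a silicon speed, and any shortfall against the requested core frequency is translated into extra supply-voltage margin above a floor.

```python
# Sketch of a conservative linear RO-frequency -> minimum-voltage model.
def safe_min_voltage(ro_freq_mhz, core_freq_mhz,
                     slope=0.9, offset=-50.0,          # RO -> silicon speed (conservative)
                     v_per_mhz=0.0015, v_floor=0.75):  # speed shortfall -> extra voltage
    # Conservative linear mapping from RO frequency to usable silicon speed.
    silicon_speed_mhz = slope * ro_freq_mhz + offset
    # Margin needed if the silicon is slower than the requested core clock.
    shortfall = max(0.0, core_freq_mhz - silicon_speed_mhz)
    return v_floor + v_per_mhz * shortfall

# Example: a faster part needs little margin, a slower part is given more.
print(safe_min_voltage(ro_freq_mhz=600.0, core_freq_mhz=500.0))  # ~0.765 V
print(safe_min_voltage(ro_freq_mhz=520.0, core_freq_mhz=500.0))  # ~0.873 V
```

    Keeping the mapping deliberately conservative is what lets a single set of characterised parameters be reused across all devices of the same model without per-device critical-path measurement.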