
    A CMOS Imager for Time-of-Flight and Photon Counting Based on Single Photon Avalanche Diodes and In-Pixel Time-to-Digital Converters

    The design of a CMOS image sensor based on a single-photon avalanche diode (SPAD) array with in-pixel time-to-digital converters (TDCs) is presented. The architecture of the imager, targeted at 3D image reconstruction, is thoroughly described with emphasis on the characterization of the TDC array. Several techniques, such as a fast quenching/recharge circuit with tunable dead time and time-gated operation, are applied to reduce noise and power consumption. The chip was fabricated in a 0.18 µm standard CMOS process and implements a double functionality: time-of-flight (ToF) estimation and photon counting. The imager features a programmable time resolution of the TDC array down to 145 ps. The measured accuracy at the minimum time bin is better than 1 LSB DNL and 1.7 LSB INL. The TDC jitter over the full dynamic range is less than 1 LSB. Peer reviewed.
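    In pulsed time-of-flight imaging of this kind, each TDC code quantizes the round-trip travel time of a light pulse, and depth follows from d = c·t/2. A minimal sketch of that conversion, using the 145 ps minimum time bin reported above (the TDC code values themselves are illustrative):

```python
# Distance estimation from a TDC code in pulsed time-of-flight imaging.
# The 145 ps LSB is the figure reported for this imager; the code values
# passed in below are purely illustrative.

C = 299_792_458.0   # speed of light (m/s)
T_LSB = 145e-12     # minimum TDC time bin (s)

def tof_distance(tdc_code: int, t_lsb: float = T_LSB) -> float:
    """Round-trip time = code * t_lsb; target distance is half the path."""
    return C * (tdc_code * t_lsb) / 2.0

# One 145 ps bin corresponds to roughly 2.2 cm of depth resolution:
print(round(tof_distance(1) * 100, 2))  # depth per LSB, in cm -> 2.17
```

    This also makes clear why sub-nanosecond TDC resolution matters: every extra LSB of timing quantization costs about two centimeters of depth uncertainty.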

    A vision-based monitoring system for very early automatic detection of forest fires

    Paper presented at the I International Conference on Modelling, Monitoring and Management of Forest Fires, held in Toledo (Spain), 17-19 September 2008. This paper describes a system capable of detecting smoke at the very beginning of a forest fire with precise spatial resolution. The system is based on a wireless vision sensor network. Each sensor monitors a small area of vegetation by running on-site a tailored vision algorithm to detect the presence of smoke. This algorithm examines chromaticity changes and spatio-temporal patterns in the scene that are characteristic of smoke dynamics at the early stages of propagation. Processing takes place at the sensor nodes and, if smoke is detected, an alarm signal is transmitted through the network along with a reference to the location of the triggered zone, without requiring complex GIS systems. This method improves the spatial resolution over the surveyed area and reduces the rate of false alarms. An energy-efficient implementation of the sensor/processor devices is crucial, as it determines the autonomy of the network nodes. At this point, we have developed an ad hoc vision algorithm, adapted to the nature of the problem, to be integrated into a single-chip sensor/processor. As a first step to validate the feasibility of the system, we applied the algorithm to smoke sequences recorded with commercial cameras in real-world scenarios that simulate the working conditions of the network nodes. The results obtained point to very high reliability and robustness of the detection process. This work is funded by Junta de Andalucía (CICE) through project 2006-TIC-2352. Peer reviewed.
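    The chromaticity cue mentioned above exploits the fact that smoke desaturates the scene toward grey. A hypothetical sketch of that test (the thresholds, helper names and toy data are illustrative assumptions, not the paper's algorithm or values):

```python
import numpy as np

# Illustrative sketch of a chromaticity-change smoke cue: flag pixels whose
# normalized color has drifted away from the background and toward grey
# (r = g = b, i.e. chromaticity ~ 1/3). Thresholds are assumptions.

def chromaticity(rgb):
    """Normalized r,g components; grey pixels sit near (1/3, 1/3)."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-9
    return rgb[..., :2] / s

def smoke_candidate(frame, background, chroma_thresh=0.03):
    """Pixels whose chromaticity changed vs. background, moving toward grey."""
    dc = np.abs(chromaticity(frame) - chromaticity(background)).sum(axis=-1)
    toward_grey = (np.abs(chromaticity(frame) - 1.0 / 3.0).sum(axis=-1) <
                   np.abs(chromaticity(background) - 1.0 / 3.0).sum(axis=-1))
    return (dc > chroma_thresh) & toward_grey

# Toy usage: a green vegetation patch partially covered by greyish smoke.
bg = np.tile([0.2, 0.6, 0.2], (4, 4, 1))
fr = bg.copy()
fr[:2, :2] = [0.5, 0.55, 0.5]          # desaturated (smoky) corner
mask = smoke_candidate(fr, bg)
print(mask[:2, :2].all(), mask[2:, 2:].any())  # True False
```

    A full detector would additionally require the spatio-temporal persistence the abstract describes, i.e. the flagged region growing and drifting over several frames, before raising an alarm.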

    A high dynamic range image sensor with linear response based on asynchronous event detection

    This paper investigates the potential of an image sensor that combines event-based asynchronous outputs with conventional integration of photocurrents. Pixel voltages can be read out following a traditional approach with a source follower and an analog-to-digital converter. Furthermore, pixels have circuitry to implement Pulse Density Modulation (PDM), sending out pulses with a frequency that is proportional to the photocurrent. Both read-out approaches operate simultaneously, and their information is combined to render high dynamic range images. In this paper, we explain the new vision sensor concept and develop a theoretical analysis of the expected performance in a standard AMS 0.18 µm HV technology. Moreover, we provide a description of the vision sensor architecture and its main blocks. Peer reviewed.
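    One way the two concurrent read-outs could be fused is to trust the integrating path while it is still linear and fall back on the PDM pulse count, whose frequency keeps tracking the photocurrent, once the pixel saturates. A minimal sketch under that assumption (all constants are illustrative, not the sensor's parameters):

```python
# Sketch of fusing the two read-out paths for HDR: the integrated voltage is
# precise below saturation; the PDM pulse count extends the range above it.
# V_SAT and K_PDM are assumed calibration constants, not chip values.

V_SAT = 1.0    # saturation voltage of the integrating path (assumed)
K_PDM = 0.25   # volt-equivalent per PDM pulse (assumed calibration gain)

def fuse(v_integrated: float, pdm_pulses: int) -> float:
    """Use the integrated voltage when valid, else the PDM estimate."""
    if v_integrated < V_SAT:
        return v_integrated       # linear region: precise analog value
    return K_PDM * pdm_pulses     # saturated: pulse count resolves brightness

print(fuse(0.4, 2))    # 0.4  (linear region)
print(fuse(1.0, 40))   # 10.0 (HDR extension beyond saturation)
```

    The combined response stays linear in photocurrent across both regimes, which is the property the abstract highlights relative to logarithmic HDR pixels.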

    Performance evaluation and limitations of a vision system on a reconfigurable/programmable chip

    This paper presents a survey of the characteristics of a vision system implemented on a reconfigurable/programmable chip (FPGA). System limitations and performance have been evaluated in order to derive specifications and constraints for further vision system synthesis. The system reported here has a conventional architecture: it consists of a central microprocessor (CPU) and the necessary peripheral elements for data acquisition, data storage and communications. It has been designed to stand alone, but a link to the programming and debugging tools running on a digital host (PC) is provided. In order to alleviate the computational load of the central microprocessor, we have designed a visual co-processor in charge of the low-level image processing tasks. It operates autonomously, commanded by the CPU, as another system peripheral. The complete system, without the sensor, has been implemented in a single reconfigurable chip as a SOPC. The incorporation of a dedicated visual co-processor, with specific circuitry for low-level image processing acceleration, enhances the system throughput, outperforming conventional processing schemes. However, time-multiplexing of the dedicated hardware remains a limiting factor for the achievable peak computing power. We have quantified this effect and sketched possible solutions, such as replication of the specific image processing hardware. © J.UCS. This work has been partially funded by project FIT-330100-2005-162 of the Spanish Ministry of Industry, Tourism and Commerce. The work of F. J. Sánchez-Fernández is supported by a grant of the Spanish Ministry of Education and Science. Peer reviewed.
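    The time-multiplexing bottleneck and the replication remedy can be illustrated with a back-of-the-envelope model: one shared co-processor serializes the per-pixel work, and replicating the hardware divides that serial time. The numbers below are illustrative assumptions, not the paper's measurements:

```python
# Toy model of the time-multiplexing limit: a single co-processor must visit
# every pixel in turn, so frame time scales with pixels * ops; replicating
# the image-processing hardware divides the serialized cycle count.
# Resolution, op count and clock are illustrative assumptions.

def frame_time_us(pixels: int, ops_per_pixel: int,
                  clock_mhz: float, replicas: int = 1) -> float:
    """Processing time per frame when `replicas` units share the work."""
    cycles = pixels * ops_per_pixel / replicas
    return cycles / clock_mhz  # cycles / (cycles per microsecond)

qvga = 320 * 240
single = frame_time_us(qvga, 10, clock_mhz=100.0, replicas=1)
quad = frame_time_us(qvga, 10, clock_mhz=100.0, replicas=4)
print(single, quad)  # 7680.0 1920.0 -> 4x replication, ~4x speed-up
```

    The model ignores memory-bandwidth and routing costs, which in practice cap how far replication helps on a given FPGA.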

    Digital processor array implementation aspects of a 3D multi-layer vision architecture

    Paper presented at the 12th CNNA, held in Berkeley (USA), 3-5 February 2010. Technological aspects of the 3D integration of a multi-layer combined mixed-signal and digital sensor-processor array chip are described. 3D integration raises questions of signal routing, power distribution and heat dissipation, which are considered systematically for the digital processor array layer as part of the multi-layer structure. We have developed a linear-programming-based evaluation system to identify the proper architecture and its parameters. The work is supported by the Eutecus ONR-BAA contract no. N00173-08-C-4005, VISCUBE project. Peer reviewed.

    Focal-plane generation of multi-resolution and multi-scale image representation for low-power vision applications

    Paper presented at XXXVII Infrared Technology and Applications, held in Orlando (USA), 25 April 2011. Early vision stages represent a considerably heavy computational load: a huge amount of data needs to be processed under strict timing and power requirements. Conventional architectures usually fail to meet the specifications in many application fields, especially when autonomous vision-enabled devices are to be implemented, as in lightweight UAVs, robotics or wireless sensor networks. A bio-inspired architectural approach can be employed, consisting of a hierarchical division of the processing chain that conveys the highest computational demand to the focal plane. There, distributed processing elements, concurrent with the photosensitive devices, influence the image capture and generate a pre-processed representation of the scene where only the information of interest for subsequent stages remains. These focal-plane operators are implemented by analog building blocks, which may individually be somewhat imprecise, but which as a whole render the appropriate image processing very efficiently. As a proof of concept, we have developed a 176x144-pixel smart CMOS imager that delivers lighter but enriched representations of the scene. Each pixel of the array contains a photosensor and some switches and weighted paths allowing reconfigurable resolution and spatial filtering. An energy-based image representation is also supported. These functionalities greatly simplify the operation of the subsequent digital processor implementing the high-level logic of the vision algorithm. The resulting figures, 5.6 mW @ 30 fps, permit the integration of the smart image sensor with a wireless interface module (Imote2 from Memsic Corp.) for the development of vision-enabled WSN applications. This work is partially funded by the Andalusian regional government (Junta de Andalucía-CICE) through project 2006-TIC-2352 and the Spanish Ministry of Science (MICINN) through project TEC2009-11812, co-funded by the European Regional Development Fund, and also supported by the Office of Naval Research (USA) through grant N000141110312. Peer reviewed.
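    The reconfigurable-resolution read-out effectively averages neighborhoods of photosensors, halving resolution per step. That behavior can be emulated off-chip with simple block averaging; the 176x144 array size matches the imager above, everything else is illustrative:

```python
import numpy as np

# Off-chip emulation of the focal-plane multi-resolution read-out: averaging
# non-overlapping blocks of pixels mimics the pixel-level switches that short
# neighboring photosensors together. Image content here is synthetic.

def downscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping factor x factor blocks (focal-plane style)."""
    h, w = img.shape
    return img.reshape(h // factor, factor,
                       w // factor, factor).mean(axis=(1, 3))

img = np.arange(176 * 144, dtype=float).reshape(176, 144)
pyramid = [downscale(img, f) for f in (1, 2, 4, 8)]
print([p.shape for p in pyramid])
# [(176, 144), (88, 72), (44, 36), (22, 18)]
```

    Each pyramid level carries a quarter of the data of the previous one, which is precisely how the focal plane lightens the load on the digital processor downstream.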

    On-site forest fire smoke detection by low-power autonomous vision sensor

    Paper presented at the VI International Conference on Forest Fire Research, held in Coimbra (Portugal), 15-18 November 2010. Early detection plays a crucial role in preventing forest fires from spreading. Wireless vision sensor networks deployed throughout high-risk areas can perform fine-grained surveillance and thereby very early detection and precise location of forest fires. One of the fundamental requirements that must be met at the network nodes is reliable low-power on-site image processing. It greatly simplifies the communication infrastructure of the network, as only alarm signals instead of complete images are transmitted, thus anticipating a very competitive cost. As a first approximation to fulfilling such a requirement, this paper reports the results achieved from field tests carried out in collaboration with the Andalusian Fire-Fighting Service (INFOCA). Two controlled burns of forest debris were carried out (www.youtube.com/user/vmoteProject). Smoke was successfully detected on-site by the EyeRIS v1.2, a general-purpose autonomous vision system built by AnaFocus Ltd., in which a vision algorithm was programmed. No false alarm was triggered despite the significant motion other than smoke present in the scene. Finally, as a further step, we describe the preliminary laboratory results obtained from a prototype vision chip which implements, at very low energy cost, some image processing primitives oriented to environmental monitoring. This work is funded by CICE/JA and MICINN (Spain) through projects 2006-TIC-2352 and TEC2009-11812, respectively. Peer reviewed.

    Real-time single-exposure ROI-driven HDR adaptation based on focal-plane reconfiguration

    Proc. SPIE 9400, Real-Time Image and Video Processing 2015. This paper describes a prototype smart imager capable of adjusting the photo-integration time of multiple regions of interest concurrently, automatically and asynchronously within a single exposure period. The operation is supported by two intertwined photo-diodes at pixel level and two digital registers at the periphery of the pixel matrix. These registers divide the focal plane into independent regions within which automatic concurrent adjustment of the integration time takes place. At pixel level, one of the photo-diodes senses the pixel value itself whereas the other, in collaboration with its counterparts in a particular ROI, senses the mean illumination of that ROI. Additional circuitry interconnecting both photo-diodes enables the asynchronous adjustment of the integration time for each ROI according to this sensed illumination. The sensor can be reconfigured on the fly according to the requirements of a vision algorithm. Spain: MINECO (FEDER) TEC2012-38921-C02, IPT-2011-1625-430000, IPC-20111009 (CDTI); Junta de Andalucía TIC 2338-2013 (CEIC).
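    The per-ROI adaptation above amounts to ending integration once the region's mean illumination has delivered a target charge, so bright ROIs get short effective exposures and dark ROIs long ones, all within a single exposure period. A minimal behavioral sketch (the target voltage, cap and current units are illustrative assumptions, not the chip's values):

```python
# Behavioral sketch of per-ROI integration-time adaptation: each region
# integrates until its mean level reaches a target, capped by the global
# exposure window. Constants are illustrative, not the chip's values.

V_TARGET = 0.8   # desired mean integrated level at cut-off (assumed)

def roi_integration_time(mean_photocurrent: float,
                         t_max: float = 10e-3) -> float:
    """Integration time bringing the ROI mean to V_TARGET, capped at t_max.
    Assumes V = I * t with the pixel capacitance folded into the units."""
    if mean_photocurrent <= 0:
        return t_max
    return min(V_TARGET / mean_photocurrent, t_max)

# A bright ROI (400 V/s equivalent) stops early; a dark one (50 V/s) runs
# the full 10 ms exposure window.
bright = roi_integration_time(400.0)
dark = roi_integration_time(50.0)
print(bright, dark)
```

    Because each ROI reaches roughly the same mean level, the read-out stays within the linear range everywhere, which is what makes the single-exposure HDR behavior possible.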

    Offset-compensated comparator with full-input range in 150nm FDSOI CMOS-3d technology

    Paper presented at LASCAS, held in Iguazú (Brazil), 24-26 February 2010. This paper addresses an offset-compensated comparator with full input range in the 150 nm FDSOI CMOS-3D technology from MIT Lincoln Laboratory. The comparator discussed here is part of a vision system. Its architecture is that of a self-biased inverter with dynamic offset correction. At simulation level, the comparator can reach a resolution of 0.1 mV in an area of approximately 220 µm², with a response time of less than 40 ns and a static power dissipation of 1.125 µW. Peer reviewed.

    Xerotolerance: a new property in the Exiguobacterium genus

    The highly xerotolerant bacterium classified as Exiguobacterium sp. Helios, isolated from a solar panel in Spain, showed a close relationship to Exiguobacterium sibiricum 255-15, isolated from Siberian permafrost. Xerotolerance has not previously been described as a characteristic of the extremely diverse Exiguobacterium genus, but both strains Helios and 255-15 showed higher xerotolerance than that described for the reference xerotolerant model strain Deinococcus radiodurans. Significant changes observed in cell morphology after desiccation suggest that the structure of the cell surface plays an important role in xerotolerance. Apart from its remarkable resistance to desiccation, the Exiguobacterium sp. Helios strain shows several polyextremophilic characteristics that make it a promising chassis for biotechnological applications. Exiguobacterium sp. Helios cells produce selenium nanoparticles in the presence of selenite, linked to its resistance mechanism. Using the Lactobacillus plasmid pRCR12, which harbors a cherry marker, we have developed a transformation protocol for the Exiguobacterium sp. Helios strain, the first time that a bacterium of the Exiguobacterium genus has been genetically modified. The comparison of the Exiguobacterium sp. Helios and E. sibiricum 255-15 genomes revealed several interesting similarities and differences. Both strains contain a complete set of competence-related DNA transformation genes, suggesting that they might have natural competence, and an incomplete set of genes involved in sporulation; moreover, these strains do not produce spores, suggesting that these genes might be involved in xerotolerance.