
    A Bio-Inspired Vision Sensor With Dual Operation and Readout Modes

    This paper presents a novel event-based vision sensor with two operation modes, intensity mode and spatial contrast detection, which can be combined with two different readout approaches: pulse density modulation and time-to-first-spike. The sensor is conceived to be a node of a smart camera network made up of several independent and autonomous nodes that send information to a central one. The user can toggle the operation and readout modes with two control bits. The sensor has low latency (below 1 ms under average illumination conditions), low current consumption (19 mA), and reduced data flow when detecting spatial contrast. A new approach to compute the spatial contrast, based on inter-pixel event communication and less prone to mismatch effects than diffusive networks, is proposed. The sensor was fabricated in the standard AMS 0.35-µm 4M2P process. A detailed system-level description and experimental results are provided. Office of Naval Research (USA) N00014-14-1-0355; Ministerio de Economía y Competitividad TEC2012-38921-C02-02, P12-TIC-2338, IPT-2011-1625-43000
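    As a rough illustration of the two readout approaches named above, the sketch below models pulse density modulation (the spike count in a readout window grows with the encoded value) and time-to-first-spike (a single spike whose latency shrinks as the value grows). The rate, window, and latency parameters are illustrative assumptions, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pulse_density_readout(value, window_s=1e-3, max_rate_hz=10_000.0):
    """Pulse-density modulation: the number of spikes emitted in a readout
    window grows with the encoded value (intensity or spatial contrast)."""
    n_spikes = rng.poisson(value * max_rate_hz * window_s)
    return np.sort(rng.uniform(0.0, window_s, n_spikes))

def time_to_first_spike_readout(value, t_max_s=1e-3):
    """Time-to-first-spike: stronger values fire earlier, so one spike
    latency per pixel encodes the whole value."""
    return t_max_s * (1.0 - float(np.clip(value, 0.0, 1.0)))

value = 0.8  # normalized pixel output in [0, 1] (assumed scale)
print(len(pulse_density_readout(value)), "spikes in the readout window")
print(f"first spike after {time_to_first_spike_readout(value) * 1e3:.2f} ms")
```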

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
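    The events described above are commonly modeled as tuples (x, y, t, polarity), emitted whenever a pixel's log-intensity has changed by more than a contrast threshold since its last event. The following minimal, noise-free per-pixel sketch of that standard generation model uses an assumed threshold and a synthetic signal purely for illustration.

```python
import numpy as np
from collections import namedtuple

# Each event encodes where, when, and in which direction brightness changed.
Event = namedtuple("Event", ["x", "y", "t", "polarity"])

def generate_events(log_intensity, timestamps, x, y, threshold=0.2):
    """Emit an event whenever the log-intensity has moved by more than
    `threshold` since the pixel's last event (idealized model)."""
    events, ref = [], log_intensity[0]
    for t, log_i in zip(timestamps[1:], log_intensity[1:]):
        while abs(log_i - ref) >= threshold:
            polarity = 1 if log_i > ref else -1
            ref += polarity * threshold
            events.append(Event(x, y, t, polarity))
    return events

# One pixel watching a brightening-then-dimming signal over one second.
t = np.linspace(0.0, 1.0, 1000)
log_i = np.where(t < 0.5, 2.0 * t, 2.0 * (1.0 - t))
print(len(generate_events(log_i, t, x=10, y=20)), "events from one pixel")
```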

    A sub-mW IoT-endnode for always-on visual monitoring and smart triggering

    This work presents a fully-programmable Internet of Things (IoT) visual sensing node that targets sub-mW power consumption in always-on monitoring scenarios. The system features a spatial-contrast 128x64 binary pixel imager with focal-plane processing. The sensor, when working at its lowest power mode (10 µW at 10 fps), provides as output the number of changed pixels. Based on this information, a dedicated camera interface, implemented on a low-power FPGA, wakes up an ultra-low-power parallel processing unit to extract context-aware visual information. We evaluate the smart sensor on three always-on visual triggering application scenarios. Triggering accuracy comparable to RGB image sensors is achieved at nominal lighting conditions, while consuming an average power between 193 µW and 277 µW, depending on context activity. The digital sub-system is extremely flexible, thanks to a fully-programmable digital signal processing engine, but still achieves 19x lower power consumption compared to MCU-based cameras with significantly lower on-board computing capabilities. Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal
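    The wake-up path described above, where the camera interface decides from the changed-pixel count whether to rouse the processing unit, reduces to a simple threshold check. The 2% activity fraction and the example counts below are assumptions for illustration, not values reported in the paper.

```python
def should_wake_up(changed_pixels, total_pixels=128 * 64, activity_fraction=0.02):
    """Wake the processing unit only when enough pixels changed in the frame.
    The activity_fraction of 2% is an assumed example threshold."""
    return changed_pixels >= activity_fraction * total_pixels

# Example frames at 10 fps: only the last count crosses the wake-up threshold.
for changed in (12, 80, 400):
    print(changed, "->", "wake" if should_wake_up(changed) else "stay asleep")
```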

    Lightning Imaging Sensor (LIS) for the Earth Observing System

    Scientific objectives and instrument characteristics are given for a calibrated optical LIS, designed to acquire and study the distribution and variability of total lightning on a global basis, both for the EOS and for the Tropical Rainfall Measuring Mission (TRMM). The LIS can be traced to a lightning mapper sensor planned for flight on the GOES meteorological satellites. The LIS consists of a staring imager optimized to detect and locate lightning. It will detect and locate lightning with storm-scale resolution (i.e., 5 to 10 km) over a large region of the Earth's surface along the orbital track of the satellite, mark the time of occurrence of the lightning, and measure the radiant energy. The LIS will have a nearly uniform 90% detection efficiency within the area viewed by the sensor, and will detect intracloud and cloud-to-ground discharges during day and night conditions. It will also monitor individual storms and storm systems long enough to obtain a measure of the lightning flashing rate while they are within its field of view. The LIS attributes include low cost, low weight and power, low data rate, and important science. The LIS will support studies of the hydrological cycle, general circulation, and sea surface temperature variations, along with examinations of the electrical coupling of thunderstorms with the ionosphere and magnetosphere, and observations and modeling of the global electric circuit.

    A spatial contrast retina with on-chip calibration for neuromorphic spike-based AER vision systems

    We present a 32×32 pixel contrast retina microchip that provides its output as an address event representation (AER) stream. Spatial contrast is computed as the ratio between the pixel photocurrent and a local average of neighboring pixels obtained with a diffuser network. This current-based computation produces a significant amount of mismatch between neighboring pixels, because the currents can be as low as a few picoamperes. Consequently, a compact calibration circuit has been included to trim each pixel. Measurements show a reduction in mismatch standard deviation from 57% to 6.6% (indoor light). The paper describes the design of the pixel with its spatial contrast computation and calibration sections. About one third of the pixel area is used for a 5-bit calibration circuit. The pixel area is 58 µm × 56 µm, while its current consumption is about 20 nA at a 1-kHz event rate. Extensive experimental results are provided for a prototype fabricated in a standard 0.35-µm CMOS process. Gobierno de España TIC2003-08164-C03-01, TEC2006-11730-C03-01; European Union IST-2001-3412
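    A minimal array-level sketch of the contrast computation described above: each pixel's photocurrent is divided by the average of its four neighbours (standing in for the diffuser-network average), after applying a per-pixel 5-bit gain trim. The trim step, gain model, and current values are assumptions for illustration, not the chip's actual calibration scheme.

```python
import numpy as np

def spatial_contrast(photocurrent, trim_codes, trim_step=0.02):
    """Per-pixel spatial contrast: trimmed photocurrent divided by the
    average of the 4-connected neighbours (illustrative trim model)."""
    gain = 1.0 + trim_step * (trim_codes - 16)          # 5-bit code, 16 = no correction
    i = photocurrent * gain
    padded = np.pad(i, 1, mode="edge")
    neighbour_avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return i / neighbour_avg

rng = np.random.default_rng(0)
photo = rng.uniform(1e-12, 5e-12, size=(32, 32))        # picoampere-level currents
trims = rng.integers(0, 32, size=(32, 32))              # 5-bit calibration codes
print(spatial_contrast(photo, trims).shape)
```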

    Flat-top TIRF illumination boosts DNA-PAINT imaging and quantification

    Super-resolution (SR) techniques have extended the optical resolution down to a few nanometers. However, quantitative treatment of SR data remains challenging due to its complex dependence on a manifold of experimental parameters. Among the different SR variants, DNA-PAINT is relatively straightforward to implement, since it achieves the necessary 'blinking' without the use of rather complex optical or chemical activation schemes. However, it still suffers from image and quantification artifacts caused by inhomogeneous optical excitation. Here we demonstrate that several experimental challenges can be alleviated by introducing a segment-wise analysis approach and ultimately overcome by implementing a flat-top illumination profile for TIRF microscopy using a commercially available beam-shaping device. The improvements with regard to homogeneous spatial resolution and precise kinetic information over the whole field of view were quantitatively assayed using DNA origami and cell samples. Our findings open the door to high-throughput DNA-PAINT studies with thus far unprecedented accuracy for quantitative data interpretation.
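    The segment-wise analysis idea mentioned above can be pictured by tiling the field of view and comparing per-tile statistics, which makes excitation inhomogeneity directly visible. The grid size and the synthetic Gaussian excitation profile below are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def segment_means(image, n_seg=4):
    """Split the field of view into an n_seg x n_seg grid and return the
    mean intensity of each tile (assumed grid size, for illustration)."""
    h, w = image.shape
    tiles = image[: h - h % n_seg, : w - w % n_seg].reshape(
        n_seg, h // n_seg, n_seg, w // n_seg)
    return tiles.mean(axis=(1, 3))

# A synthetic Gaussian TIRF profile shows strong tile-to-tile variation,
# which a flat-top profile would remove.
yy, xx = np.mgrid[0:256, 0:256]
gaussian = np.exp(-(((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 60.0 ** 2)))
print(np.round(segment_means(gaussian), 2))
```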