
    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of each change. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that challenge traditional cameras, such as low-latency, high-speed, and high-dynamic-range applications. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to exploit the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
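The event representation described above (time, location, and sign of a brightness change) can be sketched in a few lines. This is an illustrative data structure only, not the API of any particular event-camera SDK; accumulating events into a signed frame is one common first step for reusing frame-based algorithms.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t_us: int      # timestamp in microseconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for brightness increase, -1 for decrease

def accumulate(events, width, height):
    """Integrate an event stream into a signed frame by summing polarities
    at each pixel over the window covered by the stream."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.polarity
    return frame

# Three events on a 2x1 sensor: two increments at (0,0), one decrement at (1,0).
stream = [Event(10, 0, 0, +1), Event(12, 0, 0, +1), Event(15, 1, 0, -1)]
frame = accumulate(stream, width=2, height=1)
# frame == [[2, -1]]
```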

    Automatic refocus and feature extraction of single-look complex SAR signatures of vessels

    In recent years, spaceborne synthetic aperture radar (SAR) technology has been considered as a complement to cooperative vessel surveillance systems thanks to its imaging capabilities. In this paper, a processing chain is presented to explore the potential of using basic stripmap single-look complex (SLC) SAR images of vessels for the automatic extraction of their dimensions and heading. Local autofocus is applied to the vessels' SAR signatures to compensate for blurring artefacts in the azimuth direction, improving both their image quality and their estimated dimensions. For the heading, the orientation ambiguities of the vessels' SAR signatures are resolved using the direction of their ground-range velocity, obtained from the analysis of their Doppler spectra. Preliminary results are provided using five vessel signatures from SLC RADARSAT-2 stripmap images. These results show good agreement with their respective ground-truth data from Automatic Identification System (AIS) records at the time of the acquisitions.
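The Doppler-based disambiguation mentioned above can be illustrated with a minimal sketch (not the authors' processing chain): a nonzero radial velocity shifts a target's azimuth spectrum away from zero Doppler, so the sign of the spectral centroid indicates the direction of the ground-range velocity. Function and parameter names here are hypothetical.

```python
import numpy as np

def doppler_centroid_sign(azimuth_signal, prf):
    """Return +1 or -1 depending on which side of zero Doppler the
    power-weighted spectral centroid of the complex azimuth signal falls."""
    spectrum = np.fft.fftshift(np.fft.fft(azimuth_signal))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(azimuth_signal), d=1.0 / prf))
    power = np.abs(spectrum) ** 2
    centroid = np.sum(freqs * power) / np.sum(power)
    return 1 if centroid >= 0 else -1

# A complex exponential at +100 Hz sampled at a PRF of 1000 Hz
# has a positive Doppler centroid:
n = np.arange(256)
sig = np.exp(2j * np.pi * 100 * n / 1000)
sign = doppler_centroid_sign(sig, prf=1000)
```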

    Bimodal waveguide interferometer RI sensor fabricated on low-cost polymer platform

    A refractive index sensor based on a bimodal waveguide interferometer is demonstrated on a low-cost polymer platform for the first time. Unlike conventional interferometers, which rely on interference between light from two separate arms, bimodal waveguide interferometers exploit the interference between two different modes within a single waveguide. Since the first higher-order mode has a wide evanescent tail that interacts with the external environment, the interferometer can reach high sensitivity. Instead of the commonly used vertical bimodal structure, a lateral bimodal waveguide is adopted to simplify fabrication. A unique offset between the centers of the single-mode waveguide and the bimodal waveguide is designed to excite the two modes with equal power, which yields maximum fringe visibility. The bimodal waveguide interferometer is fabricated on an optical polymer (Ormocore) that is transparent at both infrared and visible wavelengths, using a UV-based soft-imprint technique that is simple and reproducible. The bulk sensitivity of the fabricated interferometer sensor, with a 5 mm sensing length, is characterized using sodium chloride solutions of different mass concentrations. The measured sensitivity is 316π rad/RIU, and the extinction ratio can reach 18 dB.
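As a rough illustration of how the reported bulk sensitivity is used: in a phase-sensitive interferometric sensor, the phase shift is approximately the sensitivity multiplied by the refractive-index change of the analyte. The numbers below come from the abstract; the function is an illustrative sketch, not the authors' calibration code.

```python
import math

SENSITIVITY_RAD_PER_RIU = 316 * math.pi  # bulk sensitivity reported in the abstract

def phase_shift(delta_n):
    """Approximate interferometer phase change (rad) for a
    refractive-index change delta_n (in RIU)."""
    return SENSITIVITY_RAD_PER_RIU * delta_n

# A 1e-4 RIU index change (roughly a dilute NaCl concentration step)
# produces a phase shift of about 0.099 rad:
dphi = phase_shift(1e-4)
```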

    Fluorescence monitoring of capillary electrophoresis separation in a lab-on-a-chip with monolithically integrated waveguides

    Femtosecond-laser-written optical waveguides were monolithically integrated into a commercial lab-on-a-chip to intersect a microfluidic channel. Laser excitation through these waveguides confines the excitation window to a width of 12 µm, enabling high-spatial-resolution monitoring of different fluorescent analytes during their migration/separation in the microfluidic channel by capillary electrophoresis. Wavelength-selective monitoring of the on-chip separation of fluorescent dyes is implemented as a proof of principle. We envision that well-controlled microfluidic plug formation, waveguide excitation, and a low limit of detection will enable monitoring of extremely small quantities with high spatial resolution.

    Digital implementation of the cellular sensor-computers

    Two different kinds of cellular sensor-processor architectures are used today in various applications. The first is the traditional sensor-processor architecture, where the sensor and processor arrays are mapped onto each other. The second is the foveal architecture, in which a small active fovea navigates within a large sensor array. This second architecture is introduced and compared here. Both architectures can be implemented with analog or digital processor arrays. The efficiency of the different implementation types is analyzed as a function of the CMOS technology used. It turns out that the finer the technology, the more favorable digital implementation becomes relative to analog.

    R&D Paths of Pixel Detectors for Vertex Tracking and Radiation Imaging

    This report reviews current trends in the R&D of semiconductor pixellated sensors for vertex tracking and radiation imaging. It identifies the requirements of future HEP experiments at colliders and the needed technological breakthroughs, and highlights the relation to radiation detection and imaging applications in other fields of science. Comment: 17 pages, 2 figures; submitted to the European Strategy Preparatory Group.

    A review of advances in pixel detectors for experiments with high rate and radiation

    The Large Hadron Collider (LHC) experiments ATLAS and CMS have established hybrid pixel detectors as the instrument of choice for particle tracking and vertexing in high-rate and high-radiation environments, as they operate close to the LHC interaction points. With the High Luminosity-LHC upgrade now in sight, for which the tracking detectors will be completely replaced, new generations of pixel detectors are being devised. They must address enormous challenges in terms of data throughput and radiation levels, ionizing and non-ionizing, that harm the sensing and readout parts of pixel detectors alike. Advances in microelectronics and microprocessing technologies now enable large-scale detector designs with unprecedented performance in measurement precision (space and time), radiation-hard sensors and readout chips, hybridization techniques, lightweight supports, and fully monolithic approaches to meet these challenges. This paper reviews the worldwide effort on these developments. Comment: 84 pages with 46 figures; review article for submission to Rep. Prog. Phys.

    Communication channel analysis and real time compressed sensing for high density neural recording devices

    Next-generation neural recording and Brain-Machine Interface (BMI) devices call for high-density or distributed systems with more than 1000 recording sites. As the recording-site density grows, the device generates data at a rate of several hundred megabits per second (Mbps). Transmitting such large amounts of data induces significant power consumption and heat dissipation in the implanted electronics. Facing these constraints, efficient on-chip compression techniques become essential to reducing the power consumption of implanted systems. This paper analyzes the communication channel constraints for high-density neural recording devices, then quantifies the improvement achievable on the communication channel using efficient on-chip compression methods. Finally, it describes a Compressed Sensing (CS) based system that can reduce the data rate by more than a factor of 10 while using power on the order of a few hundred nW per recording channel.
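The core idea of the on-chip compression step can be sketched as follows: instead of transmitting N raw samples per window, the device transmits M << N random projections of the signal, which a receiver can later use for sparse reconstruction. The matrix type and dimensions below are illustrative assumptions, not the paper's hardware parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512  # raw samples per window per channel (illustrative)
M = 32   # compressed measurements, i.e. a 16x rate reduction

# Bernoulli +/-1 sensing matrix: cheap to implement in hardware,
# since each measurement is a sum/difference of samples.
phi = rng.choice([-1.0, 1.0], size=(M, N))

def compress(x):
    """On-chip CS step: M inner products replace N raw samples."""
    return phi @ x

x = rng.standard_normal(N)  # stand-in for one window of a neural signal
y = compress(x)
# y has 32 entries instead of 512, a 16x reduction in transmitted data
```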