
    Flow-Based Visual Stream Compression for Event Cameras

    As the use of neuromorphic, event-based vision sensors expands, the need to compress their output streams has increased. While their operating principle ensures that event streams are spatially sparse, their high temporal resolution can produce high data rates, depending on scene dynamics. For systems operating in communication-bandwidth-constrained and power-constrained environments, these streams must be compressed before transmission to a remote receiver. We therefore introduce a flow-based method for the real-time, asynchronous compression of event streams as they are generated. The method leverages real-time optical flow estimates to predict future events without transmitting them, thereby drastically reducing the amount of data transmitted. The flow-based compression is evaluated with a variety of metrics, including the spatiotemporal distance between event streams. In the evaluation configuration used, the method achieves an average compression ratio of 2.81 across a variety of event-camera datasets, with a median temporal error of 0.48 ms and an average spatiotemporal event-stream distance of 3.07. When combined with LZMA compression for non-real-time applications, the method achieves state-of-the-art average compression ratios ranging from 10.45 to 17.24. Additionally, we demonstrate that the proposed prediction algorithm is capable of real-time, low-latency event prediction.
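
    To make the prediction mechanism concrete, the following is a minimal Python sketch of the core idea: events are propagated along a per-pixel optical-flow estimate, and only events the receiver could not have predicted are transmitted. The event layout, flow-field representation, tolerance, and function names are assumptions chosen for illustration, not the authors' implementation.

    # Minimal sketch, assuming events are (x, y, t, p) tuples and flow[y][x]
    # holds the local optical-flow estimate (vx, vy) in pixels per second.
    def predict_event(event, flow, dt):
        """Propagate an event along its local flow vector by dt seconds."""
        x, y, t, p = event
        vx, vy = flow[y][x]
        return (int(round(x + vx * dt)), int(round(y + vy * dt)), t + dt, p)

    def compress(events, flow, dt, tol=1.5):
        """Transmit only events the receiver cannot regenerate from predictions."""
        sent, predicted = [], []
        for x, y, t, p in events:
            predicted = [q for q in predicted if t - q[2] <= dt]  # drop stale predictions
            if any(abs(x - qx) <= tol and abs(y - qy) <= tol and p == qp
                   for qx, qy, qt, qp in predicted):
                continue                      # predictable: do not transmit
            sent.append((x, y, t, p))
            predicted.append(predict_event((x, y, t, p), flow, dt))
        return sent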

    Neutron-Induced, Single-Event Effects on Neuromorphic Event-based Vision Sensor: A First Step Towards Space Applications

    This paper studies the suitability of neuromorphic event-based vision cameras for spaceflight and the effects of neutron radiation on their performance. Neuromorphic event-based vision cameras are novel sensors that implement asynchronous, clockless data acquisition, reporting changes in illuminance over a dynamic range greater than 120 dB with sub-millisecond temporal precision. These sensors have huge potential for space applications, as they provide an extremely sparse representation of visual dynamics while removing redundant information, thereby conforming to low-resource requirements. An event-based sensor was irradiated with wide-spectrum neutrons at the Los Alamos Neutron Science Center and the radiation-induced effects were classified. We found that the sensor recovered very quickly during irradiation, with noise event bursts highly correlated with the source macro-pulses. No significant differences were observed in the number of events induced at different angles of incidence, but significant differences were found in the spatial structure of the noise events across angles. The results show that event-based cameras are capable of functioning in a space-like, radiative environment with a signal-to-noise ratio of 3.355, and that radiation-induced noise does not affect event-level computation. We also introduce the Event-based Radiation-Induced Noise Simulation Environment (Event-RINSE), a simulation environment based on the noise modelling we conducted that can inject the effects of radiation-induced noise, derived from the collected data, into any stream of events, so that developed code can be verified to operate in a radiative environment. To the best of our knowledge, this is the first time such an analysis of neutron-induced noise has been performed on a neuromorphic vision sensor, and this study shows the advantage of using such sensors for space applications.
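
    As a rough illustration of how a tool like Event-RINSE can be used, the sketch below injects bursts of spurious events around each source macro-pulse time into an existing event stream. The uniform burst model and the interface are assumptions for illustration; the actual simulator is driven by the noise model fitted to the collected data.

    # Hedged sketch of radiation-noise injection; the burst model and interface
    # are assumptions, not the published Event-RINSE implementation.
    import random

    def inject_noise(events, burst_times, events_per_burst, burst_width_s,
                     width, height):
        """events: list of (t, x, y, polarity).  Add a burst of spurious events
        around each macro-pulse time and return the merged, time-sorted stream."""
        noise = []
        for t0 in burst_times:
            for _ in range(events_per_burst):
                t = t0 + random.uniform(0.0, burst_width_s)
                noise.append((t, random.randrange(width), random.randrange(height),
                              random.choice((0, 1))))
        return sorted(events + noise)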

    Real-time high speed motion prediction using fast aperture-robust event-driven visual flow

    Optical flow is a crucial component of the feature space for early visual processing of dynamic scenes, especially in new applications such as self-driving vehicles, drones, and autonomous robots. Dynamic vision sensors are well suited to such applications because of their asynchronous, sparse, and temporally precise representation of visual dynamics. Many algorithms proposed for computing visual flow from these sensors suffer from the aperture problem, as the direction of the estimated flow is governed by the curvature of the object rather than the true motion direction. Methods that overcome this problem through temporal windowing under-utilize the precise temporal nature of these sensors. In this paper, we propose a novel multi-scale, plane-fitting-based visual flow algorithm that is robust to the aperture problem and is also computationally fast and efficient. Our algorithm performs well in many scenarios, ranging from a fixed camera recording simple geometric shapes to real-world settings such as a camera mounted on a moving car, and can perform event-by-event motion estimation of objects in the scene, allowing predictions of up to 500 ms, i.e., the equivalent of 10 to 25 frames with traditional cameras.
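
    The sketch below shows the basic single-scale building block behind plane-fitting flow: fitting a plane t = a*x + b*y + c to the timestamps of recent neighbouring events and reading the velocity off the plane gradient. It is a hedged illustration of the general technique; the multi-scale aperture correction that distinguishes the paper's method is not shown.

    # Single-scale plane fit on the surface of active events (illustrative only;
    # the paper's multi-scale, aperture-robust correction is not reproduced).
    import numpy as np

    def local_flow(neighbour_events):
        """neighbour_events: array of (x, y, t) rows near the current event.
        Fit t = a*x + b*y + c by least squares; the plane gradient gives the
        normal-flow velocity in pixels per second."""
        xy1 = np.column_stack([neighbour_events[:, 0],
                               neighbour_events[:, 1],
                               np.ones(len(neighbour_events))])
        t = neighbour_events[:, 2]
        (a, b, c), *_ = np.linalg.lstsq(xy1, t, rcond=None)
        g2 = a * a + b * b
        if g2 < 1e-12:
            return 0.0, 0.0               # flat plane: no measurable motion
        return a / g2, b / g2             # (vx, vy)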

    hARMS: A Hardware Acceleration Architecture for Real-Time Event-Based Optical Flow

    Event-based vision sensors produce asynchronous event streams with high temporal resolution based on changes in the visual scene. The properties of these sensors allow for accurate and fast calculation of optical flow as events are generated. Existing solutions for calculating optical flow from event data either fail to capture the true direction of motion due to the aperture problem, do not use the high temporal resolution of the sensor, or are too computationally expensive to run in real time on embedded platforms. In this research, we first present a faster version of our previous algorithm, ARMS (Aperture Robust Multi-Scale flow). The new optimized software version (fARMS) significantly improves throughput on a traditional CPU. Further, we present hARMS, a hardware realization of the fARMS algorithm that allows real-time computation of true flow on low-power, embedded platforms. The proposed hARMS architecture targets hybrid system-on-chip devices and was designed to maximize configurability and throughput. The hardware architecture and fARMS algorithm were developed with asynchronous neuromorphic processing in mind, abandoning the common use of an event frame and instead operating on only a small history of relevant events, allowing latency to scale independently of the sensor resolution. This change in processing paradigm improved the estimation of flow directions by up to 73% compared to the existing method and yielded a demonstrated hARMS throughput of up to 1.21 Mevent/s on the selected benchmark configuration. This throughput enables real-time performance and makes hARMS the fastest known realization of aperture-robust, event-based optical flow to date.
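
    The frame-free processing idea can be sketched in software as a small, bounded history of recent events that is queried and updated event by event, so per-event work does not grow with sensor resolution. This is a hypothetical illustration of the processing paradigm, not the hARMS hardware design or the fARMS code.

    # Illustrative sketch of frame-free, event-by-event processing with a small
    # bounded history (not the hARMS hardware architecture).
    from collections import deque

    class EventHistory:
        def __init__(self, max_events=128, radius=3):
            self.buf = deque(maxlen=max_events)   # bounded, resolution-independent
            self.radius = radius

        def neighbours(self, x, y):
            """Return stored events spatially close to (x, y)."""
            return [(ex, ey, et) for ex, ey, et in self.buf
                    if abs(ex - x) <= self.radius and abs(ey - y) <= self.radius]

        def push(self, x, y, t):
            self.buf.append((x, y, t))

    # Per incoming event: estimate flow from hist.neighbours(x, y), e.g. with a
    # local plane fit, then hist.push(x, y, t).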

    Transport-Independent Protocols for Universal AER Communications

    The emergence of Address-Event Representation (AER) as a general communications method across a large variety of neural devices suggests that they might be made interoperable. If there were a standard AER interface, systems could communicate using native AER signalling, allowing the construction of large-scale, real-time, heterogeneous neural systems. We propose a transport-agnostic AER protocol that permits direct bidirectional event communications between systems over Ethernet, and demonstrate practical implementations that connect a neuromimetic chip, SpiNNaker, both to standard host PCs and to real-time robotic systems. The protocol specifies a header and packet format that supports a variety of packet types while addressing data alignment, time sequencing, and packet compression. Such a model creates a flexible solution either for real-time communications between neural devices or for live spike I/O and visualisation on a host PC. With its standard physical layer and flexible protocol, the specification provides a prototype for AER protocol standardisation that is at once compatible with legacy systems and expressive enough for future very-large-scale neural systems.
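
    As a sketch of what a header-plus-events packet might look like, the code below packs a small header (version, packet type, flags, timestamp base, event count) followed by fixed-size event records. The field widths and layout are assumptions made for illustration and do not reproduce the published specification.

    # Illustrative AER packet layout; field sizes are assumed, not the
    # published specification.
    import struct

    HEADER_FMT = "<BBHIH"   # version, packet type, flags, timestamp base, event count
    EVENT_FMT = "<IH"       # per event: 32-bit address, 16-bit time offset

    def pack_aer_packet(version, ptype, flags, t_base, events):
        """events: list of (address, time_offset) pairs relative to t_base."""
        data = struct.pack(HEADER_FMT, version, ptype, flags, t_base, len(events))
        for addr, dt in events:
            data += struct.pack(EVENT_FMT, addr, dt)
        return data

    def unpack_aer_packet(data):
        hdr = struct.calcsize(HEADER_FMT)
        version, ptype, flags, t_base, count = struct.unpack(HEADER_FMT, data[:hdr])
        ev = struct.calcsize(EVENT_FMT)
        events = [struct.unpack(EVENT_FMT, data[hdr + i * ev: hdr + (i + 1) * ev])
                  for i in range(count)]
        return version, ptype, flags, t_base, events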

    What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    This letter introduces a study that precisely measures what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings currently underlies almost every spike-based model of biological visual systems. The use of images naturally leads to artificial, incorrect, and redundant spike timings and, more importantly, contradicts biological findings indicating that visual processing is massively parallel and asynchronous with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, so the output is optimally sparse in space and time: each pixel is sampled individually and precisely timed only when new (previously unknown) information is available (event-based). This letter uses the high-temporal-resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, especially when precision drops to the conventional range of machine-vision acquisition frequencies (30–60 Hz). The use of information theory to characterize separability between classes at each temporal resolution shows that high-temporal-resolution acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
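
    The degradation experiment can be mimicked with a simple timestamp quantisation that snaps event times to a coarser resolution (e.g. 1/30 s to emulate frame-based acquisition) before classification. This is a hedged sketch of the general idea, not the authors' pipeline.

    # Quantise event timestamps to a coarser temporal resolution (illustrative).
    def quantise_timestamps(events, resolution_s):
        """events: list of (t, x, y, polarity) with t in seconds."""
        return [(int(t / resolution_s) * resolution_s, x, y, p)
                for t, x, y, p in events]

    # Example: compare native sub-millisecond precision with a 30 Hz equivalent.
    # coarse = quantise_timestamps(fine_events, 1.0 / 30.0)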

    Optogenetic therapy: high spatiotemporal resolution and pattern discrimination compatible with vision restoration in non-human primates

    Vision restoration is an ideal medical application for optogenetics, because the eye provides direct optical access to the retina for stimulation. Optogenetic therapy could be used for diseases involving photoreceptor degeneration, such as retinitis pigmentosa or age-related macular degeneration. We describe here the selection, in non-human primates, of a specific optogenetic construct currently being tested in a clinical trial. We used the microbial opsin ChrimsonR and showed that the AAV2.7m8 vector had a higher transfection efficiency than AAV2 in retinal ganglion cells (RGCs) and that ChrimsonR fused to tdTomato (ChR-tdT) was expressed more efficiently than ChrimsonR alone. Light at 600 nm activated RGCs transfected with AAV2.7m8 ChR-tdT from an irradiance of 10^15 photons.cm^-2.s^-1. Vector doses of 5 × 10^10 and 5 × 10^11 vg/eye transfected up to 7000 RGCs/mm^2 in the perifovea, with no significant immune reaction. We recorded RGC responses from stimulus durations of 1 ms upwards. When using the recorded activity to decode stimulus information, we obtained an estimated visual acuity of 20/249, above the level of legal blindness (20/400). These results lay the groundwork for the ongoing clinical trial with the AAV2.7m8 ChR-tdT vector for vision restoration in patients with retinitis pigmentosa.