Asynchronous Convolutional Networks for Object Detection in Neuromorphic Cameras
Event-based cameras, also known as neuromorphic cameras, are bioinspired
sensors able to perceive changes in the scene at high frequency with low power
consumption. Because these sensors became available only very recently, a limited amount of work addresses object detection on them. In this paper we propose two neural network architectures for object detection: YOLE, which integrates the
events into surfaces and uses a frame-based model to process them, and fcYOLE,
an asynchronous event-based fully convolutional network which uses a novel and
general formalization of the convolutional and max pooling layers to exploit
the sparsity of camera events. We evaluate the algorithm with different
extensions of publicly available datasets and on a novel synthetic dataset.Comment: accepted at CVPR2019 Event-based Vision Worksho
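To make the event-to-surface idea concrete, here is a minimal sketch (not the authors' code) of accumulating events into a decaying 2D surface that a frame-based detector could then consume; the function name, the (x, y, t, polarity) event layout, and the decay constant tau are all assumptions:

    import numpy as np

    def events_to_surface(events, height, width, tau=50e-3):
        """Accumulate a batch of events into a 2D surface for a frame-based model.

        Each event is an (x, y, t, polarity) tuple. The most recent event at a
        pixel sets its value, weighted by an exponential decay of the event's
        age, so contours from recent motion dominate the surface.
        """
        surface = np.zeros((height, width), dtype=np.float32)
        if not events:
            return surface
        t_ref = events[-1][2]  # timestamp of the newest event in the batch
        for x, y, t, p in events:  # assumes events are sorted by timestamp t
            weight = np.exp(-(t_ref - t) / tau)
            surface[y, x] = (1.0 if p > 0 else -1.0) * weight
        return surface

A conventional CNN detector can then run on surfaces produced this way, which is the frame-based half of the approach the abstract describes.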
e-TLD: Event-based Framework for Dynamic Object Tracking
This paper presents a long-term object tracking framework with a moving event
camera under general tracking conditions. A first of its kind for these
revolutionary cameras, the tracking framework uses a discriminative
representation for the object with online learning, and detects and re-tracks
the object when it comes back into the field-of-view. One of the key novelties
is the use of an event-based local sliding window technique that tracks
reliably in scenes with cluttered and textured background. In addition,
Bayesian bootstrapping is used to assist real-time processing and boost the
discriminative power of the object representation. When the
object re-enters the field-of-view of the camera, a data-driven, global sliding
window detector locates the object for subsequent tracking. Extensive
experiments demonstrate the ability of the proposed framework to track and
detect arbitrary objects of various shapes and sizes, including dynamic objects
such as a human. This is a significant improvement compared to earlier works
that simply track objects as long as they are visible under simpler background
settings. Using the ground-truth locations of five different objects under three motion settings, namely translation, rotation and 6-DOF, we report quantitative measurements for the event-based tracking framework with critical insights on various performance issues. Finally, a real-time implementation in
C++ highlights tracking ability under scale, rotation, view-point and occlusion
scenarios in a lab setting.
Comment: 11 pages, 10 figures
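As a rough illustration of the local sliding window step (the discriminative scorer is learned online in the paper and is only a placeholder here), the sketch below scores candidate windows around the last known object location on a pre-rendered event surface; all names and parameters are hypothetical:

    import numpy as np

    def local_sliding_window(surface, score_fn, last_box, search_radius=8, step=2):
        """Score candidate windows around the last known object location.

        surface:   2D array rendered from recent events
        score_fn:  discriminative scorer; higher values mean more object-like
        last_box:  (x, y, w, h) of the previous tracking estimate
        """
        x0, y0, w, h = last_box
        best_box, best_score = last_box, -np.inf
        for dy in range(-search_radius, search_radius + 1, step):
            for dx in range(-search_radius, search_radius + 1, step):
                x, y = x0 + dx, y0 + dy
                # Skip candidates that fall outside the sensor array.
                if x < 0 or y < 0 or y + h > surface.shape[0] or x + w > surface.shape[1]:
                    continue
                score = score_fn(surface[y:y + h, x:x + w])
                if score > best_score:
                    best_score, best_box = score, (x, y, w, h)
        return best_box, best_score

The global sliding window detector used for re-detection would scan the full surface in the same way instead of a local neighbourhood.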
Neutron-Induced, Single-Event Effects on Neuromorphic Event-based Vision Sensor: A First Step Towards Space Applications
This paper studies the suitability of neuromorphic event-based vision cameras
for spaceflight, and the effects of neutron radiation on their performance.
Neuromorphic event-based vision cameras are novel sensors that implement
asynchronous, clockless data acquisition, providing information about changes in illuminance over a dynamic range greater than 120 dB with sub-millisecond temporal precision. These sensors have huge potential for space applications as they
provide an extremely sparse representation of visual dynamics while removing
redundant information, thereby conforming to low-resource requirements. An
event-based sensor was irradiated under wide-spectrum neutrons at Los Alamos
Neutron Science Center and the radiation-induced effects were classified. We found that the sensor recovered very quickly during irradiation, with noise-event bursts correlating strongly with the source macro-pulses. No significant
differences were observed between the number of events induced at different
angles of incidence but significant differences were found in the spatial
structure of noise events at different angles. The results show that
event-based cameras are capable of functioning in a space-like, radiative
environment with a signal-to-noise ratio of 3.355. They also show that
radiation-induced noise does not affect event-level computation. We also
introduce the Event-based Radiation-Induced Noise Simulation Environment
(Event-RINSE), a simulation environment based on the noise-modelling we
conducted. Event-RINSE can inject radiation-induced noise, modelled from the collected data, into any stream of events, so that developed code can be verified to operate in a radiative environment. To the best of our knowledge, this is the first time such an analysis of neutron-induced noise has been performed on a neuromorphic vision sensor, and this study shows the advantage of using such sensors for space applications.
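Event-RINSE itself draws on the noise model fitted to the irradiation data; as a loose sketch of the injection step only, the code below merges Poisson-distributed noise events into a signal stream. The uniform spatial and temporal placement is a simplification of the measured statistics, and every name here is an assumption:

    import numpy as np

    def inject_noise(events, noise_rate, height, width, t_start, t_end, rng=None):
        """Merge synthetic radiation-induced noise events into a signal stream.

        events:     iterable of (x, y, t, polarity) tuples
        noise_rate: expected noise events per second, e.g. from a fitted model
        """
        rng = rng or np.random.default_rng()
        # Draw the number of noise events from a Poisson process over the window.
        n_noise = rng.poisson(noise_rate * (t_end - t_start))
        noise = [(int(rng.integers(0, width)), int(rng.integers(0, height)),
                  float(rng.uniform(t_start, t_end)), int(rng.choice([-1, 1])))
                 for _ in range(n_noise)]
        # Return one time-sorted stream of signal plus noise.
        return sorted(list(events) + noise, key=lambda e: e[2])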
Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking
Event cameras, which are asynchronous bio-inspired vision sensors, have shown
great potential in a variety of situations, such as fast motion and low
illumination scenes. However, most of the event-based object tracking methods
are designed for scenarios with untextured objects and uncluttered backgrounds.
There are few event-based object tracking methods that support bounding
box-based object tracking. In this work, we propose an asynchronous Event-based Tracking-by-Detection (ETD) method for generic bounding box-based object tracking. To achieve this goal, we present an
Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion
algorithm, which asynchronously and effectively warps the spatio-temporal
information of asynchronous retinal events to a sequence of ATSLTD frames with
clear object contours. We feed the sequence of ATSLTD frames to the proposed
ETD method to perform accurate and efficient object tracking, which leverages
the high temporal resolution property of event cameras. We compare the proposed
ETD method with seven popular object tracking methods based on conventional or event cameras, and with two variants of ETD. The
experimental results show the superiority of the proposed ETD method in
handling various challenging environments.
Comment: 9 pages, 5 figures
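The ATSLTD conversion is the algorithmic core of this work. Below is a minimal sketch of the linear-time-decay rendering alone, assuming an (x, y, t, polarity) event layout; the adaptive selection of the accumulation window, which the paper contributes, is omitted, and all names are hypothetical:

    import numpy as np

    def time_surface_linear_decay(events, height, width, window):
        """Render events into a time surface with linear time decay.

        Each pixel takes the age-weighted value of its most recent event:
        1.0 at the reference time, falling linearly to 0.0 for events
        that are `window` seconds old or older, keeping contours sharp.
        """
        surface = np.zeros((height, width), dtype=np.float32)
        if not events:
            return surface
        t_ref = events[-1][2]  # newest timestamp serves as the reference
        for x, y, t, _ in events:  # assumes events sorted by timestamp
            surface[y, x] = max(0.0, 1.0 - (t_ref - t) / window)
        return surface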
Long-term object tracking with a moving event camera
This paper presents a long-term object tracking algorithm for event cameras. A first of its kind for these revolutionary cameras, the tracking framework uses a discriminative representation for the object with online learning, and detects and re-tracks the object when it comes back into the field-of-view. One of the key novelties is the use of an event-based local sliding window technique that performs reliably in scenes with cluttered and textured background. In addition, Bayesian bootstrapping is used to assist real-time processing and boost the discriminative power of the object representation. Extensive experiments on a publicly available event camera dataset demonstrate the ability to track and detect arbitrary objects of various shapes and sizes. This is a significant improvement compared to earlier works that simply track objects as long as they are visible under simpler background settings. Specifically, when the object re-enters the field-of-view of the camera, a data-driven, global sliding-window detector locates the object under different view-point conditions for subsequent tracking.