Event-based Asynchronous Sparse Convolutional Networks
Event cameras are bio-inspired sensors that respond to per-pixel brightness
changes in the form of asynchronous and sparse "events". Recently, pattern
recognition algorithms, such as learning-based methods, have made significant
progress with event cameras by converting events into synchronous dense,
image-like representations and applying traditional machine learning methods
developed for standard cameras. However, these approaches discard the spatial
and temporal sparsity inherent in event data, incurring higher computational
complexity and latency. In this work, we present a general
framework for converting models trained on synchronous image-like event
representations into asynchronous models with identical output, thus directly
leveraging the intrinsic asynchronous and sparse nature of the event data. We
show both theoretically and experimentally that this drastically reduces the
computational complexity and latency of high-capacity, synchronous neural
networks without sacrificing accuracy. In addition, our framework has several
desirable characteristics: (i) it exploits spatio-temporal sparsity of events
explicitly, (ii) it is agnostic to the event representation, network
architecture, and task, and (iii) it does not require any train-time change,
since it is compatible with the standard neural networks' training process. We
thoroughly validate the proposed framework on two computer vision tasks: object
detection and object recognition. In these tasks, we reduce the computational
complexity by up to 20 times with respect to high-latency neural networks. At
the same time, we outperform state-of-the-art asynchronous approaches by up to
24% in prediction accuracy.
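The core idea, recomputing only the outputs whose receptive field contains a new event while matching the dense result exactly, can be illustrated with a minimal single-layer sketch in NumPy. The function names and the scalar-delta event model below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def dense_conv2d(image, kernel):
    """Reference dense 2D convolution (zero-padded, stride 1)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

def async_update(image, out, kernel, y, x, delta):
    """Apply one sparse event (change `delta` at pixel (y, x)) and
    update only the outputs inside the kernel's receptive field,
    instead of recomputing the whole dense convolution."""
    image[y, x] += delta
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    h, w = image.shape
    for dy in range(-ph, ph + 1):
        for dx in range(-pw, pw + 1):
            oy, ox = y + dy, x + dx
            if 0 <= oy < h and 0 <= ox < w:
                # the event at (y, x) reaches output (oy, ox) through
                # kernel position (ph - dy, pw - dx)
                out[oy, ox] += delta * kernel[ph - dy, pw - dx]
    return out
```

For a k x k kernel, each event touches at most k * k outputs, which is where the claimed complexity reduction on sparse inputs comes from.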
Asynchronous Corner Tracking Algorithm based on Lifetime of Events for DAVIS Cameras
Event cameras, such as the Dynamic and Active-pixel Vision Sensor (DAVIS),
capture intensity changes in the scene and generate a stream of events in
an asynchronous fashion. The output rate of such cameras can reach up to 10
million events per second in highly dynamic environments. DAVIS cameras use novel
vision sensors that mimic human eyes. Their attractive attributes, such as high
output rate, High Dynamic Range (HDR), and high pixel bandwidth, make them an
ideal solution for applications that require high-frequency tracking. Moreover,
applications that operate in challenging lighting scenarios can exploit the
wide dynamic range of event cameras, i.e., 140 dB compared to the 60 dB of traditional
cameras. In this paper, a novel asynchronous corner tracking method is proposed
that uses both events and intensity images captured by a DAVIS camera. The
Harris algorithm is used to extract features, i.e., frame-corners from
keyframes, i.e., intensity images. Afterward, a matching algorithm is used to
extract event-corners from the stream of events. Events are solely used to
perform asynchronous tracking until the next keyframe is captured. Neighboring
events, within a window size of 5x5 pixels around the event-corner, are used to
calculate the velocity and direction of extracted event-corners by fitting a
2D plane using a randomized Hough transform algorithm. Experimental evaluation
showed that our approach is able to update the location of the extracted
corners up to 100 times during the blind time of traditional cameras, i.e.,
between two consecutive intensity images.
Comment: Accepted to the 15th International Symposium on Visual Computing
(ISVC2020)
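The plane-fitting step can be sketched compactly: events from a translating corner trace a plane t = a*x + b*y + c in (x, y, t) space, whose gradient encodes the motion direction and whose inverse gradient magnitude gives the speed. The helper below uses a least-squares fit as a hypothetical stand-in for the paper's randomized Hough transform:

```python
import numpy as np

def fit_event_plane(events):
    """Fit a plane t = a*x + b*y + c to (x, y, t) events in a local
    window (e.g. 5x5 pixels) and recover the corner's image-plane
    speed and motion direction. Least-squares stand-in for the
    randomized Hough transform; illustrative, not the authors' code."""
    events = np.asarray(events, dtype=float)
    A = np.column_stack([events[:, 0], events[:, 1], np.ones(len(events))])
    (a, b, c), *_ = np.linalg.lstsq(A, events[:, 2], rcond=None)
    grad = np.hypot(a, b)                    # |gradient of t| in s/pixel
    speed = 1.0 / grad if grad > 0 else 0.0  # pixels per second
    direction = np.arctan2(b, a)             # motion direction in radians
    return speed, direction
```

A randomized Hough variant would instead sample event triplets, vote for plane parameters, and keep the consensus plane, which is more robust to outlier events than a plain least-squares fit.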
Asynchronous, Photometric Feature Tracking using Events and Frames
We present a method that leverages the complementarity of event cameras and
standard cameras to track visual features with low latency. Event cameras are
novel sensors that output pixel-level brightness changes, called "events". They
offer significant advantages over standard cameras, namely a very high dynamic
range, no motion blur, and a latency in the order of microseconds. However,
because the same scene pattern can produce different events depending on the
motion direction, establishing event correspondences across time is
challenging. By contrast, standard cameras provide intensity measurements
(frames) that do not depend on motion direction. Our method extracts features
on frames and subsequently tracks them asynchronously using events, thereby
exploiting the best of both types of data: the frames provide a photometric
representation that does not depend on motion direction and the events provide
low-latency updates. In contrast to previous works, which are based on
heuristics, this is the first principled method that uses raw intensity
measurements directly, based on a generative event model within a
maximum-likelihood framework. As a result, our method produces feature tracks
that are both more accurate (subpixel accuracy) and longer than the state of
the art, across a wide variety of scenes.
Comment: 22 pages, 15 figures, Video: https://youtu.be/A7UfeUnG6c
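The generative event model behind such a maximum-likelihood formulation is simple to state: a pixel emits one event for each full crossing of a contrast threshold C in log intensity. A minimal sketch under the idealized noise-free assumption (threshold value and function name are illustrative):

```python
import numpy as np

def generate_events(log_I0, log_I1, C=0.15):
    """Idealized generative event model: each pixel fires one signed
    event per whole crossing of the contrast threshold C in log
    intensity between two brightness snapshots.
    Returns per-pixel signed event counts (positive = brighter)."""
    diff = log_I1 - log_I0
    return np.fix(diff / C)  # truncate toward zero: whole crossings only
```

Inverting this model, i.e. asking which feature displacement best explains the observed events given the frame's intensity pattern, is what yields the maximum-likelihood tracker described above.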
Semi-Dense 3D Reconstruction with a Stereo Event Camera
Event cameras are bio-inspired sensors that offer several advantages, such as
low latency, high speed, and high dynamic range, to tackle challenging scenarios
in computer vision. This paper presents a solution to the problem of 3D
reconstruction from data captured by a stereo event-camera rig moving in a
static scene, such as in the context of stereo Simultaneous Localization and
Mapping. The proposed method consists of the optimization of an energy function
designed to exploit small-baseline spatio-temporal consistency of events
triggered across both stereo image planes. To improve the density of the
reconstruction and to reduce the uncertainty of the estimation, a probabilistic
depth-fusion strategy is also developed. The resulting method has no special
requirements on either the motion of the stereo event-camera rig or on prior
knowledge about the scene. Experiments demonstrate that our method can handle
both texture-rich and sparse scenes, outperforming
state-of-the-art stereo methods based on event data image representations.
Comment: 19 pages, 8 figures, Video: https://youtu.be/Qrnpj2FD1e
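Probabilistic depth fusion of this kind is commonly implemented as a product of Gaussian densities over per-pixel inverse depth. A minimal sketch under that assumption (the paper's exact update rule may differ):

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Fuse two Gaussian inverse-depth estimates (mean, variance) by
    multiplying their densities. The fused variance is always smaller
    than either input, which is how fusion reduces estimation
    uncertainty as more stereo observations arrive."""
    var = (var1 * var2) / (var1 + var2)
    mu = (mu1 * var2 + mu2 * var1) / (var1 + var2)
    return mu, var
```

Applying this update sequentially as new depth hypotheses are triggered by events both densifies the semi-dense map and tightens its uncertainty.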