Block-Matching Optical Flow for Dynamic Vision Sensor: Algorithm and FPGA Implementation
Rapid and low power computation of optical flow (OF) is potentially useful in
robotics. The dynamic vision sensor (DVS) event camera produces quick and
sparse output, and has high dynamic range, but conventional OF algorithms are
frame-based and cannot be directly used with event-based cameras. Previous DVS
OF methods do not work well with dense textured input and are not designed for
implementation in logic circuits. This paper proposes a new block-matching
based DVS OF algorithm which is inspired by motion estimation methods used for
MPEG video compression. The algorithm was implemented both in software and on
FPGA. For each event, it computes the motion direction as one of 9 directions.
The speed of the motion is set by the sample interval. Results show that the
Average Angular Error can be improved by 30% compared with previous methods.
The OF can be calculated on FPGA with a 50 MHz clock in 0.2 µs per event (11
clock cycles), 20 times faster than a Java software implementation running on
a desktop PC. Sample data show that the method works on scenes dominated by
edges, sparse features, and dense texture.
Comment: Published in ISCAS 2017
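For intuition, here is a minimal Python sketch of the block-matching idea described above; it is an illustration, not the paper's FPGA pipeline. Events are accumulated into binary time slices, and each incoming event is assigned one of 9 motion directions by comparing a block around its pixel in the current slice against the nine single-pixel shifts of the same block in the previous slice. The block radius and border handling are assumptions.

```python
import numpy as np

BLOCK_R = 3  # block radius -> 7x7 blocks (an assumed size, not from the paper)

def best_direction(curr_slice, prev_slice, x, y):
    """Assign the event at pixel (x, y) one of 9 directions by matching a
    block from the current binary event slice against single-pixel shifts
    of the previous slice (minimum sum of absolute differences)."""
    h, w = curr_slice.shape
    if not (BLOCK_R + 1 <= x < w - BLOCK_R - 1 and
            BLOCK_R + 1 <= y < h - BLOCK_R - 1):
        return (0, 0)  # skip events too close to the border
    ref = curr_slice[y - BLOCK_R:y + BLOCK_R + 1,
                     x - BLOCK_R:x + BLOCK_R + 1].astype(int)
    best, best_sad = (0, 0), np.inf
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            cand = prev_slice[y + dy - BLOCK_R:y + dy + BLOCK_R + 1,
                              x + dx - BLOCK_R:x + dx + BLOCK_R + 1].astype(int)
            sad = np.abs(ref - cand).sum()  # SAD block-matching cost
            if sad < best_sad:
                best, best_sad = (dx, dy), sad
    return best
```

Because the search range is one pixel per slice, the motion speed is fixed by the slice sampling interval, matching the abstract's statement that speed is set by the sample interval.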
EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras
Event-based cameras have shown great promise in a variety of situations where
frame-based cameras suffer, such as high-speed motion and high-dynamic-range
scenes. However, developing algorithms for event measurements requires a new
class of hand-crafted algorithms. Deep learning has shown great success in
providing model-free solutions to many problems in the vision community, but
existing networks have been developed with frame-based images in mind, and
the wealth of labeled data available for images for supervised training does
not exist for events. To address these points, we present EV-FlowNet, a novel
self-supervised deep learning pipeline for optical flow estimation for
event-based cameras. In particular, we introduce an image-based representation
of a given event stream, which is fed into a self-supervised neural network as
the
sole input. The corresponding grayscale images captured from the same camera at
the same time as the events are then used as a supervisory signal to provide a
loss function at training time, given the estimated flow from the network. We
show that the resulting network is able to accurately predict optical flow from
events only in a variety of different scenes, with performance competitive to
image-based networks. This method not only allows for accurate estimation of
dense optical flow, but also provides a framework for the transfer of other
self-supervised methods to the event-based domain.
Comment: 9 pages, 5 figures, 1 table. Accompanying video:
https://youtu.be/eMHZBSoq0sE. Dataset:
https://daniilidis-group.github.io/mvsec/. Robotics: Science and Systems 2018
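As a rough illustration of the kind of image-based event representation the abstract describes, the sketch below stacks per-pixel event counts and most-recent timestamps for each polarity into a 4-channel image that a standard CNN can consume. The channel layout and normalization are our assumptions, not the released EV-FlowNet code.

```python
import numpy as np

def events_to_image(events, height, width):
    """Build a 4-channel image from an event stream: per-polarity event
    counts (channels 0-1) and the most recent event timestamp per polarity
    (channels 2-3). `events` is an iterable of (x, y, t, p) with p in
    {-1, +1}; this exact layout is an assumption for illustration."""
    img = np.zeros((4, height, width), dtype=np.float32)
    t_max = 0.0
    for x, y, t, p in events:
        c = 0 if p > 0 else 1
        img[c, int(y), int(x)] += 1.0   # event count per polarity
        img[2 + c, int(y), int(x)] = t  # latest timestamp per polarity
        t_max = max(t_max, t)
    if t_max > 0:
        img[2:] /= t_max                # normalize timestamps to [0, 1]
    return img
```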
End-to-End Learning of Representations for Asynchronous Event-Based Data
Event cameras are vision sensors that record asynchronous streams of
per-pixel brightness changes, referred to as "events". They have appealing
advantages over frame-based cameras for computer vision, including high
temporal resolution, high dynamic range, and no motion blur. Due to the sparse,
non-uniform spatiotemporal layout of the event signal, pattern recognition
algorithms typically aggregate events into a grid-based representation and
subsequently process it by a standard vision pipeline, e.g., Convolutional
Neural Network (CNN). In this work, we introduce a general framework to convert
event streams into grid-based representations through a sequence of
differentiable operations. Our framework comes with two main advantages: (i) it
allows learning the input event representation together with the task-dedicated
network in an end-to-end manner, and (ii) it lays out a taxonomy that unifies
the majority of extant event representations in the literature and identifies
novel ones. Empirically, we show that our approach to learning the event
representation end-to-end yields an improvement of approximately 12% on optical
flow estimation and object recognition over state-of-the-art methods.
Comment: To appear at ICCV 2019
Simultaneous Optical Flow and Segmentation (SOFAS) using Dynamic Vision Sensor
We present an algorithm (SOFAS) to estimate the optical flow of events
generated by a dynamic vision sensor (DVS). Where traditional cameras produce
frames at a fixed rate, DVSs produce asynchronous events in response to
intensity changes with a high temporal resolution. Our algorithm uses the fact
that events are generated by edges in the scene to not only estimate the
optical flow but also to simultaneously segment the image into objects which
are travelling at the same velocity. In this way, it is able to avoid the
aperture problem that affects other implementations such as Lucas-Kanade.
Finally, we show that SOFAS produces more accurate results than traditional
optical flow algorithms.
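The joint flow-and-segmentation idea can be caricatured with a toy grouping step: once per-event velocity estimates exist, events whose velocities agree are assigned to the same object. The snippet below is a simplistic illustration of that grouping, not the SOFAS algorithm itself; the tolerance and running-mean update are invented for the example.

```python
import numpy as np

def segment_by_velocity(flow_xy, tol=0.5):
    """Greedily group per-event flow vectors whose velocities agree within
    `tol` pixels/s (a toy illustration of velocity-based segmentation).
    flow_xy: (N, 2) array of per-event velocity estimates."""
    labels = -np.ones(len(flow_xy), dtype=int)
    centers = []
    for i, v in enumerate(flow_xy):
        for k, c in enumerate(centers):
            if np.linalg.norm(v - c) < tol:
                labels[i] = k
                centers[k] = 0.9 * c + 0.1 * v  # running update of cluster mean
                break
        else:
            labels[i] = len(centers)            # start a new velocity cluster
            centers.append(v.astype(float))
    return labels, np.array(centers)
```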
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
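The working principle summarized above (per-pixel brightness-change events carrying time, location, and sign) admits a compact idealized model: a pixel emits an event whenever its log intensity has changed by a contrast threshold since that pixel's last event. A minimal Python sketch of this model follows; the threshold value and frame-based sampling are assumptions made for illustration.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp (microsecond resolution on real sensors)
    polarity: int  # +1 brightness increase, -1 decrease

def simulate_events(log_frames, times, C=0.2):
    """Idealized event generation: a pixel fires when its log intensity
    differs from a per-pixel reference level by the contrast threshold C
    (C = 0.2 is an assumed value). Inputs: log-intensity frames + timestamps."""
    ref = log_frames[0].astype(float).copy()
    events = []
    for img, t in zip(log_frames[1:], times[1:]):
        diff = img - ref
        ys, xs = np.nonzero(np.abs(diff) >= C)
        for x, y in zip(xs, ys):
            pol = 1 if diff[y, x] > 0 else -1
            events.append(Event(int(x), int(y), float(t), pol))
            ref[y, x] += pol * C  # move the reference toward the new level
    return events
```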