An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors
Event-Driven vision sensing is a new way of sensing visual reality in a frame-free manner. That is, the vision sensor (camera) does not capture a sequence of still frames, as in conventional video and computer vision systems. In Event-Driven sensors each pixel autonomously and asynchronously decides when to send its address out. This way, the sensor output is a continuous stream of address events that represents the scene dynamically and continuously, without being constrained to frames. In this paper we present
an Event-Driven Convolution Module for computing 2D convolutions
on such event streams. The Convolution Module has been designed so that many such modules can be assembled to build modular and hierarchical Convolutional Neural Networks for robust shape- and pose-invariant object recognition. The Convolution Module has multi-kernel capability; that is, it selects the convolution kernel depending on the origin of each event. A proof-of-concept test prototype has been fabricated in a 0.35 µm CMOS process and extensive
experimental results are provided. The Convolution Processor has
also been combined with an Event-Driven Dynamic Vision Sensor
(DVS) for high-speed recognition examples. The chip can discriminate
propellers rotating at 2,000 revolutions per second, detect symbols on a 52-card deck while browsing through all cards in 410 ms, or detect and follow the center of a phosphor oscilloscope trace rotating at 5 kHz.
Funding: Unión Europea 216777 (NABAB); Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
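The event-driven convolution principle described above lends itself to a compact software model. The sketch below is illustrative only, not the fabricated circuit: each incoming address event adds its kernel to a neighborhood of per-pixel accumulators, and an accumulator crossing a threshold emits an output event and resets. The `EventConvolution` class name, the threshold value, and the reset dynamics are assumptions of this example; selecting the kernel by event origin follows the multi-kernel behavior described in the abstract.

```python
# Illustrative model of event-driven multi-kernel convolution, not the
# chip itself; threshold-and-reset dynamics are assumed for this sketch.
import numpy as np

class EventConvolution:
    def __init__(self, shape, kernels, threshold=1.0):
        self.state = np.zeros(shape)   # per-pixel accumulators
        self.kernels = kernels         # {origin_id: square 2D kernel}
        self.threshold = threshold

    def process(self, x, y, origin):
        """Consume one address event; return fired output addresses."""
        k = self.kernels[origin]       # multi-kernel: select by event origin
        r = k.shape[0] // 2
        h, w = self.state.shape
        # Add the kernel centered at the event address, clipped at borders.
        x0, x1 = max(x - r, 0), min(x + r + 1, h)
        y0, y1 = max(y - r, 0), min(y + r + 1, w)
        self.state[x0:x1, y0:y1] += k[x0 - (x - r):x1 - (x - r),
                                      y0 - (y - r):y1 - (y - r)]
        # Accumulators crossing threshold emit output events and reset.
        fired = np.argwhere(self.state >= self.threshold)
        for fx, fy in fired:
            self.state[fx, fy] = 0.0
        return fired
```

Because work is done only where events arrive, computation scales with scene activity rather than with frame rate, which is what makes assembling many such modules into hierarchical networks attractive.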
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
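The per-pixel working principle the survey describes can be captured in a few lines. The following is a minimal sketch of the standard contrast-threshold event-generation model: a pixel emits an event when its log-brightness changes by a threshold C since its last event. Simulating events from a frame sequence, emitting at most one event per pixel per frame, and C = 0.2 are simplifying assumptions of this example.

```python
# Minimal sketch of the contrast-threshold event-generation model; the
# frame-based loop, one event per pixel per frame, and C = 0.2 are
# simplifying assumptions of this example.
import numpy as np

def frames_to_events(frames, timestamps, C=0.2, eps=1e-6):
    """Convert an intensity-frame sequence to (t, x, y, polarity) events."""
    ref = np.log(frames[0] + eps)            # per-pixel reference level
    events = []
    for img, t in zip(frames[1:], timestamps[1:]):
        logI = np.log(img + eps)
        diff = logI - ref
        # Threshold crossings become ON (+1) or OFF (-1) events.
        for pol, mask in ((+1, diff >= C), (-1, diff <= -C)):
            ys, xs = np.nonzero(mask)
            events += [(t, x, y, pol) for x, y in zip(xs, ys)]
            ref[mask] = logI[mask]           # reset reference after an event
    return sorted(events)                    # asynchronous stream, time-ordered
```

Working in log-intensity is what gives real event cameras their high dynamic range: the contrast threshold is relative, so the same C applies in dark and bright regions alike.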
Multi-Sensor Event Detection using Shape Histograms
Vehicular sensor data consists of multiple time-series arising from a number
of sensors. Using such multi-sensor data we would like to detect occurrences of
specific events that vehicles encounter, e.g., corresponding to particular
maneuvers that a vehicle makes or conditions that it encounters. Events are
characterized by similar waveform patterns re-appearing within one or more
sensors. Further, such patterns can be of variable duration. In this work, we propose a method for detecting such events in time-series data using a novel feature descriptor motivated by similar ideas in image processing. We define the shape histogram: a constant-dimension descriptor that nevertheless captures
patterns of variable duration. We demonstrate the efficacy of using shape
histograms as features to detect events in an SVM-based, multi-sensor,
supervised learning scenario, i.e., multiple time-series are used to detect an
event. We present results on real-life vehicular sensor data and show that our
technique performs better than available pattern detection implementations on
our data, and that it can also be used to combine features from multiple
sensors resulting in better accuracy than using any single sensor. Since
previous work on pattern detection in time-series has been in the single series
context, we also present results using our technique on multiple standard
time-series datasets and show that it is the most versatile in terms of how it
ranks compared to other published results.
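The abstract does not spell out how the shape histogram is built, so the sketch below is a hedged illustration of one plausible construction: a fixed-size 2D histogram over normalized time and normalized amplitude. Normalizing both axes is what lets windows of different durations map to a descriptor of constant dimension; the function name, bin counts, and min-max normalization are assumptions of this example.

```python
# Hedged illustration: a fixed-size 2D histogram over normalized time
# and amplitude is one plausible reading of a "shape histogram",
# assumed here; the paper's exact descriptor may differ.
import numpy as np

def shape_histogram(window, t_bins=8, a_bins=8):
    """Map a 1D window of any length to a constant-dimension descriptor."""
    w = np.asarray(window, dtype=float)
    t = np.linspace(0.0, 1.0, num=len(w), endpoint=False)     # relative time
    rng = np.ptp(w)
    a = (w - w.min()) / rng if rng > 0 else np.zeros_like(w)  # relative amplitude
    hist, _, _ = np.histogram2d(t, a, bins=(t_bins, a_bins),
                                range=((0, 1), (0, 1)))
    return (hist / hist.sum()).ravel()       # always t_bins * a_bins dimensions
```

Descriptors computed this way from windows of different lengths are directly comparable, so they can be concatenated across sensors and fed to an SVM, as in the supervised multi-sensor setting the paper describes.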
A USB3.0 FPGA Event-based Filtering and Tracking Framework for Dynamic Vision Sensors
Dynamic vision sensors (DVS) are frame-free sensors
with an asynchronous variable-rate output that is ideal for hard
real-time dynamic vision applications under power and latency
constraints. Post-processing of the digital sensor output can
reduce sensor noise, extract low level features, and track objects
using simple algorithms that have previously been implemented
in software. In this paper we present an FPGA-based framework
for event-based processing that allows uncorrelated-event noise
removal and real-time tracking of multiple objects, with the dynamic capability to adapt to fast or slow and to large or small objects. This framework uses a new hardware platform based on a Lattice FPGA, which filters the sensor output and then transmits the results through a SuperSpeed Cypress FX3 USB microcontroller interface to a host computer. The packets of
events and timestamps are transmitted to the host computer at
rates of 10 mega-events per second. Experimental results are presented that demonstrate a low latency of 10 µs for tracking and computing the center of mass of a detected object.
Funding: Ministerio de Economía y Competitividad TEC2012-37868-C04-0
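The uncorrelated-event noise removal stage can be modeled in software along the lines below. This is a sketch of a generic background-activity filter, not the Lattice FPGA design: an event is kept only if some pixel in its 8-neighborhood fired within a recent time window. The `dt_max` window and the inclusion of the event's own pixel in the support test are assumptions of this example.

```python
# Software sketch of a generic background-activity filter, not the
# FPGA design: dt_max and the 8-neighborhood support test (which here
# includes the pixel itself) are assumptions of this example.
import numpy as np

def filter_events(events, shape, dt_max=10_000):
    """Keep events with recent nearby activity; events are (t_us, x, y)."""
    last = np.full(shape, -np.inf)           # last event time per pixel
    kept = []
    for t, x, y in events:
        x0, x1 = max(x - 1, 0), min(x + 2, shape[0])
        y0, y1 = max(y - 1, 0), min(y + 2, shape[1])
        if (t - last[x0:x1, y0:y1]).min() <= dt_max:
            kept.append((t, x, y))           # correlated: pass through
        last[x, y] = t                       # always update the timestamp map
    return kept
```

Genuine moving edges produce spatio-temporally clustered events that pass this test, while isolated noise events have no recent neighbors and are dropped, which is why such a filter is a natural front end to object tracking.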
Independent Motion Detection with Event-driven Cameras
Unlike standard cameras that send intensity images at a constant frame rate,
event-driven cameras asynchronously report pixel-level brightness changes,
offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast and low-power
vision algorithms for robots. Visual tracking, for example, is easily achieved
even for very fast stimuli, as only moving objects cause brightness changes.
However, cameras mounted on a moving robot are typically non-stationary and the
same tracking problem becomes confounded by background clutter events due to
the robot ego-motion. In this paper, we propose a method for segmenting the
motion of an independently moving object for event-driven cameras. Our method
detects and tracks corners in the event stream and learns the statistics of
their motion as a function of the robot's joint velocities when no
independently moving objects are present. During robot operation, independently
moving objects are identified by discrepancies between the predicted corner
velocities from ego-motion and the measured corner velocities. We validate the
algorithm on data collected from the neuromorphic iCub robot. We achieve a
precision of ~90% and show that the method is robust to changes in speed of both the head and the target.
Comment: 7 pages, 6 figures
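The discrepancy test at the heart of the method can be sketched as follows, under assumed interfaces; the paper does not commit to a particular regression model, so the linear regressor and the residual threshold below are illustrative choices. A model trained during an object-free phase predicts corner velocities from the robot's joint velocities, and a tracked corner is flagged as independently moving when its measured velocity deviates from the prediction by more than the threshold.

```python
# Hedged sketch under assumed interfaces; the linear model and the
# residual threshold are illustrative choices, not the paper's exact setup.
import numpy as np
from sklearn.linear_model import LinearRegression

class IndependentMotionDetector:
    def __init__(self, threshold=2.0):
        self.model = LinearRegression()      # joint velocities -> (vx, vy)
        self.threshold = threshold           # residual cutoff (assumed units)

    def fit(self, joint_vels, corner_vels):
        """Train on data recorded with no independently moving objects."""
        self.model.fit(joint_vels, corner_vels)

    def is_independent(self, joint_vel, measured_vel):
        """Flag a corner whose measured velocity disagrees with ego-motion."""
        predicted = self.model.predict(np.atleast_2d(joint_vel))[0]
        return np.linalg.norm(np.asarray(measured_vel) - predicted) > self.threshold
```

The appeal of this formulation is that it needs no explicit 3D scene model: whatever statistics of corner motion ego-motion induces are absorbed by the regressor, and only the residual matters at run time.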