O(N)-Space Spatiotemporal Filter for Reducing Noise in Neuromorphic Vision Sensors
Neuromorphic vision sensors are an emerging technology inspired by how the retina processes images. A neuromorphic vision sensor reports only when a pixel value changes, rather than continuously outputting the value every frame as an 'ordinary' Active Pixel Sensor (APS) does. This move from a continuously sampled system to an asynchronous, event-driven one effectively allows for much faster sampling rates; it also fundamentally changes the sensor interface. In particular, these sensors are highly sensitive to noise, since every spurious event consumes bandwidth and thus effectively lowers the sampling rate. In this work we introduce a novel spatiotemporal filter with O(N) memory complexity for reducing background-activity noise in neuromorphic vision sensors. Our design consumes 10× less memory and achieves a 100× reduction in error compared to previous designs. Our filter is also capable of recovering real events and can pass up to 180 percent more real events.
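As a concrete illustration of how such a filter can get away with O(N) rather than O(N²) state, the Python sketch below keeps one (timestamp, coordinate) pair per row and per column of an N×N sensor and passes an event only when a spatiotemporally adjacent event is on record. This is a hedged reconstruction of the general idea, not the paper's circuit; the 5 ms window, the ±1 neighborhood, and all names are assumptions.

```python
import numpy as np

class RowColumnFilter:
    """Hypothetical O(N)-memory background-activity filter for an N x N sensor.

    Instead of a per-pixel timestamp map costing O(N^2) memory, keep one
    (timestamp, coordinate) pair per row and per column. An event is kept
    if the last event stored for a neighboring row or column was close in
    both space and time; `dt_us` and the +-1 neighborhood are assumptions.
    """

    def __init__(self, n, dt_us=5000):
        self.dt = dt_us
        self.row_t = np.full(n, -1e18)   # time of last event seen in each row
        self.row_x = np.full(n, 10**9)   # column of that event
        self.col_t = np.full(n, -1e18)   # time of last event seen in each column
        self.col_y = np.full(n, 10**9)   # row of that event

    def process(self, x, y, t):
        n = len(self.row_t)
        keep = False
        for r in range(max(0, y - 1), min(n, y + 2)):      # neighboring rows
            if t - self.row_t[r] <= self.dt and abs(self.row_x[r] - x) <= 1:
                keep = True
        for c in range(max(0, x - 1), min(n, x + 2)):      # neighboring columns
            if t - self.col_t[c] <= self.dt and abs(self.col_y[c] - y) <= 1:
                keep = True
        self.row_t[y], self.row_x[y] = t, x                # update row memory
        self.col_t[x], self.col_y[x] = t, y                # update column memory
        return keep

f = RowColumnFilter(n=128)
print(f.process(10, 10, t=0))     # isolated event -> False (filtered out)
print(f.process(11, 10, t=100))   # spatiotemporally correlated -> True (kept)
```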
A USB3.0 FPGA Event-based Filtering and Tracking Framework for Dynamic Vision Sensors
Dynamic vision sensors (DVS) are frame-free sensors
with an asynchronous variable-rate output that is ideal for hard
real-time dynamic vision applications under power and latency
constraints. Post-processing of the digital sensor output can
reduce sensor noise, extract low level features, and track objects
using simple algorithms that have previously been implemented
in software. In this paper we present an FPGA-based framework
for event-based processing that allows uncorrelated-event noise
removal and real-time tracking of multiple objects, with dynamic
capabilities to adapt itself to fast or slow and large or small
objects. This framework uses a new hardware platform based on
a Lattice FPGA which filters the sensor output and which then
transmits the results through a super-speed Cypress FX3 USB
microcontroller interface to a host computer. The packets of
events and timestamps are transmitted to the host computer at
rates of 10 mega-events per second. Experimental results are
presented that demonstrate a low latency of 10 µs for tracking
and computing the center of mass of a detected object.
Ministerio de Economía y Competitividad TEC2012-37868-C04-0
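For the flavor of event-driven tracking, here is a minimal software sketch of a center-of-mass cluster tracker. It is not the paper's FPGA design; `radius`, `alpha`, and `max_clusters` are illustrative parameters standing in for the framework's dynamic adaptation to fast or slow and large or small objects.

```python
import math

class ClusterTracker:
    """Minimal event-driven cluster tracker sketch (illustrative only).

    Each tracker holds a center of mass that is nudged toward every event
    falling within its radius; `alpha` trades responsiveness (fast objects)
    against stability (slow objects).
    """

    def __init__(self, max_clusters=4, radius=30.0, alpha=0.05):
        self.clusters = []              # list of [cx, cy] centers
        self.max_clusters = max_clusters
        self.radius = radius
        self.alpha = alpha

    def update(self, x, y):
        # Assign the event to the nearest existing cluster, if close enough.
        best, best_d = None, self.radius
        for c in self.clusters:
            d = math.hypot(c[0] - x, c[1] - y)
            if d < best_d:
                best, best_d = c, d
        if best is not None:
            # Exponential moving average pulls the center toward the event.
            best[0] += self.alpha * (x - best[0])
            best[1] += self.alpha * (y - best[1])
        elif len(self.clusters) < self.max_clusters:
            self.clusters.append([float(x), float(y)])   # spawn a new tracker
        return self.clusters
```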
Dynamic Decomposition of Spatiotemporal Neural Signals
Neural signals are characterized by rich temporal and spatiotemporal dynamics
that reflect the organization of cortical networks. Theoretical research has
shown how neural networks can operate at different dynamic ranges that
correspond to specific types of information processing. Here we present a data
analysis framework that uses a linearized model of these dynamic states in
order to decompose the measured neural signal into a series of components that
capture both rhythmic and non-rhythmic neural activity. The method is based on
stochastic differential equations and Gaussian process regression. Through
computer simulations and analysis of magnetoencephalographic data, we
demonstrate the efficacy of the method in identifying meaningful modulations of
oscillatory signals corrupted by structured temporal and spatiotemporal noise.
These results suggest that the method is particularly suitable for the analysis
and interpretation of complex temporal and spatiotemporal neural signals.
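Linearized dynamic states of this kind are commonly modeled as damped stochastic oscillators in a state-space framework. The sketch below, written under those generic assumptions rather than the authors' exact parameterization, builds the transition matrix for one rhythmic component plus a random-walk block for non-rhythmic activity; running a Kalman filter/smoother over such a block-diagonal model yields a component-wise decomposition of the measured signal.

```python
import numpy as np

def oscillator_transition(freq_hz, damping, fs):
    """Transition matrix of one linearized oscillatory component.

    Each rhythmic component is a damped rotation of a 2-D latent state;
    |damping| < 1 keeps the dynamics stable. Names and values here are
    illustrative assumptions, not the paper's settings.
    """
    theta = 2 * np.pi * freq_hz / fs          # phase advance per sample
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return damping * rot

# Block-diagonal model: one 10 Hz rhythm plus a non-rhythmic random walk.
fs = 250.0
A = np.zeros((3, 3))
A[:2, :2] = oscillator_transition(10.0, 0.98, fs)
A[2, 2] = 1.0                                  # random-walk (non-rhythmic) block
H = np.array([[1.0, 0.0, 1.0]])                # observation sums the components
```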
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
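The event stream described above is, at the interface level, a sequence of (time, location, polarity) tuples. The following sketch shows that representation in Python, with an illustrative helper (a made-up name, not a library API) that accumulates signed events over a time window into a frame, one of the simplest representations handed to frame-based vision algorithms.

```python
import numpy as np

# Each event encodes the time, pixel location, and sign of a brightness change.
events = np.array([(0.000010, 12, 7, 1),
                   (0.000015, 12, 8, -1),
                   (0.000021, 40, 33, 1)],
                  dtype=[('t', 'f8'), ('x', 'i4'), ('y', 'i4'), ('p', 'i1')])

def events_to_frame(ev, width, height, t0, t1):
    """Sum event polarities per pixel over [t0, t1) (illustrative helper)."""
    frame = np.zeros((height, width), dtype=np.int32)
    sel = ev[(ev['t'] >= t0) & (ev['t'] < t1)]
    np.add.at(frame, (sel['y'], sel['x']), sel['p'])
    return frame

img = events_to_frame(events, width=64, height=48, t0=0.0, t1=0.001)
```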