117 research outputs found
AER Building Blocks for Multi-Layer Multi-Chip Neuromorphic Vision Systems
A 5-layer neuromorphic vision processor whose components
communicate spike events asynchronously using the address-event representation
(AER) is demonstrated. The system includes a retina
chip, two convolution chips, a 2D winner-take-all chip, a delay line
chip, a learning classifier chip, and a set of PCBs for computer
interfacing and address space remappings. The components use a
mixture of analog and digital computation and learn to classify
trajectories of a moving object. A complete experimental setup and
measurement results are shown.
Unión Europea IST-2001-34124 (CAVIAR); Ministerio de Ciencia y Tecnología TIC-2003-08164-C0
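
To make the address-event representation concrete, the Python sketch below packs a spike's pixel coordinates and polarity into a single bus word and unpacks it again, the kind of mapping a remapping PCB would rewrite in flight. The 128x128 resolution and the bit layout are assumptions for illustration; the actual CAVIAR chips define their own address maps and handshaking.

def encode_event(x: int, y: int, polarity: int) -> int:
    """Pack a spike's pixel coordinates and polarity into one AER address word."""
    assert 0 <= x < 128 and 0 <= y < 128 and polarity in (0, 1)
    return (y << 8) | (x << 1) | polarity    # [y: 7 bits | x: 7 bits | pol: 1 bit]

def decode_event(word: int):
    """Unpack an AER word back into (x, y, polarity)."""
    return (word >> 1) & 0x7F, (word >> 8) & 0x7F, word & 0x1

word = encode_event(42, 100, 1)
print(decode_event(word))                    # (42, 100, 1)
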
Asynchronous Corner Tracking Algorithm based on Lifetime of Events for DAVIS Cameras
Event cameras, such as the Dynamic and Active-pixel Vision Sensor (DAVIS),
capture intensity changes in the scene and generate a stream of events in
an asynchronous fashion. The output rate of such cameras can reach up to 10
million events per second in highly dynamic environments. DAVIS cameras use novel
vision sensors that mimic human eyes. Their attractive attributes, such as high
output rate, High Dynamic Range (HDR), and high pixel bandwidth, make them an
ideal solution for applications that require high-frequency tracking. Moreover,
applications that operate in challenging lighting scenarios can exploit the
wide dynamic range of event cameras (140 dB, compared to 60 dB for traditional
cameras). In this paper, a novel asynchronous corner tracking method is proposed
that uses both events and intensity images captured by a DAVIS camera. The
Harris algorithm is used to extract features, i.e., frame-corners, from
keyframes, i.e., intensity images. Afterward, a matching algorithm is used to
extract event-corners from the stream of events. Events are solely used to
perform asynchronous tracking until the next keyframe is captured. Neighboring
events, within a window size of 5x5 pixels around the event-corner, are used to
calculate the velocity and direction of extracted event-corners by fitting a
2D plane to them using a randomized Hough transform algorithm. Experimental evaluation
showed that our approach is able to update the location of the extracted
corners up to 100 times during the blind time of traditional cameras, i.e.,
between two consecutive intensity images.
Comment: Accepted to 15th International Symposium on Visual Computing (ISVC2020)
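
As a rough illustration of the velocity estimation step: events triggered by a moving edge lie near a plane in (x, y, t), and the plane's spatial gradient gives the corner's speed and direction. The paper fits that plane with a randomized Hough transform; the Python sketch below substitutes an ordinary least-squares fit for brevity, and its function name and interface are hypothetical.

import numpy as np

def corner_velocity(events: np.ndarray):
    """events: (N, 3) array of (x, y, t) from the 5x5 window around a corner.
    Returns (speed, unit direction) implied by the plane t = a*x + b*y + c."""
    A = np.column_stack([events[:, 0], events[:, 1], np.ones(len(events))])
    (a, b, _), *_ = np.linalg.lstsq(A, events[:, 2], rcond=None)
    grad = np.array([a, b])              # spatial gradient of the time surface
    speed = 1.0 / np.linalg.norm(grad)   # edge advances 1/|grad| px per unit time
    return speed, grad / np.linalg.norm(grad)

# Synthetic check: an edge sweeping along +x at 2 px/ms gives t = 0.5 * x.
xs, ys = np.meshgrid(np.arange(5), np.arange(5))
ev = np.column_stack([xs.ravel(), ys.ravel(), 0.5 * xs.ravel()])
print(corner_velocity(ev))               # speed ~2.0 px/ms, direction ~[1, 0]
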
Asynchronous, Photometric Feature Tracking using Events and Frames
We present a method that leverages the complementarity of event cameras and
standard cameras to track visual features with low latency. Event cameras are
novel sensors that output pixel-level brightness changes, called "events". They
offer significant advantages over standard cameras, namely a very high dynamic
range, no motion blur, and a latency on the order of microseconds. However,
because the same scene pattern can produce different events depending on the
motion direction, establishing event correspondences across time is
challenging. By contrast, standard cameras provide intensity measurements
(frames) that do not depend on motion direction. Our method extracts features
on frames and subsequently tracks them asynchronously using events, thereby
exploiting the best of both types of data: the frames provide a photometric
representation that does not depend on motion direction and the events provide
low-latency updates. In contrast to previous works, which are based on
heuristics, this is the first principled method that uses raw intensity
measurements directly, based on a generative event model within a
maximum-likelihood framework. As a result, our method produces feature tracks
that are both more accurate (subpixel accuracy) and longer than the state of
the art, across a wide variety of scenes.
Comment: 22 pages, 15 figures, Video: https://youtu.be/A7UfeUnG6c
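
The generative event model behind such maximum-likelihood methods can be sketched compactly: a pixel emits an event whenever its log intensity has changed by a contrast threshold C since its last event. The Python below simulates this for a single pixel; the threshold value and interface are illustrative, not the paper's implementation.

import numpy as np

def events_from_log_intensity(logI: np.ndarray, t: np.ndarray, C: float = 0.2):
    """Generate (time, polarity) events for one pixel's log-intensity trace."""
    events, ref = [], logI[0]
    for ti, li in zip(t[1:], logI[1:]):
        while abs(li - ref) >= C:        # crossed the contrast threshold
            pol = 1 if li > ref else -1
            ref += pol * C               # move the reference level by one step
            events.append((ti, pol))
    return events

t = np.linspace(0.0, 1.0, 200)
logI = 0.8 * np.sin(2 * np.pi * t)       # one oscillation in log brightness
evts = events_from_log_intensity(logI, t)
print(len(evts), evts[0])                # event count and first (time, polarity)
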
Semi-Dense 3D Reconstruction with a Stereo Event Camera
Event cameras are bio-inspired sensors that offer several advantages, such as
low latency, high speed, and high dynamic range, to tackle challenging scenarios
in computer vision. This paper presents a solution to the problem of 3D
reconstruction from data captured by a stereo event-camera rig moving in a
static scene, such as in the context of stereo Simultaneous Localization and
Mapping. The proposed method consists of the optimization of an energy function
designed to exploit small-baseline spatio-temporal consistency of events
triggered across both stereo image planes. To improve the density of the
reconstruction and to reduce the uncertainty of the estimation, a probabilistic
depth-fusion strategy is also developed. The resulting method has no special
requirements on either the motion of the stereo event-camera rig or prior
knowledge about the scene. Experiments demonstrate that our method can handle
both texture-rich and sparse scenes, outperforming state-of-the-art stereo
methods based on event data image representations.
Comment: 19 pages, 8 figures, Video: https://youtu.be/Qrnpj2FD1e
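
The probabilistic depth-fusion step can be illustrated under a common simplifying assumption: if each inverse-depth estimate is modeled as a Gaussian, fusing two estimates of the same pixel reduces to a precision-weighted average. The Gaussian model and the Python below are for illustration only; the paper defines its own fusion rule.

def fuse_gaussian(mu1: float, var1: float, mu2: float, var2: float):
    """Fuse two Gaussian estimates of the same quantity; returns (mean, variance)."""
    w1, w2 = 1.0 / var1, 1.0 / var2      # precisions
    var = 1.0 / (w1 + w2)                # precisions add under fusion
    mu = var * (w1 * mu1 + w2 * mu2)     # precision-weighted mean
    return mu, var

# Two noisy inverse-depth estimates of the same scene point:
print(fuse_gaussian(0.50, 0.04, 0.56, 0.02))  # (0.54, ~0.0133): pulled toward the
                                              # more confident estimate, with less variance
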
Focal-Plane Change Triggered Video Compression for Low-Power Vision Sensor Systems
Video sensors with embedded compression offer significant energy savings in transmission but incur energy costs in the complexity of the encoder. Energy-efficient video compression architectures for CMOS image sensors with focal-plane change detection are presented and analyzed. The compression architectures use pixel-level computational circuits to minimize energy usage by selectively processing only pixels which generate significant temporal intensity changes. Using the temporal intensity change detection to gate the operation of a differential DCT-based encoder achieves nearly identical image quality to traditional systems (a 4 dB decrease in PSNR) while reducing the amount of data that is processed by 67% and reducing overall power consumption by 51%. These typical energy savings, resulting from the sparsity of motion activity in the visual scene, demonstrate the utility of focal-plane change-triggered compression for surveillance vision systems.
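
A minimal Python sketch of the change-triggered gating described above (a software stand-in for the focal-plane circuits): an 8x8 block is handed to the differential DCT stage only if enough of its pixels changed appreciably since the previous frame, so static regions cost no encoder energy. The thresholds, block size, and scipy-based DCT are illustrative assumptions.

import numpy as np
from scipy.fft import dctn

def encode_changed_blocks(prev: np.ndarray, curr: np.ndarray,
                          pix_thresh: int = 12, frac_thresh: float = 0.1):
    """Return {(row, col): DCT of the block difference} for changed blocks only."""
    coded = {}
    for r in range(0, curr.shape[0], 8):
        for c in range(0, curr.shape[1], 8):
            diff = curr[r:r+8, c:c+8].astype(int) - prev[r:r+8, c:c+8].astype(int)
            if (np.abs(diff) > pix_thresh).mean() > frac_thresh:  # change gate
                coded[(r, c)] = dctn(diff, norm='ortho')          # encode the difference
    return coded  # static blocks are never processed

prev = np.zeros((16, 16), dtype=np.uint8)
curr = prev.copy()
curr[0:8, 0:8] = 200                            # only the top-left block changes
print(list(encode_changed_blocks(prev, curr)))  # [(0, 0)]
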