Low Latency Event-Based Filtering and Feature Extraction for Dynamic Vision Sensors in Real-Time FPGA Applications
Dynamic Vision Sensor (DVS) pixels produce an asynchronous, variable-rate address-event
output that represents brightness changes at the pixel. Since these sensors produce frame-free output, they
are ideal for real-time dynamic vision applications with tight latency and power constraints.
Event-based filtering algorithms have been proposed to post-process the asynchronous event output to
reduce sensor noise, extract low-level features, and track objects, among other tasks. These post-processing
algorithms help to increase the performance and accuracy of further processing for tasks such as classification
using spike-based learning (e.g., ConvNets), stereo vision, and visually servoed robots. This paper
presents an FPGA-based library of these post-processing event-based algorithms with implementation details,
specifically background activity (noise) filtering, pixel masking, object motion detection, and object tracking.
The latencies of these filters on the Field Programmable Gate Array (FPGA) platform are below 300 ns, with
an average latency reduction of 188% (maximum of 570%) over the software versions running on a desktop
PC CPU. This open-source event-based filter IP library for FPGA has been tested on two different platforms
and scenarios using synthesis and implementation tools from both Lattice and Xilinx.
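The background activity (noise) filter named above is commonly implemented by passing an event only if a spatial neighbor fired recently (Delbruck-style BA filtering). A minimal software sketch of that idea, assuming (x, y, timestamp, polarity) tuples and a hypothetical microsecond support window; this is not the paper's FPGA implementation:

```python
import numpy as np

def background_activity_filter(events, width, height, dt_us=2000):
    """Pass an event only if a neighboring pixel fired within dt_us
    microseconds; isolated (likely noise) events are dropped.

    events: iterable of (x, y, t_us, polarity), timestamps non-decreasing.
    Returns the list of events that survive the filter.
    """
    # Last event timestamp per pixel; -inf means "never fired".
    last_ts = np.full((height, width), -np.inf)
    passed = []
    for x, y, t, p in events:
        # Examine the 8-neighborhood for recent activity.
        x0, x1 = max(0, x - 1), min(width, x + 2)
        y0, y1 = max(0, y - 1), min(height, y + 2)
        recent = (t - last_ts[y0:y1, x0:x1]) <= dt_us
        # Exclude the pixel itself so repeated noise cannot self-support.
        recent[y - y0, x - x0] = False
        if recent.any():
            passed.append((x, y, t, p))
        last_ts[y, x] = t
    return passed
```

A hardware version would replace the full timestamp map with a smaller on-chip memory, which is where the sub-300 ns latencies come from.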
DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks
Spiking Neural Networks (SNNs), despite being energy-efficient when
implemented on neuromorphic hardware and coupled with event-based Dynamic
Vision Sensors (DVS), are vulnerable to security threats, such as adversarial
attacks, i.e., small perturbations added to the input for inducing a
misclassification. To this end, we propose DVS-Attacks, a set of stealthy yet
efficient adversarial attack methodologies designed to perturb the event
sequences that form the input of the SNNs. First, we show that noise filters
for DVS can be used as defense mechanisms against adversarial attacks.
Afterwards, we implement several attacks and test them in the presence of two
types of noise filters for DVS cameras. The experimental results show that the
filters can only partially defend the SNNs against our proposed DVS-Attacks.
Using the best settings for the noise filters, our proposed Mask Filter-Aware
Dash Attack reduces the accuracy by more than 20% on the DVS-Gesture dataset
and by more than 65% on the MNIST dataset, compared to the original clean
frames. The source code of all the proposed DVS-Attacks and noise filters is
released at https://github.com/albertomarchisio/DVS-Attacks.
Comment: Accepted for publication at IJCNN 202
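Perturbing an event stream can be as simple as injecting spurious events. The sketch below is purely illustrative (the actual DVS-Attacks, such as the Mask Filter-Aware Dash Attack, are far more structured; all names here are hypothetical): it adds spatially and temporally random events, exactly the kind of perturbation an isolated-event noise filter partially removes, which motivates the filter-aware attacks studied in the paper:

```python
import random

def inject_noise_events(events, width, height, t_end, n_noise, seed=0):
    """Inject random spurious events into a DVS stream (illustrative
    adversarial perturbation, not the paper's attack algorithm).

    events: list of (x, y, t, polarity) tuples.
    Returns a new, timestamp-sorted event list containing the noise.
    """
    rng = random.Random(seed)  # seeded so the perturbation is reproducible
    noise = [(rng.randrange(width), rng.randrange(height),
              rng.randrange(t_end), rng.choice((0, 1)))
             for _ in range(n_noise)]
    # Re-sort so a downstream SNN pipeline still sees a temporally
    # ordered stream and cannot trivially separate injected events.
    return sorted(events + noise, key=lambda e: e[2])
```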
e-TLD: Event-based Framework for Dynamic Object Tracking
This paper presents a long-term object tracking framework with a moving event
camera under general tracking conditions. A first of its kind for these
revolutionary cameras, the tracking framework uses a discriminative
representation for the object with online learning, and detects and re-tracks
the object when it comes back into the field-of-view. One of the key novelties
is the use of an event-based local sliding window technique that tracks
reliably in scenes with cluttered and textured background. In addition,
Bayesian bootstrapping is used to assist real-time processing and boost the
discriminative power of the object representation. When the object re-enters
the field-of-view of the camera, a data-driven, global sliding
window detector locates the object for subsequent tracking. Extensive
experiments demonstrate the ability of the proposed framework to track and
detect arbitrary objects of various shapes and sizes, including dynamic objects
such as a human. This is a significant improvement compared to earlier works
that simply track objects as long as they are visible under simpler background
settings. Using the ground truth locations for five different objects under
three motion settings, namely translation, rotation and 6-DOF, quantitative
measurement is reported for the event-based tracking framework with critical
insights on various performance issues. Finally, real-time implementation in
C++ highlights tracking ability under scale, rotation, view-point and occlusion
scenarios in a lab setting.
Comment: 11 pages, 10 figures
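The local sliding window idea can be sketched as scoring candidate windows near the last known location against an online appearance template. This is a drastic simplification of e-TLD's discriminative representation and Bayesian bootstrapping; the function name, the event-count image, and the normalized-correlation score are all illustrative assumptions:

```python
import numpy as np

def slide_and_score(event_img, template, center, radius):
    """Score candidate object locations around `center` by normalized
    cross-correlation between an appearance template and a local
    event-count image for the current time slice.

    event_img: 2D array of per-pixel event counts.
    template:  2D appearance model (th x tw).
    Returns the (x, y) top-left corner of the best-scoring window.
    """
    th, tw = template.shape
    cx, cy = center
    best, best_xy = -np.inf, (cx, cy)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = cx + dx, cy + dy
            if x < 0 or y < 0:
                continue  # window would fall off the image
            win = event_img[y:y + th, x:x + tw]
            if win.shape != template.shape:
                continue
            denom = np.linalg.norm(win) * np.linalg.norm(template)
            if denom == 0.0:
                continue  # empty window: no events to score
            score = float((win * template).sum()) / denom
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy
```

Restricting the search to a radius around the previous estimate is what keeps per-event tracking cheap; the global detector in the abstract would instead scan the full frame.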
Low-power dynamic object detection and classification with freely moving event cameras
We present the first purely event-based, energy-efficient approach for dynamic
object detection and categorization with a freely moving event camera. Compared
to traditional cameras, event-based object recognition systems are considerably
behind in terms of accuracy and algorithmic maturity. To this end, this paper
presents an event-based feature extraction method devised by accumulating local
activity across the image frame and then applying principal component analysis
(PCA) to the normalized neighborhood region. Subsequently, we propose a
backtracking-free k-d tree mechanism for efficient feature matching by taking
advantage of the low dimensionality of the feature representation. Additionally,
the proposed k-d tree mechanism allows for feature selection to obtain a
lower-dimensional object representation when hardware resources are limited to
implement PCA. Consequently, the proposed system can be realized on a
field-programmable gate array (FPGA) device, leading to a high
performance-to-resource ratio. The proposed system is tested on real-world
event-based datasets for object categorization, showing superior classification
performance compared to state-of-the-art algorithms. Additionally, we verified
the real-time FPGA performance of the proposed object detection method, trained
with limited data as opposed to deep learning methods, under a closed-loop
aerial vehicle flight mode. We also compare the proposed object categorization
framework to pre-trained convolutional neural networks using transfer learning
and highlight the drawbacks of using frame-based sensors under dynamic camera
motion. Finally, we provide critical insights about the effects of the feature
extraction method and the classification parameters on system performance,
which aids in understanding the framework to suit various low-power (less than
a few watts) application scenarios.
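The PCA-plus-matching pipeline described above could look like the following sketch: PCA over flattened local-activity patches, then nearest-neighbor matching in the low-dimensional descriptor space. Matching is brute force here purely for brevity; the paper's contribution is a backtracking-free k-d tree that makes this step FPGA-friendly. All identifiers are assumptions, not the authors' API:

```python
import numpy as np

def pca_descriptors(patches, k=4):
    """Project flattened local-activity patches onto their top-k principal
    components.

    patches: (n, h, w) array of normalized event-count neighborhoods.
    Returns (descriptors, components, mean) so new patches can be projected
    with: (patch.reshape(-1) - mean) @ components.
    """
    X = patches.reshape(len(patches), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigen-decomposition of the covariance; eigh orders eigenvalues ascending,
    # so reverse the columns to get the top-k components.
    cov = Xc.T @ Xc / max(len(X) - 1, 1)
    _, V = np.linalg.eigh(cov)
    components = V[:, ::-1][:, :k]
    return Xc @ components, components, mean

def match_nearest(query, descriptors):
    """Brute-force nearest neighbor in descriptor space (stand-in for the
    paper's backtracking-free k-d tree)."""
    d = np.linalg.norm(descriptors - query, axis=1)
    return int(np.argmin(d))
```

The low descriptor dimensionality (k small) is what makes the k-d tree variant practical without backtracking, and dropping components further is the feature-selection knob the abstract mentions for resource-limited hardware.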