An Efficient Spiking Neural Network for Recognizing Gestures with a DVS Camera on the Loihi Neuromorphic Processor
Spiking Neural Networks (SNNs), the third generation of NNs, have come under the spotlight for machine-learning-based applications due to their biological plausibility and reduced complexity compared to traditional artificial Deep Neural Networks (DNNs). SNNs can be implemented with extreme energy efficiency on neuromorphic processors such as the Intel Loihi research chip, and fed by event-based sensors such as DVS cameras. However, DNNs with many layers can achieve relatively high accuracy on image classification and recognition tasks, whereas research on learning rules for SNNs in real-world applications is still not mature. Accuracy results for SNNs are typically obtained either by converting trained DNNs into SNNs, or by directly designing and training SNNs in the spiking domain. Towards the conversion from a DNN to an SNN, we perform a comprehensive analysis of this process, specifically designed for the Intel Loihi, and present our methodology for designing an SNN that achieves nearly the same accuracy as its corresponding DNN. Towards the usage of event-based sensors, we design a pre-processing method, evaluated on the DvsGesture dataset, which makes the data usable in the DNN domain. Based on the outcome of the first analysis, we then train a DNN on the pre-processed DvsGesture dataset and convert it into the spike domain for deployment on the Intel Loihi, enabling real-time gesture recognition. The results show that our SNN achieves 89.64% classification accuracy and occupies only 37 Loihi cores.
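As a rough illustration of the kind of pre-processing described above, DVS events can be accumulated into frame-like tensors that a conventional DNN can consume. The function name, the (x, y, timestamp, polarity) event layout, and the fixed frame count below are illustrative assumptions for this sketch, not the paper's exact method.

```python
import numpy as np

def events_to_frames(events, sensor_size=(128, 128), n_frames=10):
    """Accumulate a DVS event stream into a fixed number of frames.

    `events` is assumed to be a sequence of (x, y, timestamp, polarity)
    tuples; the layout and the equal-time windowing are illustrative,
    not the paper's exact pre-processing.
    """
    events = np.asarray(events)
    t = events[:, 2]
    # Split the recording into n_frames equal time windows.
    edges = np.linspace(t.min(), t.max() + 1, n_frames + 1)
    # One channel per polarity (ON/OFF events).
    frames = np.zeros((n_frames, 2, *sensor_size), dtype=np.float32)
    for x, y, ts, p in events:
        f = np.searchsorted(edges, ts, side="right") - 1
        frames[f, int(p), int(y), int(x)] += 1.0  # event count per pixel
    return frames
```

Each frame then behaves like a two-channel image, so a standard DNN can be trained on it before conversion to the spiking domain.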
DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks
Spiking Neural Networks (SNNs), despite being energy-efficient when
implemented on neuromorphic hardware and coupled with event-based Dynamic
Vision Sensors (DVS), are vulnerable to security threats, such as adversarial
attacks, i.e., small perturbations added to the input for inducing a
misclassification. Toward this, we propose DVS-Attacks, a set of stealthy yet
efficient adversarial attack methodologies targeted to perturb the event
sequences that compose the input of the SNNs. First, we show that noise filters
for DVS can be used as defense mechanisms against adversarial attacks.
Afterwards, we implement several attacks and test them in the presence of two
types of noise filters for DVS cameras. The experimental results show that the
filters can only partially defend the SNNs against our proposed DVS-Attacks.
Using the best settings for the noise filters, our proposed Mask Filter-Aware
Dash Attack reduces the accuracy by more than 20% on the DVS-Gesture dataset
and by more than 65% on the MNIST dataset, compared to the original clean
frames. The source code of all the proposed DVS-Attacks and noise filters is
released at https://github.com/albertomarchisio/DVS-Attacks.
Comment: Accepted for publication at IJCNN 2021.
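A minimal sketch of the kind of event-frame perturbation discussed above, assuming events have already been accumulated into frames: injecting a thin stripe of spurious activity loosely mimics a dash-style attack. The function and its parameters are hypothetical; the paper's Mask Filter-Aware Dash Attack optimizes where and when events are injected so that the noise filter does not remove them.

```python
import numpy as np

def inject_dash(frames, row, amplitude=1.0):
    """Hypothetical dash-style perturbation: add a thin horizontal
    stripe of spurious events to every frame.

    `frames` has shape (n_frames, polarity, height, width); the stripe
    position and amplitude are illustrative knobs an attacker would
    tune, not the paper's optimized attack.
    """
    perturbed = frames.copy()
    perturbed[..., row, :] += amplitude  # spurious activity on one row
    return perturbed
```

An attacker would then feed the perturbed frames to the SNN and check whether the prediction flips while the perturbation survives the noise filter.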
Q-SpiNN: A Framework for Quantizing Spiking Neural Networks
A prominent technique for reducing the memory footprint of Spiking Neural
Networks (SNNs) without significantly decreasing accuracy is quantization.
However, state-of-the-art works only employ weight quantization directly from
a specific quantization scheme, i.e., either post-training quantization (PTQ)
or in-training quantization (ITQ), and do not consider (1) quantizing other
SNN parameters (e.g., the neuron membrane potential), (2) exploring different
combinations of quantization approaches (i.e., quantization schemes, precision
levels, and rounding schemes), and (3) selecting the SNN model with a good
memory-accuracy trade-off at the end. Therefore, the memory saving offered by
these state-of-the-art works while meeting the targeted accuracy is limited,
thereby hindering the processing of SNNs on resource-constrained systems
(e.g., IoT-Edge devices). Towards this, we propose Q-SpiNN, a novel
quantization framework for memory-efficient SNNs. The key mechanisms of the
Q-SpiNN are: (1) employing quantization for different SNN parameters based on
their significance to the accuracy, (2) exploring different combinations of
quantization schemes, precision levels, and rounding schemes to find efficient
SNN model candidates, and (3) developing an algorithm that quantifies the
benefit of the memory-accuracy trade-off obtained by the candidates, and
selects the Pareto-optimal one. The experimental results show that, for the
unsupervised network, the Q-SpiNN reduces the memory footprint by ca. 4x, while
maintaining the accuracy within 1% from the baseline on the MNIST dataset. For
the supervised network, the Q-SpiNN reduces the memory by ca. 2x, while keeping
the accuracy within 2% from the baseline on the DVS-Gesture dataset.
Comment: Accepted for publication at the 2021 International Joint Conference
on Neural Networks (IJCNN), July 2021, Virtual Event.
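The exploration described above can be illustrated with a toy sketch: quantize a weight tensor under different bit-widths and rounding schemes, then keep only the candidates on the memory-accuracy Pareto front. Function names, the fixed-point format, and the dominance test are illustrative assumptions, not Q-SpiNN's exact algorithm.

```python
import numpy as np

def quantize(w, bits, rounding="nearest"):
    """Post-training fixed-point quantization of a weight tensor in
    [-1, 1). Bit-width and rounding mode are the explored knobs; the
    exact schemes in Q-SpiNN may differ."""
    scale = 2.0 ** (bits - 1)
    x = w * scale
    if rounding == "nearest":
        q = np.round(x)
    elif rounding == "truncate":
        q = np.trunc(x)
    else:  # stochastic rounding
        q = np.floor(x + np.random.rand(*np.shape(x)))
    return np.clip(q, -scale, scale - 1) / scale

def pareto_front(candidates):
    """Keep (memory, accuracy) pairs not dominated by any other
    candidate, i.e., no candidate uses less-or-equal memory with
    greater-or-equal accuracy."""
    front = []
    for mem, acc in candidates:
        dominated = any(m <= mem and a >= acc and (m, a) != (mem, acc)
                        for m, a in candidates)
        if not dominated:
            front.append((mem, acc))
    return front
```

In a real exploration, each (scheme, precision, rounding) combination would be evaluated for accuracy, and the framework would pick a point from the resulting front.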
CarSNN: An Efficient Spiking Neural Network for Event-Based Autonomous Cars on the Loihi Neuromorphic Research Processor
Autonomous Driving (AD) related features provide new forms of mobility that
are also beneficial for other kinds of intelligent and autonomous systems,
such as robots, smart transportation, and smart industries. For these
applications, decisions need to be made fast and in real time. Moreover, in
the quest for electric mobility, this task must follow a low-power policy,
without significantly affecting the autonomy of the means of transport or the
robot. These two challenges can be tackled using the emerging Spiking Neural
Networks (SNNs). When deployed on specialized neuromorphic hardware, SNNs can
achieve high performance with low latency and low power consumption. In this
paper, we use an SNN connected to an event-based camera to address one of the
key problems of AD, i.e., the classification between cars and other objects.
To consume less power than traditional frame-based cameras, we use a Dynamic
Vision Sensor (DVS). The experiments are conducted following an offline
supervised learning rule, followed by mapping the learnt SNN model onto the
Intel Loihi Neuromorphic Research Chip. Our best experiment achieves an
accuracy of 86% in the offline implementation, which drops to 83% when ported
onto the Loihi Chip. The neuromorphic hardware implementation has a maximum
latency of 0.72 ms per sample, and consumes only 310 mW. To the best of our
knowledge, this work is the first
implementation of an event-based car classifier on a Neuromorphic Chip.
Comment: Accepted for publication at IJCNN 2021.
FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks
Spiking Neural Networks (SNNs) are gaining interest due to their event-driven
processing, which potentially enables low-power/energy computation on hardware
platforms, while offering unsupervised learning capability through the
spike-timing-dependent plasticity (STDP) rule. However, state-of-the-art SNNs
require a large memory footprint to achieve high accuracy, making them
difficult to deploy on embedded systems, for instance on battery-powered
mobile devices and IoT Edge nodes. Towards this, we propose FSpiNN, an
optimization framework for obtaining memory- and energy-efficient SNNs for
training and inference processing, with unsupervised learning capability,
while maintaining accuracy. This is achieved by (1) reducing the computational
requirements of neuronal and STDP operations, (2) improving the accuracy of
STDP-based learning, (3) compressing the SNN through fixed-point
quantization, and (4) incorporating the memory and energy requirements into the
optimization process. FSpiNN reduces the computational requirements by reducing
the number of neuronal operations, the STDP-based synaptic weight updates, and
the STDP complexity. To improve the accuracy of learning, FSpiNN employs
timestep-based synaptic weight updates, and adaptively determines the STDP
potentiation factor and the effective inhibition strength. The experimental
results show that, as compared to the state-of-the-art work, FSpiNN achieves
7.5x memory saving, and improves the energy-efficiency by 3.5x on average for
training and by 1.8x on average for inference, across MNIST and Fashion MNIST
datasets, with no accuracy loss for a network with 4900 excitatory neurons,
thereby enabling energy-efficient SNNs for edge devices/embedded systems.
Comment: To appear in the IEEE Transactions on Computer-Aided Design of
Integrated Circuits and Systems (IEEE-TCAD), as part of the ESWEEK-TCAD
Special Issue, September 2020.
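For intuition about the STDP-based synaptic weight updates mentioned above, a simplified pair-based potentiation step might look as follows. The fixed learning rate here is a placeholder for the adaptive potentiation factor and effective inhibition strength that FSpiNN determines; the function and its signature are assumptions for this sketch.

```python
import numpy as np

def stdp_update(w, pre_trace, post_spike, lr=0.01, w_max=1.0):
    """Simplified pair-based STDP potentiation: when a post-synaptic
    neuron spikes (post_spike = 1), strengthen each synapse in
    proportion to its pre-synaptic activity trace.

    A fixed learning rate `lr` stands in for FSpiNN's adaptively
    determined potentiation factor; weights are clipped to [0, w_max].
    """
    dw = lr * pre_trace * post_spike   # potentiate recently active inputs
    return np.clip(w + dw, 0.0, w_max)
```

Performing such updates once per timestep over batched traces, rather than per spike pair, is one way frameworks like FSpiNN reduce the number of weight-update operations.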
EVA London 2022: Electronic Visualisation and the Arts
The Electronic Visualisation and the Arts London 2022 Conference (EVA London 2022) is co-sponsored by the Computer Arts Society (CAS) and BCS, the Chartered Institute for IT, of which the CAS is a Specialist Group. Of course, this has been a difficult time for all conferences, with the Covid-19 pandemic. For the first time since 2019, the EVA London 2022 Conference is a physical conference. It is also an online conference, as it was in the previous two years. We continue with publishing the proceedings, both online, with open access via ScienceOpen, and also in our traditional printed form, for the second year in full colour. Over recent decades, the EVA London Conference on Electronic Visualisation and the Arts has established itself as one of the United Kingdom’s most innovative and interdisciplinary conferences. It brings together a wide range of research domains to celebrate a diverse set of interests, with a specialised focus on visualisation. The long and short papers in this volume cover varied topics concerning the arts, visualisations, and IT, including 3D graphics, animation, artificial intelligence, creativity, culture, design, digital art, ethics, heritage, literature, museums, music, philosophy, politics, publishing, social media, and virtual reality, as well as other related interdisciplinary areas.
The EVA London 2022 proceedings presents a wide spectrum of papers, demonstrations, Research Workshop contributions, other workshops, and for the seventh year, the EVA London Symposium, in the form of an opening morning session, with three invited contributors. The conference includes a number of other associated evening events including ones organised by the Computer Arts Society, Art in Flux, and EVA International. As in previous years, there are Research Workshop contributions in this volume, aimed at encouraging participation by postgraduate students and early-career artists, accepted either through the peer-review process or directly by the Research Workshop chair. The Research Workshop contributors are offered bursaries to aid participation. In particular, EVA London liaises with Art in Flux, a London-based group of digital artists. The EVA London 2022 proceedings includes long papers and short “poster” papers from international researchers inside and outside academia, from graduate artists, PhD students, industry professionals, established scholars, and senior researchers, who value EVA London for its interdisciplinary community. The conference also features keynote talks. A special feature this year is support for Ukrainian culture after its invasion earlier in the year. This publication has resulted from a selective peer review process, fitting as many excellent submissions as possible into the proceedings.
This year, submission numbers were lower than in previous years, most likely due to the pandemic and a new requirement to submit drafts of long papers for review as well as abstracts. It is still pleasing to have so many good proposals from which to select the papers that have been included. EVA London is part of a larger network of EVA international conferences. EVA events have been held in Athens, Beijing, Berlin, Brussels, California, Cambridge (both UK and USA), Canberra, Copenhagen, Dallas, Delhi, Edinburgh, Florence, Gifu (Japan), Glasgow, Harvard, Jerusalem, Kiev, Laval, London, Madrid, Montreal, Moscow, New York, Paris, Prague, St Petersburg, Thessaloniki, and Warsaw. Further venues for EVA conferences are very much encouraged by the EVA community. As noted earlier, this volume is a record of accepted submissions to EVA London 2022. Associated online presentations are in general recorded and made available online after the conference.