Multi-Objective Optimization for Size and Resilience of Spiking Neural Networks
Inspired by the connectivity mechanisms in the brain, neuromorphic computing
architectures model Spiking Neural Networks (SNNs) in silicon. As such,
neuromorphic architectures are designed and developed with the goal of having
small, low power chips that can perform control and machine learning tasks.
However, the power consumption of the developed hardware can greatly depend on
the size of the network that is being evaluated on the chip. Furthermore, the
accuracy of a trained SNN that is evaluated on chip can change due to voltage
and current variations in the hardware that perturb the learned weights of the
network. While efforts are made on the hardware side to minimize those
perturbations, a software based strategy to make the deployed networks more
resilient can help further alleviate that issue. In this work, we study Spiking
Neural Networks in two neuromorphic architecture implementations with the goal
of decreasing their size, while at the same time increasing their resiliency to
hardware faults. We leverage an evolutionary algorithm to train the SNNs and
propose a multi-objective fitness function to optimize the size and resiliency
of the SNN. We demonstrate that this strategy leads to well-performing,
small-sized networks that are more resilient to hardware faults.
Comment: Will appear in proceedings of 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON). IEEE Catalog Number: CFP19G31-USB. ISBN: 978-1-7281-3884-8. pg. 431-43
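The multi-objective fitness the abstract describes, rewarding accuracy and fault resilience while penalizing network size, can be sketched as a single scalar score. The function name, the size weight, and the way the three terms are combined below are illustrative assumptions, not the paper's exact formulation; resilience is approximated as mean accuracy under repeated weight perturbations emulating on-chip voltage and current variations.

```python
def multiobjective_fitness(accuracy, perturbed_accuracies, num_synapses,
                           max_synapses, size_weight=0.5):
    """Combine three objectives into one fitness value (illustrative sketch):
    - accuracy: accuracy of the unperturbed network on the task
    - perturbed_accuracies: accuracies measured after random weight
      perturbations, standing in for hardware-fault resilience
    - num_synapses / max_synapses: normalized network size, penalized
      so that evolution favors smaller networks
    """
    resilience = sum(perturbed_accuracies) / len(perturbed_accuracies)
    size_penalty = size_weight * (num_synapses / max_synapses)
    return accuracy + resilience - size_penalty
```

An evolutionary algorithm would then rank candidate SNNs by this score each generation; in practice the paper's method may weight or order the objectives differently.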
Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations
In the recent quest for trustworthy neural networks, we present the Spiking
Neural Network (SNN) as a potential candidate for inherent robustness against
adversarial attacks. In this work, we demonstrate that the adversarial accuracy
of SNNs under gradient-based attacks is higher than that of their non-spiking
counterparts for CIFAR datasets on deep VGG and ResNet architectures,
particularly in the black-box attack scenario. We attribute this robustness to
two fundamental
characteristics of SNNs and analyze their effects. First, we show that the
input discretization introduced by the Poisson encoder improves adversarial
robustness with a reduced number of timesteps. Second, we quantify the gain in
adversarial accuracy with increased leak rate in Leaky-Integrate-Fire (LIF)
neurons. Our results suggest that SNNs trained with LIF neurons and a smaller
number of timesteps are more robust than those with IF (Integrate-Fire)
neurons and a larger number of timesteps. We also overcome the bottleneck of
creating gradient-based adversarial inputs in the temporal domain by proposing
a technique for crafting attacks from the SNN.
Comment: Accepted in 16th European Conference on Computer Vision (ECCV 2020)
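The two SNN ingredients the abstract analyzes, Poisson input encoding and the leak in LIF neurons, can be sketched in a few lines of NumPy. The function names, the hard-reset rule, and the default leak and threshold values below are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def poisson_encode(image, timesteps, rng):
    """Poisson-style rate encoding: each pixel intensity in [0, 1] becomes
    a binary spike train where the pixel fires at each timestep with
    probability equal to its intensity. This discretization of the input
    is one source of the robustness the abstract discusses."""
    return (rng.random((timesteps,) + image.shape) < image).astype(np.float32)

def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """One Leaky-Integrate-and-Fire update: decay the membrane potential
    by the leak factor, integrate the input, then spike and hard-reset
    wherever the threshold is crossed. Setting leak=1.0 recovers the
    non-leaky IF neuron for comparison."""
    v = leak * v + input_current
    spikes = (v >= threshold).astype(np.float32)
    v = v * (1.0 - spikes)  # hard reset of neurons that spiked
    return v, spikes
```

Running `lif_step` repeatedly over the encoded spike trains simulates the network over time; a larger leak rate (smaller `leak` retention factor in this parameterization) makes the neuron forget perturbed inputs faster, which is the effect the second finding quantifies.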