Unsupervised Learning with Self-Organizing Spiking Neural Networks
We present a system that hybridizes self-organizing map (SOM)
properties with spiking neural networks (SNNs), retaining many of the features
of SOMs. Networks are trained in an unsupervised manner to learn a
self-organized lattice of filters via excitatory-inhibitory interactions among
populations of neurons. We develop and test various inhibition strategies,
such as inhibition that grows with inter-neuron distance and schemes with two
distinct levels of inhibition.
The quality of the unsupervised learning algorithm is evaluated using examples
with known labels. Several biologically-inspired classification tools are
proposed and compared, including a population-level confidence rating and
n-grams based on a spike-motif algorithm. With the optimal choice of
parameters, our approach improves on state-of-the-art spiking neural networks.
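The distance-dependent inhibition strategy mentioned above can be sketched as a minimal NumPy illustration; the function name, lattice layout, and parameter values here are our own assumptions, not the paper's implementation:

```python
import numpy as np

def lattice_inhibition(grid_size, base=0.1, slope=0.5):
    # Lateral inhibitory weights on a grid_size x grid_size lattice:
    # inhibition between two neurons grows linearly with their Euclidean
    # distance on the lattice (base and slope are illustrative values).
    coords = np.array([(r, c) for r in range(grid_size)
                       for c in range(grid_size)], dtype=float)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    W_inh = base + slope * dist
    np.fill_diagonal(W_inh, 0.0)  # a neuron does not inhibit itself
    return W_inh

W = lattice_inhibition(3)  # 9 neurons on a 3x3 lattice
```

Under such a scheme, nearby filters compete weakly and distant ones strongly, which pushes the lattice toward a topographic, SOM-like arrangement of filters.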
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Recent studies have shown that synaptic unreliability is a robust and
sufficient mechanism for inducing the stochasticity observed in cortex. Here,
we introduce Synaptic Sampling Machines, a class of neural network models that
uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised
learning. Similar to the original formulation of Boltzmann machines, these
models can be viewed as a stochastic counterpart of Hopfield networks, but
where stochasticity is induced by a random mask over the connections. Synaptic
stochasticity plays the dual role of an efficient mechanism for sampling, and a
regularizer during learning akin to DropConnect. A local synaptic plasticity
rule implementing an event-driven form of contrastive divergence enables the
learning of generative models in an on-line fashion. Synaptic sampling machines
perform equally well using discrete-time artificial units (as in Hopfield
networks) or continuous-time leaky integrate-and-fire neurons. The learned
representations are remarkably sparse and robust to reductions in bit precision
and synapse pruning: removal of more than 75% of the weakest connections
followed by cursory re-learning causes a negligible performance loss on
benchmark classification tasks. The spiking neuron-based synaptic sampling
machines outperform existing spike-based unsupervised learners, while
potentially offering substantial advantages in terms of power and complexity,
and are thus promising models for on-line learning in brain-inspired hardware.
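Two of the mechanisms named in the abstract, the DropConnect-like random mask over connections and the pruning of the weakest synapses, might be sketched as follows (a toy NumPy illustration; the function names and parameter values are assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, W, p=0.5, rng=rng):
    # Synaptic unreliability: each synapse transmits independently with
    # probability p, i.e. a DropConnect-like random mask over the weights.
    mask = rng.random(W.shape) < p
    return (W * mask) @ x

def prune_weakest(W, frac=0.75):
    # Remove the given fraction of connections with the smallest |weight|.
    thresh = np.quantile(np.abs(W), frac)
    return np.where(np.abs(W) >= thresh, W, 0.0)

W = rng.normal(size=(8, 8))
W_pruned = prune_weakest(W)             # ~75% of synapses set to zero
y = stochastic_forward(np.ones(8), W_pruned)
```

In the sampling view, each random mask corresponds to one Monte Carlo draw of the effective network, so repeated forward passes sample from a distribution over sub-networks rather than evaluating a single deterministic model.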
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
Spiking Neural Networks (SNNs) are claimed to offer many advantages in terms
of biological plausibility and energy efficiency compared to standard Deep Neural
Networks (DNNs). Recent works have shown that DNNs are vulnerable to
adversarial attacks, i.e., small perturbations added to the input data can lead
to targeted or random misclassifications. In this paper, we investigate the
key research question: "Are SNNs secure?" Towards this, we perform a
comparative study of the security vulnerabilities of SNNs and DNNs with
respect to adversarial noise. Afterwards, we propose a novel black-box attack
methodology, i.e., without the knowledge of the internal structure of the SNN,
which employs a greedy heuristic to automatically generate imperceptible and
robust adversarial examples (i.e., attack images) for the given SNN. We perform
an in-depth evaluation for a Spiking Deep Belief Network (SDBN) and a DNN
having the same number of layers and neurons (to obtain a fair comparison), in
order to study the efficiency of our methodology and to understand the
differences between SNNs and DNNs with respect to adversarial examples. Our work
opens new avenues of research towards the robustness of the SNNs, considering
their similarities to the human brain's functionality.Comment: Accepted for publication at the 2020 International Joint Conference
on Neural Networks (IJCNN
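A greedy black-box attack of the kind described, which queries only the model's outputs and never its internals, could be sketched like this (hypothetical: the paper's actual heuristic, imperceptibility constraints, and parameters may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def greedy_blackbox_attack(image, query, target, eps=0.1,
                           max_queries=200, rng=rng):
    # Black-box setting: `query` returns class scores but exposes no
    # internal structure. Greedily perturb one random pixel at a time
    # and keep the change only if the target-class score improves.
    adv = image.copy()
    best = query(adv)[target]
    for _ in range(max_queries):
        trial = adv.copy()
        i = rng.integers(trial.size)
        trial.flat[i] = np.clip(trial.flat[i] + rng.choice([-eps, eps]),
                                0.0, 1.0)
        score = query(trial)[target]
        if score > best:        # greedy: accept only improving perturbations
            adv, best = trial, score
    return adv

# Toy "model" for demonstration: score for class 1 is the mean intensity.
toy_query = lambda img: np.array([1.0 - img.mean(), img.mean()])
x = np.zeros((4, 4))
x_adv = greedy_blackbox_attack(x, toy_query, target=1)
```

Because each accepted change touches a single pixel by at most `eps`, the perturbation stays small per coordinate, which is the usual proxy for imperceptibility in such greedy schemes.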
Accelerated physical emulation of Bayesian inference in spiking neural networks
The massively parallel nature of biological information processing plays an
important role in its superiority over human-engineered computing devices.
particular, it may hold the key to overcoming the von Neumann bottleneck that
limits contemporary computer architectures. Physical-model neuromorphic devices
seek to replicate not only this inherent parallelism, but also aspects of its
microscopic dynamics in analog circuits emulating neurons and synapses.
However, these machines require network models that are not only adept at
solving particular tasks, but that can also cope with the inherent
imperfections of analog substrates. We present a spiking network model that
performs Bayesian inference through sampling on the BrainScaleS neuromorphic
platform, where we use it for generative and discriminative computations on
visual data. By illustrating its functionality on this platform, we implicitly
demonstrate its robustness to various substrate-specific distortive effects, as
well as its accelerated capability for computation. These results showcase the
advantages of brain-inspired physical computation and provide important
building blocks for large-scale neuromorphic applications.
Comment: This preprint was published on 14 November 2019. Please cite as:
Kungl A. F. et al. (2019) Accelerated Physical Emulation of Bayesian
Inference in Spiking Neural Networks. Front. Neurosci. 13:1201. doi:
10.3389/fnins.2019.0120
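The spike-based sampling idea, in which the network's state sequence forms samples from a target distribution, has a simple software analogue: Gibbs sampling from a Boltzmann distribution, where each binary unit stands in for a neuron's refractory state. This is only a toy sketch; the BrainScaleS platform emulates analog leaky integrate-and-fire neurons, not this loop:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_boltzmann(W, b, steps=5000, rng=rng):
    # Each binary unit z_k "fires" with probability sigma(W_k.z + b_k),
    # so the sequence of network states samples from
    # p(z) proportional to exp(z'Wz/2 + b'z).
    n = len(b)
    z = rng.integers(0, 2, n).astype(float)
    samples = np.empty((steps, n))
    for t in range(steps):
        for k in range(n):
            u = W[k] @ z + b[k]            # membrane-potential analogue
            z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
        samples[t] = z
    return samples

# Independent units (W = 0): firing rates should match sigma(b).
S = sample_boltzmann(np.zeros((2, 2)), np.array([2.0, -2.0]))
```

Clamping a subset of units to observed pixel values turns the same dynamics into posterior (discriminative) inference, while running them unclamped yields the generative mode described in the abstract.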