Improving classification accuracy of feedforward neural networks for spiking neuromorphic chips
Deep Neural Networks (DNNs) achieve human-level performance in many image
analytics tasks, but they are mostly deployed on GPU platforms that consume a
considerable amount of power. New hardware platforms using lower-precision
arithmetic achieve drastic reductions in power consumption. More recently,
brain-inspired spiking neuromorphic chips have achieved even lower power
consumption, on the order of milliwatts, while still offering real-time
processing.
However, to deploy DNNs on energy-efficient neuromorphic chips, the
incompatibility between the continuous neurons and synaptic weights of
traditional DNNs and the discrete spiking neurons and synapses of neuromorphic
chips must be overcome. Previous work has achieved this by training a network
to learn continuous probabilities and then deploying it to a neuromorphic
architecture, such as the IBM TrueNorth Neurosynaptic System, by randomly
sampling these probabilities.
The main contribution of this paper is a new learning algorithm that learns a
TrueNorth configuration ready for deployment. We achieve this by directly
training a binary hardware crossbar that accommodates the TrueNorth axon
configuration constraints, and we propose a different neuron model.
Results of our approach trained on electroencephalogram (EEG) data show a
significant improvement over previous work (86% accuracy vs. 76%) while
maintaining state-of-the-art performance on the MNIST handwritten digit dataset.
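
As a concrete (and entirely illustrative) reading of "directly training a
binary hardware crossbar", the following Python sketch keeps real-valued
shadow weights for gradient updates while the forward pass always uses their
binarized projection, in the style of a straight-through estimator. The loss,
shapes, and update rule are assumptions, not the paper's algorithm.

```python
# Minimal sketch: train a binary crossbar directly via shadow weights.
# Hypothetical names; not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def binarize(w):
    """Project real-valued shadow weights onto {0, 1} crossbar entries."""
    return (w > 0).astype(w.dtype)

def forward(x, w_shadow):
    # The network only ever computes with the deployable binary weights.
    return x @ binarize(w_shadow)

def train_step(x, target, w_shadow, lr=0.01):
    err = forward(x, w_shadow) - target
    # Straight-through estimator: gradient flows as if binarize() were identity.
    grad = x.T @ err / x.shape[0]
    return w_shadow - lr * grad

# Toy usage: 64 inputs -> 10 outputs on random data.
w = rng.normal(scale=0.1, size=(64, 10))
x, t = rng.random((32, 64)), rng.random((32, 10))
for _ in range(100):
    w = train_step(x, t, w)
```

Because the forward pass never sees the real-valued weights, the binarized
crossbar that results from training is itself the deployable configuration.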
Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been
demonstrated to perform efficiently in a variety of applications, such as
dimensionality reduction, feature learning, and classification. Their
implementation on neuromorphic hardware platforms emulating large-scale
networks of spiking neurons can have significant advantages from the
perspectives of scalability, power dissipation and real-time interfacing with
the environment. However, the traditional RBM architecture and the commonly
used training algorithm known as Contrastive Divergence (CD) are based on
discrete updates and exact arithmetic, which do not directly map onto a
dynamical neural substrate. Here, we present an event-driven variation of CD
to train an RBM constructed with Integrate-and-Fire (I&F) neurons that is
constrained by the limitations of existing and near-future neuromorphic
hardware platforms. Our
strategy is based on neural sampling, which allows us to synthesize a spiking
neural network that samples from a target Boltzmann distribution. The recurrent
activity of the network replaces the discrete steps of the CD algorithm, while
Spike-Timing-Dependent Plasticity (STDP) carries out the weight updates in an
online, asynchronous fashion. We demonstrate our approach by training an RBM
composed of leaky I&F neurons with STDP synapses to learn a generative model of
the MNIST hand-written digit dataset, and by testing it in recognition,
generation and cue integration tasks. Our results contribute to a machine
learning-driven approach for synthesizing networks of spiking neurons capable
of carrying out practical, high-level functionality.Comment: (Under review
Delay Learning Architectures for Memory and Classification
We present a neuromorphic spiking neural network, the DELTRON, that can
remember and store patterns by changing the delays of every connection as
opposed to modifying the weights. The advantage of this architecture over
traditional weight based ones is simpler hardware implementation without
multipliers or digital-analog converters (DACs) as well as being suited to
time-based computing. The name derives from the similarity of its learning
rule to that of an earlier architecture, the Tempotron. The DELTRON can remember
more patterns than other delay-based networks by modifying a few delays to
remember the most 'salient' or synchronous part of every spike pattern. We
present simulations of memory capacity and classification ability of the
DELTRON for different random spatio-temporal spike patterns. The memory
capacity for noisy spike patterns and for missing spikes is also shown. Finally,
we present SPICE simulation results of the core circuits involved in a
reconfigurable mixed-signal implementation of this architecture.
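
A toy sketch of delay-based learning in the spirit of the DELTRON follows;
this is an assumed formulation, not the paper's rule. The real DELTRON
modifies only the few delays serving the most synchronous ("salient") part of
each pattern, whereas this version pulls every arrival toward one
coincidence time.

```python
# Delay learning: each connection adapts its delay, not its weight, so that
# delayed input spikes coincide at a target time.
import numpy as np

def learn_delays(spike_times, delays, t_target, lr=0.1, epochs=100):
    """Nudge each connection's delay so its delayed spike lands at t_target."""
    for _ in range(epochs):
        arrival = spike_times + delays
        delays = delays - lr * (arrival - t_target)  # pull arrivals together
        delays = np.clip(delays, 0.0, None)          # delays are non-negative
    return delays

rng = np.random.default_rng(1)
spikes = rng.uniform(0.0, 0.02, size=8)  # 8 input spike times (s)
delays = learn_delays(spikes, np.zeros(8), t_target=0.025)
print(np.round((spikes + delays) * 1e3, 3))  # arrivals cluster near 25 ms
```

Note that nothing here multiplies a weight by an input, which is why a
hardware realization needs neither multipliers nor DACs.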
Demonstrating Advantages of Neuromorphic Computation: A Pilot Study
Neuromorphic devices represent an attempt to mimic aspects of the brain's
architecture and dynamics with the aim of replicating its hallmark functional
capabilities in terms of computational power, robust learning and energy
efficiency. We employ a single-chip prototype of the BrainScaleS 2 neuromorphic
system to implement a proof-of-concept demonstration of reward-modulated
spike-timing-dependent plasticity in a spiking network that learns to play the
Pong video game by smooth pursuit. This system combines an electronic
mixed-signal substrate for emulating neuron and synapse dynamics with an
embedded digital processor for on-chip learning, which in this work also serves
to simulate the virtual environment and learning agent. The analog emulation of
neuronal membrane dynamics enables a 1000-fold acceleration with respect to
biological real-time, with the entire chip operating on a power budget of 57 mW.
Compared to an equivalent simulation using state-of-the-art software, the
on-chip emulation is at least one order of magnitude faster and three orders of
magnitude more energy-efficient. We demonstrate how on-chip learning can
mitigate the effects of fixed-pattern noise, which is unavoidable in analog
substrates, while making use of temporal variability for action exploration.
Learning compensates for imperfections of the physical substrate, as
manifested in neuronal parameter variability, by adapting synaptic weights to
match the respective excitability of individual neurons.
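
A hedged sketch of reward-modulated STDP (R-STDP), the learning principle the
pilot study demonstrates, is given below; the constants, trace dynamics, and
reward signal are illustrative assumptions, not the chip's plasticity
processor.

```python
# R-STDP: STDP-like pre/post correlations feed a decaying eligibility trace,
# and a scalar reward gates the conversion of the trace into weight changes.
import numpy as np

TAU_E = 0.1  # eligibility-trace time constant (s)
LR = 0.05    # learning rate

def r_stdp_step(w, elig, pre_post_corr, reward, dt=1e-3):
    """One update step; pre_post_corr holds per-synapse STDP correlations."""
    elig = elig + dt * (-elig / TAU_E) + pre_post_corr
    w = w + LR * reward * elig
    return w, elig

# Toy usage with fictitious correlation values for four synapses.
w, elig = np.zeros(4), np.zeros(4)
corr = np.array([1.0, 0.5, 0.0, -0.5])
w, elig = r_stdp_step(w, elig, corr, reward=+1.0)
print(w)
```

Because the reward multiplies the trace, correlations only become lasting
weight changes when the agent's behavior (here, tracking the ball) is judged
successful, which is what allows learning to absorb fixed-pattern noise.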
Neuromorphic Learning towards Nano Second Precision
Temporal coding is one approach to representing information in spiking neural
networks. An example of its application is the location of sounds by barn owls
that requires especially precise temporal coding. Dependent upon the azimuthal
angle, the arrival times of sound signals are shifted between both ears. In
order to determine these interaural time differences, the phase difference of
the signals is measured. We implemented this biologically inspired network on a
neuromorphic hardware system and demonstrated spike-timing-dependent plasticity
on an analog, highly accelerated hardware substrate. Our neuromorphic
implementation enables the resolution of time differences of less than 50 ns.
On-chip Hebbian learning mechanisms select inputs from a pool of neurons which
code for the same sound frequency. Hence, noise caused by different synaptic
delays across these inputs is reduced. Furthermore, learning compensates for
variations on neuronal and synaptic parameters caused by device mismatch
intrinsic to the neuromorphic substrate.
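
A minimal sketch of the underlying computation follows; it is an assumption
for illustration, not the paper's network. The interaural time difference is
recovered from the phase difference between the two ear signals, here via
cross-correlation; the estimate is quantized to the sample grid, whereas the
neuromorphic circuit reaches finer precision with analog coincidence
detection.

```python
# Estimate an interaural time difference (ITD) by cross-correlating the two
# ear signals of a pure tone.
import numpy as np

FS = 96_000       # sample rate (Hz)
F = 2_000         # tone frequency (Hz)
TRUE_ITD = 50e-6  # 50 microseconds

t = np.arange(0, 0.02, 1 / FS)
left = np.sin(2 * np.pi * F * t)
right = np.sin(2 * np.pi * F * (t - TRUE_ITD))  # right ear lags the left

corr = np.correlate(left, right, mode="full")
lag = np.argmax(corr) - (len(t) - 1)            # peak lag in samples
print(f"estimated ITD: {-lag / FS * 1e6:.1f} us")  # ~52 us (sample-quantized)
```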
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Recent studies have shown that synaptic unreliability is a robust and
sufficient mechanism for inducing the stochasticity observed in cortex. Here,
we introduce Synaptic Sampling Machines, a class of neural network models that
uses synaptic stochasticity as a means for Monte Carlo sampling and unsupervised
learning. Similar to the original formulation of Boltzmann machines, these
models can be viewed as a stochastic counterpart of Hopfield networks, but
where stochasticity is induced by a random mask over the connections. Synaptic
stochasticity plays the dual role of an efficient mechanism for sampling, and a
regularizer during learning akin to DropConnect. A local synaptic plasticity
rule implementing an event-driven form of contrastive divergence enables the
learning of generative models in an on-line fashion. Synaptic sampling machines
perform equally well using discrete-time artificial units (as in Hopfield
networks) or continuous-time leaky integrate-and-fire neurons. The learned
representations are remarkably sparse and robust to reductions in bit precision
and synapse pruning: removal of more than 75% of the weakest connections
followed by cursory re-learning causes a negligible performance loss on
benchmark classification tasks. The spiking neuron-based synaptic sampling
machines outperform existing spike-based unsupervised learners, while
potentially offering substantial advantages in terms of power and complexity,
and are thus promising models for on-line learning in brain-inspired hardware.
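
The random-mask mechanism can be made concrete with a short sketch (an
assumed rate-based formulation; the paper's networks are spiking): each
synapse transmits independently with probability p, which simultaneously
yields Monte Carlo samples of the network output and a DropConnect-like
regularization.

```python
# Stochastic synapses as a random mask over the connections.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, w, p=0.5):
    """One stochastic pass: every synapse independently transmits or not."""
    mask = rng.random(w.shape) < p
    return x @ (w * mask) / p  # rescale to keep the expected drive unchanged

# Averaging many stochastic passes approximates the deterministic output.
x = rng.random((1, 64))
w = rng.normal(size=(64, 10))
samples = np.stack([stochastic_forward(x, w) for _ in range(1000)])
print(np.allclose(samples.mean(axis=0), x @ w, atol=1.0))  # True
```

In expectation the masked network matches the deterministic one, while the
per-pass variability supplies the stochasticity that sampling-based learning
rules exploit.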