A spiking half-cognitive model for classification
This paper describes a spiking neural network that learns classes. Following a classic psychological task, the model learns some types of classes better than others, so the net is a spiking cognitive model of classification. A simulated neural system, derived from an existing model, learns natural kinds, but is unable to form sufficient attractor states for all of the types of classes. An extension of the model, using a combination of singletons and triplets of input features, learns all of the types. The models make use of a principled mechanism for spontaneous firing and a compensatory Hebbian learning rule. Combined, these mechanisms allow learning to spread to neurons not directly stimulated by the environment. The overall network learns the types of classes in a fashion broadly consistent with the psychological data. However, the order of speed of learning the types is not entirely consistent with the psychological data, though it may be consistent with one of two psychological systems a given person possesses. A psychological test of this hypothesis is proposed.
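The combination of spontaneous firing and a compensatory Hebbian rule can be sketched in a few lines. The sketch below is a hedged illustration, not the paper's actual model: the network size, firing threshold, spontaneous-firing probability, and the per-neuron target for total incoming weight are all assumed values, and the compensatory rule is implemented as a simple renormalisation of each neuron's incoming weights, so that potentiating some synapses weakens the others.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                            # number of neurons (illustrative)
W = rng.uniform(0, 0.1, (n, n))   # synaptic weights (hypothetical initialisation)
np.fill_diagonal(W, 0.0)
target_total = 1.0                # assumed per-neuron target for total incoming weight
eta = 0.05                        # learning rate (illustrative)
p_spont = 0.05                    # assumed spontaneous-firing probability

def step(stimulus):
    """One update: external stimulus plus spontaneous firing, then a
    compensatory Hebbian update that renormalises incoming weights."""
    # neurons fire if stimulated, driven by the net, or spontaneously
    drive = W @ stimulus
    fired = (stimulus > 0) | (drive > 0.5) | (rng.random(n) < p_spont)
    x = fired.astype(float)
    # Hebbian term: strengthen synapses between co-active neurons
    W[:] += eta * np.outer(x, x)
    np.fill_diagonal(W, 0.0)
    # compensatory term: scale each neuron's incoming weights back toward
    # the target total, so potentiation of some synapses depresses the rest
    totals = W.sum(axis=1, keepdims=True)
    W[:] *= np.where(totals > 0, target_total / np.maximum(totals, 1e-9), 1.0)

for _ in range(100):
    stim = (rng.random(n) < 0.2).astype(float)  # random feature pattern
    step(stim)
```

Because spontaneous firing lets unstimulated neurons occasionally co-fire with stimulated ones, the Hebbian term can recruit them into an attractor, which is the spreading effect the abstract describes.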
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Recent studies have shown that synaptic unreliability is a robust and
sufficient mechanism for inducing the stochasticity observed in cortex. Here,
we introduce Synaptic Sampling Machines, a class of neural network models that
uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised
learning. Similar to the original formulation of Boltzmann machines, these
models can be viewed as a stochastic counterpart of Hopfield networks, but
where stochasticity is induced by a random mask over the connections. Synaptic
stochasticity plays the dual role of an efficient mechanism for sampling, and a
regularizer during learning akin to DropConnect. A local synaptic plasticity
rule implementing an event-driven form of contrastive divergence enables the
learning of generative models in an on-line fashion. Synaptic sampling machines
perform equally well using discrete-time artificial units (as in Hopfield
networks) or continuous-time leaky integrate & fire neurons. The learned
representations are remarkably sparse and robust to reductions in bit precision
and synapse pruning: removal of more than 75% of the weakest connections
followed by cursory re-learning causes a negligible performance loss on
benchmark classification tasks. The spiking neuron-based synaptic sampling
machines outperform existing spike-based unsupervised learners, while
potentially offering substantial advantages in terms of power and complexity,
and are thus promising models for on-line learning in brain-inspired hardware.
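The core mechanism described above (synaptic stochasticity as a DropConnect-like random mask over connections) can be sketched as follows. Everything concrete here is an illustrative assumption rather than the paper's exact formulation: the layer sizes, the transmission probability p, and the 1/p rescaling that keeps the mean drive comparable across samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.5, (n_visible, n_hidden))  # Hopfield/RBM-style weights
p = 0.5                                        # assumed synaptic transmission probability

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sample_hidden(v):
    """One stochastic update: each synapse transmits independently with
    probability p (the random mask over connections), inducing Monte Carlo
    sampling without any explicit unit-level noise source."""
    mask = rng.random(W.shape) < p             # blank-out mask, as in DropConnect
    u = v @ (W * mask) / p                     # rescale to preserve the mean drive
    return (rng.random(n_hidden) < sigmoid(u)).astype(float)

v = np.array([1, 0, 1, 1, 0, 0], dtype=float)
h = sample_hidden(v)
```

Repeated calls with the same visible vector yield different hidden samples, which is the sense in which unreliable synapses double as both the sampler and a DropConnect-style regulariser.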
Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been
demonstrated to perform efficiently in a variety of applications, such as
dimensionality reduction, feature learning, and classification. Their
implementation on neuromorphic hardware platforms emulating large-scale
networks of spiking neurons can have significant advantages from the
perspectives of scalability, power dissipation and real-time interfacing with
the environment. However, the traditional RBM architecture and the commonly used
training algorithm known as Contrastive Divergence (CD) are based on discrete
updates and exact arithmetic, which do not directly map onto a dynamical neural
substrate. Here, we present an event-driven variation of CD to train an RBM
constructed with Integrate & Fire (I&F) neurons that is constrained by the
limitations of existing and near-future neuromorphic hardware platforms. Our
strategy is based on neural sampling, which allows us to synthesize a spiking
neural network that samples from a target Boltzmann distribution. The recurrent
activity of the network replaces the discrete steps of the CD algorithm, while
Spike Time Dependent Plasticity (STDP) carries out the weight updates in an
online, asynchronous fashion. We demonstrate our approach by training an RBM
composed of leaky I&F neurons with STDP synapses to learn a generative model of
the MNIST hand-written digit dataset, and by testing it in recognition,
generation and cue integration tasks. Our results contribute to a machine
learning-driven approach for synthesizing networks of spiking neurons capable
of carrying out practical, high-level functionality. Comment: Under review.
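The two-phase structure described above, where recurrent spiking activity replaces the discrete CD steps and STDP carries the weight updates, can be sketched with a toy pair-based coincidence rule. The sketch is a hedged illustration, not the paper's algorithm: the layer sizes, coincidence window, and the reuse of identical spike trains in both phases are assumptions chosen only to make the positive/negative phase symmetry visible.

```python
import numpy as np

rng = np.random.default_rng(2)
nv, nh = 8, 5
W = rng.normal(0, 0.1, (nv, nh))   # RBM weights (hypothetical sizes)
eta = 0.01                         # learning rate (illustrative)
window = 5.0                       # assumed STDP coincidence window in ms

def stdp_cd_update(pre_spikes, post_spikes, sign):
    """Pair-based coincidence update: sign = +1 during the data-driven phase,
    sign = -1 during the free-running (model) phase, so the two phases
    together play the roles of the positive and negative terms of CD."""
    dW = np.zeros_like(W)
    for i, t_pre in pre_spikes:        # events are (neuron index, time in ms)
        for j, t_post in post_spikes:
            if abs(t_post - t_pre) < window:
                dW[i, j] += 1.0
    W[:] += sign * eta * dW

W0 = W.copy()
spikes_pre  = [(0, 1.0), (2, 2.0)]     # toy visible-layer spike events
spikes_post = [(1, 2.5), (3, 9.0)]     # toy hidden-layer spike events
stdp_cd_update(spikes_pre, spikes_post, +1)   # data phase: potentiate
stdp_cd_update(spikes_pre, spikes_post, -1)   # model phase: depress
```

With identical activity in both phases the two updates cancel exactly, mirroring the CD fixed point where the data-driven and free-running statistics agree.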
Algorithm and Hardware Design of Discrete-Time Spiking Neural Networks Based on Back Propagation with Binary Activations
We present a new back propagation based training algorithm for discrete-time
spiking neural networks (SNN). Inspired by recent deep learning algorithms on
binarized neural networks, binary activation with a straight-through gradient
estimator is used to model the leaky integrate-fire spiking neuron, overcoming
the difficulty in training SNNs using back propagation. Two SNN training
algorithms are proposed: (1) SNN with discontinuous integration, which is
suitable for rate-coded input spikes, and (2) SNN with continuous integration,
which is more general and can handle input spikes with temporal information.
Neuromorphic hardware designed in 40nm CMOS exploits the spike sparsity and
demonstrates high classification accuracy (>98% on MNIST) and low energy
(48.4-773 nJ/image). Comment: 2017 IEEE Biomedical Circuits and Systems (BioCAS).
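The straight-through gradient estimator mentioned above has a simple form: the forward pass is a hard threshold (the spike), while the backward pass substitutes a surrogate derivative that passes the gradient through unchanged near the threshold. The NumPy sketch below is a minimal illustration; the threshold and gradient-window values are assumptions, not the paper's parameters.

```python
import numpy as np

def binary_activation_forward(u, theta=1.0):
    """Spike (binary activation) when the membrane potential crosses theta."""
    return (u >= theta).astype(float)

def binary_activation_backward(u, grad_out, theta=1.0, width=1.0):
    """Straight-through estimator: pass the upstream gradient through
    unchanged inside a window around the threshold, zero elsewhere, in place
    of the true (zero-almost-everywhere) derivative of the step function."""
    pass_through = (np.abs(u - theta) <= width).astype(float)
    return grad_out * pass_through

u = np.array([-0.5, 0.8, 1.2, 3.0])   # toy membrane potentials
s = binary_activation_forward(u)       # spikes: [0, 0, 1, 1]
g = binary_activation_backward(u, np.ones_like(u))   # gradient: [0, 1, 1, 0]
```

This is exactly what makes back propagation through the non-differentiable spiking nonlinearity tractable: the forward pass stays binary (preserving spike sparsity) while gradients still flow for potentials near threshold.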
Spiking neurons with short-term synaptic plasticity form superior generative networks
Spiking networks that perform probabilistic inference have been proposed both
as models of cortical computation and as candidates for solving problems in
machine learning. However, the evidence for spike-based computation being in
any way superior to non-spiking alternatives remains scarce. We propose that
short-term plasticity can provide spiking networks with distinct computational
advantages compared to their classical counterparts. In this work, we use
networks of leaky integrate-and-fire neurons that are trained to perform both
discriminative and generative tasks in their forward and backward information
processing paths, respectively. During training, the energy landscape
associated with their dynamics becomes highly diverse, with deep attractor
basins separated by high barriers. Classical algorithms solve this problem by
employing various tempering techniques, which are both computationally
demanding and require global state updates. We demonstrate how similar results
can be achieved in spiking networks endowed with local short-term synaptic
plasticity. Additionally, we discuss how these networks can even outperform
tempering-based approaches when the training data is imbalanced. We thereby
show how biologically inspired, local, spike-triggered synaptic dynamics based
simply on a limited pool of synaptic resources can allow spiking networks to
outperform their non-spiking relatives. Comment: corrected typo in abstract.
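Short-term plasticity based on a limited pool of synaptic resources, as invoked above, can be illustrated with a Tsodyks-Markram-style depression model: each spike consumes a fraction of the available resources, which then recover on a slow timescale. All parameter values below are illustrative assumptions, not taken from the paper.

```python
# Tsodyks-Markram-style short-term depression: each spike consumes a fraction
# U of the available resource pool r, which recovers with time constant
# tau_rec (all values are illustrative).
U, tau_rec, dt = 0.4, 100.0, 1.0   # utilisation, recovery (ms), step (ms)

def run(spike_times, t_max=500):
    r, efficacies = 1.0, []
    spikes = set(spike_times)
    for t in range(int(t_max / dt)):
        if t in spikes:
            efficacies.append(U * r)    # transmitted amplitude for this spike
            r -= U * r                  # deplete the resource pool
        r += dt * (1.0 - r) / tau_rec   # slow recovery toward the full pool
    return efficacies

eff = run([10, 20, 30, 200])   # a rapid burst, then an isolated spike
```

During the burst each successive spike transmits less (the pool is depleted), while the late spike transmits almost at full strength again. This spike-triggered weakening of recently active synapses is what lets the network escape deep attractor basins, playing the role that global tempering schemes play in classical samplers.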
An Efficient Method for online Detection of Polychronous Patterns in Spiking Neural Network
Polychronous neural groups are effective structures for the recognition of
precise spike-timing patterns but the detection method is an inefficient
multi-stage brute force process that works off-line on pre-recorded simulation
data. This work presents a new model of polychronous patterns that can capture
precise sequences of spikes directly in the neural simulation. In this scheme,
each neuron is assigned a randomized code that is used to tag the post-synaptic
neurons whenever a spike is transmitted. This creates a polychronous code that
preserves the order of pre-synaptic activity and can be registered in a hash
table when the post-synaptic neuron spikes. A polychronous code is a
sub-component of a polychronous group that will occur, along with others, when
the group is active. We demonstrate the representational and pattern
recognition ability of polychronous codes on a direction selective visual task
involving moving bars that is typical of a computation performed by simple
cells in the cortex. The computational efficiency of the proposed algorithm far
exceeds existing polychronous group detection methods and is well suited for
online detection. Comment: 17 pages, 8 figures.
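The tagging scheme described above can be sketched directly: each neuron carries a randomized code, transmitted spikes append the pre-synaptic code to the post-synaptic neuron's tag list, and when the post-synaptic neuron spikes the ordered tag sequence is hashed and registered. The sketch below is a hedged illustration; the code width, network size, and hashing choice are assumptions, not the paper's exact scheme.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
n = 10
# distinct randomized code per neuron (16-bit width is an assumption)
codes = rng.choice(2**16, size=n, replace=False)
tags = defaultdict(list)           # pending pre-synaptic codes per neuron
pattern_table = defaultdict(int)   # hash table of registered polychronous codes

def transmit(pre, post):
    """A spike transmitted from `pre` tags `post` with pre's code,
    preserving the order of pre-synaptic activity."""
    tags[post].append(int(codes[pre]))

def on_post_spike(post):
    """When the post-synaptic neuron spikes, its ordered tag sequence is
    hashed and registered in the table as a polychronous code."""
    key = hash(tuple(tags[post]))
    pattern_table[key] += 1
    tags[post].clear()
    return key

# the same ordered pre-synaptic sequence always yields the same code...
transmit(0, 5); transmit(2, 5); k1 = on_post_spike(5)
transmit(0, 5); transmit(2, 5); k2 = on_post_spike(5)
# ...while the reversed order yields a different code
transmit(2, 5); transmit(0, 5); k3 = on_post_spike(5)
```

Because the registered key depends on the order of the tag sequence, precise spike orderings map to distinct table entries online, with no off-line multi-stage search over recorded simulation data.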