To what extent is the "neural code" a metric?
This paper proposes a review of the different choices for structuring spike trains using deterministic metrics. Temporal constraints observed in biological or computational spike trains are first taken into account. The relation to existing neural codes (rate coding, rank coding, phase coding, ...) is then discussed. The extent to which the "neural code" contained in spike trains is related to a metric appears to be a key point, and a generalization of the Victor-Purpura metric family is proposed for temporally constrained causal spike trains.
Comment: 5 pages, 5 figures. Proceedings of the conference NeuroComp200
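The Victor-Purpura family that the abstract generalizes is a well-known edit-distance on spike trains: deleting or inserting a spike costs 1, and shifting a spike by a time difference dt costs q·|dt|, where the parameter q sets the temporal precision. A minimal dynamic-programming sketch of the base metric (function name and layout are illustrative, not from the paper):

```python
def victor_purpura_distance(t1, t2, q):
    """Victor-Purpura spike train distance.

    t1, t2: sorted lists of spike times; q: cost per unit time shift.
    Insert/delete a spike costs 1; shifting by dt costs q*|dt|.
    """
    n, m = len(t1), len(t2)
    # G[i][j] = distance between the first i spikes of t1 and first j of t2
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        G[i][0] = float(i)          # delete all i spikes
    for j in range(1, m + 1):
        G[0][j] = float(j)          # insert all j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i][j] = min(
                G[i - 1][j] + 1.0,                                  # delete
                G[i][j - 1] + 1.0,                                  # insert
                G[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]),   # shift
            )
    return G[n][m]
```

For small q the metric behaves like a rate-based comparison (shifts are cheap), while for large q it becomes sensitive to precise spike timing, which is exactly the coding-scheme dependence the review discusses.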
Neuromorphic Hardware In The Loop: Training a Deep Spiking Network on the BrainScaleS Wafer-Scale System
Emulating spiking neural networks on analog neuromorphic hardware offers
several advantages over simulating them on conventional computers, particularly
in terms of speed and energy consumption. However, this usually comes at the
cost of reduced control over the dynamics of the emulated networks. In this
paper, we demonstrate how iterative training of a hardware-emulated network can
compensate for anomalies induced by the analog substrate. We first convert a
deep neural network trained in software to a spiking network on the BrainScaleS
wafer-scale neuromorphic system, thereby enabling an acceleration factor of 10,000 compared to the biological time domain. This mapping is followed by the
in-the-loop training, where in each training step, the network activity is
first recorded in hardware and then used to compute the parameter updates in
software via backpropagation. An essential finding is that the parameter
updates do not have to be precise, but only need to approximately follow the
correct gradient, which simplifies the computation of updates. Using this
approach, after only several tens of iterations, the spiking network shows an
accuracy close to the ideal software-emulated prototype. The presented
techniques show that deep spiking networks emulated on analog neuromorphic
devices can attain good computational performance despite the inherent
variations of the analog substrate.
Comment: 8 pages, 10 figures, submitted to IJCNN 201
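The in-the-loop scheme described above can be sketched as a simple loop: run the network on the (noisy) hardware, read back the measured activity, and use it in a software-side gradient step. The sketch below is a toy stand-in, not the BrainScaleS API; `hardware_emulate` merely models an ideal response plus analog variability, and the gradient is deliberately approximate, matching the abstract's observation that updates only need to follow the correct gradient roughly:

```python
import numpy as np

def hardware_emulate(weights, inputs, rng):
    """Stand-in for the analog substrate: ideal sigmoidal response
    plus zero-mean noise modeling fixed-pattern and trial-to-trial
    variability. (Hypothetical; a real system runs the network on-chip.)"""
    rates = 1.0 / (1.0 + np.exp(-inputs @ weights))
    return rates + rng.normal(0.0, 0.05, rates.shape)

def in_the_loop_step(weights, inputs, targets, lr, rng):
    """One hardware-in-the-loop update: record activity on 'hardware',
    compute an approximate gradient in software, update the parameters."""
    rates = hardware_emulate(weights, inputs, rng)   # measured activity
    error = rates - targets                          # software-side error
    grad = inputs.T @ error / len(inputs)            # approximate gradient
    return weights - lr * grad
```

Because the measured activity already reflects the substrate's anomalies, the software update implicitly compensates for them, which is the core idea of the training procedure.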
Reference time in SpikeProp
Although some studies have examined the SpikeProp learning algorithm for spiking neural networks, little has been said about the required input bias neuron that sets the reference start time. This paper examines the importance of the reference time in neural networks based on temporal encoding. The findings refute previous assumptions about the reference start time.
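To make the role of the reference concrete: in SpikeProp-style temporal encoding, each input value is represented by a firing time measured against a time origin, which the bias neuron's spike anchors. The scheme below is one common illustrative choice (larger values fire earlier within a fixed window); the exact encoding varies across studies and is not specified by this abstract:

```python
def encode_temporal(values, t_ref=0.0, window=10.0):
    """Temporal encoding relative to a reference spike.

    values: inputs assumed normalized to [0, 1].
    The reference (bias) neuron fires at t_ref and anchors the time
    origin; each input neuron i fires at t_ref + (1 - values[i]) * window,
    so larger values produce earlier spikes.
    """
    spikes = {"ref": t_ref}
    for i, x in enumerate(values):
        spikes[f"in{i}"] = t_ref + (1.0 - x) * window
    return spikes
```

Shifting `t_ref` shifts every encoded spike by the same amount, so any downstream computation that depends on absolute rather than relative spike times is sensitive to how the reference is chosen.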
Assembly-based STDP: A New Learning Rule for Spiking Neural Networks Inspired by Biological Assemblies
Spiking Neural Networks (SNNs), an alternative to sigmoidal neural networks, incorporate time into their operations using discrete signals called spikes. Employing spikes enables SNNs to mimic any feedforward sigmoidal neural network with lower power consumption. Recently, a new type of SNN has been introduced for classification problems, known as the Degree of Belonging SNN (DoB-SNN). The DoB-SNN is a two-layer spiking neural network that shows significant potential as an alternative SNN architecture and learning algorithm. This paper introduces a new variant of Spike-Timing Dependent Plasticity (STDP), which is based on assemblies of neurons and extends the DoB-SNN's training algorithm to multilayer architectures. The new learning rule, known as assembly-based STDP, employs trained DoBs in each layer to train the next layer, building strong connections between neurons from the same assembly and inhibitory connections between neurons from different assemblies in two consecutive layers. The performance of the multilayer DoB-SNN is evaluated on five datasets from the UCI machine learning repository. Detailed comparisons with other supervised learning algorithms show that the multilayer DoB-SNN achieves better performance on four of the five datasets, and comparable performance on the fifth, against multilayer algorithms that employ considerably more trainable parameters.
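For context, the classic pair-based STDP rule that assembly-based STDP builds on updates a synapse according to the timing difference between pre- and postsynaptic spikes: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. A textbook sketch (generic rule and default constants, not the paper's specific variant):

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight update.

    w: current synaptic weight in [0, 1];
    t_pre, t_post: spike times (ms); tau: plasticity time constant (ms).
    Pre-before-post potentiates (LTP); post-before-pre depresses (LTD).
    """
    dt = t_post - t_pre
    if dt >= 0:
        dw = a_plus * math.exp(-dt / tau)     # causal pairing: LTP
    else:
        dw = -a_minus * math.exp(dt / tau)    # acausal pairing: LTD
    return min(max(w + dw, 0.0), 1.0)         # clip weight to [0, 1]
```

Assembly-based STDP, as described above, additionally uses assembly membership to decide the sign of the coupling across consecutive layers: excitatory within an assembly, inhibitory between assemblies.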