
    An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike-emitting and information-processing mechanisms found in biological cognitive systems motivate the use of hierarchical structure and temporal encoding in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and the temporal encoding approach require neurons to process information serially in space and in time, respectively, which reduces training efficiency significantly. Most existing methods for training hierarchical SNNs are based on the traditional back-propagation algorithm and inherit its drawbacks of gradient diffusion and sensitivity to parameters. To retain the powerful computational capability of the hierarchical structure and the temporal encoding mechanism while overcoming the low efficiency of existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are obtained by solving a quadratic function in the spike response model instead of checking the postsynaptic voltage at every time point as traditional algorithms do. In the feedback weight modification, the computational error is propagated to previous layers by presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, the algorithm derives the mathematical relation between the weight variation and the change in voltage error, which makes normalization of the weight modification applicable. With these strategies, the algorithm outperforms traditional multilayer SNN algorithms in learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper.
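    To illustrate the feedforward idea, here is a minimal sketch assuming the postsynaptic potential near the firing threshold can be written as a quadratic a·s² + b·s + c; the exact quadratic and kernel parameters are defined in the paper and are not reproduced in this listing, so the function name and coefficients below are hypothetical.

```python
import numpy as np

def first_crossing_time(a, b, c, theta):
    """Earliest non-negative time s at which a*s^2 + b*s + c reaches the
    firing threshold theta, obtained by solving the quadratic in closed
    form instead of scanning every discrete time point."""
    roots = np.roots([a, b, c - theta])          # solve a*s^2 + b*s + (c - theta) = 0
    real = roots[np.isreal(roots)].real          # keep only real solutions
    candidates = np.sort(real[real >= 0.0])      # spike times cannot be negative
    return float(candidates[0]) if candidates.size else None  # None: no spike emitted

# Example: a rising potential 0.3*s^2 + 0.1*s crosses a threshold of 1.0 at s ≈ 1.67
print(first_crossing_time(0.3, 0.1, 0.0, 1.0))
```

    Solving for the crossing in closed form is what removes the per-time-step voltage check from the feedforward pass.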

    The output spike scopes.

    The output spike scopes generated by the input spike to each layer.

    The convergent process of our algorithm.

    The convergent process of our algorithm with 5, 10, and 15 hidden neurons. All of these simulations reach an accuracy of 1.

    Computational analysis of the relationship between allergenicity and digestibility of allergenic proteins in simulated gastric fluid-2

    Copyright information: taken from "Computational analysis of the relationship between allergenicity and digestibility of allergenic proteins in simulated gastric fluid", BMC Bioinformatics 2007;8:375, published online 9 Oct 2007, PMCID: PMC2099448. http://www.biomedcentral.com/1471-2105/8/375

    …of which the digestibility is lower than the threshold, while to the right it is higher. The unit for digestibility is the amino acid residue.

    Network structure for classification.

    Network structure consisting of 12 × F input neurons, F hidden neurons, and one output neuron, where F is the number of features.
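    As a rough illustration of this topology, the sketch below allocates weight matrices for 12 × F input neurons, F hidden neurons, and one output neuron; the uniform random initialization and the function name are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def build_network(num_features, encoding_neurons=12, seed=None):
    """Allocate weight matrices for the 12*F -> F -> 1 topology in the figure.
    The uniform initialization is an assumption for illustration only."""
    rng = np.random.default_rng(seed)
    n_input = encoding_neurons * num_features    # 12 * F input neurons
    n_hidden = num_features                      # F hidden neurons
    w_hidden = rng.uniform(0.0, 1.0, size=(n_hidden, n_input))   # input -> hidden
    w_output = rng.uniform(0.0, 1.0, size=(1, n_hidden))         # hidden -> output
    return w_hidden, w_output

w_h, w_o = build_network(num_features=4)
print(w_h.shape, w_o.shape)   # (4, 48) (1, 4)
```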

    The network structure in our simulation.

    There are 50 input neurons, 100 hidden neurons, and one output neuron.

    The voltage in the time scope [t1, t2].

    The voltage ε_j(s_j) caused by the input spike is above ϑ_v when s_j lies in [t1, t2]. The voltage contributed by the input is set to 0 at time t if t is not in the interval [t1, t2].
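    For reference, a common choice of spike response kernel (stated here as an assumption, since the exact kernel is not reproduced in this listing) is the alpha-shaped function, which exceeds the threshold ϑ_v on exactly one interval [t1, t2]:

```latex
% Assumed alpha-shaped SRM kernel with time constant \tau; \Theta is the Heaviside step.
\epsilon(s) = \frac{s}{\tau}\,\exp\!\left(1 - \frac{s}{\tau}\right)\Theta(s),
\qquad
\epsilon(s) \ge \vartheta_v \;\Longleftrightarrow\; s \in [t_1, t_2].
```

    Because this kernel rises to a single maximum at s = τ and then decays, the super-threshold region is a single interval, which is the situation the figure depicts.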

    The error and its assignment in our algorithm.

    The error err is split into two parts: one part is assigned to the current layer for weight modification, and the other is propagated to the previous (n − 1) layers.
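    A minimal sketch of this two-way assignment follows; the split ratio alpha and the even division across the previous (n − 1) layers are illustrative assumptions, since the listing does not reproduce the paper's actual assignment rule.

```python
def assign_error(err, n_layers, alpha=0.5):
    """Split the output error into a part used to modify the current layer's
    weights and a part propagated back (via presynaptic spike jitter) to the
    previous (n - 1) layers. The ratio alpha is a placeholder."""
    local_err = alpha * err                  # kept by the current layer
    propagated = (1.0 - alpha) * err         # pushed back to earlier layers
    per_layer = propagated / max(n_layers - 1, 1)
    return local_err, [per_layer] * (n_layers - 1)

local, upstream = assign_error(err=0.8, n_layers=3)
print(local, upstream)   # 0.4 [0.2, 0.2]
```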

    Training performance on various situations.

    A: Simulation results for different time lengths with the input spike rate fixed at 10 Hz. B: Simulation results for different input firing rates with the time length fixed at 500 ms.

    Training time of one epoch for various time lengths.

    Training time of one epoch for various time lengths.