
    Illustration of the postsynaptic kernels used in this analysis, and an example of a resulting postsynaptic membrane potential.

<p>(A) The time course of the postsynaptic current kernel <i>α</i>. (B) The PSP kernel <i>ϵ</i>. (C) The reset kernel <i>κ</i>. (D) The resulting membrane potential <i>u</i><sub><i>i</i></sub> as defined by <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0161335#pone.0161335.e001" target="_blank">Eq (1)</a>. In this example, a single presynaptic spike is received at <i>t</i><sub><i>j</i></sub> = 0 ms, and a postsynaptic spike is generated at <i>t</i><sub><i>i</i></sub> = 4 ms by selectively tuning both the synaptic weight <i>w</i><sub><i>ij</i></sub> and the firing threshold <i>ϑ</i>. We take <i>C</i> = 2.5 nF for the neuron’s membrane capacitance, such that the postsynaptic current attains a maximum value of 1 nA.</p>
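The membrane potential in panel (D) can be sketched as a weighted PSP plus a threshold-scaled reset, in the spirit of Eq (1). This is a minimal sketch only: the time constants, weight and threshold values below are illustrative assumptions, and only the spike times at 0 ms and 4 ms come from the caption.

```python
import numpy as np

# Illustrative assumptions (not stated in the caption):
tau_m, tau_s = 10.0, 5.0       # membrane / synaptic time constants (ms)
w_ij, theta = 1.0, 0.5         # synaptic weight and firing threshold (arb. units)
t_j, t_i = 0.0, 4.0            # presynaptic and postsynaptic spike times (ms)

def eps(s):
    """PSP kernel: double exponential, zero for s < 0."""
    return np.where(s >= 0, np.exp(-s / tau_m) - np.exp(-s / tau_s), 0.0)

def kappa(s):
    """Reset kernel: negative exponential after a postsynaptic spike."""
    return np.where(s >= 0, -np.exp(-s / tau_m), 0.0)

t = np.arange(-5.0, 40.0, 0.1)  # time axis (ms)

# Membrane potential: weighted PSP from the presynaptic spike, plus a
# reset scaled by the threshold after the postsynaptic spike.
u = w_ij * eps(t - t_j) + theta * kappa(t - t_i)
```

The potential is zero before the input spike, rises toward threshold, and is pulled back down by the reset kernel after the output spike at 4 ms.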

    The classification performance of each learning rule as a function of the number of input patterns when learning to classify <i>p</i> patterns into five separate classes.

<p>Each input class was identified by a single, unique target output spike timing, which a single postsynaptic neuron had to learn to match to within 1 ms. <i>Left:</i> The averaged classification performance 〈<i>P</i><sub><i>c</i></sub>〉 for networks containing <i>n</i><sub><i>i</i></sub> = 200, 400 and 600 presynaptic neurons. <i>Right:</i> The corresponding number of epochs taken by the network to reach a performance level of 90%; runs exceeding 500 epochs were counted as failures to learn all the patterns at the required performance level. Results were averaged over 20 independent runs, and error bars show the standard deviation.</p>

    The classification performance of each learning rule as a function of the number of target output spikes used to identify input patterns.

<p>The network was tasked with classifying 10 input patterns into 5 separate classes. A classification was considered correct when the number of actual output spikes fired by a single postsynaptic neuron matched that of its target, and each actual spike fell within 1 ms of its corresponding target timing. In this case, a network containing 200 presynaptic neurons was trained over an extended 1000 epochs to allow for slower learning, and results were averaged over 20 independent runs.</p>

    The minimum target output firing time, relative to an input spike time, that can accurately be learned using the FILT rule, plotted as a function of the filter time constant <i>τ</i><sub><i>q</i></sub>.

<p>This figure makes predictions based on a single synapse with an input spike at 0 ms. At <i>τ</i><sub><i>q</i></sub> = 0 ms the minimum time is equivalent to <i>s</i><sup>peak</sup>, that is, the lag time corresponding to the maximum value of the PSP kernel, and FILT becomes equivalent to INST. As a reference, the value <i>τ</i><sub><i>q</i></sub> = 10 ms was selected for use in our computer simulations, which preliminary runs indicated gave optimal performance.</p>
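For a double-exponential PSP kernel, the peak lag <i>s</i><sup>peak</sup> referred to above has a closed form. A minimal sketch, assuming <i>τ</i><sub><i>m</i></sub> = 10 ms and <i>τ</i><sub><i>s</i></sub> = 5 ms (values not stated in this caption):

```python
import numpy as np

# Assumed PSP time constants (ms); the kernel shape is the standard
# double exponential eps(s) = exp(-s/tau_m) - exp(-s/tau_s) for s >= 0.
tau_m, tau_s = 10.0, 5.0

def psp(s):
    return np.where(s >= 0, np.exp(-s / tau_m) - np.exp(-s / tau_s), 0.0)

# Closed-form lag of the kernel maximum, from d(eps)/ds = 0:
s_peak = tau_m * tau_s / (tau_m - tau_s) * np.log(tau_m / tau_s)

# Numerical cross-check on a fine grid:
s = np.linspace(0.0, 50.0, 500001)
s_peak_num = s[np.argmax(psp(s))]
```

With these constants the peak lag comes out near 6.9 ms, consistent with the minimum learnable firing time the figure reports at <i>τ</i><sub><i>q</i></sub> = 0 ms.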

    Goal-Oriented Learning in Spiking Neural Networks (Human Brain Project -- Task 4.3.5)

<p>Presentation for Task 4.2.3 (SPIKEFRAME) for the HBP Summit, Madrid.</p>

    The memory capacity <i>α</i><sub><i>m</i></sub> of each learning rule as a function of the required output spike timing precision.

<p>The network contained a single postsynaptic neuron, and was trained to classify input patterns into five separate classes within 500 epochs. Memory capacity values were determined based on networks containing <i>n</i><sub><i>i</i></sub> = 200, 400 and 600 presynaptic neurons. Results were averaged over 20 independent runs.</p>

    Spike pattern transformation learning in structured spiking neural networks

<p>Poster presented at the Bernstein Meeting 2015 in Heidelberg, 14-17 September 2015. This is the poster corresponding to abstract doi:10.12751/nncn.bc2015.0022.</p> <p>A preprint of our Neural Computation article (in press as of Sept 2015) can be found at http://arxiv.org/abs/1503.09129</p>

    Averaged synaptic weight values before and after network training, corresponding to the same experiment of Fig 5.

<p>The input synaptic weight values are plotted in chronological order with respect to their associated firing times. (A) The distribution of weights before learning. (B) Post-training under the INST rule. (C) Post-training under the FILT rule. The gold coloured vertical lines indicate the target postsynaptic firing times. Note the different scales of A, B and C. Results were averaged over 40 independent runs. The design of this figure is inspired by [<a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0161335#pone.0161335.ref009" target="_blank">9</a>].</p>

    Dependence of synaptic weight change Δ<i>w</i> on the relative timing difference between a target postsynaptic spike and input presynaptic spike: <i>t</i><sup>ref</sup> and <i>t</i><sup>pre</sup>, respectively.

<p>(A) Learning window of the INST rule. (B) Learning window of the FILT rule. The peak Δ<i>w</i> values for INST and FILT correspond to relative timings of just under 7 and 3 ms, respectively. Both panels show the weight change in the absence of an actual postsynaptic spike.</p>
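If the INST window simply traces the PSP kernel, as its peak of just under 7 ms would suggest for <i>τ</i><sub><i>m</i></sub> = 10 ms and <i>τ</i><sub><i>s</i></sub> = 5 ms, it can be sketched as below. The kernel shape, the time constants, and the learning rate η are all assumptions; the FILT window, which would be a filtered counterpart of this curve, is not shown.

```python
import numpy as np

# Assumed parameters for an INST-like learning window:
tau_m, tau_s, eta = 10.0, 5.0, 1.0  # time constants (ms), learning rate

def delta_w(dt_ref_pre):
    """Weight change as a function of t_ref - t_pre (ms), assuming the
    window follows a double-exponential PSP kernel scaled by eta."""
    s = np.asarray(dt_ref_pre, dtype=float)
    return eta * np.where(s >= 0, np.exp(-s / tau_m) - np.exp(-s / tau_s), 0.0)

s = np.linspace(0.0, 40.0, 40001)
peak_timing = s[np.argmax(delta_w(s))]  # lag of the maximum weight change (ms)
```

Under these assumptions the window peaks just under 7 ms, matching the INST value quoted in the caption.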

    Two postsynaptic neurons trained under the proposed synaptic plasticity rules, that learned to map between a single, fixed input spike pattern and a four-spike target output train.

<p>(A) A spike raster of an arbitrarily generated input pattern, lasting 200 ms, where each dot represents a spike. (B) Actual output spike rasters corresponding to the INST rule (left) and the FILT rule (right) in response to the repeated presentation of the input pattern. Target output spike times are indicated by crosses. (C) The evolution of the vRD for each learning rule, taken as a moving average over 40 independent simulation runs. The shaded regions show the standard deviation.</p>
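Assuming vRD denotes the van Rossum distance, panel (C) measures how far the actual output train is from the target by comparing the two trains after exponential filtering. A minimal sketch, with an assumed filter time constant of 10 ms (the exact value used in the paper is not given in this caption):

```python
import numpy as np

def van_rossum_distance(train_a, train_b, tau_c=10.0, dt=0.01, t_max=200.0):
    """Van Rossum distance between two spike trains (times in ms).

    Each train is convolved with a causal exponential filter of time
    constant tau_c; the distance is the L2 norm of the difference,
    normalized by tau_c. tau_c and the grid are illustrative choices.
    """
    t = np.arange(0.0, t_max, dt)

    def filtered(train):
        f = np.zeros_like(t)
        for spike in train:
            f += np.where(t >= spike, np.exp(-(t - spike) / tau_c), 0.0)
        return f

    diff = filtered(train_a) - filtered(train_b)
    return np.sqrt(np.sum(diff ** 2) * dt / tau_c)
```

Identical trains give a distance of zero, and a single unmatched spike contributes roughly 1/√2 once its filtered trace has fully decayed, which is why the vRD curves in (C) shrink toward zero as each rule pulls the actual spikes onto their targets.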