
    Considerations for the mean-field approach.

    (A) Potential function as a function of the mean firing rate, shown for several parameter values; the different regimes explained in the main text can be appreciated. (B) An Ornstein-Uhlenbeck (OU) process (see equation 10). A typical return event (with its return time) and a first passage event (with its first passage time) are indicated for illustrative purposes. For the first passage time, the threshold (depicted as a blue dashed line) was fixed at 0.15.
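
    The return-time versus first-passage-time picture in (B) is straightforward to reproduce numerically. Below is a minimal sketch, not code from the paper: it integrates an OU process with the Euler-Maruyama method and reads off the first passage above the 0.15 threshold mentioned in the caption; the values of theta, mu and sigma are illustrative placeholders.

```python
import numpy as np

def simulate_ou(theta=1.0, mu=0.0, sigma=0.1, dt=1e-3, n_steps=100_000, x0=0.0, seed=0):
    """Euler-Maruyama integration of dx = -theta*(x - mu)*dt + sigma*dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    noise = rng.normal(0.0, np.sqrt(dt), n_steps - 1)
    for i in range(n_steps - 1):
        x[i + 1] = x[i] - theta * (x[i] - mu) * dt + sigma * noise[i]
    return x

def first_passage_time(x, threshold, dt):
    """Time of the first crossing of `threshold`, or inf if never crossed."""
    hits = np.nonzero(x >= threshold)[0]
    return hits[0] * dt if hits.size else np.inf

x = simulate_ou()
print("first passage time to 0.15:", first_passage_time(x, 0.15, dt=1e-3))
```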

    Time series showing the dynamics of our system.

    (A) Time series of the mean firing rate of the neural population for deterministic depressing synapses. The temporal evolution of the synaptic variable is also plotted for illustration purposes. (B) Histogram of the mean firing rate, which shows the existence of two well-defined states of activity corresponding to the down and up states, respectively. (C) Same as (A), but with a certain level of intrinsic stochasticity in the dynamics of the synapses. The two-headed arrow marks a typical interval of permanence in the up state. (D) Same as (B), but for the stochastic case. The other parameters take the same values as in (A) and (B).
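
    Permanence times such as the interval marked in (C) are typically extracted by thresholding the rate trace. The following is a minimal sketch under assumed conventions; the 0.5 threshold and the toy bimodal trace are placeholders, not values from the paper.

```python
import numpy as np

def up_state_durations(rate, threshold, dt):
    """Durations of contiguous intervals during which `rate` exceeds `threshold`."""
    up = rate > threshold
    edges = np.diff(up.astype(int))           # +1 marks an up-state onset, -1 an offset
    onsets = np.nonzero(edges == 1)[0] + 1
    offsets = np.nonzero(edges == -1)[0] + 1
    if up[0]:                                 # trace starts inside an up state
        onsets = np.concatenate(([0], onsets))
    if up[-1]:                                # trace ends inside an up state
        offsets = np.concatenate((offsets, [up.size]))
    return (offsets - onsets) * dt

# toy bimodal trace standing in for the mean firing rate
rng = np.random.default_rng(1)
state = (np.cumsum(rng.normal(size=10_000)) % 40 > 20).astype(float)
rate = state + 0.1 * rng.normal(size=state.size)
print(up_state_durations(rate, threshold=0.5, dt=1e-3)[:5])
```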

    Statistics of permanence times in the up state.

    (A) Probability distribution of permanence times in the up state for different noise levels; one can see that power-law relations appear. (B) Dependence of the fitted power-law exponent on the noise level for the conditions presented in (A); the inset shows the analogous dependence on a second model parameter. We have averaged over many long time series.
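
    A standard way to quantify the power laws in (A) is a continuous maximum-likelihood fit of the exponent (the Clauset-Shalizi-Newman estimator); whether this particular estimator was used here is an assumption. A self-contained sketch, validated on synthetic samples with a known exponent:

```python
import numpy as np

def powerlaw_mle(times, t_min):
    """Continuous MLE of alpha for p(t) ~ t**(-alpha), restricted to t >= t_min."""
    t = np.asarray(times, dtype=float)
    t = t[t >= t_min]
    return 1.0 + t.size / np.sum(np.log(t / t_min))

# sanity check on synthetic power-law samples with alpha = 2.5, t_min = 1
rng = np.random.default_rng(2)
u = rng.uniform(size=50_000)
samples = (1.0 - u) ** (-1.0 / (2.5 - 1.0))   # inverse-CDF sampling
print(powerlaw_mle(samples, t_min=1.0))       # ~2.5
```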

    Behavior of the system when the slaving condition holds.

    (A) Time series of the two dynamical variables. (B) The same time series represented in the phase plane, illustrating that one variable is slaved to the other (although some level of intrinsic stochasticity is still present). The green line corresponds to the approximate expression (21), while the blue line is the numerical evaluation of the fixed-point solutions (see equation (4)). The inset shows the situation in which the system exhibits bistable dynamics, analyzed in the previous section. (C) The potential function as a function of the mean firing rate for several parameter values; only one minimum exists, and its location is controlled by the varied parameter. (D) Histograms of the mean firing rate for several parameter values; among the cases shown in this panel, the slaving condition is satisfied in only one.

    The different dynamical regimes of the model.

    (A) Phase plot showing the different behaviors found in our system. These behaviors correspond to time series for which permanence times in the up state follow an exponential distribution (E) or a power-law distribution with one of two characteristic exponent ranges (C and S). In addition, a phase with a well-defined duration of the up state is found (P). (B) Some of these behaviors are depicted; from top to bottom, situations P, E, and C.

    Autocorrelation and power spectra.

    (A) Autocorrelation function of the mean firing rate for deterministic and stochastic synapses, in the presence of short-term depression (STD). (B) Power spectra of the mean firing rate for the two cases illustrated in (A). For both panels, we have averaged over many long time series.
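
    Both quantities follow directly from the rate time series. Below is a generic sketch, not the authors' analysis pipeline: a normalized autocorrelation computed via the Wiener-Khinchin theorem and a one-sided periodogram; the smoothed-noise input is merely a stand-in for the mean firing rate.

```python
import numpy as np

def autocorr(x):
    """Normalized autocorrelation via the Wiener-Khinchin theorem (zero-padded FFT)."""
    x = x - x.mean()
    n = x.size
    spec = np.abs(np.fft.rfft(x, n=2 * n)) ** 2   # power spectrum of the padded signal
    ac = np.fft.irfft(spec)[:n]
    return ac / ac[0]

def power_spectrum(x, dt):
    """One-sided periodogram and its frequency axis."""
    x = x - x.mean()
    freqs = np.fft.rfftfreq(x.size, d=dt)
    psd = np.abs(np.fft.rfft(x)) ** 2 * dt / x.size
    return freqs, psd

# smoothed white noise as a stand-in for the rate trace
rng = np.random.default_rng(3)
x = np.convolve(rng.normal(size=20_000), np.ones(50) / 50, mode="same")
print(autocorr(x)[:3])
freqs, psd = power_spectrum(x, dt=1e-3)
print("dominant frequency:", freqs[np.argmax(psd)])
```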

    Parameters used in the different figures for simulations of networks with nonlinear dendrites.

    The parameter a = λ_s − λ_x is given in terms of λ_s and λ_x.

    Learning dynamics with spiking neural networks.

    (a): Schematic hunting scene, illustrating the need for complicated dynamical systems learning and control. The hominid has to predict the motion of its prey, and to predict and control the movements of its body and the projectile. (b-h): Learning of self-sustained dynamical patterns by spiking neural networks. (b): A sine wave generated by the summed, synaptically and dendritically filtered output spike trains of a PCSN with nonlinear dendrites. (c): A sample of the network's spike trains generating the sine in (b). (d): A sawtooth pattern generated by a PCSN with saturating synapses. (e): A more complicated smooth pattern generated by both architectures (blue: nonlinear dendrites; red: saturating synapses). (f-h): Learning of chaotic dynamics (Lorenz system), with a PCSN with nonlinear dendrites. (f): The spiking network imitates an example trajectory of the Lorenz system during training (blue); it continues generating the dynamics during testing (red). (g): Detailed view of (f), highlighting how the example trajectory (yellow) is imitated during training and continued during testing. (h): The spiking network approximates quantitative dynamical features that were not explicitly trained, such as the tent map between subsequent maxima of the z-coordinate. The ideal tent map (yellow) is closely approximated by the tent map generated by the PCSN (red). The spiking network sporadically generates errors, cf. the larger loop in (f) and the outlier points in (h). Panel (h) shows a ten times longer time series than (f), with three errors.
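
    The tent map in (h) can be reproduced for the ideal Lorenz system in a few lines: integrate the equations, detect successive local maxima of the z-coordinate, and pair each maximum with the next. A minimal sketch with the standard parameters sigma = 10, rho = 28, beta = 8/3 (step size and initial condition are arbitrary choices, not taken from the paper):

```python
import numpy as np

def lorenz_rk4(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Fixed-step RK4 integration of the Lorenz system."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    traj = np.empty((n_steps, 3))
    s = np.array([1.0, 1.0, 1.0])
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

z = lorenz_rk4(200_000)[:, 2]
# local maxima: samples larger than both neighbors
peaks = np.nonzero((z[1:-1] > z[:-2]) & (z[1:-1] > z[2:]))[0] + 1
m = z[peaks]
tent_pairs = np.column_stack((m[:-1], m[1:]))  # (z_max[n], z_max[n+1])
print(tent_pairs[:5])
```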

    Parameters used in the different figures for simulations of networks with saturating synapses.


    Setups used to learn versatile nonlinear computations with spiking neural networks.

    (a) A static continuous signal coding spiking neural network (CSN, gray shaded) serves as a spiking computational reservoir with a high signal-to-noise ratio. The results of computations on current and past external inputs I_e can be extracted by simple neuron-like readouts. These linearly combine somatic inputs generated by saturating synapses or nonlinear dendrites (red) into output signals z (Eqs (13), (14)). The output weights w^o are learned such that z approximates the desired continuous target signals. (b) Plastic continuous signal coding spiking neural networks (PCSNs) possess a loop that feeds the outputs z back via static connections as an additional input (blue, Eq (15)). Such networks have increased computational capabilities, allowing them to, e.g., generate desired self-sustained activity. (c) The feedback loop can be incorporated into the recurrent network via plastic recurrent connections (red in gray shaded area).
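
    The output weights w^o in (a) can in principle be fit by linear regression of the target signal onto the filtered network activity. The sketch below uses offline ridge regression on toy surrogate features; the paper's actual learning rule may well differ (e.g., it could be an online scheme), so this is a generic illustration only.

```python
import numpy as np

def train_readout(states, targets, reg=1e-4):
    """Ridge-regression readout: minimizes ||states @ w - targets||^2 + reg*||w||^2."""
    n_features = states.shape[1]
    gram = states.T @ states + reg * np.eye(n_features)
    return np.linalg.solve(gram, states.T @ targets)

# toy surrogate for filtered spike trains: lagged copies of smoothed noise
rng = np.random.default_rng(4)
base = np.convolve(rng.normal(size=5_000), np.ones(20) / 20, mode="same")
states = np.column_stack([np.roll(base, k) for k in range(10)])   # 10 lagged features
target = np.sin(np.linspace(0, 20 * np.pi, 5_000))                # desired output signal
w = train_readout(states, target)
print("training MSE:", np.mean((states @ w - target) ** 2))
```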