
    Growing Critical: Self-Organized Criticality in a Developing Neural System

    Experiments in various neural systems found avalanches: bursts of activity with characteristics typical for critical dynamics. A possible explanation for their occurrence is an underlying network that self-organizes into a critical state. We propose a simple spiking model for developing neural networks, showing how these may "grow into" criticality. Avalanches generated by our model correspond to clusters of the widely applied Hawkes processes. We analytically derive the cluster size and duration distributions and find that they agree with those of experimentally observed neuronal avalanches. Comment: 6 pages, 4 figures; supplemental material: 10 pages, 7 figures.
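    A rough numerical illustration of the avalanche statistics mentioned above (not the paper's spiking model; the branching-process approximation and all parameters are assumptions made here): clusters of a Hawkes process can be generated as a Galton-Watson branching process with Poisson offspring, and at the critical branching ratio of one the cluster sizes follow the classic P(S) ∝ S^(-3/2) power law reported for neuronal avalanches.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(branching_ratio, max_size=10**5):
    """Total number of events in one cluster started by a single ancestor."""
    active, total = 1, 1
    while active > 0 and total < max_size:
        offspring = int(rng.poisson(branching_ratio, size=active).sum())
        total += offspring
        active = offspring
    return total

# Draw many clusters at the critical branching ratio of 1.
sizes = np.array([avalanche_size(1.0) for _ in range(10_000)])

# Log-binned size distribution; the slope should be close to -3/2.
bins = np.unique(np.logspace(0, 5, 30).astype(int))
prob, edges = np.histogram(sizes, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
for s, p in zip(centers, prob):
    if p > 0:
        print(f"S ~ {s:9.1f}    P(S) ~ {p:.3e}")
```

    Sweeping the branching ratio below or above one in this toy sketch shows the power law giving way to exponentially cut-off (subcritical) or runaway (supercritical) clusters.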

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 1


    Modelling human choices: MADeM and decision‑making

    Research supported by FAPESP 2015/50122-0 and DFG-GRTK 1740/2. RP and AR are also part of the Research, Innovation and Dissemination Center for Neuromathematics FAPESP grant (2013/07699-0). RP is supported by a FAPESP scholarship (2013/25667-8). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).

    The effect of noise on the transition to chaos in random neural networks

    Networks of randomly coupled rate neurons display a transition to chaos at a critical coupling strength. In the absence of noise or time-dependent input, the transition is well understood by a dynamical mean-field theory describing the fluctuations of a single unit [H. Sompolinsky, A. Crisanti, and H. J. Sommers, Phys. Rev. Lett. 61, 259 (1988)]. However, in nature and technology, these networks often operate in the presence of noise or time-dependent input. Here, we investigate the effect of additive white noise, rendering the dynamics stochastic and nonautonomous, on the onset of chaos in the original continuous-time random neural network model. First, we develop the corresponding dynamical mean-field theory, yielding the self-consistent single-unit autocorrelation function. Then, to find the transition, we consider the maximum Lyapunov exponent, which describes the asymptotic growth of infinitesimal perturbations also for stochastic dynamics. Using the dynamical mean-field theory again, we derive an exact condition determining the transition from stable to chaotic dynamics. The transition is shifted to significantly larger coupling strengths than predicted by linear stability analysis of the local Jacobian matrix. We identify two different mechanisms, a static and a dynamic one, by which noise (or fluctuating input) suppresses chaos in these networks.
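    As a minimal numerical companion to the abstract above (a sketch only, not the dynamical mean-field calculation; network size, gain g, noise intensity D, and integration parameters are arbitrary choices made here), one can integrate the classic random rate network dx_i/dt = -x_i + g Σ_j J_ij tanh(x_j) + √(2D) ξ_i(t) with an Euler-Maruyama scheme and estimate the maximal Lyapunov exponent from two replicas driven by the same noise realization, Benettin-style:

```python
import numpy as np

rng = np.random.default_rng(1)
N, g, D = 200, 2.0, 0.1                 # network size, coupling gain, noise intensity
dt, steps, renorm_every = 0.02, 50_000, 10
eps = 1e-8                              # separation between the two replicas

J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))   # random coupling matrix
x = rng.normal(size=N)
v = rng.normal(size=N)
y = x + eps * v / np.linalg.norm(v)

log_growth = 0.0
for t in range(steps):
    xi = np.sqrt(2.0 * D * dt) * rng.normal(size=N)   # identical noise for both replicas
    x = x + dt * (-x + g * (J @ np.tanh(x))) + xi
    y = y + dt * (-y + g * (J @ np.tanh(y))) + xi
    if (t + 1) % renorm_every == 0:
        d = np.linalg.norm(y - x)
        log_growth += np.log(d / eps)
        y = x + (eps / d) * (y - x)                   # rescale the separation

lyap = log_growth / (steps * dt)
print(f"estimated maximal Lyapunov exponent: {lyap:.3f}")
```

    Repeating the estimate over a grid of g and D values traces out, roughly, how the chaotic boundary shifts with noise intensity.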

    Transition to chaos and signal response in driven random neural networks

    Recurrent networks of randomly coupled rate neurons display a transition to chaos at a critical coupling strength [1]. Their rich internal dynamics emerging near the transition has been associated with optimal information processing capabilities [2]. In particular, the dynamics becomes arbitrarily slow at the onset of chaos, similar to 'critical slowing down'. However, the interplay between a time-dependent signal, the dynamics of the network, and the resulting consequences for the information processing capabilities are poorly understood.
    We here investigate the effect of time-varying inputs on the phase diagram of the network. In particular, using dynamic mean-field theory we study the largest Lyapunov exponent, which quantifies the rate of exponential divergence or convergence of nearby trajectories. We analytically determine the transition to chaos as a function of coupling strength and input amplitude. The transition is shifted to significantly larger coupling strengths than predicted by linear stability analysis of the local Jacobian matrix. This displacement leads to the emergence of a novel dynamical regime, which combines locally expansive dynamics with asymptotic stability. Moreover, we show that the slow internal dynamics are strongly suppressed by the external time-varying drive.
    To study signal processing capabilities we evaluate the capacity to reconstruct a past input from a linear readout applied to the present state, the so-called memory curve [3]. We find that for a given signal amplitude the memory capacity peaks within the novel dynamical regime. This result indicates that locally expanding yet asymptotically stable dynamics is beneficial for storing information about the input in the dynamics of the neural network.
    [1] H. Sompolinsky, A. Crisanti, and H. J. Sommers, Phys. Rev. Lett. 61, 259 (1988).
    [2] N. Bertschinger and T. Natschläger, Neural Comput. 16, 1413 (2004).
    [3] H. Jaeger, Short term memory in echo state networks, vol. 5 (GMD-Forschungszentrum Informationstechnik, 2001).
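    The memory curve of [3] can be probed with a standard reservoir-computing experiment (a generic sketch with parameters chosen here for illustration, not the authors' driven-network setup): drive a random recurrent network with an i.i.d. scalar input, train a linear ridge readout to reconstruct the input k steps in the past, and sum the squared correlation coefficients over delays.

```python
import numpy as np

rng = np.random.default_rng(2)
N, g = 200, 0.9                     # network size, scale of the recurrent weights
T, washout, max_delay, ridge = 5_000, 200, 40, 1e-6

W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))   # recurrent weights
w_in = rng.normal(0.0, 1.0, size=N)                # input weights
u = rng.uniform(-1.0, 1.0, size=T)                 # i.i.d. scalar input signal

# Collect network states driven by the input.
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Memory curve: squared correlation between a ridge readout and the input k steps back.
states = X[washout:]
A = states.T @ states + ridge * np.eye(N)
capacity = 0.0
for k in range(1, max_delay + 1):
    target = u[washout - k:T - k]
    w_out = np.linalg.solve(A, states.T @ target)
    r = np.corrcoef(states @ w_out, target)[0, 1]
    capacity += r ** 2
    if k == 1 or k % 10 == 0:
        print(f"delay {k:2d}: r^2 = {r**2:.3f}")
print(f"total memory capacity ~ {capacity:.2f}")
```

    Running such a sweep for different input amplitudes and coupling strengths would give a rough, empirical counterpart to the memory-capacity peak described in the abstract.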