
    Homeostatic plasticity and external input shape neural network dynamics

    In vitro and in vivo spiking activity clearly differ. Whereas networks in vitro develop strong bursts separated by periods of very little spiking activity, in vivo cortical networks show continuous activity. This is puzzling, considering that both networks presumably share similar single-neuron dynamics and plasticity rules. We propose that the defining difference between in vitro and in vivo dynamics is the strength of external input. In vitro, networks are virtually isolated, whereas in vivo every brain area receives continuous input. We analyze a model of spiking neurons in which the input strength, mediated by spike-rate homeostasis, determines the characteristics of the dynamical state. In more detail, our analytical and numerical results on various network topologies consistently show that under increasing input, homeostatic plasticity generates distinct dynamic states, from bursting, to close-to-critical, reverberating, and irregular states. This implies that the dynamic state of a neural network is not fixed but can readily adapt to the input strength. Indeed, our results match experimental spike recordings in vitro and in vivo: the in vitro bursting behavior is consistent with a state generated by very low network input (< 0.1%), whereas in vivo activity suggests that on the order of 1% of recorded spikes are input-driven, resulting in reverberating dynamics. Importantly, this predicts that one can abolish the ubiquitous bursts of in vitro preparations, and instead impose dynamics comparable to in vivo activity, by exposing the system to weak long-term stimulation, thereby opening new paths to establish an in vivo-like assay in vitro for basic as well as neurological studies.
    Comment: 14 pages, 8 figures, accepted at Phys. Rev.
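    The mechanism the abstract describes can be illustrated with a minimal toy model: activity propagates as a branching process driven by external input h, while a homeostatic rule adjusts the recurrence parameter m to hold the mean rate at a set point. This is only a sketch under assumed dynamics and illustrative parameter values (`target`, `eta`, the clipping range), not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def stationary_m(h, target=1.0, eta=1e-3, steps=100_000):
    """Toy branching process with spike-rate homeostasis.

    Activity evolves as A[t+1] ~ Poisson(m * A[t] + h), where h is the
    external input rate and m the branching (recurrence) parameter.
    A homeostatic rule nudges m so mean activity tracks `target`.
    All parameter values are illustrative, not taken from the paper.
    """
    m, A = 0.5, 1
    history = []
    for _ in range(steps):
        A = rng.poisson(m * A + h)
        m += eta * (target - A)      # homeostatic regulation of recurrence
        m = min(max(m, 0.0), 1.0)    # keep m in a sane range
        history.append(m)
    # average over the second half, after the transient
    return float(np.mean(history[steps // 2:]))

m_weak = stationary_m(h=0.01)   # near-isolated, in vitro-like
m_strong = stationary_m(h=0.5)  # strong external drive, in vivo-like
print(m_weak, m_strong)
```

    With weak input, homeostasis pushes the recurrence close to the critical value m = 1 (bursty, close-to-critical dynamics); with strong input, it settles well below 1 (irregular, input-driven dynamics), mirroring the in vitro versus in vivo distinction drawn above.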

    Gap junctions and emergent rhythms

    Gap junction coupling is ubiquitous in the brain, particularly between the dendritic trees of inhibitory interneurons. Such non-synaptic interaction allows for direct electrical communication between cells. Unlike spike-time driven synaptic neural network models, which are event based, any model with gap junctions must necessarily involve a single-neuron model that can represent the shape of an action potential. Indeed, neurons communicating via gap junctions not only feel supra-threshold spikes but also experience, and respond to, sub-threshold voltage signals. In this chapter we show that the so-called absolute integrate-and-fire model is ideally suited to such studies. At the single-neuron level, voltage traces for the model may be obtained in closed form, and are shown to mimic those of fast-spiking inhibitory neurons. Interestingly, in the presence of a slow spike-adaptation current the model is shown to support periodic bursting oscillations. For both tonic and bursting modes the phase response curve can be calculated in closed form. At the network level we focus on global gap junction coupling and show how to analyze the asynchronous firing state in large networks. Importantly, we are able to determine the emergence of non-trivial network rhythms due to strong coupling instabilities. To illustrate the use of our theoretical techniques (particularly the phase-density formalism used to determine stability), we focus on a spike-adaptation-induced transition from asynchronous tonic activity to synchronous bursting in a gap-junction coupled network.
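    The absolute integrate-and-fire model mentioned above evolves as dv/dt = I + |v| between a hard threshold and reset; the |v| term provides the fast spike upstroke that makes the voltage trace meaningful for gap-junction coupling. A minimal single-neuron simulation, with illustrative parameter values not taken from the chapter:

```python
import numpy as np

def aif_trace(I=0.1, v_th=1.0, v_reset=-1.0, dt=1e-3, T=20.0):
    """Euler integration of the absolute integrate-and-fire model,
    dv/dt = I + |v|, with hard threshold and reset.  Parameter
    values are illustrative placeholders."""
    n = int(T / dt)
    v = v_reset
    vs = np.empty(n)
    spike_times = []
    for i in range(n):
        v += dt * (I + abs(v))       # |v| drives the spike upstroke
        if v >= v_th:                # threshold crossing: emit spike
            spike_times.append(i * dt)
            v = v_reset              # hard reset
        vs[i] = v
    return vs, spike_times

vs, spikes = aif_trace()
print(len(spikes), spikes[1] - spikes[0])  # tonic, near-periodic firing
```

    Because the flow is piecewise linear (dv/dt = I - v below zero, I + v above), each segment integrates in closed form, which is the property the chapter exploits; for these parameters the interspike interval is 2 ln((I + 1)/I) ≈ 4.8.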

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the as yet unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data, and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.

    Experimental analysis and computational modeling of interburst intervals in spontaneous activity of cortical neuronal culture

    Rhythmic bursting is the most striking behavior of cultured cortical networks and may start in the second week after plating. In this study, we focus on the intervals between spontaneously occurring bursts, and compare experimentally recorded values with model simulations. In the models, we use standard neurons and synapses, with physiologically plausible parameters taken from the literature. All networks had a random recurrent architecture with sparsely connected neurons. The number of neurons varied between 500 and 5,000. We find that network models with homogeneous synaptic strengths produce either asynchronous spiking or stable regular bursts. The latter, however, are in a range not seen in recordings. By increasing the synaptic strength in a (randomly chosen) subset of neurons, our simulations show interburst intervals (IBIs) that agree better with in vitro experiments. In this regime, called weakly synchronized, the models produce irregular network bursts, which are initiated by neurons with relatively stronger synapses. In some noise-driven networks, a subthreshold deterministic input is applied to neurons with strong synapses, to mimic pacemaker network drive. We show that models with such “intrinsically active neurons” (pacemaker-driven models) tend to generate IBIs that are determined by the frequency of the fastest pacemaker and do not resemble experimental data. Noise-driven models, in contrast, yield realistic IBIs. In general, we found that large-scale noise-driven neuronal network models required synaptic strengths with a bimodal distribution to reproduce the experimentally observed IBI range. Our results imply that results obtained from small network models cannot simply be extrapolated to models of more realistic size: synaptic strengths in large-scale neuronal network simulations need readjustment to a bimodal distribution, whereas small networks do not require such a change.
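    The bimodal synaptic-strength arrangement the abstract describes can be sketched as a sparse random connectivity matrix in which a randomly chosen subset of neurons gets stronger outgoing synapses. The network size, connection probability, and the two weight values below are illustrative placeholders, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def bimodal_network(n=1000, p=0.02, strong_frac=0.1,
                    w_weak=0.1, w_strong=0.5):
    """Sparse random recurrent connectivity in which a randomly
    chosen subset of neurons has stronger outgoing synapses,
    yielding a bimodal weight distribution.  All numeric values
    are illustrative placeholders."""
    conn = rng.random((n, n)) < p           # row i -> column j adjacency
    np.fill_diagonal(conn, False)           # no self-connections
    strong = rng.random(n) < strong_frac    # putative burst initiators
    weights = np.where(strong[:, None], w_strong, w_weak) * conn
    return weights, strong

w, strong = bimodal_network()
vals = w[w > 0]
print(np.unique(vals))   # two distinct synaptic strengths -> bimodal
```

    In the abstract's picture, the `strong` subset plays the role of the burst-initiating neurons, while the remaining weakly coupled majority keeps network bursts irregular rather than pacemaker-locked.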