
    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.

    The role of inhibitory feedback for information processing in thalamocortical circuits

    The information transfer in the thalamus is blocked dynamically during sleep, in conjunction with the occurrence of spindle waves. As the theoretical understanding of the mechanism remains incomplete, we analyze two modeling approaches for a recent experiment by Le Masson et al. on the thalamocortical loop. In a first step, we use a conductance-based neuron model to reproduce the experiment computationally. In a second step, we model the same system using an extended Hindmarsh-Rose model and compare the results with the conductance-based model. In the framework of both models, we investigate the influence of inhibitory feedback on the information transfer in a typical thalamocortical oscillator. We find that our extended Hindmarsh-Rose neuron model, which is computationally less costly and thus suitable for large-scale simulations, reproduces the experiment better than the conductance-based model. Further, in agreement with the experiment of Le Masson et al., inhibitory feedback leads to stable self-sustained oscillations which mask the incoming input and thereby reduce the information transfer significantly. Comment: 16 pages, 15 EPS figures included. To appear in Physical Review
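    For readers unfamiliar with the model class, the sketch below integrates the standard three-variable Hindmarsh-Rose neuron with a forward-Euler scheme; the parameters are the textbook bursting values, not those of the paper's extended model, and the code is only an illustrative sketch.

```python
import numpy as np

# Standard three-variable Hindmarsh-Rose neuron (illustrative parameters for
# the classic bursting regime; this is NOT the paper's extended model).
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_rest, I_ext = 0.006, 4.0, -1.6, 3.0

def hr_step(x, y, z, dt=0.01):
    """One forward-Euler step of the Hindmarsh-Rose equations."""
    dx = y - a * x**3 + b * x**2 - z + I_ext
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = -1.6, 0.0, 0.0
trace = []
for _ in range(200_000):              # 2000 time units at dt = 0.01
    x, y, z = hr_step(x, y, z)
    trace.append(x)
trace = np.array(trace)               # x shows bursts of spikes separated by quiescent phases
```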

    Regular and irregular patterns of self-localized excitation in arrays of coupled phase oscillators

    We study a system of phase oscillators with nonlocal coupling in a ring that supports self-organized patterns of coherence and incoherence, called chimera states. Introducing a global feedback loop connecting the phase lag to the order parameter, we can observe chimera states also in systems with a small number of oscillators. Numerical simulations show a huge variety of regular and irregular patterns composed of localized phase slipping events of single oscillators. Using methods of classical finite-dimensional chaos and bifurcation theory, we can identify the emergence of chaotic chimera states as a result of transitions to chaos via period-doubling cascades, torus breakup, and intermittency. We can explain the observed phenomena by a mechanism of self-modulated excitability in a discrete excitable medium. Comment: postprint, as accepted in Chaos, 10 pages, 7 figures
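    As background for the setup, here is a minimal sketch of a ring of nonlocally coupled phase oscillators of the kind in which chimera states are usually studied; the coupling radius R, phase lag alpha, and system size are illustrative choices, and the paper's global feedback loop on the phase lag is not included.

```python
import numpy as np

# Ring of N identical phase oscillators, each coupled to neighbours within a
# window of half-width R, with fixed phase lag alpha (no global feedback loop).
N, R, alpha = 256, 80, 1.46
dt = 0.05
rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, N)                   # random initial phases

offsets = np.arange(-R, R + 1)
nbrs = (np.arange(N)[:, None] + offsets[None, :]) % N   # nonlocal neighbourhoods

for _ in range(20_000):
    diff = theta[nbrs] - theta[:, None]                 # phase differences to neighbours
    theta += dt * np.mean(np.sin(diff - alpha), axis=1)

# Local order parameter: close to 1 in the coherent region, noticeably lower
# where oscillators drift incoherently (the chimera signature).
local_order = np.abs(np.mean(np.exp(1j * theta[nbrs]), axis=1))
```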

    Intrinsically-generated fluctuating activity in excitatory-inhibitory networks

    Recurrent networks of non-linear units display a variety of dynamical regimes depending on the structure of their synaptic connectivity. A particularly remarkable phenomenon is the appearance of strongly fluctuating, chaotic activity in networks of deterministic, but randomly connected, rate units. How this type of intrinsically generated fluctuation appears in more realistic networks of spiking neurons has been a long-standing question. To ease the comparison between rate and spiking networks, recent works investigated the dynamical regimes of randomly connected rate networks with segregated excitatory and inhibitory populations, and firing rates constrained to be positive. These works derived general dynamical mean field (DMF) equations describing the fluctuating dynamics, but solved these equations only in the case of purely inhibitory networks. Using a simplified excitatory-inhibitory architecture in which the DMF equations are more easily tractable, here we show that the presence of excitation qualitatively modifies the fluctuating activity compared to purely inhibitory networks. In the presence of excitation, intrinsically generated fluctuations induce a strong increase in mean firing rates, a phenomenon that is much weaker in purely inhibitory networks. Excitation moreover induces two different fluctuating regimes: for moderate overall coupling, recurrent inhibition is sufficient to stabilize fluctuations; for strong coupling, firing rates are stabilized solely by the upper bound imposed on activity, even if inhibition is stronger than excitation. These results extend to more general network architectures, and to rate networks receiving noisy inputs mimicking spiking activity. Finally, we show that signatures of the second dynamical regime appear in networks of integrate-and-fire neurons.
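    To make the setting concrete, the following is a minimal sketch of a randomly connected rate network with segregated excitatory and inhibitory populations and a positive, saturating transfer function; the sizes, gains, and external input are illustrative assumptions and do not reproduce the paper's simplified architecture or its DMF solution.

```python
import numpy as np

# Random excitatory-inhibitory rate network; firing rates are constrained to
# [0, phi_max] by a threshold-linear, saturating transfer function.
# All sizes and coupling strengths below are illustrative only.
N_E = N_I = 200
N = N_E + N_I
g_E, g_I, I_ext, phi_max, dt = 2.0, 2.4, 2.0, 20.0, 0.05
rng = np.random.default_rng(1)

J = np.empty((N, N))
J[:, :N_E] =  g_E * np.abs(rng.standard_normal((N, N_E))) / np.sqrt(N)  # excitatory columns >= 0
J[:, N_E:] = -g_I * np.abs(rng.standard_normal((N, N_I))) / np.sqrt(N)  # inhibitory columns <= 0

def phi(x):
    """Positive, bounded transfer function."""
    return np.clip(x, 0.0, phi_max)

x = rng.standard_normal(N)
mean_rate = []
for _ in range(6000):
    x += dt * (-x + J @ phi(x) + I_ext)   # rate dynamics with tau = 1
    mean_rate.append(phi(x).mean())
# For sufficiently strong coupling the activity keeps fluctuating in time
# instead of settling to a fixed point.
```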

    Noise-induced inhibitory suppression of malfunction neural oscillators

    Motivated by the aim of finding new medical strategies to suppress undesirable neural synchronization, we study the control of oscillations in a system of noisy oscillators with inhibitory coupling. Using dynamical properties of inhibition, we find regimes in which the malfunctioning oscillations can be suppressed while an information signal of a certain frequency can still be transmitted through the system. The mechanism of this phenomenon is a resonant interplay of noise and the transmitted signal provided by a certain value of the inhibitory coupling. Analyzing a system of three or four oscillators representing neural clusters, we show that this suppression can be effectively controlled by the coupling and noise amplitudes. Comment: 10 pages, 14 figures
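    The paper's specific oscillator model is not reproduced here; as a rough illustration of the ingredients it combines (inhibitory coupling, noise, and a weak periodic input signal), the sketch below integrates three FitzHugh-Nagumo units with mutually inhibitory synapses using an Euler-Maruyama scheme. All parameter values are illustrative assumptions.

```python
import numpy as np

# Three FitzHugh-Nagumo units with mutual inhibitory synaptic coupling,
# additive noise, and a weak periodic signal injected into unit 0.
# Parameters are illustrative, not those used in the paper.
eps, a, b, I0 = 0.08, 0.7, 0.8, 0.5
g_inh, E_inh = 0.4, -2.0          # inhibitory strength and reversal potential
D, A_sig, f_sig = 0.05, 0.2, 0.1  # noise amplitude, signal amplitude and frequency
dt, n_steps = 0.05, 8000
rng = np.random.default_rng(2)

v = rng.uniform(-1.5, 1.5, 3)
w = np.zeros(3)
v_trace = np.zeros((n_steps, 3))

def gate(v):
    """Sigmoidal activation of the presynaptic units."""
    return 1.0 / (1.0 + np.exp(-5.0 * v))

for k in range(n_steps):
    t = k * dt
    s = gate(v)
    I_syn = -g_inh * (v - E_inh) * (s.sum() - s)        # inhibition from the other two units
    I_sig = np.array([A_sig * np.sin(2 * np.pi * f_sig * t), 0.0, 0.0])
    dv = v - v**3 / 3 - w + I0 + I_syn + I_sig
    dw = eps * (v + a - b * w)
    v += dt * dv + np.sqrt(dt) * D * rng.standard_normal(3)
    w += dt * dw
    v_trace[k] = v                                      # inspect spiking vs. suppression here
```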

    Metabifurcation analysis of a mean field model of the cortex

    Mean field models (MFMs) of cortical tissue incorporate salient features of neural masses to model activity at the population level. One of the common aspects of MFM descriptions is the presence of a high-dimensional parameter space capturing neurobiological attributes relevant to brain dynamics. We study the physiological parameter space of an MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model's responses to changes in inhibition belong to two families. After investigating and characterizing these, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics, reaction to exogenous stimuli, distribution of model parameters, and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between the families, we are able to show how metamorphoses between the families can be brought about by exogenous stimuli. We unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods. They emerge when the parameter space is partitioned according to bifurcation responses. This partitioning cannot be achieved by the investigation of only a small number of parameter sets, but is the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible sets. Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.

    Collective stability of networks of winner-take-all circuits

    The neocortex has a remarkably uniform neuronal organization, suggesting that common principles of processing are employed throughout its extent. In particular, the patterns of connectivity observed in the superficial layers of the visual cortex are consistent with the recurrent excitation and inhibitory feedback required for cooperative-competitive circuits such as the soft winner-take-all (WTA). WTA circuits offer interesting computational properties such as selective amplification, signal restoration, and decision making. But these properties depend on the signal gain derived from positive feedback, and so there is a critical trade-off between providing feedback strong enough to support these sophisticated computations and maintaining overall circuit stability. We consider the question of how to reason about stability in very large distributed networks of such circuits. We approach this problem by approximating the regular cortical architecture as many interconnected cooperative-competitive modules. We demonstrate that by properly understanding the behavior of this small computational module, one can reason about the stability and convergence of very large networks composed of these modules. We obtain parameter ranges in which the WTA circuit operates in a high-gain regime, is stable, and can be aggregated arbitrarily to form large stable networks. We use nonlinear Contraction Theory to establish conditions for stability in the fully nonlinear case, and verify these solutions using numerical simulations. The derived bounds allow modes of operation in which the WTA network is multi-stable and exhibits state-dependent persistent activities. Our approach is sufficiently general to reason systematically about the stability of any network, biological or technological, composed of networks of small modules that express competition through shared inhibition. Comment: 7 figures
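    A single cooperative-competitive module of this kind can be written down compactly. The sketch below is a generic soft WTA with recurrent self-excitation and one shared inhibitory unit; the gains alpha, beta1, beta2 are illustrative choices reflecting the usual soft-WTA intuition (positive feedback strong enough for amplification, shared inhibition closing the loop for stability), not the bounds derived in the paper.

```python
import numpy as np

# Generic soft winner-take-all module: N excitatory units with self-excitation
# alpha, all inhibited by a single shared inhibitory unit with gains beta1, beta2.
# Gains are illustrative; the paper derives the admissible parameter ranges.
N = 5
alpha, beta1, beta2 = 1.5, 1.0, 0.8
tau, dt = 1.0, 0.01

def relu(x):
    return np.maximum(x, 0.0)

def run_wta(inputs, steps=5000):
    x = np.zeros(N)   # excitatory units
    y = 0.0           # shared inhibitory unit
    for _ in range(steps):
        dx = (-x + relu(alpha * x - beta1 * y + inputs)) / tau
        dy = (-y + relu(beta2 * x.sum())) / tau
        x += dt * dx
        y += dt * dy
    return x

# The unit with the largest input is selectively amplified (steady-state gain
# roughly 1 / (1 - alpha + beta1 * beta2)); the others are driven to zero.
print(run_wta(np.array([1.0, 1.2, 0.9, 1.1, 1.0])))
```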

    Mammalian Brain As a Network of Networks

    Acknowledgements: AZ, SG and AL acknowledge support from the Russian Science Foundation (16-12-00077). The authors thank T. Kuznetsova for Fig. 6. Peer reviewed. Publisher PDF.