
    Noise-induced escape in an excitable system

    We consider the stochastic dynamics of escape in an excitable system, the FitzHugh-Nagumo (FHN) neuronal model, for different classes of excitability. We first discuss the threshold structure of the FHN model as an example of a system without a saddle state. We then develop a nonlinear (nonlocal) stability approach based on the theory of large fluctuations, including a finite-noise correction, to describe noise-induced escape in the excitable regime. We show that the threshold structure is revealed via patterns of most probable (optimal) fluctuational paths. The approach allows us to estimate the escape rate and the exit-location distribution. We compare the responses of a monostable resonator and a monostable integrator to stochastic input signals and to a mixture of periodic and stochastic stimuli. Unlike the commonly used local analysis of the stable state, our nonlocal approach based on optimal paths yields results that are in good agreement with direct numerical simulations of the Langevin equation.
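
The noisy FHN dynamics described above can be reproduced with a straightforward Euler-Maruyama integration of the Langevin equations. The sketch below is a minimal illustration; the parameter values are conventional textbook choices, not those used in the paper.

```python
import numpy as np

def fhn_langevin(T=200.0, dt=0.01, eps=0.08, a=0.7, b=0.8, I=0.5, D=0.01, seed=0):
    """Euler-Maruyama integration of the noisy FitzHugh-Nagumo model:
        dv = (v - v^3/3 - w + I) dt + sqrt(2 D) dW
        dw = eps (v + a - b w) dt
    Parameter values are illustrative, not those of the paper."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v = np.empty(n)
    w = np.empty(n)
    v[0], w[0] = -1.0, -0.5
    for k in range(n - 1):
        noise = np.sqrt(2.0 * D * dt) * rng.standard_normal()
        v[k + 1] = v[k] + dt * (v[k] - v[k] ** 3 / 3.0 - w[k] + I) + noise
        w[k + 1] = w[k] + dt * eps * (v[k] + a - b * w[k])
    return v, w

v, w = fhn_langevin()
```

Noise is injected only into the fast voltage variable here; placing it in the recovery variable, or in both, is an equally common convention.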

    A unified view on weakly correlated recurrent networks

    The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances raises the question of how these models relate to each other. In particular, it is hard to distinguish between generic properties and peculiarities of the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models, including the Ornstein-Uhlenbeck process as a special case. The classes differ in the location of additive noise in the rate dynamics, which is on the output side for spiking models and on the input side for the binary model. Both classes allow closed-form solutions for the covariance. For output noise, the covariance separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the presence of conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory, and we explain the class-dependent differences between covariances in the time and the frequency domain. Finally, we show that the oscillatory instability emerging in networks of integrate-and-fire models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
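
The Ornstein-Uhlenbeck process mentioned above as a special case has a closed-form stationary autocovariance, which a direct simulation can be checked against. The parameters below are arbitrary illustrative values, not those of any model in the paper.

```python
import numpy as np

def ou_autocovariance(tau, theta, sigma):
    """Stationary autocovariance of dX = -theta X dt + sigma dW:
       c(tau) = sigma^2 / (2 theta) * exp(-theta |tau|)."""
    return sigma ** 2 / (2.0 * theta) * np.exp(-theta * np.abs(tau))

def simulate_ou(theta=1.0, sigma=0.5, dt=0.01, n=200_000, seed=1):
    """Euler-Maruyama sample path of the Ornstein-Uhlenbeck process,
    started at zero (the short initial transient is negligible here)."""
    rng = np.random.default_rng(seed)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n - 1)
    x = np.empty(n)
    x[0] = 0.0
    for k in range(n - 1):
        x[k + 1] = x[k] - dt * theta * x[k] + noise[k]
    return x

x = simulate_ou()
# stationary variance sigma^2 / (2 theta) = 0.125 for these parameters
```

Comparing the empirical variance of the path against `ou_autocovariance(0, theta, sigma)` gives a quick sanity check on both the formula and the integrator.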

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.

    Sensitivity analysis of oscillator models in the space of phase-response curves: Oscillators as open systems

    Oscillator models are central to the study of system properties such as entrainment or synchronization. Due to their nonlinear nature, few system-theoretic tools exist to analyze those models. The paper develops a sensitivity analysis for phase-response curves, a fundamental one-dimensional phase reduction of oscillator models. The proposed theoretical and numerical analysis tools are illustrated on several system-theoretic questions and models arising in the biology of cellular rhythms
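
A phase-response curve can be computed by the direct method: apply a small perturbation at a known phase and measure the resulting shift of the next event. The leaky integrate-and-fire oscillator is a convenient toy case because the shift can be evaluated in closed form. This is a generic illustration of the PRC concept, not the sensitivity analysis developed in the paper.

```python
import math

def lif_period(I):
    """Period of a suprathreshold LIF oscillator V' = -V + I (I > 1),
    spiking at V = 1 and resetting to 0: T = ln(I / (I - 1))."""
    return math.log(I / (I - 1.0))

def lif_prc_direct(t_kick, eps=1e-4, I=2.0):
    """Direct-method PRC: advance of the next spike caused by a voltage
    kick of size eps applied at time t_kick, normalized by eps.
    The closed-form infinitesimal PRC is exp(t) / I."""
    # unperturbed trajectory from reset: V(t) = I (1 - exp(-t))
    V = I * (1.0 - math.exp(-t_kick)) + eps
    # time remaining until threshold V = 1 from the kicked state
    t_remaining = math.log((I - V) / (I - 1.0))
    advance = lif_period(I) - (t_kick + t_remaining)
    return advance / eps

prc = lif_prc_direct(0.3)
# compare with the analytic infinitesimal PRC exp(0.3) / 2
```

The same direct recipe applies numerically to any limit-cycle oscillator; the closed form here is just what makes the toy model easy to verify.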

    Rhythms of the nervous system: mathematical themes and variations

    The nervous system displays a variety of rhythms in both waking and sleep. These rhythms have been closely associated with different behavioral and cognitive states, but it is still unknown how the nervous system makes use of these rhythms to perform functionally important tasks. To address those questions, it is first useful to understand in a mechanistic way the origin of the rhythms, their interactions, the signals that create the transitions among rhythms, and the ways in which rhythms filter the signals to a network of neurons. This talk discusses how dynamical systems have been used to investigate the origin, properties, and interactions of rhythms in the nervous system. It focuses on how the underlying physiology of the cells and synapses of the networks shapes the dynamics of the network in different contexts, allowing a variety of dynamical behaviors to be displayed by the same network. The work is presented using a series of related case studies on different rhythms. These case studies are chosen to highlight mathematical issues and to suggest further mathematical work to be done. The topics include: different roles of excitation and inhibition in creating synchronous assemblies of cells, different kinds of building blocks for neural oscillations, and transitions among rhythms. The mathematical issues include the reduction of large networks to low-dimensional maps, the role of noise, global bifurcations, and the use of probabilistic formulations. Published version.

    Colored Motifs Reveal Computational Building Blocks in the C. elegans Brain

    Background: Complex networks can often be decomposed into less complex sub-networks whose structures can give hints about the functional organization of the network as a whole. However, these structural motifs can only tell one part of the functional story because this analysis treats each node and edge on an equal footing. In real networks, two motifs that are topologically identical but whose nodes perform very different functions will play very different roles in the network. Methodology/Principal Findings: Here, we combine structural information derived from the topology of the neuronal network of the nematode C. elegans with information about the biological function of these nodes, thus coloring nodes by function. We discover that particular colorations of motifs are significantly more abundant in the worm brain than expected by chance, and have particular computational functions that emphasize the feed-forward structure of information processing in the network while avoiding feedback loops. Interneurons are strongly over-represented among the common motifs, supporting the notion that these motifs process and transduce the information from the sensory neurons towards the muscles. Some of the most common motifs identified in the search for significant colored motifs play a crucial role in the system of neurons controlling the worm's locomotion. Conclusions/Significance: The analysis of complex networks in terms of colored motifs combines two independent data sets to generate insight about these networks that cannot be obtained with either data set alone. The method is general and should allow a decomposition of any complex network into its functional (rather than topological) motifs as long as both wiring and functional information are available.
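
The colored-motif idea can be illustrated by counting feed-forward triads keyed by the functional class (color) of their nodes. The graph and color labels below are invented for illustration; the paper's actual pipeline additionally tests such counts against randomized null models to assess significance.

```python
from collections import Counter

def colored_feedforward_motifs(edges, color):
    """Count feed-forward triads (a->b, a->c, b->c) in a directed graph,
    keyed by the color triple of their nodes. Toy illustration of
    colored-motif counting, not the paper's statistical pipeline."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    counts = Counter()
    for a in adj:
        for b in adj[a]:
            for c in adj[a]:
                if c != b and c in adj.get(b, set()):
                    counts[(color[a], color[b], color[c])] += 1
    return counts

# hypothetical wiring: two sensory cells, one interneuron, one motor neuron
edges = [("S1", "I1"), ("S1", "M1"), ("I1", "M1"), ("S2", "I1")]
color = {"S1": "sensory", "S2": "sensory", "I1": "inter", "M1": "motor"}
counts = colored_feedforward_motifs(edges, color)
# the single feed-forward triad S1 -> I1 -> M1 (with shortcut S1 -> M1)
# is counted under the color triple ("sensory", "inter", "motor")
```

Two triads with identical topology but different color triples land in different bins, which is exactly the distinction that purely structural motif counting misses.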

    Limits and dynamics of randomly connected neuronal networks

    Networks of the brain are composed of a very large number of neurons connected through a random graph and interacting after random delays, both of which depend on the anatomical distance between cells. In order to comprehend the role of these random architectures in the dynamics of such networks, we analyze the mesoscopic and macroscopic limits of networks with random correlated connectivity weights and delays. We address both averaged and quenched limits, and show propagation of chaos and convergence to complex McKean-Vlasov integral equations with distributed delays. We then instantiate a completely solvable model illustrating the role of such random architectures in the emerging macroscopic activity. We particularly focus on the role of connectivity levels in the emergence of periodic solutions.

    The magnetic reversal in dot arrays recognized by the self-organized adaptive neural network

    The remagnetization dynamics of a monolayer dot-array superlattice, modeled as a 2-D XY spin system with dipole-dipole interactions, is simulated. Within the proposed model of the array, the square dots are described by spatially modulated exchange couplings. The dipole-dipole interactions are approximated by hierarchical sums, and the spin dynamics is considered in the regime of the Landau-Lifshitz equation. The simulation of reversal for 40 000 spins exhibits the formation of nonuniform intra-dot configurations, with nonlinear wave/anti-wave pairs developed at intra-dot and inter-dot scales. Several geometric and parametric dependences are calculated and compared with an oversimplified four-spin model of reversal. The role of initial conditions and the occurrence of a coherent rotation mode are also investigated. The emphasis is on the classification of intra-dot and inter-dot (interfacial) magnetic configurations by an adaptive neural network with a varying number of neurons. Comment: 16 figures
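
The Landau-Lifshitz dynamics underlying the simulation can be sketched for a single classical spin: the precession term rotates the spin about the field, and the damping term relaxes it toward the field direction. This single-spin sketch with illustrative parameters omits the dipolar couplings and dot structure of the paper.

```python
import numpy as np

def landau_lifshitz(S0, H, gamma=1.0, lam=0.1, dt=1e-3, steps=50_000):
    """Integrate the Landau-Lifshitz equation for one classical spin,
        dS/dt = -gamma S x H - lam S x (S x H),
    with naive Euler steps plus renormalization to preserve |S| = 1.
    Single-spin sketch, not the paper's dipolar-coupled dot array."""
    S = np.asarray(S0, dtype=float)
    H = np.asarray(H, dtype=float)
    for _ in range(steps):
        SxH = np.cross(S, H)                 # precession torque
        dS = -gamma * SxH - lam * np.cross(S, SxH)
        S = S + dt * dS
        S /= np.linalg.norm(S)               # project back onto the unit sphere
    return S

S_final = landau_lifshitz([1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
# damping relaxes the spin toward the field direction (0, 0, 1)
```

With the damping term dropped (`lam=0`) the same integrator shows pure precession, which is a useful check that the two torque terms are coded correctly.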