
    Many Attractors, Long Chaotic Transients, and Failure in Small-World Networks of Excitable Neurons

    We study the dynamical states that emerge in a small-world network of recurrently coupled excitable neurons through both numerical and analytical methods. These dynamics depend in large part on the fraction of long-range connections or 'short-cuts' and on the delay in the neuronal interactions. Persistent activity arises for a small fraction of short-cuts, while a transition to failure occurs at a critical value of the short-cut density. The persistent activity consists of multi-stable periodic attractors, whose number is at least of the order of the number of neurons in the network. For long enough delays, network activity at high short-cut densities is shown to exhibit exceedingly long chaotic transients whose failure times, averaged over many network configurations, follow a stretched exponential. We show how this functional form arises in the ensemble-averaged activity if each network realization has a characteristic failure time that is exponentially distributed. Comment: 14 pages, 23 figures.
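
    As a worked illustration of the closing claim (the weight function rho below is an assumption for illustration; the abstract does not specify it), a stretched exponential can arise when exponential survival curves with realization-specific characteristic times tau are averaged over network configurations:

        % Ensemble-averaged survival probability as a mixture of exponentials
        \[
          \bar S(t) \;=\; \int_0^{\infty} \rho(\tau)\, e^{-t/\tau}\, \mathrm{d}\tau
        \]
        % For a sufficiently broad weight \rho(\tau), this mixture is well
        % approximated over several decades by a stretched exponential:
        \[
          \bar S(t) \;\approx\; \exp\!\left[-\left(t/\tau_{*}\right)^{\beta}\right],
          \qquad 0 < \beta < 1 .
        \]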

    Limits and dynamics of stochastic neuronal networks with random heterogeneous delays

    Realistic networks display heterogeneous transmission delays. We analyze here the limits of large stochastic multi-population networks with stochastic coupling and random interconnection delays. We show that, depending on the nature of the delay distributions, a quenched or averaged propagation of chaos takes place in these networks, and that the network equations converge towards a delayed McKean-Vlasov equation with distributed delays. Our approach is chiefly fitted to neuroscience applications. In particular, we instantiate a classical neuronal model, the Wilson-Cowan system, and show that the resulting limit equations have Gaussian solutions whose mean and standard deviation satisfy a closed set of coupled delay differential equations in which the distribution of delays and the noise levels appear as parameters. This allows us to uncover precisely the effects of noise, delays and coupling on the dynamics of such heterogeneous networks, in particular their role in the emergence of synchronized oscillations. We show in several examples that not only the average delay but also its dispersion governs the dynamics of such networks. Comment: Corrected a misprint (unnecessary stopping time) in the proof of Lemma 1 and clarified a regularity hypothesis (Remark 1).
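
    As a minimal numerical sketch of how a delay distribution enters such mean-field dynamics (the scalar rate equation, sigmoid gain and gamma-shaped delay kernel below are illustrative assumptions, not the paper's limit equations), a distributed delay can be handled with a history buffer:

        import numpy as np

        # Illustrative scalar rate equation with a distributed delay:
        #   dx/dt = -x(t) + S( J * sum_k w_k * x(t - s_k) )
        # where the weights w_k discretize an assumed gamma-shaped delay kernel.
        def simulate(J=2.0, dt=0.01, T=200.0, mean_delay=5.0, cv=0.5):
            shape = 1.0 / cv**2                    # gamma shape from coefficient of variation
            scale = mean_delay / shape
            lags = np.arange(dt, 6.0 * mean_delay, dt)
            w = lags**(shape - 1.0) * np.exp(-lags / scale)
            w /= w.sum()                           # normalized, discretized delay kernel
            n = len(lags)
            x = np.empty(n + int(T / dt))
            x[:n] = 0.1                            # constant initial history
            S = lambda u: 1.0 / (1.0 + np.exp(-(u - 1.0)))   # sigmoid gain (illustrative)
            for t in range(n, len(x)):
                delayed = np.dot(w, x[t - n:t][::-1])        # kernel-weighted past activity
                x[t] = x[t - 1] + dt * (-x[t - 1] + S(J * delayed))
            return np.arange(len(x) - n) * dt, x[n:]

        # Same mean delay, different dispersion: sweeping cv (and J) probes how the
        # spread of the delays, not only their mean, affects the onset of oscillations.
        t1, x_narrow = simulate(cv=0.2)
        t2, x_broad = simulate(cv=1.0)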

    A Markovian event-based framework for stochastic spiking neural networks

    In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce the next spike time from a spike train, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane-potential process. To study this question rigorously, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e. one based on the computation of the spike times. We show that the firing times of the neurons in the network constitute a Markov chain whose transition probability is related to the probability distribution of the interspike intervals of the neurons in the network. Where the Markovian model can be developed, the transition probability is derived explicitly for classical settings such as linear integrate-and-fire neuron models with excitatory and inhibitory interactions and different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of deterministic spiking neural networks.
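
    A minimal simulation sketch (the leaky integrate-and-fire parameters and noise level are illustrative, not taken from the article) makes the event-based picture concrete: after each reset, the next spike time of a noisy integrate-and-fire neuron depends on the past only through the last spike time, so the sequence of spike times forms a Markov chain whose increments follow the interspike-interval distribution.

        import numpy as np

        # Euler-Maruyama simulation of a noisy leaky integrate-and-fire neuron.
        def spike_times(mu=1.2, sigma=0.3, tau=10.0, v_th=1.0, v_reset=0.0,
                        dt=0.01, T=5000.0, seed=0):
            rng = np.random.default_rng(seed)
            v, t, spikes = v_reset, 0.0, []
            for _ in range(int(T / dt)):
                v += (-(v - mu) / tau) * dt + sigma * np.sqrt(dt / tau) * rng.standard_normal()
                t += dt
                if v >= v_th:
                    spikes.append(t)
                    v = v_reset        # reset erases the pre-spike history
            return np.array(spikes)

        s = spike_times()
        isi = np.diff(s)               # interspike intervals = increments of the chain
        print(f"{len(s)} spikes, mean ISI = {isi.mean():.2f}, CV = {isi.std() / isi.mean():.2f}")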

    Multiscale Computations on Neural Networks: From the Individual Neuron Interactions to the Macroscopic-Level Analysis

    We show how the Equation-Free approach to multi-scale computations can be exploited to systematically study the dynamics of neural interactions on a connected random regular graph from a pairwise-representation perspective. Using an individual-based microscopic simulator as a black-box coarse-grained timestepper, and with the aid of simulated annealing, we compute the coarse-grained equilibrium bifurcation diagram and analyze the stability of the stationary states, sidestepping the need to obtain explicit closures at the macroscopic level. We also exploit the scheme to perform a rare-events analysis by estimating an effective Fokker-Planck equation describing the evolving probability density function of the corresponding coarse-grained observables.
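
    The lift-run-restrict structure of the Equation-Free approach can be sketched on a toy problem (the microscopic rule, a contact-process-like activity model on a random regular graph, and all parameters are illustrative assumptions, not the simulator used in the paper):

        import numpy as np
        import networkx as nx

        def lift(rho, n, rng):
            """Build a microscopic state (active/quiescent nodes) with coarse density rho."""
            return (rng.random(n) < rho).astype(int)

        def micro_step(state, G, p_act, p_off, rng):
            """One synchronous microscopic update: neighbour activation plus spontaneous decay."""
            new = state.copy()
            for i in G.nodes:
                k_act = sum(state[j] for j in G.neighbors(i))
                if state[i] == 0 and rng.random() < 1 - (1 - p_act) ** k_act:
                    new[i] = 1
                elif state[i] == 1 and rng.random() < p_off:
                    new[i] = 0
            return new

        def coarse_timestepper(rho, G, n_micro, n_copies, rng, p_act=0.2, p_off=0.2):
            """Lift, run a short burst of microscopic steps, restrict (mean density)."""
            out = []
            for _ in range(n_copies):
                s = lift(rho, G.number_of_nodes(), rng)
                for _ in range(n_micro):
                    s = micro_step(s, G, p_act, p_off, rng)
                out.append(s.mean())                 # restriction: fraction of active nodes
            return float(np.mean(out))

        rng = np.random.default_rng(1)
        G = nx.random_regular_graph(4, 400, seed=1)
        rho, n_micro, n_proj = 0.2, 5, 15            # projective jump of n_proj micro-steps
        for _ in range(10):                          # coarse projective (forward Euler) integration
            rho_short = coarse_timestepper(rho, G, n_micro, n_copies=10, rng=rng)
            slope = (rho_short - rho) / n_micro      # estimated coarse time derivative
            rho = float(np.clip(rho_short + slope * n_proj, 0.0, 1.0))
            print(f"coarse density ~ {rho:.3f}")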

    Hexagonal patterns in a model for rotating convection

    We study a model equation that mimics convection under rotation in a fluid with temperature-dependent properties (non-Boussinesq (NB)), high Prandtl number and idealized boundary conditions. It extends a model equation proposed by Segel [1965] by adding rotation terms that lead to a Küppers-Lortz instability [Küppers & Lortz, 1969] and can develop into oscillating hexagons. We perform a weakly nonlinear analysis to determine explicitly the coefficients in the amplitude equations as functions of the rotation rate. These equations describe hexagons and oscillating hexagons quite well, and include the Busse-Heikes (BH) model [Busse & Heikes, 1980] as a particular case. The sideband instabilities as well as short-wavelength instabilities of such hexagonal patterns are discussed, and the threshold for oscillating hexagons is determined.
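
    For reference, the Busse-Heikes model recovered as a particular case consists of three coupled amplitude equations for rolls oriented at 120 degrees to one another (the normalization below is a common convention and is assumed here, not copied from the paper):

        \begin{align*}
          \dot A_1 &= A_1\left[1 - A_1^2 - (1+\mu+\delta)\,A_2^2 - (1+\mu-\delta)\,A_3^2\right],\\
          \dot A_2 &= A_2\left[1 - A_2^2 - (1+\mu+\delta)\,A_3^2 - (1+\mu-\delta)\,A_1^2\right],\\
          \dot A_3 &= A_3\left[1 - A_3^2 - (1+\mu+\delta)\,A_1^2 - (1+\mu-\delta)\,A_2^2\right].
        \end{align*}
        % delta grows with the rotation rate; when it is large enough, the steady
        % equal-amplitude state loses stability and the dynamics approaches a
        % Kuppers-Lortz-type cyclic alternation between the three roll modes.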

    Response of electrically coupled spiking neurons: a cellular automaton approach

    Experimental data suggest that some classes of spiking neurons in the first layers of sensory systems are electrically coupled via gap junctions or ephaptic interactions. When the electrical coupling is removed, the response function (firing rate vs. stimulus intensity) of the uncoupled neurons typically shows a decrease in dynamic range and sensitivity. In order to assess the effect of electrical coupling in the sensory periphery, we calculate the response to a Poisson stimulus of a chain of excitable neurons modeled by n-state Greenberg-Hastings cellular automata, at two levels of approximation. The single-site mean-field approximation is shown to give poor results, failing to predict the absorbing state of the lattice, while the results of the pair approximation are in good agreement with computer simulations over the whole stimulus range. In particular, the dynamic range is substantially enlarged due to the propagation of excitable waves, which suggests a functional role for lateral electrical coupling. For probabilistic spike propagation the Hill exponent of the response function is α = 1, while for deterministic spike propagation we obtain α = 1/2, which is close to the experimental values of the psychophysical Stevens exponents for odor and light intensities. Our calculations are in qualitative agreement with experimental response functions of ganglion cells in the mammalian retina. Comment: 11 pages, 8 figures, to appear in the Phys. Rev.
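
    A minimal simulation sketch of this setup (chain length, number of states, periodic boundaries and the Poisson drive per time step are illustrative choices) computes the response curve of a chain of n-state Greenberg-Hastings automata, from which the dynamic range can be read off:

        import numpy as np

        # States: 0 quiescent, 1 firing, 2..n-1 refractory; the chain is driven by an
        # external Poisson stimulus of rate h per site per time step.
        def firing_rate(h, L=1000, n=3, p_prop=1.0, T=2000, seed=0):
            rng = np.random.default_rng(seed)
            state = np.zeros(L, dtype=int)
            fired = 0
            for _ in range(T):
                firing = state == 1
                new = np.where(state >= 1, (state + 1) % n, 0)          # firing/refractory advance
                ext = rng.random(L) < 1.0 - np.exp(-h)                  # external Poisson input
                left = np.roll(firing, 1) & (rng.random(L) < p_prop)    # propagation from the
                right = np.roll(firing, -1) & (rng.random(L) < p_prop)  # two nearest neighbours
                new[(state == 0) & (ext | left | right)] = 1
                state = new
                fired += firing.sum()
            return fired / (L * T)

        stimuli = np.logspace(-5, 0, 12)
        rates = [firing_rate(h) for h in stimuli]
        # The dynamic range is the decades of stimulus mapped onto, e.g., the 10%-90%
        # window of the response curve; p_prop=1.0 gives deterministic propagation,
        # p_prop<1 probabilistic propagation.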

    Discovering universal statistical laws of complex networks

    Different network models have been suggested for the topology underlying complex interactions in natural systems. These models are aimed at replicating specific statistical features encountered in real-world networks. However, it is rarely considered to what degree the results obtained for one particular network class can be extrapolated to real-world networks. We address this issue by comparing different classical and more recently developed network models with respect to their generalisation power, which we identify with large structural variability and the absence of constraints imposed by the construction scheme. After identifying the most variable networks, we address the issue of which constraints are common to all network classes and are thus suitable candidates for generic statistical laws of complex networks. In fact, we find that generic, not model-related, dependencies between different network characteristics do exist. This makes it possible, for instance, to infer global features from local ones using regression models trained on networks with high generalisation power. Our results confirm and extend previous findings regarding the synchronisation properties of neural networks. Our method seems especially relevant for large networks that are difficult to map completely, such as the neural networks in the brain: the structure of such networks cannot be fully sampled with present technology, and our approach provides a way to estimate global properties of under-sampled networks with good approximation. Finally, we demonstrate on three different data sets (C. elegans' neuronal network, R. prowazekii's metabolic network, and a network of synonyms extracted from Roget's Thesaurus) that real-world networks have statistical relations compatible with those obtained using regression models.
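
    A hedged sketch of the regression idea (the graph ensemble, the particular local and global characteristics, and the plain least-squares model are illustrative choices, not those of the study): fit a model that predicts a global network feature from locally measurable ones.

        import numpy as np
        import networkx as nx

        # Predict a global feature (average shortest path length) from local ones
        # (mean clustering, mean degree, degree variance) on a small graph ensemble.
        rng = np.random.default_rng(0)
        X, y = [], []
        for _ in range(60):
            n = int(rng.integers(100, 300))
            p = float(rng.uniform(0.03, 0.15))
            G = nx.gnp_random_graph(n, p, seed=int(rng.integers(1_000_000)))
            if not nx.is_connected(G):                       # keep the giant component only
                G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
            deg = np.array([d for _, d in G.degree()])
            X.append([nx.average_clustering(G), deg.mean(), deg.var()])
            y.append(nx.average_shortest_path_length(G))
        X, y = np.array(X), np.array(y)

        A = np.column_stack([np.ones(len(X)), X])            # ordinary least squares with intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        pred = A @ coef
        print("R^2 =", 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum())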

    Nonlinear diffusion models of detection


    Mechanisms explaining transitions between tonic and phasic firing in neuronal populations as predicted by a low dimensional firing rate model

    Several firing patterns experimentally observed in neural populations have been successfully correlated with animal behavior. Population bursting, here regarded as a period of high firing rate followed by a period of quiescence, is typically observed in groups of neurons during behavior. Biophysical membrane-potential models of single-cell bursting involve at least three equations. Extending such models to study the collective behavior of neural populations involves thousands of equations and can be computationally very expensive. For this reason, low-dimensional population models that capture biophysical aspects of networks are needed. The present paper uses a firing-rate model to study mechanisms that trigger and stop transitions between tonic and phasic population firing. These mechanisms are captured by a two-dimensional system, which can potentially be extended to include interactions between different areas of the nervous system with a small number of equations. The typical behavior of midbrain dopaminergic neurons in the rodent is used as an example to illustrate and interpret our results. The model presented here can be used as a building block to study interactions between networks of neurons. This theoretical approach may help contextualize and understand the factors involved in regulating burst firing in populations and how it may modulate distinct aspects of behavior. Comment: 25 pages (including references and appendices); 12 figures uploaded as separate files.
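
    A minimal two-dimensional sketch in the spirit of such firing-rate models (the sigmoid gain, the slow adaptation variable and all parameter values below are illustrative assumptions, not the paper's equations): a population rate r with recurrent excitation and a slow negative-feedback variable a either settles at a steady (tonic) rate or enters relaxation oscillations resembling phasic bursting, depending on the external drive.

        import numpy as np

        #   tau_r * dr/dt = -r + S(w*r - g*a + I)     population firing rate
        #   tau_a * da/dt = -a + r                    slow adaptation / negative feedback
        def simulate(I, w=6.0, g=5.0, tau_r=1.0, tau_a=20.0, dt=0.01, T=400.0):
            S = lambda u: 1.0 / (1.0 + np.exp(-u))
            steps = int(T / dt)
            r = np.zeros(steps)
            a = np.zeros(steps)
            for t in range(1, steps):
                r[t] = r[t-1] + dt * (-r[t-1] + S(w * r[t-1] - g * a[t-1] + I)) / tau_r
                a[t] = a[t-1] + dt * (-a[t-1] + r[t-1]) / tau_a
            return r, a

        for name, I in [("weak drive", -4.0), ("intermediate drive", -0.5), ("strong drive", 2.0)]:
            r, _ = simulate(I)
            tail = r[len(r) // 2:]          # discard the transient
            # A wide min-max range of the rate signals slow phasic (bursty) oscillations;
            # a narrow range signals tonic firing at a roughly constant rate.
            print(f"{name}: rate range {tail.min():.3f} - {tail.max():.3f}")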