
    Voltage imaging of waking mouse cortex reveals emergence of critical neuronal dynamics.

    Complex cognitive processes require neuronal activity to be coordinated across multiple scales, ranging from local microcircuits to cortex-wide networks. However, multiscale cortical dynamics are not well understood, because few experimental approaches can test hypotheses involving multiscale interactions. To address these limitations, we used genetically encoded voltage indicator imaging in mice, which measures cortex-wide electrical activity at high spatiotemporal resolution. Here we show that, as mice recovered from anesthesia, scale-invariant spatiotemporal patterns of neuronal activity gradually emerged. We show for the first time that this scale-invariant activity spans four orders of magnitude in awake mice. In contrast, the cortical dynamics of anesthetized mice were not scale invariant. Our results bridge empirical evidence from disparate scales and support theoretical predictions that the awake cortex operates in a dynamical regime known as criticality. The criticality hypothesis predicts that small-scale cortical dynamics are governed by the same principles as larger-scale dynamics. Importantly, these scale-invariant principles also optimize certain aspects of information processing. Our results suggest that criticality arises during the emergence from anesthesia as information processing demands increase. We expect that, as measurement tools advance toward larger scales and greater resolution, the multiscale framework offered by criticality will continue to provide quantitative predictions and insight into how neurons, microcircuits, and large-scale networks are dynamically coordinated in the brain.
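
    A hedged sketch of the standard avalanche-extraction step underlying such scale-invariance analyses (not necessarily the authors' exact pipeline): threshold a population-activity trace, treat each contiguous suprathreshold run as one avalanche, and record its size and duration. The threshold and time binning are analysis choices, assumed here for illustration.

```python
import numpy as np

def extract_avalanches(activity, threshold):
    """Segment a population-activity trace into avalanches.

    An avalanche is a contiguous run of time bins whose activity exceeds
    `threshold`; its size is the summed activity over the run and its
    duration is the number of bins in the run.
    """
    above = activity > threshold
    # Pad with False so every suprathreshold run has a start and an end.
    edges = np.diff(np.concatenate(([False], above, [False])).astype(int))
    starts = np.flatnonzero(edges == 1)   # False -> True transitions
    ends = np.flatnonzero(edges == -1)    # True -> False transitions
    sizes = [float(activity[s:e].sum()) for s, e in zip(starts, ends)]
    durations = (ends - starts).tolist()
    return sizes, durations
```

    For example, `extract_avalanches(np.array([0., 2., 3., 0., 5., 0.]), 1.0)` yields two avalanches with sizes `[5.0, 5.0]` and durations `[2, 1]`; power-law statistics are then computed over these size and duration lists.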

    Avalanches in self-organized critical neural networks: A minimal model for the neural SOC universality class

    The brain keeps its overall dynamics in a corridor of intermediate activity, and it has been a long-standing question what mechanism could achieve this. Mechanisms from statistical physics have long suggested that this homeostasis of brain activity could occur without a central regulator, via self-organization at the level of neurons and their interactions alone. Such physical mechanisms, from the class of self-organized criticality, exhibit characteristic dynamical signatures, similar to the seismic activity related to earthquakes. Measurements of cortical resting activity showed first signs of dynamical signatures potentially pointing to self-organized critical dynamics in the brain. Indeed, recent more accurate measurements allowed a detailed comparison with the scaling theory of non-equilibrium critical phenomena, demonstrating the existence of criticality in cortical dynamics. We here compare this new evaluation of cortical activity data to the predictions of the earliest physics spin model of self-organized critical neural networks. We find that the model matches the recent experimental data and its interpretation in terms of dynamical signatures for criticality in the brain. The combination of signatures for criticality (power-law distributions of avalanche sizes and durations, together with a specific scaling relationship between anomalous exponents) defines a universality class characteristic of the particular critical phenomenon observed in the neural experiments. The spin model is a candidate for a minimal model of a self-organized critical adaptive network in the universality class of neural criticality. As a prototype model, it provides the background for models that include more biological detail yet share the same universality class characteristic of the homeostasis of activity in the brain.
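
    The "specific scaling relationship between anomalous exponents" referenced here is the standard crackling-noise relation: if avalanche sizes follow P(S) ~ S^(-tau) and durations follow P(T) ~ T^(-alpha), criticality predicts that the mean size at a given duration grows as <S>(T) ~ T^gamma with gamma = (alpha - 1)/(tau - 1). A minimal consistency check, using illustrative values rather than the paper's fitted exponents:

```python
def predicted_gamma(tau, alpha):
    # Crackling-noise scaling relation: gamma = (alpha - 1) / (tau - 1),
    # linking the size exponent tau and the duration exponent alpha.
    return (alpha - 1.0) / (tau - 1.0)

def scaling_relation_holds(tau, alpha, gamma_measured, tol=0.1):
    # Treat the relation as satisfied when the independently measured
    # exponent of <S>(T) ~ T**gamma matches the prediction within tol.
    return abs(predicted_gamma(tau, alpha) - gamma_measured) <= tol

# Mean-field branching-process values tau = 3/2, alpha = 2 give gamma = 2.
```

    Testing whether the three independently fitted exponents obey this relation is a stricter criterion for criticality than power-law fits alone.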

    Cascades and Cognitive State: Focused Attention Incurs Subcritical Dynamics

    The analysis of neuronal avalanches supports the hypothesis that the human cortex operates with critical neural dynamics. Here, we investigate the relationship between cascades of activity in electroencephalogram (EEG) data, cognitive state, and reaction time in humans using a multimodal approach. We recruited 18 healthy volunteers for simultaneous EEG and functional magnetic resonance imaging (fMRI) acquisition during both rest and a visuomotor cognitive task. We compared distributions of EEG-derived cascades to reference power laws for the task and rest conditions, and then explored the large-scale spatial correspondence of these cascades in the simultaneously acquired fMRI data. Furthermore, we investigated whether individual variability in reaction times is associated with the amount of deviation from power law form. We found that while resting-state cascades are associated with approximate power law form, the task state is associated with subcritical dynamics. Furthermore, we found that EEG cascades are related to blood oxygen level-dependent activation, predominantly in sensorimotor brain regions. Finally, we found that decreased reaction times during the task condition are associated with increased proximity of cascade distributions to power law form. These findings suggest that the resting state is associated with near-critical dynamics, in which a high dynamic range and a large repertoire of brain states may be advantageous. In contrast, a focused cognitive task induces subcritical dynamics with a lower dynamic range, which in turn may reduce elements of interference affecting task performance.

    Statistical Analyses Support Power Law Distributions Found in Neuronal Avalanches

    The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law with exponent close to −1.5, a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches using more stringent statistical analyses. In particular, we performed the following steps: (i) finite-size scaling analysis to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling (the “finite size” effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the exponent to be close to −1.5, in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tailed distributions based on the Kolmogorov-Smirnov distance and a log-likelihood ratio test. The power law distribution, both with and without an exponential cut-off, provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, lognormal, and gamma distributions. In summary, our findings strongly support power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in the superficial layers of cortex.
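
    The estimation and goodness-of-fit steps described above follow the now-standard maximum-likelihood approach. As a simplified sketch (a continuous-variable estimator; avalanche sizes are discrete, and the full procedure also includes likelihood-ratio comparisons against alternative distributions, which are omitted here):

```python
import numpy as np

def fit_power_law_mle(x, xmin):
    """Maximum-likelihood exponent for a continuous power law
    p(x) ~ x**(-alpha) on x >= xmin (Clauset-style estimator)."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= xmin]
    alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
    return alpha, len(tail)

def ks_distance(x, xmin, alpha):
    """Kolmogorov-Smirnov distance between the empirical CDF of the
    tail and the fitted power-law CDF, used to assess fit quality."""
    tail = np.sort(np.asarray(x, dtype=float))
    tail = tail[tail >= xmin]
    fitted_cdf = 1.0 - (tail / xmin) ** (1.0 - alpha)
    empirical_cdf = np.arange(1, len(tail) + 1) / len(tail)
    return float(np.max(np.abs(empirical_cdf - fitted_cdf)))
```

    On synthetic data drawn by inverse-transform sampling, x = xmin * (1 - u)^(-1/(alpha - 1)), the estimator recovers the generating exponent to within a few percent for a few thousand samples.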

    Avalanches in a Stochastic Model of Spiking Neurons

    Neuronal avalanches are a form of spontaneous activity widely observed in cortical slices and other types of nervous tissue, both in vivo and in vitro. They are characterized by irregular, isolated population bursts in which many neurons fire together, with the number of spikes per burst obeying a power law distribution. We simulate, using the Gillespie algorithm, a model of neuronal avalanches based on stochastic single neurons. The network consists of excitatory and inhibitory neurons, first with all-to-all connectivity and later with random sparse connectivity. Analyzing our model using the system size expansion, we show that the model obeys the standard Wilson-Cowan equations for large network sizes. When excitation and inhibition are closely balanced, networks of thousands of neurons exhibit irregular synchronous activity, including the characteristic power law distribution of avalanche size. We show that these avalanches arise because the balanced network has weakly stable, functionally feedforward dynamics, which amplify small fluctuations into large population bursts. Balanced networks are thought to underlie a variety of observed network behaviours and have useful computational properties, such as responding quickly to changes in input. Thus, the appearance of avalanches in such functionally feedforward networks indicates that avalanches may be a simple consequence of a widely present network structure when neuron dynamics are noisy. An important implication is that a network need not be “critical” to produce avalanches, so experimentally observed power laws in burst size may be a signature of noisy functionally feedforward structure rather than of, for example, self-organized criticality.
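
    The Gillespie algorithm simulates such a Markovian network exactly: draw an exponentially distributed waiting time from the total transition rate, then choose one transition with probability proportional to its rate. A heavily simplified, excitatory-only sketch of a stochastic two-state (quiescent/active) population in this spirit (parameter values are illustrative assumptions, and this is not the paper's full excitatory-inhibitory model):

```python
import numpy as np

def gillespie_population(n=200, w=0.9, h=0.001, decay=1.0,
                         t_max=100.0, seed=0):
    """Exact stochastic simulation of a quiescent/active population.

    A quiescent neuron activates at rate tanh(w*k/n + h), where k is
    the current number of active neurons; an active neuron deactivates
    at rate `decay`. With w slightly below `decay`, the population sits
    just below instability and produces burst-like fluctuations.
    """
    rng = np.random.default_rng(seed)
    k, t = 0, 0.0
    times, counts = [0.0], [0]
    while t < t_max:
        rate_on = (n - k) * np.tanh(w * k / n + h)  # activation rate
        rate_off = k * decay                        # deactivation rate
        total = rate_on + rate_off
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)           # waiting time
        if rng.random() < rate_on / total:
            k += 1                                  # one neuron fires
        else:
            k -= 1                                  # one neuron rests
        times.append(t)
        counts.append(k)
    return np.array(times), np.array(counts)
```

    Avalanche sizes and durations can then be read off from excursions of the active count above zero, exactly as in experimental avalanche analyses.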

    A Computational Study on the Role of Gap Junctions and Rod Ih Conductance in the Enhancement of the Dynamic Range of the Retina

    Recent works suggest that one of the roles of gap junctions in sensory systems is to enhance their dynamic range by avoiding early saturation in the first processing stages. In this work, we use a minimal conductance-based model of the ON rod pathways in the vertebrate retina to study the effects of electrical synaptic coupling via gap junctions, among rods and among AII amacrine cells, on the dynamic range of the retina. The model is also used to study the effects of the maximum conductance of the rod hyperpolarization-activated current Ih on the dynamic range, allowing a study of the interrelations between this intrinsic membrane parameter and the two retinal connectivity characteristics. Our results show that for realistic values of the Ih conductance the dynamic range is enhanced by rod-rod coupling, and that AII-AII coupling is less relevant to dynamic range amplification than receptor coupling. Furthermore, a plot of the retina's output response versus input intensity for the optimal parameter configuration is well fitted by a power law. The results are consistent with predictions of more theoretical works and suggest that the earliest expression of gap junctions along the rod pathways, together with appropriate values of the rod Ih conductance, has the highest impact on vertebrate retina dynamic range enhancement.
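
    Dynamic range in this literature is usually quantified as the decade span of input intensities that drives the response between 10% and 90% of its full range, Delta = 10 * log10(I_90 / I_10) dB. A small sketch of that measurement, assuming a monotonically increasing response curve:

```python
import numpy as np

def dynamic_range_db(intensities, responses, lo=0.10, hi=0.90):
    """Dynamic range (dB) of a monotonic response curve: the span of
    intensities mapped onto [lo, hi] of the normalized response."""
    r = (responses - responses.min()) / (responses.max() - responses.min())
    i_lo = np.interp(lo, r, intensities)  # intensity at 10% response
    i_hi = np.interp(hi, r, intensities)  # intensity at 90% response
    return 10.0 * np.log10(i_hi / i_lo)

# Example: a Hill-type response r = I / (I + 1) has I_90 / I_10 = 81,
# so its dynamic range is near 10 * log10(81), about 19.1 dB.
```

    Coupling that flattens the response curve (delaying saturation) widens the [I_10, I_90] interval and hence increases this measure.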

    The emergence of synaesthesia in a Neuronal Network Model via changes in perceptual sensitivity and plasticity

    Synaesthesia is an unusual perceptual experience in which an inducer stimulus triggers a percept in a different domain in addition to its own. To explore the conditions under which synaesthesia evolves, we studied a neuronal network model representing two recurrently connected neural systems. The interactions in the network evolve according to learning rules that optimize sensory sensitivity. We demonstrate several scenarios, such as sensory deprivation or heightened plasticity, under which synaesthesia can evolve even though the inputs to the two systems are statistically independent and the initial cross-talk interactions are zero. Sensory deprivation is the known causal mechanism for acquired synaesthesia, and increased plasticity is implicated in developmental synaesthesia. The model unifies different causes of synaesthesia within a single theoretical framework and repositions synaesthesia not as a quirk of aberrant connectivity, but as a functional brain state that can emerge as a consequence of optimising sensory information processing.

    Self-Organized Criticality in Developing Neuronal Networks

    Recently, evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales, which is beneficial with respect to information flow. If subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks reach and stabilize criticality. Here we monitor the development, between 13 and 95 days in vitro (DIV), of cortical cell cultures (n = 20) and find four different phases related to their morphological maturation: an initial low-activity state (≈19 DIV) is followed by a supercritical (≈20 DIV) and then a subcritical one (≈36 DIV), until the network finally reaches stable criticality (≈58 DIV). Using network modeling and mathematical analysis, we describe the dynamics of the emergent connectivity in such developing systems. Based on physiological observations, synaptic development in the model is driven by the neurons adjusting their connectivity to reach, on average, firing rate homeostasis. We predict a specific time course for the maturation of inhibition, with strong onset and delayed pruning, and that total synaptic connectivity should be strongly linked to the relative levels of excitation and inhibition. These results demonstrate that the interplay between activity and connectivity guides developing networks into criticality, suggesting that this may be a generic and stable state of many networks in vivo and in vitro.
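
    A common first-pass statistic for placing a network in the subcritical, critical, or supercritical phases described here is the branching parameter: the average number of active units in the next time bin per active unit in the current one. A naive estimator (illustrative only; this simple ratio estimator is known to be biased when only part of the network is sampled):

```python
def branching_parameter(activity):
    """Naive branching parameter sigma from binned population activity.

    sigma < 1 suggests subcritical dynamics (activity dies out),
    sigma ~ 1 suggests criticality, and sigma > 1 suggests
    supercritical (runaway, epileptiform) dynamics.
    """
    ratios = [nxt / prev
              for prev, nxt in zip(activity[:-1], activity[1:])
              if prev > 0]
    return sum(ratios) / len(ratios)
```

    For example, `branching_parameter([4, 2, 1])` gives 0.5, a subcritical estimate, while `branching_parameter([2, 4, 8, 16])` gives 2.0, a supercritical one.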