
    Inferring collective dynamical states from widely unobserved systems

    When assessing spatially-extended complex systems, one can rarely sample the states of all components. We show that this spatial subsampling typically leads to severe underestimation of the risk of instability in systems with propagating events. We derive a subsampling-invariant estimator and demonstrate that it correctly infers the infectiousness of various diseases under subsampling, making it particularly useful in countries with unreliable case reports. In neuroscience, recordings are strongly limited by subsampling. Here, the subsampling-invariant estimator allows us to revisit two prominent hypotheses about the brain's collective spiking dynamics: asynchronous-irregular or critical. Consistently for rat, cat, and monkey, we identify a state that combines features of both and allows input to reverberate in the network for hundreds of milliseconds. Overall, owing to its ready applicability, the estimator paves the way to new insights in the study of spatially-extended dynamical systems.
    Comment: 7 pages + 12 pages supplementary information + 7 supplementary figures. Title changed to match journal reference.
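A subsampling-invariant estimator of this kind can be built on the observation that subsampling biases the lag-k autoregression slopes of the activity only by a constant factor, so the branching ratio m can be recovered from how the slopes decay across lags. A minimal multistep-regression sketch (not the authors' implementation; all names and parameter values are illustrative):

```python
import numpy as np

def branching_ratio(activity, max_lag=20):
    """Estimate the branching ratio m of an activity time series via
    multistep regression: under subsampling the lag-k autoregression
    slope behaves as r_k = b * m**k, so fitting an exponential in k
    recovers m even though the subsampling bias b is unknown."""
    a = np.asarray(activity, dtype=float)
    a -= a.mean()
    var = np.dot(a, a) / len(a)
    lags = np.arange(1, max_lag + 1)
    r = np.array([np.dot(a[:-k], a[k:]) / (len(a) - k) / var for k in lags])
    mask = r > 0                       # fit log r_k = log b + k log m
    slope, _ = np.polyfit(lags[mask], np.log(r[mask]), 1)
    return float(np.exp(slope))

# Driven branching process with m = 0.9, observed through 5% subsampling
rng = np.random.default_rng(0)
n, m_true = 200_000, 0.9
A = np.zeros(n, dtype=int)
A[0] = 100
for t in range(1, n):
    A[t] = rng.poisson(m_true * A[t - 1] + 10)    # external drive h = 10
obs = rng.binomial(A, 0.05)                       # sample 5% of the units
print(f"estimated m: {branching_ratio(obs):.2f}") # close to 0.9 despite subsampling
```

A naive lag-1 regression on `obs` would instead return roughly b·m, far below the true value, which is the underestimation of instability risk the abstract refers to.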

    Self-organization without conservation: Are neuronal avalanches generically critical?

    Recent experiments on cortical neural networks have revealed the existence of well-defined avalanches of electrical activity. Such avalanches have been claimed to be generically scale-invariant -- i.e. power-law distributed -- with many exciting implications in neuroscience. Recently, a self-organized model was proposed by Levina, Herrmann and Geisel to justify this empirical finding. Given that (i) neural dynamics is dissipative and (ii) there is a loading mechanism "charging" the background synaptic strength progressively, this model is very similar in spirit to forest-fire and earthquake models, archetypical examples of non-conserving self-organization, which have recently been shown to lack true criticality. Here we show that cortical neural networks obeying (i) and (ii) are not generically critical; unless parameters are fine-tuned, their dynamics is either sub- or super-critical, even if the pseudo-critical region is relatively broad. This conclusion agrees with the most recent experimental observations. The main implication of our work is that, if future experimental research on cortical networks were to support that truly critical avalanches are the norm rather than the exception, then one should look for more elaborate (adaptive/evolutionary) explanations, beyond simple self-organization, to account for this.
    Comment: 28 pages, 11 figures, regular paper.
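The sub-/super-critical distinction can be illustrated with the textbook branching-process caricature (not the Levina-Herrmann-Geisel model itself): each active unit triggers on average m units in the next time step, and only at m = 1 do avalanche sizes lose their characteristic scale. A hedged sketch:

```python
import numpy as np

def avalanche_sizes(m, trials=5000, cap=10_000, seed=0):
    """Sizes of avalanches in a simple branching process: each active
    unit triggers on average m units in the next step. m < 1 is
    subcritical, m = 1 critical, m > 1 supercritical. Avalanches are
    truncated at `cap` events to keep the critical case finite."""
    rng = np.random.default_rng(seed)
    sizes = []
    for _ in range(trials):
        active, total = 1, 1
        while active > 0 and total < cap:
            active = rng.poisson(m * active)
            total += active
        sizes.append(total)
    return np.array(sizes)

sub, crit = avalanche_sizes(0.8), avalanche_sizes(1.0)
# subcritical avalanches have a small characteristic scale;
# critical avalanches span many orders of magnitude
print(f"m=0.8: max size {sub.max()}, m=1.0: max size {crit.max()}")
```

Plotting the two size histograms on log-log axes shows the exponential cutoff at m = 0.8 versus an approximate power law at m = 1, which is what "fine tuning" to criticality means here.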

    Adaptive self-organization in a realistic neural network model

    Information processing in complex systems is often found to be maximally efficient close to critical states associated with phase transitions. It is therefore conceivable that neural information processing also operates close to criticality. This is further supported by the observation of power-law distributions, which are a hallmark of phase transitions. An important open question is how neural networks could remain close to a critical point while undergoing continual change in the course of development, adaptation, learning, and more. An influential contribution was made by Bornholdt and Rohlf, who introduced a generic mechanism of robust self-organized criticality in adaptive networks. Here, we address the question of whether this mechanism is relevant for real neural networks. We show in a realistic model that spike-time-dependent synaptic plasticity can self-organize neural networks robustly toward criticality. Our model reproduces several empirical observations and makes testable predictions about the distribution of synaptic strengths, relating them to the critical state of the network. These results suggest that the interplay between dynamics and topology may be essential for neural information processing.
    Comment: 6 pages, 4 figures.
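The plasticity rule at the core of such models is pair-based spike-time-dependent plasticity: a pre-before-post spike pair potentiates a synapse, a post-before-pre pair depresses it, with exponential decay in the spike-time difference. A minimal sketch (the window shape is standard, but the amplitudes and time constant here are illustrative, not the paper's values):

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """Pair-based STDP. dt = t_post - t_pre in ms: a causal pair
    (dt > 0) potentiates the synapse, an anti-causal pair (dt < 0)
    depresses it, both decaying exponentially in |dt|. Weights are
    clipped to [0, w_max] (hard bounds, i.e. additive STDP)."""
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)
    else:
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, w_max))

w = 0.5
print(stdp_update(w, +5.0))   # > 0.5: pre-before-post potentiates
print(stdp_update(w, -5.0))   # < 0.5: post-before-pre depresses
```

Making depression slightly stronger than potentiation (a_minus > a_plus), as sketched here, is a common stability choice; where the network then settles relative to criticality is exactly what models like this one investigate.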

    Network self-organization explains the statistics and dynamics of synaptic connection strengths in cortex

    The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. 
    Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
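The interplay of rich-get-richer growth and homeostatic competition described above can be caricatured in a few lines: synapses receive additive increments in proportion to their current strength, while normalization keeps the total weight fixed, so the effective dynamics become multiplicative and the stationary distribution long-tailed. A toy sketch, not the paper's full recurrent-network model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_syn, steps = 1000, 20_000
w = np.full(n_syn, 0.01)       # all synapses start equal
total = w.sum()

for _ in range(steps):
    # associative potentiation: stronger synapses are more likely to
    # drive postsynaptic spikes and so to be potentiated (rich get richer)
    hit = rng.choice(n_syn, size=10, p=w / w.sum())
    np.add.at(w, hit, 0.001)   # additive STDP-like increment
    w *= total / w.sum()       # homeostatic normalization -> competition

# the ensemble becomes long-tailed: most synapses weak, a few very strong
print(f"mean {w.mean():.4f}, median {np.median(w):.4f}, max {w.max():.4f}")
```

Note that the increments themselves are purely additive; the multiplicative character emerges only from the normalization step, mirroring the abstract's claim that multiplicative weight dynamics arise from network interactions.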

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the still unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section offers general reflections on the implications of the reviewed data and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.

    Impact of Excitation-Inhibition Balance/Imbalance on Dynamics of Cortical Neural Networks

    The purpose of this research is to study the implications of excitation-inhibition balance and imbalance for the dynamics of ongoing (spontaneous) neural activity in the cerebral cortex. The first project addresses the question of why, among the continuum of excitation-inhibition configurations, a particular configuration should be favored. We calculate the entropy of network dynamics in an analytically tractable network of binary neurons. Our main result is that entropy is maximized in a regime that is neither excitation-dominant nor inhibition-dominant, but lies at the boundary between the two. Along this boundary there is a trade-off between high and robust entropy: weak synaptic strengths yield entropy that is high but drops rapidly under parameter changes, whereas strong synaptic strengths yield a lower but more robust network entropy. The second project is motivated by experiments suggesting that the cerebral cortex can operate near a critical phase transition. In many physical systems, the governing laws obey a fractal symmetry near a critical phase transition, and this symmetry holds irrespective of the observational length scale. We therefore hypothesize that the laws governing cortical dynamics may obey scale-change symmetry. We test and confirm this hypothesis using two different computational models. Further, we extend the transformational scheme to show that scale-change symmetry emerges as a mouse awakens from anesthesia. The third project is motivated by experimental observations in motor cortex under modulation of inhibitory inputs. We found that a low-intensity increase (decrease) in overall inhibition causes a decrease (increase) in the spiking activity of some neurons, even though the population-level activity remains largely unchanged. This behavior is paradoxical compared to the conventional view that increased (decreased) inhibition should decrease (increase) neural spiking activity. We reproduced a similar change under inhibitory modulation in a neural network model and found that this paradoxical behavior arises from sparse connectivity and inhomogeneity in inhibitory weights.
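The first result can be illustrated with a toy stochastic binary network (not the analytically tractable model of the thesis): the entropy of the visited network states collapses when excitation dominates and stays high near the excitation-inhibition boundary. A hedged sketch with illustrative parameters:

```python
import numpy as np
from collections import Counter

def network_entropy(w_exc, w_inh, n=8, steps=30_000, seed=0):
    """Entropy (bits) of the distribution of visited states in a small
    all-to-all stochastic binary network; the first half of the units
    sends weight +w_exc, the second half sends -w_inh."""
    rng = np.random.default_rng(seed)
    W = np.full((n, n), w_exc)
    W[:, n // 2:] = -w_inh             # outgoing weights of inhibitory units
    np.fill_diagonal(W, 0.0)
    s = rng.integers(0, 2, n)
    counts = Counter()
    for _ in range(steps):
        h = W @ s                      # synaptic input to every unit
        s = (rng.random(n) < 1.0 / (1.0 + np.exp(-h))).astype(int)
        counts[s.tobytes()] += 1
    p = np.array(list(counts.values())) / steps
    return float(-(p * np.log2(p)).sum())

e_balanced = network_entropy(0.3, 0.3)   # near the E-I boundary
e_exc = network_entropy(2.0, 0.0)        # excitation-dominated
print(f"balanced: {e_balanced:.1f} bits, excitation-dominated: {e_exc:.1f} bits")
```

In the excitation-dominated regime the network locks into a near-all-active attractor and visits few states; near the boundary the net input stays small and the state distribution remains broad, which is the entropy-maximization effect the abstract describes.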

    Model-free reconstruction of neuronal network connectivity from calcium imaging signals

    A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically unfeasible, even in dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct approximations to network structural connectivity from network activity monitored through calcium fluorescence imaging. Based on information theory, our method requires no prior assumptions about the statistics of neuronal firing or neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the effective network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (e.g., bursting or non-bursting). We thus demonstrate how conditioning with respect to the global mean activity improves the performance of our method. [...] Compared to other reconstruction strategies such as cross-correlation or Granger causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good reconstruction of the network clustering coefficient, allowing one to discriminate between weakly and strongly clustered topologies, whereas an approach based on cross-correlations invariably detects artificially high levels of clustering. Finally, we demonstrate the applicability of our method to real recordings of in vitro cortical cultures. We show that these networks are characterized by an elevated (although not extreme) level of clustering compared to a random graph and by a markedly non-local connectivity.
    Comment: 54 pages, 8 figures (+9 supplementary figures), 1 table; submitted for publication.
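In the simplest discrete case (binary time series, history length one), transfer entropy from X to Y measures how much knowing x_t improves the prediction of y_{t+1} beyond what y_t already provides, and is asymmetric in direction. A plug-in sketch of this plain estimator (the improved Transfer Entropy of the paper additionally accounts for calcium-indicator dynamics and conditions on the global mean activity):

```python
import numpy as np
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits, history length 1:
    sum over (y_{t+1}, y_t, x_t) of
    p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        joint = c / n
        denom_yx = sum(v for (_, b, d), v in triples.items() if b == y0 and d == x0)
        num_y = sum(v for (a, b, _), v in triples.items() if a == y1 and b == y0)
        denom_y = sum(v for (_, b, _), v in triples.items() if b == y0)
        te += joint * log2((c / denom_yx) / (num_y / denom_y))
    return te

# x drives y with a one-step delay (90% fidelity); y does not drive x
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 100_000)
y = np.empty_like(x)
y[0] = 0
y[1:] = np.where(rng.random(len(x) - 1) < 0.9, x[:-1], 1 - x[:-1])
te_xy, te_yx = transfer_entropy(x, y), transfer_entropy(y, x)
print(f"TE(x->y) = {te_xy:.3f} bits, TE(y->x) = {te_yx:.3f} bits")
```

The large TE(x->y) and near-zero TE(y->x) recover the true direction of the coupling, which symmetric measures such as cross-correlation cannot do on their own.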