    Self-organization of network dynamics into local quantized states

    Self-organization and pattern formation in network-organized systems emerge from the collective activation and interaction of many interconnected units. A striking feature of these non-equilibrium structures is that they are often localized and robust: only a small subset of the nodes, or cell assembly, is activated. Understanding the role of cell assemblies as basic functional units in neural networks and socio-technical systems emerges as a fundamental challenge in network theory. A key open question is how these elementary building blocks emerge and how they operate, linking structure and function in complex networks. Here we show that a network analogue of the Swift-Hohenberg continuum model, a minimal-ingredients model of nodal activation and interaction within a complex network, is able to produce a complex suite of localized patterns. Hence, the spontaneous formation of robust operational cell assemblies in complex networks can be explained as the result of self-organization, even in the absence of synaptic reinforcement. Our results show that these self-organized, local structures can provide robust functional units for understanding natural and socio-technical network-organized processes.
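    A minimal sketch of the idea behind such a network analogue replaces the continuum Laplacian with the graph Laplacian L = D - A. The topology, parameter values, and the quadratic-cubic nonlinearity below are illustrative assumptions, not the paper's exact formulation.

```python
# Network Swift-Hohenberg sketch: du/dt = r*u - (I - L)^2 u + u^2 - u^3,
# where L is the graph Laplacian, so (I - L) plays the role of (1 + nabla^2).
import numpy as np
import networkx as nx

G = nx.watts_strogatz_graph(n=200, k=6, p=0.05, seed=1)    # assumed test topology
L = nx.laplacian_matrix(G).toarray().astype(float)
I = np.eye(G.number_of_nodes())
M = I - L                                                  # network analogue of 1 + nabla^2

r, dt, steps = 0.2, 0.01, 20000                            # bifurcation parameter, Euler step
u = 0.01 * np.random.default_rng(0).standard_normal(G.number_of_nodes())

for _ in range(steps):
    u += dt * (r * u - M @ (M @ u) + u**2 - u**3)

active = np.flatnonzero(u > 0.5 * u.max())                 # nodes in the activated state
print(f"{active.size} of {G.number_of_nodes()} nodes form the localized pattern")
```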

    Perspective: network-guided pattern formation of neural dynamics

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs, from which hypotheses about the supposed role of prominent topological features (for instance, modularity, network motifs, or hierarchical network organization) are derived. An alternative strategy could be to study deviations of network architectures from regular graphs (rings, lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatiotemporal pattern formation and propose a novel perspective for analyzing dynamics on networks by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.
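    As a toy illustration of this strategy, one can track how the graph Laplacian spectrum, the network counterpart of the spatial wavenumbers that organize continuum pattern formation, changes as a regular ring lattice is progressively rewired. The graph family, size, and rewiring probabilities below are arbitrary assumptions for illustration only.

```python
# Compare the Laplacian spectrum of a regular ring lattice (p = 0) with
# progressively rewired Watts-Strogatz variants; the spectrum constrains which
# self-organized collective modes are admissible on the network.
import numpy as np
import networkx as nx

n, k = 100, 4
for p in (0.0, 0.05, 0.3):
    G = nx.watts_strogatz_graph(n, k, p, seed=42)
    lam = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float)))
    print(f"p={p:4.2f}  spectral gap={lam[1]:.3f}  largest eigenvalue={lam[-1]:.3f}")
```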

    Unravelling topological determinants of excitable dynamics on graphs using analytical mean-field approaches

    We present our use of analytical mean-field approaches in investigating how the interplay between graph topology and excitable dynamics produces spatio-temporal patterns. We first detail the derivation of mean-field equations for a few simple model situations, mainly 3-state discrete-time excitable dynamics with an absolute or a relative excitation threshold. Comparison with direct numerical simulation shows that their solution satisfactorily predicts the steady-state excitation density. In contrast, they often fail to capture more complex dynamical features; we argue, however, that the analysis of this failure is in itself insightful, as it pinpoints the key role of mechanisms neglected in the mean-field approach. Moreover, we show how second-order mean-field approaches, in which a topological object (e.g. a cycle or a hub) is considered as embedded in a mean-field surrounding, allow us to go beyond the spatial homogenization commonly associated with plain mean-field calculations. The confrontation between these refined analytical predictions and simulations quantitatively evidences the specific contribution of this topological object to the dynamics. Mathematics Subject Classification (2010): Primary 05C82; Secondary 92C42.
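    The flavour of such a first-order mean-field calculation can be illustrated with a three-state susceptible-excited-refractory (SER) model on a random graph. The specific update rule, the Erdos-Renyi topology, and the spontaneous excitation rate below are assumptions for illustration, not the exact models analysed in the paper.

```python
# Discrete-time 3-state excitable dynamics (S -> E -> R -> S) with an absolute
# threshold of one excited neighbour plus a small spontaneous excitation rate f,
# compared against the corresponding mean-field recursion for the state densities.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, kmean, f, T = 1000, 8.0, 0.001, 400
G = nx.gnp_random_graph(N, kmean / (N - 1), seed=0)
A = nx.to_numpy_array(G)

S, E, R = 0, 1, 2
state = np.full(N, S)
state[rng.choice(N, size=20, replace=False)] = E           # seed a few excitations

exc_sim = []
for _ in range(T):
    excited = (state == E).astype(float)
    exc_sim.append(excited.mean())
    n_exc_neigh = A @ excited
    new_state = state.copy()
    new_state[state == E] = R                              # excited -> refractory
    new_state[state == R] = S                              # refractory -> susceptible
    fire = (n_exc_neigh >= 1) | (rng.random(N) < f)        # threshold or spontaneous
    new_state[(state == S) & fire] = E
    state = new_state

# Mean-field recursion for the densities (s, e, r) on a graph with mean degree kmean
exc_mf = []
s, e, r = 1.0 - 20 / N, 20 / N, 0.0
for _ in range(T):
    p_fire = 1.0 - (1.0 - f) * (1.0 - e) ** kmean
    s, e, r = r + s * (1.0 - p_fire), s * p_fire, e
    exc_mf.append(e)

print(f"simulation, time-averaged excitation density: {np.mean(exc_sim[-100:]):.3f}")
print(f"mean-field, time-averaged excitation density: {np.mean(exc_mf[-100:]):.3f}")
```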

    Sustained oscillations, irregular firing, and chaotic dynamics in hierarchical modular networks with mixtures of electrophysiological cell types

    The cerebral cortex exhibits neural activity even in the absence of external stimuli. This self-sustained activity is characterized by irregular firing of individual neurons and population oscillations with a broad frequency range. Questions that arise in this context are: What are the mechanisms responsible for the existence of neuronal spiking activity in the cortex without external input? Do these mechanisms depend on the structural organization of the cortical connections? Do they depend on intrinsic characteristics of the cortical neurons? To approach the answers to these questions, we have used computer simulations of cortical network models. Our networks have a hierarchical modular architecture and are composed of combinations of neuron models that reproduce the firing behavior of the five main cortical electrophysiological cell classes: regular spiking (RS), chattering (CH), intrinsically bursting (IB), low-threshold spiking (LTS) and fast spiking (FS). The population of excitatory neurons is built of RS cells (always present) and either CH or IB cells. Inhibitory neurons belong to the same class, either LTS or FS. Long-lived self-sustained activity states in our network simulations display irregular single-neuron firing and oscillatory activity similar to experimentally measured ones. The duration of self-sustained activity strongly depends on the initial conditions, suggesting a transient chaotic regime. Extensive analysis of the self-sustained activity states showed that their lifetime expectancy increases with the number of network modules and is favored when the network is composed of excitatory neurons of the RS and CH classes combined with inhibitory neurons of the LTS class. These results indicate that the existence and properties of the self-sustained cortical activity states depend on both the topology of the network and the neuronal mixture that comprises the network.
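    The five cell classes named above have standard parameterizations in the Izhikevich point-neuron model; the sketch below uses those textbook values to reproduce the qualitative firing behaviour of each class. The assumption that the network model uses exactly these neuron equations and parameters is mine, and the constant drive current is arbitrary.

```python
# Izhikevich point-neuron model: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
# with reset v <- c, u <- u + d after a spike; (a, b, c, d) select the cell class.
CELL_CLASSES = {
    "RS":  (0.02, 0.20, -65.0, 8.0),   # regular spiking (excitatory)
    "IB":  (0.02, 0.20, -55.0, 4.0),   # intrinsically bursting (excitatory)
    "CH":  (0.02, 0.20, -50.0, 2.0),   # chattering (excitatory)
    "FS":  (0.10, 0.20, -65.0, 2.0),   # fast spiking (inhibitory)
    "LTS": (0.02, 0.25, -65.0, 2.0),   # low-threshold spiking (inhibitory)
}

def spike_times(cell, I_ext=10.0, T=1000.0, dt=0.25):
    """Euler-integrate one neuron of the given class; return its spike times in ms."""
    a, b, c, d = CELL_CLASSES[cell]
    v, u, spikes = -65.0, b * -65.0, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike threshold: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

for cell in CELL_CLASSES:
    print(f"{cell:3s}: {len(spike_times(cell))} spikes in 1 s of constant drive")
```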

    Network structure and dynamics of effective models of non-equilibrium quantum transport

    Across all scales of the physical world, dynamical systems can often be usefully represented as abstract networks that encode the system's units and inter-unit interactions. Understanding how physical rules shape the topological structure of those networks can clarify a system's function and enhance our ability to design, guide, or control its behavior. In the emerging area of quantum network science, a key challenge lies in distinguishing between the topological properties that reflect a system's underlying physics and those that reflect the assumptions of the employed conceptual model. To elucidate and address this challenge, we study networks that represent non-equilibrium quantum-electronic transport through quantum antidot devices (an example of an open, mesoscopic quantum system). The network representations correspond to two different models of internal antidot states: a single-particle, non-interacting model and an effective model for collective excitations including Coulomb interactions. In these networks, nodes represent accessible energy states and edges represent allowed transitions. We find that both models reflect spin conservation rules in the network topology through bipartiteness and the presence of only even-length cycles. The models diverge, however, in the minimum length of cycle basis elements, in a manner that depends on whether electrons are considered to be distinguishable. Furthermore, the two models reflect spin-conserving relaxation effects differently, as evident in both the degree distribution and the cycle-basis length distribution. Collectively, these observations serve to elucidate the relationship between network structure and physical constraints in quantum-mechanical models. More generally, our approach underscores the utility of network science in understanding the dynamics and control of quantum systems.
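    The two topological signatures highlighted here, bipartiteness and the lengths of minimum-cycle-basis elements, are straightforward to compute for any transition network. The toy state space and transition rule below (single-electron tunnelling between two spin-resolved orbitals) are simplified assumptions, not the antidot models studied in the paper.

```python
# Build a toy transition graph whose nodes are occupation states of two
# spin-resolved orbitals and whose edges are single-electron tunnelling events,
# then check bipartiteness and the minimum cycle basis lengths with networkx.
import networkx as nx
from itertools import product

states = list(product((0, 1), repeat=4))       # (up1, down1, up2, down2) occupations

G = nx.Graph()
G.add_nodes_from(states)
for a in states:
    for b in states:
        if sum(abs(x - y) for x, y in zip(a, b)) == 1:   # add/remove one electron
            G.add_edge(a, b)

# Bipartite because every allowed transition flips the parity of the electron
# number, so all cycles have even length.
print("bipartite:", nx.is_bipartite(G))
print("minimum cycle basis lengths:", sorted(len(c) for c in nx.minimum_cycle_basis(G)))
```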

    Building blocks of self-sustained activity in a simple deterministic model of excitable neural networks

    Understanding the interplay of topology and dynamics of excitable neural networks is one of the major challenges in computational neuroscience. Here we employ a simple deterministic excitable model to explore how network-wide activation patterns are shaped by network architecture. Our observables are co-activation patterns, together with the average activity of the network and the periodicities in the excitation density. Our main results are: (1) the dependence of the correlation between the adjacency matrix and the instantaneous (zero time delay) co-activation matrix on global network features (clustering, modularity, scale-free degree distribution), (2) a correlation between the average activity and the number of small cycles in the graph, and (3) a microscopic understanding of the contributions of 3-node and 4-node cycles to sustained activity.
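    The number of small cycles referred to in result (2) can be obtained directly from powers of the adjacency matrix. The sketch below uses the standard trace formulas for 3-node and 4-node cycles; the test graph is an arbitrary assumption for illustration.

```python
# Count 3-node and 4-node cycles of an undirected graph from traces of powers of
# its adjacency matrix A (the test graph itself is an arbitrary choice).
import numpy as np
import networkx as nx

G = nx.watts_strogatz_graph(n=100, k=4, p=0.1, seed=3)
A = nx.to_numpy_array(G)
deg = A.sum(axis=1)
m = int(A.sum() / 2)                                   # number of edges

triangles = int(round(np.trace(A @ A @ A) / 6))
# closed 4-walks = 8*C4 + 2m + 2*sum_i k_i(k_i - 1); solve for the 4-cycle count C4
squares = int(round((np.trace(np.linalg.matrix_power(A, 4))
                     - 2 * m - 2 * np.sum(deg * (deg - 1))) / 8))

print(f"3-node cycles: {triangles}, 4-node cycles: {squares}")
```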

    Connectomics of viral tract-tracing connections of the nervous system of the laboratory rat

    In this dissertation, viral tract-tracing connectivity data of adult laboratory rats were, for the first time, methodically compiled in a meta-study and subsequently analyzed with the network analysis program "NeuroVIISAS". The evaluation of the network comprises a global, a local, and a differential connectome analysis. Finally, the dissertation is discussed critically, and an outlook on future developments in connectome research is given.

    Modelling Structure and Dynamics of Complex Systems: Applications to Neuronal Networks

    Complex systems theory is a mathematical framework for studying interconnected dynamical objects. Usually these objects themselves are by construction simple, and their temporal behavior in isolation is easily predictable, but the way they are interconnected into a network allows the emergence of complex, non-obvious phenomena. The emergent phenomena and their stability depend on the intrinsic dynamics of the objects, the types of interactions between them, and the connectivity patterns between them. This work focuses on the third aspect, i.e., the structure of the network, although the other two aspects are inherently present in the study as well. Tools from graph theory are applied to generate and analyze the network structure, and the effect of the structure on the network dynamics is analyzed by various methods. The objects of interest are biological and physical systems, and special attention is given to spiking neuronal networks, i.e., networks of nerve cells that communicate by transmitting and receiving action potentials.

    In this thesis, methods for modelling spiking neuronal networks are introduced. Different point-neuron models, including the integrate-and-fire model, are presented and applied to study the collective behaviour of the neurons. Special focus is placed on the emergence of network bursts, i.e., short periods of network-wide high-frequency firing. The occurrence of this behaviour is stable in certain regimes of connection strengths. Network bursting is found to be more frequent in locally connected networks than in non-local networks, such as randomly connected networks. To gain deeper insight, the aspects of structure that promote the bursting behaviour are analyzed by graph-theoretic means. The clustering coefficient and the maximal eigenvalue of the connectivity matrix are found to be the most important structural measures in this respect, each expressing its relevance under different structural conditions. A range of different network structures is applied to confirm this result. A special class of connectivity is studied in more detail, namely the connectivity patterns produced by simulations of growing and interconnecting neurons placed on a 2-dimensional array. Two simulators of growth are applied for this purpose.

    In addition, a more abstract class of dynamical systems, the Boolean networks, is considered. These systems were originally introduced as a model for genetic regulatory networks, but have since been used extensively for more general studies of complex systems. In this work, measures of information diversity and complexity are applied to several types of systems that obey Boolean dynamics. Random Boolean networks are shown to possess high temporal complexity prior to reaching an attractor. Similarly, high values of complexity are found at a transition stage of another dynamical system, the lattice gas automaton, which can also be formulated within the Boolean network framework. The temporal maximization of complexity near transitions between different dynamical regimes could therefore be a more general phenomenon in complex networks. The applicability of the information-theoretic framework is also confirmed in a study of bursting neuronal networks, where different types of networks are shown to be separable by the intrinsic information distance distributions they produce.

    The connectivities of the networks studied in this thesis are analyzed using graph-theoretic tools. Graph theory provides a mathematical framework for studying the structure of complex systems and how it affects the system dynamics. In studies of the nervous system, detailed maps of the connections between neurons have been collected, although such data are still scarce and laborious to obtain experimentally. This work shows which aspects of the structure are relevant for the dynamics of spontaneously bursting neuronal networks. Such information could be useful for directing experiments to measure only the relevant aspects of the structure instead of assessing the whole connectome. In addition, the framework presented in this thesis for generating network structure by simulating the growth of neurons could serve in simulations of the nervous system as a reliable alternative to importing an experimentally obtained connectome.
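    The two structural measures singled out above, the clustering coefficient and the maximal eigenvalue of the connectivity matrix, can be contrasted between a locally connected network and a randomly connected network of the same size and density, as in the minimal sketch below. The graph families and sizes are illustrative assumptions, not those used in the thesis.

```python
# Compare the average clustering coefficient and the spectral radius of the
# adjacency matrix for a purely local ring network and a density-matched random graph.
import numpy as np
import networkx as nx

n, k = 500, 10
local_net = nx.watts_strogatz_graph(n, k, p=0.0, seed=7)          # purely local connectivity
random_net = nx.gnm_random_graph(n, local_net.number_of_edges(), seed=7)

for name, G in (("local", local_net), ("random", random_net)):
    A = nx.to_numpy_array(G)
    lam_max = float(np.max(np.linalg.eigvalsh(A)))                # maximal eigenvalue
    print(f"{name:6s}  clustering = {nx.average_clustering(G):.3f}  "
          f"max eigenvalue = {lam_max:.3f}")
```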