
    Perspective: network-guided pattern formation of neural dynamics

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs, or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings, lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatiotemporal pattern formation and propose a novel perspective for analyzing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.
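
    A minimal sketch of this idea, assuming Kuramoto-type phase oscillators as the node dynamics and numpy/networkx as tooling (neither is prescribed by the perspective): the same local dynamics are run on a regular ring and on a slightly rewired ring, and the resulting global synchrony differs because the two architectures confine the self-organized pattern differently.

        import numpy as np
        import networkx as nx

        def order_parameter(G, steps=2000, dt=0.05, K=1.0, seed=0):
            # Euler integration of Kuramoto phase dynamics on graph G; returns the
            # global synchrony r = |<exp(i*theta)>| after the final step.
            rng = np.random.default_rng(seed)
            A = nx.to_numpy_array(G)
            deg = A.sum(axis=1)
            theta = rng.uniform(0, 2 * np.pi, G.number_of_nodes())
            omega = rng.normal(0.0, 0.1, G.number_of_nodes())  # heterogeneous natural frequencies
            for _ in range(steps):
                coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
                theta += dt * (omega + K * coupling / deg)
            return abs(np.exp(1j * theta).mean())

        ring = nx.watts_strogatz_graph(200, 6, 0.0, seed=1)      # regular ring lattice
        rewired = nx.watts_strogatz_graph(200, 6, 0.05, seed=1)  # same ring with a few shortcuts
        print("regular ring :", order_parameter(ring))
        print("rewired ring :", order_parameter(rewired))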

    Critical dynamics on a large human Open Connectome network

    Extended numerical simulations of threshold models have been performed on a human brain network with N=836733 connected nodes available from the Open Connectome project. While in the case of simple threshold models a sharp discontinuous phase transition without any critical dynamics arises, variable-threshold models exhibit extended power-law scaling regions. This is attributed to the fact that Griffiths effects, stemming from the topological/interaction heterogeneity of the network, can become relevant if the input sensitivity of nodes is equalized. I have studied the effects of link directedness, as well as the consequences of inhibitory connections. Non-universal power-law avalanche size and time distributions have been found, with exponents agreeing with the values obtained in electrode experiments on the human brain. The dynamical critical region occurs in an extended control-parameter space without the assumption of self-organized criticality.
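
    A minimal sketch of this kind of measurement, not the paper's code: a synchronous threshold model with heterogeneous ("variable") node thresholds is run on a sparse random graph standing in for the Open Connectome network, and avalanche sizes triggered by single-node seeds are collected. Graph size, threshold distribution, and the update rule are illustrative assumptions.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        G = nx.gnm_random_graph(2000, 8000, seed=0)       # small stand-in for the connectome graph
        tau = rng.uniform(0.1, 1.0, G.number_of_nodes())  # heterogeneous activation thresholds

        def avalanche_size(seed_node):
            # A quiescent node fires when the fraction of its neighbours active in the
            # previous step exceeds its own threshold; each node fires at most once.
            active = np.zeros(G.number_of_nodes(), dtype=bool)
            active[seed_node] = True
            fired = active.copy()
            while active.any():
                nxt = np.zeros_like(active)
                for v in G.nodes:
                    if not fired[v]:
                        nbrs = list(G.neighbors(v))
                        if nbrs and sum(active[u] for u in nbrs) / len(nbrs) > tau[v]:
                            nxt[v] = True
                fired |= nxt
                active = nxt
            return int(fired.sum())

        sizes = [avalanche_size(rng.integers(G.number_of_nodes())) for _ in range(300)]
        vals, counts = np.unique(sizes, return_counts=True)
        print(dict(zip(vals.tolist(), counts.tolist())))  # avalanche-size histogram

    The directed and inhibitory variants discussed in the paper would replace the undirected neighbour fraction with signed, direction-respecting inputs; only the measurement loop is shown here.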

    Griffiths phases and localization in hierarchical modular networks

    We study variants of the hierarchical modular network models suggested by Kaiser and Hilgetag [Frontiers in Neuroinformatics, 4 (2010) 8] to model functional brain connectivity, using extensive simulations and quenched mean-field theory (QMF), focusing on structures with a connection probability that decays exponentially with the level index. Such networks can be embedded in two-dimensional Euclidean space. We explore the dynamic behavior of the contact process (CP) and of threshold models on networks of this kind, including hierarchical trees. While in the small-world networks originally proposed to model brain connectivity the topological heterogeneities are not strong enough to induce deviations from mean-field behavior, we show that a Griffiths phase can emerge under reduced connection probabilities, approaching the percolation threshold. In this case the topological dimension of the networks is finite, and extended regions of bursty, power-law dynamics are observed. Localization in the steady state is also shown via QMF. We investigate the effects of link asymmetry and coupling disorder, and show that localization can occur even in small-world networks with high connectivity in the case of link disorder.
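
    A sketch of this kind of setup with invented parameters (module size, decay factor, activation rate) rather than the paper's: a hierarchical modular network whose inter-block connection probability decays exponentially with the level index, and a simple discrete-time contact process run on top of it.

        import numpy as np
        import networkx as nx

        def hierarchical_modular_network(levels=6, m=4, p0=1.0, decay=0.3, seed=0):
            rng = np.random.default_rng(seed)
            n = m * 2 ** levels
            G = nx.Graph()
            G.add_nodes_from(range(n))
            for s in range(0, n, m):                      # level 0: densely wired bottom modules
                G.add_edges_from((i, j) for i in range(s, s + m) for j in range(i + 1, s + m))
            for level in range(1, levels + 1):            # higher levels: exponentially fewer links
                block, p = m * 2 ** (level - 1), p0 * decay ** level
                for s in range(0, n, 2 * block):
                    pairs = [(i, j) for i in range(s, s + block)
                             for j in range(s + block, s + 2 * block) if rng.random() < p]
                    G.add_edges_from(pairs if pairs else [(s, s + block)])  # keep blocks linked
            return G

        def contact_process(G, lam=2.0, steps=1000, seed=0):
            # Discrete-time contact process: an active node deactivates with probability
            # 1/(1+lam); otherwise it activates one randomly chosen neighbour.
            rng = np.random.default_rng(seed)
            active = np.ones(G.number_of_nodes(), dtype=bool)  # start from full occupancy
            density = []
            for _ in range(steps):
                nxt = active.copy()
                for v in np.flatnonzero(active):
                    if rng.random() < 1.0 / (1.0 + lam):
                        nxt[v] = False
                    else:
                        nxt[rng.choice(list(G.neighbors(v)))] = True
                active = nxt
                density.append(active.mean())
                if not active.any():
                    break
            return density

        rho = contact_process(hierarchical_modular_network())
        print("steps survived:", len(rho), " final density:", rho[-1])

    Sweeping lam and examining how the density decays in time (power law versus exponential) is the kind of diagnostic used to detect extended Griffiths regions.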

    Hierarchy and Dynamics of Neural Networks

    Contains full text: 88364.pdf (publisher's version, Open Access).

    From Caenorhabditis elegans to the Human Connectome: A Specific Modular Organisation Increases Metabolic, Functional, and Developmental Efficiency

    The connectome, the entire connectivity of a neural system represented as a network, spans scales ranging from synaptic connections between individual neurons to fibre-tract connections between brain regions. Although the modularity these networks commonly show has been extensively studied, it is unclear whether their connection specificity can already be fully explained by modularity alone. To answer this question, we study two networks, the neuronal network of C. elegans and the fibre-tract network of human brains obtained through diffusion spectrum imaging (DSI). We compare them to their respective benchmark networks with varying modularities, which are generated by link swapping to have the desired modularity values but are otherwise maximally random. We find several network properties that are specific to the neural networks and cannot be fully explained by modularity alone. First, the clustering coefficient and the characteristic path length of the C. elegans and human connectomes are both higher than those of the benchmark networks with similar modularity. A high clustering coefficient indicates efficient local information distribution, while a high characteristic path length suggests reduced global integration. Second, the total wiring length is smaller than for the alternative configurations with similar modularity. This is due to a lower dispersion of connections, which means that each neuron in the C. elegans connectome or each region of interest (ROI) in the human connectome reaches fewer ganglia or cortical areas, respectively. Third, both neural networks show lower algorithmic entropy compared to the alternative arrangements. This implies that fewer rules are needed to encode the organisation of neural systems.
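
    A minimal sketch of the comparison logic only: the study uses the C. elegans and human DSI connectomes and benchmarks matched on modularity via link swapping, whereas here a small-world graph stands in for the empirical data and the benchmarks are plain degree-preserving rewirings.

        import networkx as nx
        from networkx.algorithms import community

        def diagnostics(G):
            part = community.greedy_modularity_communities(G)
            return {"clustering": nx.average_clustering(G),
                    "path_length": nx.average_shortest_path_length(G),
                    "modularity": community.modularity(G, part)}

        G = nx.connected_watts_strogatz_graph(200, 8, 0.1, seed=0)  # stand-in for a connectome
        print("observed :", diagnostics(G))

        for i in range(3):                                          # degree-preserving benchmarks
            R = G.copy()
            nx.double_edge_swap(R, nswap=4 * R.number_of_edges(), max_tries=10**5, seed=i)
            if nx.is_connected(R):           # characteristic path length needs a connected graph
                print("benchmark:", diagnostics(R))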

    Resolving Structure in Human Brain Organization: Identifying Mesoscale Organization in Weighted Network Representations

    Human brain anatomy and function display a combination of modular and hierarchical organization, suggesting the importance of both cohesive structures and variable resolutions in the facilitation of healthy cognitive processes. However, tools to simultaneously probe these features of brain architecture require further development. We propose and apply a set of methods to extract cohesive structures in network representations of brain connectivity using multi-resolution techniques. We employ a combination of soft thresholding, windowed thresholding, and resolution in community detection, which enables us to identify and isolate structures associated with different weights. One such mesoscale structure is bipartivity, which quantifies the extent to which the brain is divided into two partitions with high connectivity between partitions and low connectivity within partitions. A second, complementary mesoscale structure is modularity, which quantifies the extent to which the brain is divided into multiple communities with strong connectivity within each community and weak connectivity between communities. Our methods lead to multi-resolution curves of these network diagnostics over a range of spatial, geometric, and structural scales. For statistical comparison, we contrast our results with those obtained for several benchmark null models. Our work demonstrates that multi-resolution diagnostic curves capture complex organizational profiles in weighted graphs. We apply these methods to the identification of resolution-specific characteristics of healthy weighted graph architecture and altered connectivity profiles in psychiatric disease.
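
    A sketch of windowed thresholding on a toy weighted matrix (random numbers, not imaging data), assuming networkx for community detection: for each weight window a binary graph is built, and a modularity value plus an Estrada-type spectral bipartivity value are reported, tracing a diagnostic curve across structural scales.

        import numpy as np
        import networkx as nx
        from networkx.algorithms import community

        rng = np.random.default_rng(0)
        n = 60
        W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)  # toy weighted matrix

        def spectral_bipartivity(G):
            lam = np.linalg.eigvalsh(nx.to_numpy_array(G))
            return np.sum(np.cosh(lam)) / np.sum(np.exp(lam))  # equals 1 for bipartite graphs

        weights = np.sort(W[np.triu_indices(n, 1)])
        window = len(weights) // 4                         # keep a quarter of the edges per window
        for start in range(0, len(weights) - window, window):
            lo, hi = weights[start], weights[start + window - 1]
            A = np.where((W >= lo) & (W <= hi), 1, 0)      # binary graph from this weight window
            G = nx.from_numpy_array(A)
            Q = community.modularity(G, community.greedy_modularity_communities(G))
            print(f"window [{lo:.2f}, {hi:.2f}]  modularity={Q:.3f}  "
                  f"bipartivity={spectral_bipartivity(G):.3f}")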

    Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks

    Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
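
    A toy sketch of the EPANN loop described above, not any specific system from the review; the task, genome layout, and hyperparameters are invented. Evolution tunes the coefficients of a generalized Hebbian rule, and fitness is how well a small plastic network learns a random input-target mapping within its lifetime.

        import numpy as np

        rng = np.random.default_rng(0)
        N_IN, N_OUT, LIFETIME = 8, 2, 100

        def lifetime_fitness(genes):
            # genes = (eta, A, B, C, D): learning rate and Hebbian-rule coefficients.
            eta, A, B, C, D = genes
            W = np.zeros((N_OUT, N_IN))
            X = rng.standard_normal((LIFETIME, N_IN))
            T = np.sign(X @ rng.standard_normal((N_IN, N_OUT)))   # random task for this lifetime
            err = 0.0
            for x, t in zip(X, T):
                y = np.tanh(W @ x)
                err += np.mean((t - y) ** 2)
                # plastic update driven by pre- and (teacher-clamped) post-synaptic activity
                W += eta * (A * np.outer(t, x) + B * x + C * t[:, None] + D)
            return -err                                           # higher fitness = lower error

        pop = rng.standard_normal((30, 5)) * 0.1                  # population of plasticity genomes
        for gen in range(50):                                     # simple truncation-selection EA
            fit = np.array([lifetime_fitness(g) for g in pop])
            parents = pop[np.argsort(fit)[-10:]]                  # keep the 10 best genomes
            pop = np.repeat(parents, 3, axis=0) + rng.standard_normal((30, 5)) * 0.02
        print("best fitness in last evaluated generation:", fit.max())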