
    Hierarchical Features of Large-Scale Cortical Connectivity

    The analysis of complex networks has revealed patterns of organization in a variety of natural and artificial systems, including neuronal networks of the brain at multiple scales. In this paper, we describe a novel analysis of the large-scale connectivity between regions of the mammalian cerebral cortex, using a set of recently proposed hierarchical measurements. We examine previously identified functional clusters of brain regions in macaque visual cortex and cat cortex and find significant differences between such clusters in terms of several hierarchical measures, revealing differences in how these clusters are embedded in the overall cortical architecture. For example, the ventral cluster of visual cortex maintains structurally more segregated, less divergent connections than the dorsal cluster, which may point to functionally different roles for their constituent brain regions. Comment: 17 pages, 6 figures
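
    Hierarchical measurements of this kind characterize a node's neighborhoods at successive distances. As a rough illustration, the sketch below computes one such quantity, the number of nodes at exactly d steps from a given node, on a toy directed graph; the networkx helper and random graph are illustrative assumptions, not the paper's data or code.

```python
# A minimal sketch of one hierarchical measurement: the size of the
# "ring" of nodes at exactly d steps from a given node. The random
# directed graph below is a stand-in for a cortical connectivity matrix.
import networkx as nx

def ring_sizes(G, node):
    """Map each distance d to the number of nodes exactly d steps away."""
    lengths = nx.single_source_shortest_path_length(G, node)
    rings = {}
    for other, d in lengths.items():
        if other != node:
            rings[d] = rings.get(d, 0) + 1
    return rings

G = nx.gnp_random_graph(30, 0.15, seed=1, directed=True)
print(ring_sizes(G, 0))   # e.g. {1: 4, 2: 11, ...}
```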

    A topological approach to neural complexity

    Considerable efforts in modern statistical physics is devoted to the study of networked systems. One of the most important example of them is the brain, which creates and continuously develops complex networks of correlated dynamics. An important quantity which captures fundamental aspects of brain network organization is the neural complexity C(X)introduced by Tononi et al. This work addresses the dependence of this measure on the topological features of a network in the case of gaussian stationary process. Both anlytical and numerical results show that the degree of complexity has a clear and simple meaning from a topological point of view. Moreover the analytical result offers a straightforward algorithm to compute the complexity than the standard one.Comment: 6 pages, 4 figure
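
    For a Gaussian stationary process the neural complexity reduces to covariance determinants, since a k-dimensional Gaussian has entropy H = (1/2) ln((2*pi*e)^k det(Sigma)). Below is a minimal sketch, assuming the standard Tononi-Sporns-Edelman definition C(X) = sum_k [<H(subsets of size k)> - (k/n) H(X)] and estimating the subset averages by random sampling; the covariance matrix is synthetic.

```python
# A minimal sketch of neural complexity C(X) for a Gaussian process,
# C(X) = sum_k [ <H(subsets of size k)> - (k/n) H(X) ],
# with subset entropies averaged by random sampling. The covariance
# matrix below is synthetic, for illustration only.
import numpy as np

def gaussian_entropy(cov):
    k = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** k * np.linalg.det(cov))

def neural_complexity(cov, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    n = cov.shape[0]
    H_X = gaussian_entropy(cov)
    C = 0.0
    for k in range(1, n):                 # the k = n term vanishes
        subsets = [rng.choice(n, size=k, replace=False)
                   for _ in range(n_samples)]
        H_k = np.mean([gaussian_entropy(cov[np.ix_(s, s)]) for s in subsets])
        C += H_k - (k / n) * H_X
    return C

n = 8
cov = np.eye(n) + 0.2 * (np.ones((n, n)) - np.eye(n))  # weak correlations
print(neural_complexity(cov))
```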

    Mechanisms of Zero-Lag Synchronization in Cortical Motifs

    Zero-lag synchronization between distant cortical areas has been observed in a diversity of experimental data sets and between many different regions of the brain. Several computational mechanisms have been proposed to account for such isochronous synchronization in the presence of long conduction delays; of these, the phenomenon of "dynamical relaying" - a mechanism that relies on a specific network motif - has proven to be the most robust with respect to parameter mismatch and system noise. Surprisingly, and contrary to a common belief in the community, the common driving motif is an unreliable means of establishing zero-lag synchrony. Although dynamical relaying has been validated in empirical and computational studies, a deeper account of its dynamical mechanisms, and a comparison to dynamics on other motifs, has been lacking. By systematically comparing synchronization on a variety of small motifs, we establish that the presence of a single reciprocally connected pair - a "resonance pair" - plays a crucial role in distinguishing those motifs that foster zero-lag synchrony in the presence of conduction delays (such as dynamical relaying) from those that do not (such as the common driving triad). Remarkably, minor structural changes to the common driving motif that incorporate a reciprocal pair recover robust zero-lag synchrony. The findings are observed in computational models of spiking neurons, populations of spiking neurons, and neural mass models, and arise whether the oscillatory systems are periodic, chaotic, noise-free, or driven by stochastic inputs. The influence of the resonance pair is also robust to parameter mismatch and asymmetrical time delays amongst the elements of the motif. We call this manner of facilitating zero-lag synchrony resonance-induced synchronization, outline the conditions for its occurrence, and propose that it may be a general mechanism to promote zero-lag synchrony in the brain. Comment: 41 pages, 12 figures, and 11 supplementary figures
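
    The relay mechanism can be illustrated with delayed phase oscillators. The sketch below is a minimal toy version, assuming Kuramoto-type dynamics rather than the spiking and neural mass models used in the paper: two outer oscillators couple reciprocally to a central relay with a conduction delay on every connection, and their mutual phase difference settles near zero even though each remains lagged relative to the relay. All parameter values are illustrative.

```python
# A minimal sketch of zero-lag synchrony via dynamical relaying: two
# outer Kuramoto oscillators reciprocally coupled through a relay node,
# with a conduction delay on every connection.
import numpy as np

dt, tau, K, T = 0.01, 0.2, 1.0, 6000     # time step, delay, coupling, steps
d = int(tau / dt)                        # delay measured in steps
omega = np.array([10.0, 10.3, 10.0])     # natural frequencies (rad/s)
A = np.array([[0, 1, 0],                 # reciprocal chain 0 <-> 1 <-> 2,
              [1, 0, 1],                 # node 1 is the relay
              [0, 1, 0]])

rng = np.random.default_rng(0)
theta = np.zeros((T, 3))
theta[:d + 1] = rng.uniform(0, 2 * np.pi, 3)      # constant random history
for t in range(d, T - 1):
    delayed = theta[t - d]                        # phases one delay ago
    coupling = (A * np.sin(delayed[None, :] - theta[t][:, None])).sum(axis=1)
    theta[t + 1] = theta[t] + dt * (omega + K * coupling)

# Outer pair 0 and 2 approach zero phase difference, while each stays
# phase-lagged relative to the relay node 1.
diff = np.angle(np.exp(1j * (theta[-1000:, 0] - theta[-1000:, 2])))
print("mean |outer-pair phase difference|:", np.abs(diff).mean())
```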

    Zipf's Law Leads to Heaps' Law: Analyzing Their Relation in Finite-Size Systems

    Background: Zipf's law and Heaps' law are observed in disparate complex systems. Of particular interest, the two laws often appear together. Many theoretical models and analyses have been proposed to understand their co-occurrence in real systems, but a clear picture of their relation is still lacking. Methodology/Principal Findings: We show that Heaps' law can be considered a derivative phenomenon if the system obeys Zipf's law. Furthermore, we refine the known approximate solution for the Heaps' exponent given the Zipf's exponent. We show that the approximate solution is in fact an asymptotic solution for infinite systems, while in finite-size systems the Heaps' exponent is sensitive to the system size. Extensive empirical analysis of tens of disparate systems demonstrates that our refined results better capture the relation between the Zipf's and Heaps' exponents. Conclusions/Significance: The present analysis provides a clear picture of the relation between Zipf's law and Heaps' law without the help of any specific stochastic model, namely that Heaps' law is indeed a derivative phenomenon of Zipf's law. The presented numerical method gives a considerably better estimate of the Heaps' exponent given the Zipf's exponent and the system size. Our analysis provides insights into real complex systems; for example, it naturally yields a better explanation of the accelerated growth of scale-free networks. Comment: 15 pages, 6 figures, 1 table
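
    The connection can be checked numerically: draw word tokens i.i.d. from a Zipf rank distribution and watch the vocabulary grow. A minimal sketch follows (the vocabulary size, token count, and fitting window are arbitrary choices, not the paper's); for Zipf exponent alpha > 1 the fitted Heaps' exponent should approach the asymptotic value 1/alpha as the system grows, with visible finite-size deviations for small systems.

```python
# A minimal sketch of the Zipf -> Heaps connection: draw tokens i.i.d.
# from a Zipf rank distribution p(r) ~ r^(-alpha) over a finite
# vocabulary, then fit the Heaps' exponent from vocabulary growth N(t).
import numpy as np

def heaps_exponent(alpha=1.5, vocab=100_000, tokens=200_000, seed=0):
    rng = np.random.default_rng(seed)
    ranks = np.arange(1, vocab + 1)
    p = ranks ** -alpha
    p /= p.sum()
    stream = rng.choice(ranks, size=tokens, p=p)
    # N(t): number of distinct words among the first t tokens.
    seen = np.zeros(vocab + 1, dtype=bool)
    N = np.empty(tokens)
    count = 0
    for t, w in enumerate(stream):
        if not seen[w]:
            seen[w] = True
            count += 1
        N[t] = count
    t = np.arange(1, tokens + 1)
    # Log-log slope over the growth region, skipping the earliest tokens.
    lam, _ = np.polyfit(np.log(t[100:]), np.log(N[100:]), 1)
    return lam

print(heaps_exponent(alpha=1.5))   # roughly 1/1.5 ~ 0.67 for large systems
```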

    Critical brain networks

    Highly correlated brain dynamics produces synchronized states with no behavioral value, while weakly correlated dynamics prevents information flow. We discuss the idea, put forward by Per Bak, that the working brain stays in an intermediate (critical) regime characterized by power-law correlations. Comment: Contribution to the Niels Bohr Summer Institute on Complexity and Criticality (2003); to appear in a Per Bak Memorial Issue of Physica A
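
    A standard toy illustration of this intermediate regime (not a model from the paper itself) is a branching process with branching ratio sigma: activity dies out quickly for sigma < 1, blows up for sigma > 1, and only at the critical value sigma = 1 do avalanche sizes become power-law distributed.

```python
# A minimal sketch of the critical regime as a branching process: each
# active unit spawns a Poisson number of descendants with mean sigma.
# Only sigma = 1 yields power-law distributed avalanche sizes.
import numpy as np

def avalanche_size(sigma, rng, cap=10_000):
    active, size = 1, 1
    while active and size < cap:
        active = rng.poisson(sigma * active)
        size += active
    return size

rng = np.random.default_rng(0)
for sigma in (0.8, 1.0, 1.2):
    sizes = [avalanche_size(sigma, rng) for _ in range(5000)]
    print(f"sigma={sigma}: mean size={np.mean(sizes):.1f}, max={max(sizes)}")
```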

    Fractal analyses of networks of integrate-and-fire stochastic spiking neurons

    Although there is increasing evidence of criticality in the brain, the processes that guide neuronal networks to reach or maintain criticality remain unclear. The present research examines the role of neuronal gain plasticity in time-series of simulated neuronal networks composed of integrate-and-fire stochastic spiking neurons, and the utility of fractal methods in assessing network criticality. Simulated time-series were derived from a network model of fully connected discrete-time stochastic excitable neurons. Monofractal and multifractal analyses were applied to neuronal gain time-series. Fractal scaling was greatest in networks with a mid-range of neuronal plasticity, versus extremely high or low levels of plasticity. Peak fractal scaling corresponded closely to additional indices of criticality, including average branching ratio. Networks exhibited multifractal structure, or multiple scaling relationships. Multifractal spectra around peak criticality exhibited elongated right tails, suggesting that the fractal structure is relatively insensitive to high-amplitude local fluctuations. Networks near critical states exhibited mid-range multifractal spectra width and tail length, which is consistent with literature suggesting that networks poised at quasi-critical states must be stable enough to maintain organization but unstable enough to be adaptable. Lastly, fractal analyses may offer additional information about critical state dynamics of networks by indicating scales of influence as networks approach critical states. Comment: 11 pages, 3 subfigures divided into 2 figures
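
    Monofractal scaling of this kind is typically assessed with detrended fluctuation analysis (DFA). Below is a minimal sketch, assuming first-order DFA and a placeholder white-noise series rather than the simulated gain time-series: the scaling exponent is the log-log slope of the fluctuation function against window size.

```python
# A minimal sketch of detrended fluctuation analysis (DFA-1): integrate
# the signal, detrend it linearly within windows of size s, and measure
# how the residual fluctuation F(s) scales with s.
import numpy as np

def dfa(x, scales):
    """Return the fluctuation F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        # Detrend each window with a least-squares line.
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

x = np.random.default_rng(0).standard_normal(4096)  # placeholder signal
scales = np.array([16, 32, 64, 128, 256])
alpha, _ = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)
print("DFA exponent:", alpha)              # ~0.5 for white noise
```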

    The Non-Random Brain: Efficiency, Economy, and Complex Dynamics

    Modern anatomical tracing and imaging techniques are beginning to reveal the structural anatomy of neural circuits at small and large scales in unprecedented detail. When examined with analytic tools from graph theory and network science, neural connectivity exhibits highly non-random features, including high clustering and short path length, as well as modules and highly central hub nodes. These characteristic topological features of neural connections shape non-random dynamic interactions that occur during spontaneous activity or in response to external stimulation. Disturbances of connectivity and thus of neural dynamics are thought to underlie a number of disease states of the brain, and some evidence suggests that degraded functional performance of brain networks may be the outcome of a process of randomization affecting their nodes and edges. This article provides a survey of the non-random structure of neural connectivity, primarily at the large scale of regions and pathways in the mammalian cerebral cortex. In addition, we will discuss how non-random connections can give rise to differentiated and complex patterns of dynamics and information flow. Finally, we will explore the idea that at least some disorders of the nervous system are associated with increased randomness of neural connections.
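
    Non-randomness of this kind is usually quantified against a degree-preserving null model. A minimal sketch, assuming networkx and a Watts-Strogatz graph as a stand-in for real connectivity data: compare clustering and characteristic path length before and after degree-preserving rewiring.

```python
# A minimal sketch of a non-randomness check: compare clustering and
# path length of a network against a degree-preserving randomized
# version of itself. The Watts-Strogatz graph is a placeholder.
import networkx as nx

G = nx.watts_strogatz_graph(100, 6, 0.1, seed=0)
R = G.copy()
nx.connected_double_edge_swap(R, nswap=300, seed=0)  # keeps degrees and connectivity

for name, H in (("original", G), ("randomized", R)):
    print(f"{name:>10}  clustering={nx.average_clustering(H):.3f}  "
          f"path length={nx.average_shortest_path_length(H):.3f}")
```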

    Performance of networks of artificial neurons: The role of clustering

    The performance of the Hopfield neural network model is studied numerically on various complex networks, such as the Watts-Strogatz network, the Barabási-Albert network, and the neuronal network of C. elegans. Using a systematic way of controlling the clustering coefficient, with the degree of each neuron kept unchanged, we find that networks with lower clustering exhibit much better performance. The results are discussed from the practical viewpoint of applications, and biological implications are also suggested. Comment: 4 pages, to appear in PRE as a Rapid Communication
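
    The setup can be sketched by restricting Hebbian couplings to the edges of a given graph and scoring retrieval by the overlap with a stored pattern. Below is a minimal sketch, assuming a Watts-Strogatz topology and asynchronous sign updates; the paper's clustering-control procedure is not reproduced, and the pattern count and noise level are illustrative choices.

```python
# A minimal sketch of a Hopfield model on a graph: Hebbian weights are
# kept only on existing edges, and performance is the overlap of the
# relaxed state with the stored pattern.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, P = 200, 2
A = nx.to_numpy_array(nx.watts_strogatz_graph(N, 20, 0.5, seed=0))
xi = rng.choice([-1, 1], size=(P, N))             # stored patterns
W = A * (xi.T @ xi) / N                           # Hebbian rule on edges only
np.fill_diagonal(W, 0)

s = xi[0] * np.where(rng.random(N) < 0.2, -1, 1)  # corrupt 20% of the bits
for _ in range(10):                               # asynchronous updates
    for i in rng.permutation(N):
        s[i] = 1 if W[i] @ s >= 0 else -1

print("overlap with stored pattern:", (s @ xi[0]) / N)  # 1.0 = perfect recall
```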