371 research outputs found
Efficient Physical Embedding of Topologically Complex Information Processing Networks in Brains and Computer Circuits
Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow the historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired, they did not strictly minimize wiring costs. Artificial and biological information processing systems may both evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
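Rent's rule states that the number of external connections T of a block of N elements follows a power law, T ≈ t·N^p, with the exponent p tied to the dimensionality of the interconnect topology. As a minimal illustration of how such an exponent is estimated (a synthetic 2-D mesh in pure Python, not the brain or VLSI data analyzed above), one can partition the network at several scales, count boundary-crossing edges per partition, and fit a log-log slope:

```python
import math

def grid_edges(n):
    """Undirected edges of an n x n mesh (4-neighbour grid)."""
    edges = []
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                edges.append(((r, c), (r, c + 1)))
            if r + 1 < n:
                edges.append(((r, c), (r + 1, c)))
    return edges

def rent_points(n, block_sizes):
    """For square b x b partitions, return (nodes, mean terminals) pairs.

    A 'terminal' of a block is an edge with exactly one endpoint inside,
    mimicking the pin count of a VLSI partition. The mean is taken over
    all blocks, so boundary blocks pull it slightly below the interior
    value."""
    edges = grid_edges(n)
    points = []
    for b in block_sizes:
        block = lambda rc: (rc[0] // b, rc[1] // b)
        terminals = {}
        for u, v in edges:
            if block(u) != block(v):
                terminals[block(u)] = terminals.get(block(u), 0) + 1
                terminals[block(v)] = terminals.get(block(v), 0) + 1
        t_avg = sum(terminals.values()) / len(terminals)
        points.append((b * b, t_avg))
    return points

def rent_exponent(points):
    """Least-squares slope of log T against log N."""
    xs = [math.log(nn) for nn, _ in points]
    ys = [math.log(tt) for _, tt in points]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

p = rent_exponent(rent_points(32, [2, 4, 8]))
print(round(p, 2))  # roughly 0.4-0.5 for a 2-D mesh (boundary blocks lower it a bit)
```

For a d-dimensional mesh the expected exponent is (d-1)/d, so a 2-D mesh sits near 0.5; exponents above the embedding dimension's prediction, as reported for brain networks, signal a topology more complex than its physical embedding.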
Handwritten digit recognition by bio-inspired hierarchical networks
The human brain processes information, showing learning and prediction
abilities, but the underlying neuronal mechanisms still remain unknown.
Recently, many studies have shown that neuronal networks are capable of
both generalization and association of sensory inputs. In this paper,
following a set of neurophysiological findings, we propose a learning
framework with strong biological plausibility that mimics prominent
functions of cortical circuitries. We developed the Inductive Conceptual
Network (ICN), a hierarchical bio-inspired network able to learn
invariant patterns by means of variable-order Markov models implemented
in its nodes. The outputs of the top-most node of the ICN hierarchy,
representing the highest input generalization, allow for automatic
classification of inputs. We found that the ICN clustered MNIST images
with an error of 5.73% and USPS images with an error of 12.56%.
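A variable-order Markov model of the kind the ICN places in its nodes can be sketched with simple context counting plus longest-match fallback. The following is an illustrative toy on symbol sequences rather than images, and not the paper's implementation:

```python
from collections import defaultdict

def train_vmm(sequence, max_order=3):
    """Count next-symbol frequencies for every context up to max_order.

    A minimal variable-order Markov model: all contexts of length
    0..max_order are stored, so prediction can back off to shorter ones."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(sequence)):
        for k in range(max_order + 1):
            if i - k < 0:
                break
            context = tuple(sequence[i - k:i])
            counts[context][sequence[i]] += 1
    return counts

def predict(counts, history, max_order=3):
    """Back off from the longest matching context to shorter ones,
    returning the most frequent next symbol seen after that context."""
    for k in range(min(max_order, len(history)), -1, -1):
        context = tuple(history[len(history) - k:])
        if context in counts:
            nxt = counts[context]
            return max(nxt, key=nxt.get)
    return None

seq = list("abcabcabd")
model = train_vmm(seq)
print(predict(model, list("ab")))  # → c  ("ab" was followed by "c" twice, "d" once)
```

The "variable order" is what distinguishes this from a fixed n-gram model: rare long contexts fall back gracefully to shorter, better-supported ones.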
The Wiring Economy Principle: Connectivity Determines Anatomy in the Human Brain
Minimization of the wiring cost of white matter fibers in the human brain appears to be an organizational principle. We investigate this aspect in the human brain using whole brain connectivity networks extracted from high resolution diffusion MRI data of 14 normal volunteers. We specifically address the question of whether brain anatomy determines its connectivity or vice versa. Unlike previous studies, we use weighted networks, where connections between cortical nodes are real-valued rather than binary off-on connections. In one set of analyses we found that the connectivity structure of the brain has near optimal wiring cost compared to random networks with the same number of edges, degree distribution and edge weight distribution. A specifically designed minimization routine could not find cheaper wiring without significantly degrading network performance. In another set of analyses we kept the observed brain network topology and connectivity but allowed nodes to move freely on a 3D manifold topologically identical to the brain. An efficient minimization routine was written to find the lowest wiring cost configuration. We found that, beginning from any random configuration, the nodes invariably arrange themselves in a configuration with a striking resemblance to the brain. This confirms the widely held but poorly tested claim that wiring economy is a driving principle of the brain. Intriguingly, our results also suggest that the brain mainly optimizes for the most desirable network connectivity, and the observed brain anatomy is merely a result of this optimization.
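The second analysis, fixing topology and letting node positions relax, can be illustrated with a toy routine: each unpinned node repeatedly moves to the weight-averaged position of its neighbours, which monotonically lowers the total weighted squared wire length. This is only a stand-in for the paper's minimization routine; the function names and the two-anchor example are invented for illustration:

```python
import random

def minimize_wiring(weights, pinned, dims=3, iters=200):
    """Place free nodes to reduce sum over edges of w_ij * |x_i - x_j|^2.

    weights: dict (i, j) -> connection weight (undirected);
    pinned: dict node -> fixed position (a list of dims coordinates).
    Each free node is set to the weight-averaged position of its
    neighbours, the closed-form minimizer given the neighbours' positions."""
    nodes, nbrs = set(), {}
    for (i, j), w in weights.items():
        nodes |= {i, j}
        nbrs.setdefault(i, []).append((j, w))
        nbrs.setdefault(j, []).append((i, w))
    pos = {n: pinned.get(n, [random.uniform(-1, 1) for _ in range(dims)])
           for n in nodes}
    for _ in range(iters):
        for n in nodes:
            if n in pinned:
                continue
            total = sum(w for _, w in nbrs[n])
            pos[n] = [sum(w * pos[m][d] for m, w in nbrs[n]) / total
                      for d in range(dims)]
    return pos

# A node wired equally to two pinned anchors settles at their midpoint.
w = {(0, 2): 1.0, (1, 2): 1.0}
p = minimize_wiring(w, pinned={0: [0, 0, 0], 1: [2, 0, 0]})
print([round(c, 3) for c in p[2]])  # → [1.0, 0.0, 0.0]
```

Without pinned anchors this relaxation would collapse all free nodes to a point; the paper instead constrains nodes to a brain-shaped 3D manifold, which plays the same role as the anchors here.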
Resolving structural variability in network models and the brain
Large-scale white matter pathways crisscrossing the cortex create a complex
pattern of connectivity that underlies human cognitive function. Generative
mechanisms for this architecture have been difficult to identify in part
because little is known about mechanistic drivers of structured networks. Here
we contrast network properties derived from diffusion spectrum imaging data of
the human brain with 13 synthetic network models chosen to probe the roles of
physical network embedding and temporal network growth. We characterize both
the empirical and synthetic networks using familiar diagnostics presented in
statistical form, as scatter plots and distributions, to reveal the full range
of variability of each measure across scales in the network. We focus on the
degree distribution, degree assortativity, hierarchy, topological Rentian
scaling, and topological fractal scaling---in addition to several summary
statistics, including the mean clustering coefficient, shortest path length,
and network diameter. The models are investigated in a progressive, branching
sequence, aimed at capturing different elements thought to be important in the
brain, and range from simple random and regular networks, to models that
incorporate specific growth rules and constraints. We find that synthetic
models that constrain the network nodes to be embedded in anatomical brain
regions tend to produce distributions that are similar to those extracted from
the brain. We also find that network models hardcoded to display one network
property do not in general also display a second, suggesting that multiple
neurobiological mechanisms might be at play in the development of human brain
network architecture. Together, the network models that we develop and employ
provide a potentially useful starting point for the statistical inference of
brain network structure from neuroimaging data. (Comment: 24 pages, 11 figures, 1 table, supplementary material)
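The summary statistics named above (mean clustering coefficient, shortest path length, network diameter) have simple definitions on unweighted graphs. A minimal sketch on a toy adjacency structure, not the diffusion spectrum imaging networks themselves:

```python
from collections import deque

def diagnostics(adj):
    """Mean clustering coefficient, mean shortest path length, and
    diameter for an unweighted, connected, undirected graph given as
    an adjacency dict: node -> set of neighbours."""
    # clustering: fraction of a node's neighbour pairs that are linked
    cc = []
    for n, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cc.append(0.0)
            continue
        links = sum(1 for i in nbrs for j in nbrs
                    if i < j and j in adj[i])
        cc.append(2 * links / (k * (k - 1)))
    # breadth-first search from every node gives all path lengths
    dists = []
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        dists += [d for n, d in dist.items() if n != src]
    return (sum(cc) / len(cc), sum(dists) / len(dists), max(dists))

# square 0-1-2-3-0 with one diagonal edge 0-2
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
stats = diagnostics(adj)
print(stats)
```

The abstract's point is that such single-number summaries hide variability, which is why the paper presents the full distributions behind them as well.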
Modular and Hierarchically Modular Organization of Brain Networks
Brain networks are increasingly understood as one of a large class of information processing systems that share important organizational principles, including the property of a modular community structure. A module is topologically defined as a subset of highly inter-connected nodes that are relatively sparsely connected to nodes in other modules. In brain networks, topological modules are often made up of anatomically neighboring and/or functionally related cortical regions, and inter-modular connections tend to be relatively long distance. Moreover, brain networks and many other complex systems demonstrate the property of hierarchical modularity, or modularity on several topological scales: within each module there will be a set of sub-modules, and within each sub-module a set of sub-sub-modules, etc. There are several general advantages to modular and hierarchically modular network organization, including greater robustness, adaptivity, and evolvability of network function. In this context, we review some of the mathematical concepts available for quantitative analysis of (hierarchical) modularity in brain networks, and we summarize some of the recent work investigating modularity of structural and functional brain networks derived from analysis of human neuroimaging data.
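The topological definition of a module can be made quantitative with Newman's modularity score, one of the standard measures reviewed in this literature. A minimal sketch on an invented toy graph (two triangles joined by one bridge edge):

```python
def modularity(edges, community):
    """Newman modularity: Q = sum over communities c of
    [ e_c / m - (d_c / 2m)^2 ], where m is the total edge count, e_c the
    number of intra-community edges, and d_c the total degree of c."""
    m = len(edges)
    deg, intra = {}, {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        if community[u] == community[v]:
            c = community[u]
            intra[c] = intra.get(c, 0) + 1
    deg_c = {}
    for n, k in deg.items():
        deg_c[community[n]] = deg_c.get(community[n], 0) + k
    return sum(intra.get(c, 0) / m - (d / (2 * m)) ** 2
               for c, d in deg_c.items())

# two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(edges, community)
print(round(q, 3))  # → 0.357
```

Q compares the intra-module edge fraction against the expectation under a degree-preserving random rewiring, so a positive value indicates denser-than-chance modules; hierarchical modularity is probed by applying the same score recursively within each module.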
Disparate connectivity for structural and functional networks is revealed when physical location of the connected nodes is considered
Macroscopic brain networks have been widely described with the manifold of metrics available using graph theory. However, most analyses do not incorporate information about the physical position of network nodes. Here, we provide a multimodal macroscopic network characterization while considering the physical positions of nodes. To do so, we examined anatomical and functional macroscopic brain networks in a sample of twenty healthy subjects. Anatomical networks are obtained with a graph-based tractography algorithm from diffusion-weighted magnetic resonance images (DW-MRI). Anatomical connections identified via DW-MRI provided probabilistic constraints for determining the connectedness of 90 different brain areas. Functional networks are derived from temporal linear correlations between blood-oxygenation-level-dependent signals derived from the same brain areas. Rentian scaling analysis, a technique adapted from very-large-scale integration circuit analyses, shows that functional networks are more random and less optimized than the anatomical networks. We also provide a new metric that allows quantifying the global connectivity arrangements for both structural and functional networks. While the functional networks show a higher contribution of inter-hemispheric connections, the anatomical networks' highest connections are identified in a dorsal-ventral arrangement. These results indicate that anatomical and functional networks present different connectivity organizations that can only be identified when the physical locations of the nodes are included in the analysis.
The Non-Random Brain: Efficiency, Economy, and Complex Dynamics
Modern anatomical tracing and imaging techniques are beginning to reveal the structural anatomy of neural circuits at small and large scales in unprecedented detail. When examined with analytic tools from graph theory and network science, neural connectivity exhibits highly non-random features, including high clustering and short path length, as well as modules and highly central hub nodes. These characteristic topological features of neural connections shape the non-random dynamic interactions that occur during spontaneous activity or in response to external stimulation. Disturbances of connectivity, and thus of neural dynamics, are thought to underlie a number of disease states of the brain, and some evidence suggests that degraded functional performance of brain networks may be the outcome of a process of randomization affecting their nodes and edges. This article provides a survey of the non-random structure of neural connectivity, primarily at the large scale of regions and pathways in the mammalian cerebral cortex. In addition, we will discuss how non-random connections can give rise to differentiated and complex patterns of dynamics and information flow. Finally, we will explore the idea that at least some disorders of the nervous system are associated with increased randomness of neural connections.
The evolutionary origins of hierarchy
Hierarchical organization -- the recursive composition of sub-modules -- is
ubiquitous in biological networks, including neural, metabolic, ecological, and
genetic regulatory networks, and in human-made systems, such as large
organizations and the Internet. To date, most research on hierarchy in networks
has been limited to quantifying this property. However, an open, important
question in evolutionary biology is why hierarchical organization evolves in
the first place. It has recently been shown that modularity evolves because of
the presence of a cost for network connections. Here we investigate whether
such connection costs also tend to cause a hierarchical organization of such
modules. In computational simulations, we find that networks without a
connection cost do not evolve to be hierarchical, even when the task has a
hierarchical structure. However, with a connection cost, networks evolve to be
both modular and hierarchical, and these networks exhibit higher overall
performance and evolvability (i.e. faster adaptation to new environments).
Additional analyses confirm that hierarchy independently improves adaptability
after controlling for modularity. Overall, our results suggest that the same
force--the cost of connections--promotes the evolution of both hierarchy and
modularity, and that these properties are important drivers of network
performance and adaptability. In addition to shedding light on the emergence of
hierarchy across the many domains in which it appears, these findings will also
accelerate future research into evolving more complex, intelligent
computational brains in the fields of artificial intelligence and robotics. (Comment: 32 pages)