Forcing neurocontrollers to exploit sensory symmetry through hard-wired modularity in the game of Cellz
Several attempts have been made in the past to construct encoding schemes that allow modularity to emerge in evolving systems, but success has been limited. We believe that in order to create successful and scalable encodings for emergent modularity, we first need to explore the benefits of different types of modularity by hard-wiring these into evolvable systems. In this paper we explore different ways of exploiting the sensory symmetry inherent in the agent in the simple game Cellz, by evolving symmetrically identical modules. It is concluded that significant increases in both speed of evolution and final fitness can be achieved relative to monolithic controllers. Furthermore, we show that a simple function approximation task that exhibits sensory symmetry can be used as a quick approximate measure of the utility of an encoding scheme for the more complex game-playing task.
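A minimal sketch of what "symmetrically identical modules" can look like in code: one shared weight matrix applied to every symmetric sensor group, so the controller is equivariant to the agent's sensory symmetry by construction. The interface below (group sizes, output count, tanh module) is a hypothetical stand-in, not the paper's actual Cellz setup.

```python
import numpy as np

class SymmetricController:
    """One evolvable module reused across k symmetric sensor groups."""

    def __init__(self, genome, n_inputs_per_group, n_outputs):
        # The genome encodes a single weight matrix; hard-wired weight
        # sharing makes the controller respect the sensory symmetry.
        self.W = np.asarray(genome).reshape(n_outputs, n_inputs_per_group)

    def act(self, sensor_groups):
        # Apply the identical module to each symmetric sector and sum,
        # so relabelling the sectors leaves the motor output unchanged.
        return sum(np.tanh(self.W @ g) for g in sensor_groups)

# Hypothetical setup: 4 symmetric sectors of 6 sensors, 2 motor outputs.
rng = np.random.default_rng(0)
ctrl = SymmetricController(rng.normal(size=12), n_inputs_per_group=6, n_outputs=2)
action = ctrl.act([rng.normal(size=6) for _ in range(4)])
```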
Consensus clustering in complex networks
The community structure of complex networks reveals both their organization
and hidden relationships among their constituents. Most community detection
methods currently available are not deterministic, and their results typically
depend on the specific random seeds, initial conditions and tie-break rules
adopted for their execution. Consensus clustering is used in data analysis to
generate stable results out of a set of partitions delivered by stochastic
methods. Here we show that consensus clustering can be combined with any
existing method in a self-consistent way, enhancing considerably both the
stability and the accuracy of the resulting partitions. This framework is also
particularly suitable to monitor the evolution of community structure in
temporal networks. An application of consensus clustering to a large citation
network of physics papers demonstrates its capability to keep track of the
birth, death and diversification of topics.Comment: 11 pages, 12 figures. Published in Scientific Report
Perspective: network-guided pattern formation of neural dynamics
The understanding of neural activity patterns is fundamentally linked to an
understanding of how the brain's network architecture shapes dynamical
processes. Established approaches rely mostly on deviations of a given network
from certain classes of random graphs. Hypotheses about the supposed role of
prominent topological features (for instance, the roles of modularity, network
motifs, or hierarchical network organization) are derived from these
deviations. An alternative strategy could be to study deviations of network
architectures from regular graphs (rings, lattices) and consider the
implications of such deviations for self-organized dynamic patterns on the
network. Following this strategy, we draw on the theory of spatiotemporal
pattern formation and propose a novel perspective for analyzing dynamics on
networks, by evaluating how the self-organized dynamics are confined by network
architecture to a small set of permissible collective states. In particular, we
discuss the role of prominent topological features of brain connectivity, such
as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the
notion of network-guided pattern formation with numerical simulations and
outline how it can facilitate the understanding of neural dynamics.
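One way to make "network-guided pattern formation" concrete is to run a simple excitable dynamic on a modular graph and watch which collective activity states the topology permits. The sketch below uses a discrete susceptible-excited-refractory (SER) update as a stand-in model; the graph, recovery rate, and seeding are illustrative assumptions, not the paper's simulations.

```python
import numpy as np
import networkx as nx

S, E, R = 0, 1, 2   # susceptible, excited, refractory

def ser_step(state, A, p_recover=0.3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    nxt = state.copy()
    excited_in = A @ (state == E)                 # excited neighbours per node
    nxt[(state == S) & (excited_in > 0)] = E      # S -> E on excited input
    nxt[state == E] = R                           # E -> R deterministically
    recovers = (state == R) & (rng.random(state.size) < p_recover)
    nxt[recovers] = S                             # R -> S stochastically
    return nxt

# Two dense modules joined by sparse bridges.
G = nx.planted_partition_graph(2, 50, 0.2, 0.01, seed=1)
A = nx.to_numpy_array(G)
rng = np.random.default_rng(1)
state = np.full(G.number_of_nodes(), S)
state[rng.choice(G.number_of_nodes(), 5, replace=False)] = E
activity = []
for t in range(100):
    state = ser_step(state, A, rng=rng)
    activity.append((state == E).mean())          # collective excitation level
```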
Dynamic reconfiguration of human brain networks during learning
Human learning is a complex phenomenon requiring flexibility to adapt
existing brain function and precision in selecting new neurophysiological
activities to drive desired behavior. These two attributes -- flexibility and
selection -- must operate over multiple temporal scales as performance of a
skill changes from being slow and challenging to being fast and automatic. Such
selective adaptability is naturally provided by modular structure, which plays
a critical role in evolution, development, and optimal network function. Using
functional connectivity measurements of brain activity acquired from initial
training through mastery of a simple motor skill, we explore the role of
modularity in human learning by identifying dynamic changes of modular
organization spanning multiple temporal scales. Our results indicate that
flexibility, which we measure by the allegiance of nodes to modules, in one
experimental session predicts the relative amount of learning in a future
session. We also develop a general statistical framework for the identification
of modular architectures in evolving systems, which is broadly applicable to
disciplines where network adaptability is crucial to the understanding of
system performance.
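A minimal sketch of the flexibility measure as described above: how often a node switches its module allegiance between consecutive time windows. This is not the authors' full multilayer-modularity pipeline, and the toy labels below are hypothetical.

```python
import numpy as np

def flexibility(assignments):
    """Per-node flexibility: the fraction of consecutive windows in which
    a node changes module.

    assignments: (T, N) array; assignments[t, i] is the module label of
    node i in window t."""
    A = np.asarray(assignments)
    changes = A[1:] != A[:-1]            # (T-1, N) boolean switch matrix
    return changes.mean(axis=0)          # per-node flexibility in [0, 1]

# Hypothetical toy data: 4 windows, 5 nodes.
labels = np.array([[0, 0, 1, 1, 2],
                   [0, 0, 1, 2, 2],
                   [0, 1, 1, 2, 2],
                   [0, 1, 1, 2, 0]])
print(flexibility(labels))   # nodes 0 and 2 never switch; the rest switch once
```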
Topological properties of hierarchical networks
Hierarchical networks are attracting renewed interest for modelling the
organization of a number of biological systems and for tackling the complexity
of statistical mechanical models beyond mean-field limitations. Here we
consider the Dyson hierarchical construction for ferromagnets, neural networks
and spin-glasses, recently analyzed from a statistical-mechanics perspective,
and we focus on the topological properties of the underlying structures. In
particular, we find that such structures are weighted graphs exhibiting a high
degree of clustering and of modularity, with a small spectral gap; the
robustness of these features with respect to link removal is also studied.
These outcomes are then discussed and shown to be fully consistent with the
statistical-mechanics scenario. Lastly, we look at these weighted graphs as
Markov chains and show that, in the limit of infinite size, the breakdown of
ergodicity for the stochastic process mirrors the emergence of metastability
in the corresponding statistical-mechanics analysis.
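A sketch of the Dyson hierarchical construction as a weighted graph, under the usual convention that two nodes first meeting at block level d receive a coupling contribution from every level l >= d; the decay exponent sigma, the size, and the spectral-gap check are illustrative choices.

```python
import numpy as np
import networkx as nx

def dyson_graph(k, sigma=0.75, J=1.0):
    """Weighted graph underlying the Dyson hierarchical model on 2**k nodes.

    For node indices i, j, the smallest block containing both sits at
    level d = bit_length(i ^ j); each level l >= d adds J * 4**(-sigma*l)."""
    n = 2 ** k
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = (i ^ j).bit_length()          # first common block level
            W[i, j] = W[j, i] = sum(J * 4.0 ** (-sigma * l)
                                    for l in range(d, k + 1))
    return nx.from_numpy_array(W)

G = dyson_graph(5)
L = nx.normalized_laplacian_matrix(G).toarray()
gap = np.sort(np.linalg.eigvalsh(L))[1]       # a small spectral gap is expected
print(f"clustering={nx.average_clustering(G, weight='weight'):.3f}, gap={gap:.4f}")
```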
The evolutionary origins of hierarchy
Hierarchical organization -- the recursive composition of sub-modules -- is
ubiquitous in biological networks, including neural, metabolic, ecological, and
genetic regulatory networks, and in human-made systems, such as large
organizations and the Internet. To date, most research on hierarchy in networks
has been limited to quantifying this property. However, an open, important
question in evolutionary biology is why hierarchical organization evolves in
the first place. It has recently been shown that modularity evolves because of
the presence of a cost for network connections. Here we investigate whether
such connection costs also tend to cause a hierarchical organization of such
modules. In computational simulations, we find that networks without a
connection cost do not evolve to be hierarchical, even when the task has a
hierarchical structure. However, with a connection cost, networks evolve to be
both modular and hierarchical, and these networks exhibit higher overall
performance and evolvability (i.e. faster adaptation to new environments).
Additional analyses confirm that hierarchy independently improves adaptability
after controlling for modularity. Overall, our results suggest that the same
force -- the cost of connections -- promotes the evolution of both hierarchy and
modularity, and that these properties are important drivers of network
performance and adaptability. In addition to shedding light on the emergence of
hierarchy across the many domains in which it appears, these findings will also
accelerate future research into evolving more complex, intelligent
computational brains in the fields of artificial intelligence and robotics.Comment: 32 page
Robust short-term memory without synaptic learning
Short-term memory in the brain cannot in general be explained the way
long-term memory can -- as a gradual modification of synaptic weights -- since
it takes place too quickly. Theories based on some form of cellular
bistability, however, do not seem able to account for the fact that noisy
neurons can collectively store information in a robust manner. We show how a
sufficiently clustered network of simple model neurons can be instantly induced
into metastable states capable of retaining information for a short time (a few
seconds). The mechanism is robust to different network topologies and kinds of
neural model. This could constitute a viable mechanism, available to the brain,
for sensory and/or short-term memory with no need for synaptic learning. Relevant
phenomena described by neurobiology and psychology, such as local
synchronization of synaptic inputs and power-law statistics of forgetting
avalanches, emerge naturally from this mechanism, and we suggest possible
experiments to test its viability in more biological settings.
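A minimal sketch of the proposed mechanism: a clustered network of noisy binary units with fixed weights and no learning rule anywhere. Transiently exciting one cluster leaves it in a self-sustaining metastable state while the rest of the network stays quiet; stronger noise shortens the retention time. The graph, threshold, and noise level below are illustrative assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
G = nx.relaxed_caveman_graph(10, 20, p=0.05, seed=2)   # 10 clusters of 20 units
A = nx.to_numpy_array(G)
deg = np.maximum(A.sum(axis=1), 1.0)
theta, T = 0.4, 0.1          # firing threshold and noise level (illustrative)

s = np.zeros(A.shape[0])
s[:20] = 1.0                 # transiently excite cluster 0; weights never change

trace = []
for t in range(300):
    drive = A @ s / deg - theta              # active-neighbour fraction vs threshold
    p_on = 1.0 / (1.0 + np.exp(-drive / T))  # noisy binary unit
    s = (rng.random(s.size) < p_on).astype(float)
    trace.append(s[:20].mean())              # activity retained by the cued cluster
```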