2,782 research outputs found
Development of modularity in the neural activity of children's brains
We study how modularity of the human brain changes as children develop into
adults. Theory suggests that modularity can enhance the response function of a
networked system subject to changing external stimuli. Thus, greater cognitive
performance might be achieved for more modular neural activity, and modularity
is likely to increase as children develop. The value of modularity calculated
from fMRI data is observed to increase during childhood development and peak in
young adulthood. Head motion is deconvolved from the fMRI data, and it is shown
that the dependence of modularity on age is independent of the magnitude of
head motion. A model is presented to illustrate how modularity can provide
greater cognitive performance at short times, i.e., task switching. A fitness
function is extracted from the model. Quasispecies theory is used to predict
how the average modularity evolves with age, illustrating the increase of
modularity during development from children to adults that arises from
selection for rapid cognitive function in young adults. Experiments exploring
the effect of modularity on cognitive performance are suggested. Modularity may
be a potential biomarker for injury, rehabilitation, or disease.
Comment: 29 pages, 11 figures
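As a minimal illustration of the quantity this abstract tracks with age, the sketch below computes Newman's modularity Q for a toy synthetic network using networkx. The planted-partition graph and the greedy community detection are placeholders, not the paper's fMRI network-construction pipeline.

```python
# Illustrative sketch only: Newman's modularity Q on a toy modular graph.
# The fMRI preprocessing and network construction from the paper are not
# shown; the random two-block graph below is a stand-in.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy "brain network": two dense 20-node blocks, sparsely interconnected.
G = nx.planted_partition_graph(2, 20, 0.5, 0.02, seed=1)

communities = greedy_modularity_communities(G)   # greedy (CNM) detection
Q = modularity(G, communities)                   # Newman's Q
print(f"detected {len(communities)} modules, Q = {Q:.3f}")
```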
Perspective: network-guided pattern formation of neural dynamics
The understanding of neural activity patterns is fundamentally linked to an
understanding of how the brain's network architecture shapes dynamical
processes. Established approaches rely mostly on deviations of a given network
from certain classes of random graphs. Hypotheses about the supposed role of
prominent topological features (for instance, the roles of modularity, network
motifs, or hierarchical network organization) are derived from these
deviations. An alternative strategy could be to study deviations of network
architectures from regular graphs (rings, lattices) and consider the
implications of such deviations for self-organized dynamic patterns on the
network. Following this strategy, we draw on the theory of spatiotemporal
pattern formation and propose a novel perspective for analyzing dynamics on
networks, by evaluating how the self-organized dynamics are confined by network
architecture to a small set of permissible collective states. In particular, we
discuss the role of prominent topological features of brain connectivity, such
as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the
notion of network-guided pattern formation with numerical simulations and
outline how it can facilitate the understanding of neural dynamics.
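A minimal numerical sketch in this spirit (not the authors' simulations; the model and all parameters are assumed for illustration): Kuramoto phase oscillators are run on a regular ring lattice and on a modular graph, showing how the architecture shapes the collective state the dynamics settle into.

```python
# Hedged sketch: Kuramoto oscillators on two architectures, to illustrate
# how network structure guides self-organized collective dynamics.
import numpy as np
import networkx as nx

def simulate(G, K=1.5, steps=2000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    A = nx.to_numpy_array(G)
    theta = rng.uniform(0, 2 * np.pi, len(G))     # initial phases
    omega = rng.normal(0, 0.1, len(G))            # natural frequencies
    for _ in range(steps):                        # Euler integration
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + K * coupling / len(G))
    return np.abs(np.exp(1j * theta).mean())      # global order parameter r

ring = nx.watts_strogatz_graph(60, 4, p=0.0)              # regular ring
modular = nx.planted_partition_graph(3, 20, 0.4, 0.02, seed=1)
print("ring r =", round(simulate(ring), 2),
      "| modular r =", round(simulate(modular), 2))
```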
Optimisation in “Self-modelling” Complex Adaptive Systems
When a dynamical system with multiple point attractors is released from an arbitrary initial condition it will relax into a configuration that locally resolves the constraints or opposing forces between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimises these constraints by this method is unlikely, or may take many attempts. Here we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower energy configurations, more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimise total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely “recalling” low energy states that have been previously visited but “predicting” their location by generalising over local attractor states that have already been visited. This “self-modelling” framework, i.e. a system that augments its behaviour with an associative memory of its own attractors, helps us better understand the conditions under which a simple locally-mediated mechanism of self-organisation can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph colouring and distributed task allocation problems.
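A minimal sketch of the mechanism described above, assuming a Hopfield-style network of ±1 units and illustrative parameters: the system is repeatedly relaxed from random states, and a small Hebbian update is applied to each attractor reached, while energy is judged against the original, unmodified constraints.

```python
# Hedged sketch: repeated relaxation plus Hebbian reinforcement of attractors.
import numpy as np

rng = np.random.default_rng(0)
N = 50
W0 = rng.normal(0, 1, (N, N)); W0 = (W0 + W0.T) / 2   # original constraints
np.fill_diagonal(W0, 0)
W = W0.copy()                                          # weights that learn

def relax(weights, s, sweeps=30):
    for _ in range(sweeps):                            # asynchronous updates
        for i in rng.permutation(N):
            s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

energy = lambda s: -0.5 * s @ W0 @ s                   # scored on ORIGINAL constraints

alpha = 0.001                                          # Hebbian learning rate
for epoch in range(200):
    s = relax(W, rng.choice([-1, 1], N))               # relax to an attractor
    W += alpha * np.outer(s, s)                        # reinforce that attractor
    np.fill_diagonal(W, 0)
    if epoch % 50 == 0:
        print(epoch, round(energy(s), 1))              # energy tends to drop
```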
Improving Scalability of Evolutionary Robotics with Reformulation
Creating systems that can operate autonomously in complex environments is a challenge for contemporary engineering techniques. Automatic design methods offer a promising alternative, but so far they have not been able to produce agents that outperform manual designs. One such method is evolutionary robotics. It has been shown to be a robust and versatile tool for designing robots to perform simple tasks, but more challenging tasks at present remain out of reach of the method.
In this thesis I discuss and attack some problems underlying the scalability issues associated with the method. I present a new technique for evolving modular networks. I show that the performance of modularity-biased evolution depends heavily on the morphology of the robot’s body and present a new method for co-evolving morphology and modular control.
To be able to reason about the new technique I develop the reformulation framework: a general way to describe and reason about metaoptimization approaches. Within this framework I describe a new heuristic for developing metaoptimization approaches that is based on the technique for co-evolving morphology and modularity. I validate the framework by applying it to a practical task of zero-g autonomous assembly of structures with a fleet of small robots.
Although this work focuses on evolutionary robotics, the methods and approaches developed within it can be applied to optimization problems in any domain.
Bifurcations and synchronization using an integrated programmable chaotic circuit
This paper presents a CMOS chip which can act as an autonomous stand-alone unit to generate different real-time chaotic behaviors by changing a few external bias currents. In particular, by changing one of these bias currents, the chip provides different examples of a period-doubling route to chaos. We present experimental orbits and attractors, time waveforms and power spectra measured from the chip. By using two chip units, synchronization experiments can also be carried out in real time. Measurements are presented for the following synchronization schemes: linear coupling, drive-response and inverse system. Experimental statistical characterizations associated with these schemes are also presented. We also outline the possible use of the chip for chaotic encryption of audio signals. Finally, for completeness, the paper also includes a brief description of the chip design procedure and its internal circuitry.
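As a hedged software analogue (the chip implements continuous-time circuitry, not this map), the sketch below sweeps one control parameter of the logistic map and reproduces the same period-doubling route to chaos that the chip exhibits as a bias current is varied.

```python
# Illustrative analogue: period-doubling route to chaos in the logistic map.
for r in (2.8, 3.2, 3.5, 3.9):            # control parameter sweep
    x = 0.37
    for _ in range(500):                   # discard the transient
        x = r * x * (1 - x)
    orbit = []
    for _ in range(8):                     # sample the attractor
        x = r * x * (1 - x)
        orbit.append(round(x, 3))
    print(f"r = {r}: {orbit}")             # fixed point -> period 2 -> 4 -> chaos
```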
Effective Task Transfer Through Indirect Encoding
An important goal for machine learning is to transfer knowledge between tasks. For example, learning to play RoboCup Keepaway should contribute to learning the full game of RoboCup soccer. Often approaches to task transfer focus on transforming the original representation to fit the new task. Such representational transformations are necessary because the target task often requires new state information that was not included in the original representation. In RoboCup Keepaway, changing from the 3 vs. 2 variant of the task to 4 vs. 3 adds state information for each of the new players. In contrast, this dissertation explores the idea that transfer is most effective if the representation is designed to be the same even across different tasks. To this end, (1) the bird’s eye view (BEV) representation is introduced, which can represent different tasks on the same two-dimensional map. Because the BEV represents state information associated with positions instead of objects, it can be scaled to more objects without manipulation. In this way, both the 3 vs. 2 and 4 vs. 3 Keepaway tasks can be represented on the same BEV, which is (2) demonstrated in this dissertation. Yet a challenge for such a representation is that a raw two-dimensional map is high-dimensional and unstructured. This dissertation demonstrates how this problem is addressed naturally by the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach. HyperNEAT evolves an indirect encoding, which compresses the representation by exploiting its geometry. The dissertation then explores further exploiting the power of such encoding, beginning by (3) enhancing the configuration of the BEV with a focus on modularity. The need for further nonlinearity is then (4) investigated through the addition of hidden nodes. Furthermore, (5) the size of the BEV can be manipulated because it is indirectly encoded. Thus the resolution of the BEV, which is dictated by its size, is increased in precision and culminates in a HyperNEAT extension that is expressed at effectively infinite resolution. Additionally, scaling to higher resolutions through gradually increasing the size of the BEV is explored. Finally, (6) the ambitious problem of scaling from the Keepaway task to the Half-field Offense task is investigated with the BEV. Overall, this dissertation demonstrates that advanced representations in conjunction with indirect encoding can contribute to scaling learning techniques to more challenging tasks, such as the Half-field Offense RoboCup soccer domain.
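A hedged sketch of the core idea behind HyperNEAT-style indirect encoding: connection weights are not stored directly but are produced by querying a compact function of substrate geometry, so the same "genome" yields a substrate at any resolution. The function below is a fixed stand-in for an evolved CPPN; its form and all numbers are illustrative assumptions.

```python
# Illustrative indirect encoding: weights as a function of node coordinates.
import numpy as np

def cppn(x1, y1, x2, y2):
    # Stand-in for an evolved CPPN: composes simple functions of geometry.
    d = np.hypot(x2 - x1, y2 - y1)
    return np.sin(3.0 * d) * np.exp(-d)      # weight from relative geometry

def substrate_weights(resolution):
    coords = np.linspace(-1, 1, resolution)  # 1-D input/output layers for brevity
    return np.array([[cppn(a, 0.0, b, 1.0) for b in coords] for a in coords])

W_low = substrate_weights(5)                 # coarse substrate
W_high = substrate_weights(50)               # same genome, much finer substrate
print(W_low.shape, W_high.shape)
```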
A mechanistic model of connector hubs, modularity, and cognition
The human brain network is modular, composed of communities of tightly
interconnected nodes. This network contains local hubs, which have many
connections within their own communities, and connector hubs, which have
connections diversely distributed across communities. A mechanistic
understanding of these hubs and how they support cognition has not been
demonstrated. Here, we leveraged individual differences in hub connectivity and
cognition. We show that a model of hub connectivity accurately predicts the
cognitive performance of 476 individuals in four distinct tasks. Moreover,
there is a general optimal network structure for cognitive
performance--individuals with diversely connected hubs and consequent modular
brain networks exhibit increased cognitive performance, regardless of the task.
Critically, we find evidence consistent with a mechanistic model in which
connector hubs tune the connectivity of their neighbors to be more modular
while allowing for task appropriate information integration across communities,
which increases global modularity and cognitive performance.
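One standard way to quantify how "diversely distributed" a node's connections are across communities is the participation coefficient, P_i = 1 - Σ_s (k_is / k_i)², where k_is is node i's degree into community s; connector hubs score high and local hubs score low. The sketch below computes it on a toy graph and is an illustration, not the paper's exact pipeline.

```python
# Hedged sketch: participation coefficient of the highest-degree nodes.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.planted_partition_graph(4, 15, 0.4, 0.05, seed=2)
comms = greedy_modularity_communities(G)
label = {n: c for c, nodes in enumerate(comms) for n in nodes}

def participation(G, i):
    k = G.degree(i)
    if k == 0:
        return 0.0
    counts = {}
    for j in G.neighbors(i):                      # tally neighbors per community
        counts[label[j]] = counts.get(label[j], 0) + 1
    return 1.0 - sum((ks / k) ** 2 for ks in counts.values())

hubs = sorted(G.nodes, key=G.degree, reverse=True)[:5]
for i in hubs:
    print(f"node {i}: degree {G.degree(i)}, participation {participation(G, i):.2f}")
```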
A unified framework for machine learning collective variables for enhanced sampling simulations: mlcolvar
Identifying a reduced set of collective variables is critical for
understanding atomistic simulations and accelerating them through enhanced
sampling techniques. Recently, several methods have been proposed to learn
these variables directly from atomistic data. Depending on the type of data
available, the learning process can be framed as dimensionality reduction,
classification of metastable states or identification of slow modes. Here we
present mlcolvar, a Python library that simplifies the construction
of these variables and their use in the context of enhanced sampling through a
contributed interface to the PLUMED software. The library is organized
modularly to facilitate the extension and cross-contamination of these
methodologies. In this spirit, we developed a general multi-task learning
framework in which multiple objective functions and data from different
simulations can be combined to improve the collective variables. The library's
versatility is demonstrated through simple examples that are prototypical of
realistic scenarios.
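A generic sketch of the kind of multi-task objective the abstract describes, written in plain PyTorch; this is explicitly not the mlcolvar API, and the architecture, loss weights, and data are assumptions. A shared encoder maps atomic descriptors to candidate collective variables, and two objectives (dimensionality reduction via reconstruction, and classification of metastable states) are combined into one loss.

```python
# Hedged sketch of a multi-task loss over a shared CV encoder (not mlcolvar's API).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(30, 16), nn.Tanh(), nn.Linear(16, 2))   # 2 CVs
decoder = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 30))   # reconstruction head
classifier = nn.Linear(2, 2)                        # two metastable states

opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters(),
                        *classifier.parameters()], lr=1e-3)

x = torch.randn(256, 30)                            # placeholder descriptors
state = torch.randint(0, 2, (256,))                 # placeholder state labels

for step in range(200):
    z = encoder(x)                                  # candidate CVs
    loss = nn.functional.mse_loss(decoder(z), x)    # dimensionality reduction
    loss = loss + 0.5 * nn.functional.cross_entropy(classifier(z), state)
    opt.zero_grad(); loss.backward(); opt.step()
```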
- …