10,505 research outputs found
Cell division and migration in a 'genotype' for neural networks
Much research has been dedicated recently to applying genetic algorithms to populations of
neural networks. However, while in real organisms the inherited genotype maps in complex
ways into the resulting phenotype, in most of this research the development process that
creates the individual phenotype is ignored. In this paper we present a model of neural
development which includes cell division and cell migration in addition to axonal growth and
branching. This reflects, in a very simplified way, what happens in the ontogeny of real
organisms. The development process of our artificial organisms shows successive phases of
functional differentiation and specialization. In addition, we find that mutations that affect
different phases of development have very different evolutionary consequences. A single
change in the early stages of cell division/migration can have huge effects on the phenotype
while changes in later stages usually have a less dramatic impact. Sometimes changes that
affect the first developmental stages may be retained, producing sudden changes in evolutionary
history.
Duplication of modules facilitates the evolution of functional specialization
The evolution of simulated robots with three different architectures is studied. We compared a non-modular feed-forward network, a hardwired modular network, and a duplication-based modular motor control network. We conclude that both modular architectures outperform the non-modular architecture, both in terms of rate of adaptation and the level of adaptation achieved. The main difference between the hardwired and duplication-based modular architectures is that in the latter the modules reached a much higher degree of functional specialization of their motor control units with regard to high-level behavioral functions. The hardwired architectures reach the same level of performance, but have a more distributed assignment of functional tasks to the motor control units. We conclude that the mechanism through which functional specialization is achieved is similar to the mechanism proposed for the evolution of duplicated genes. It is found that the duplication of multifunctional modules first leads to a change in the regulation of the module, leading to a differentiation of the functional context in which the module is used. Then the module adapts to the new functional context. After this second step the system is locked into a functionally specialized state. We suggest that functional specialization may be an evolutionary absorbing state.
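The duplication-then-differentiation mechanism described above can be sketched as a toy hill-climbing experiment. Everything here is a hypothetical illustration, not the paper's robot-control setup: a "module" is just a weight vector, its two "functional contexts" are two target vectors, and adaptation is random-mutation hill climbing.

```python
import numpy as np

rng = np.random.default_rng(2)

# A multifunctional module: one weight vector serving two contexts (tasks).
module = rng.normal(size=4)
targets = {"task_a": np.array([1.0, 0.0, 0.0, 0.0]),
           "task_b": np.array([0.0, 0.0, 0.0, 1.0])}

def loss(w, target):
    # Squared distance to the context's ideal weights.
    return float(np.sum((w - target) ** 2))

# Step 1 (duplication + regulatory change): copy the module and assign
# each copy to a single functional context.
copies = {task: module.copy() for task in targets}

# Step 2 (adaptation): each copy now evolves only under its own context
# and specializes, via simple hill climbing.
for task, target in targets.items():
    for _ in range(200):
        candidate = copies[task] + rng.normal(scale=0.1, size=4)
        if loss(candidate, target) < loss(copies[task], target):
            copies[task] = candidate

# After specialization the copies diverge; reverting to one shared module
# would hurt both tasks, which is the "absorbing state" intuition.
divergence = float(np.linalg.norm(copies["task_a"] - copies["task_b"]))
```

Each copy improves on its own task while the two copies drift apart, mirroring the regulation-first, adaptation-second sequence the abstract describes for duplicated genes.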
Balancing Selection Pressures, Multiple Objectives, and Neural Modularity to Coevolve Cooperative Agent Behavior
Previous research using evolutionary computation in Multi-Agent Systems
indicates that assigning fitness based on team vs. individual behavior has a
strong impact on the ability of evolved teams of artificial agents to exhibit
teamwork in challenging tasks. However, such research only made use of
single-objective evolution. In contrast, when a multiobjective evolutionary
algorithm is used, populations can be subject to individual-level objectives,
team-level objectives, or combinations of the two. This paper explores the
performance of cooperatively coevolved teams of agents controlled by artificial
neural networks subject to these types of objectives. Specifically, predator
agents are evolved to capture scripted prey agents in a torus-shaped grid
world. Because of the tension between individual and team behaviors, multiple
modes of behavior can be useful, and thus the effect of modular neural networks
is also explored. Results demonstrate that fitness rewarding individual
behavior is superior to fitness rewarding team behavior, despite being applied
to a cooperative task. However, the use of networks with multiple modules
allows predators to discover intelligent behavior, regardless of which type of
objectives is used.
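One common way to give a network multiple behavioral modes is to pair each module with a "preference" output that arbitrates which module drives the agent on each time step. The abstract does not specify its mechanism, so this sketch is an assumption: linear modules, one extra preference row per module, highest preference wins.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical modular controller: each module is a small linear policy
# plus one extra "preference" output used only for arbitration.
n_inputs, n_outputs, n_modules = 6, 3, 2
W = rng.normal(size=(n_modules, n_outputs + 1, n_inputs))  # last row = preference

def act(obs):
    outs = W @ obs                       # shape (n_modules, n_outputs + 1)
    chosen = int(np.argmax(outs[:, -1])) # module with the highest preference
    return chosen, outs[chosen, :-1]     # (module id, motor commands)

obs = rng.normal(size=n_inputs)
module_id, command = act(obs)
```

Because only the selected module's motor outputs are executed, evolution can specialize one module per behavioral mode (e.g. chase vs. regroup) without interference between them.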
Genetic algorithms with DNN-based trainable crossover as an example of partial specialization of general search
Universal induction relies on some general search procedure that is doomed to
be inefficient. One possibility to achieve both generality and efficiency is to
specialize this procedure w.r.t. any given narrow task. However, complete
specialization that implies direct mapping from the task parameters to
solutions (discriminative models) without search is not always possible. In
this paper, partial specialization of general search is considered in the form
of genetic algorithms (GAs) with a specialized crossover operator. We perform a
feasibility study of this idea implementing such an operator in the form of a
deep feedforward neural network. GAs with trainable crossover operators are
compared with the result of complete specialization, which is also represented
as a deep neural network. Experimental results show that specialized GAs can be
more efficient than both general GAs and discriminative models. Comment: AGI 2017 proceedings. The final publication is available at
link.springer.co
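The idea of a GA whose crossover operator is a feedforward network can be sketched as follows. This is a toy illustration under stated assumptions: the network here is random and untrained (the paper trains a deep network), the objective is a simple sphere function, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy objective: maximize the negative sphere function (optimum at 0).
    return -float(np.sum(x ** 2))

def neural_crossover(p1, p2, W1, b1, W2, b2):
    # A one-hidden-layer network maps the concatenated parents to a child.
    h = np.tanh(W1 @ np.concatenate([p1, p2]) + b1)
    return W2 @ h + b2

# Hypothetical dimensions: 5-dimensional solutions, 8 hidden units.
dim, hidden, pop_size = 5, 8, 20
W1 = rng.normal(scale=0.3, size=(hidden, 2 * dim))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.3, size=(dim, hidden))
b2 = np.zeros(dim)

pop = rng.normal(size=(pop_size, dim))
for generation in range(50):
    scores = np.array([fitness(x) for x in pop])
    # Truncation selection: the better half become parents.
    parents = pop[np.argsort(scores)[-pop_size // 2:]]
    children = []
    for _ in range(pop_size):
        p1, p2 = parents[rng.choice(len(parents), 2, replace=False)]
        child = neural_crossover(p1, p2, W1, b1, W2, b2)
        child += rng.normal(scale=0.05, size=dim)  # mutation
        children.append(child)
    pop = np.array(children)

best = max(fitness(x) for x in pop)
```

In the paper's setting the crossover network's weights would be trained across task instances, so that crossover itself encodes task-specific search knowledge rather than acting as a fixed random map.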
Evolutionary Neural Gas (ENG): A Model of Self Organizing Network from Input Categorization
Despite their claimed biological plausibility, most self-organizing networks
have strict topological constraints and consequently they cannot take into
account a wide range of external stimuli. Furthermore their evolution is
conditioned by deterministic laws which often are not correlated with the
structural parameters and the global status of the network, as should happen
in a real biological system. In nature, environmental inputs are noisy and
fuzzy, which raises the question of whether emergent behaviour can arise in a
net that is not strictly constrained and is subjected to varied inputs. Here we
present a new model of Evolutionary Neural Gas (ENG) without topological
constraints, trained by probabilistic laws depending
on the local distortion errors and the network dimension. The network is
considered as a population of nodes that coexist in an ecosystem sharing local
and global resources. Those particular features allow the network to quickly
adapt to the environment, according to its dimensions. The ENG model analysis
shows that the net evolves as a scale-free graph, and justifies, in a deeply
physical sense, the term 'gas' used here. Comment: 16 pages, 8 figures
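The core adaptation step of a neural gas, plus a toy probabilistic growth rule in the spirit described above, can be sketched as follows. The rank-based update is the classic neural-gas rule; the growth probability (driven by local distortion error and network size) is an assumption for illustration, not the ENG paper's exact law.

```python
import numpy as np

rng = np.random.default_rng(1)

nodes = rng.uniform(size=(10, 2))   # node reference vectors in the input space
error = np.zeros(len(nodes))        # accumulated local distortion per node

def adapt(x, nodes, error, eps=0.1, lam=2.0):
    # Classic neural-gas step: every node moves toward the input,
    # scaled by a decaying function of its distance rank.
    dists = np.linalg.norm(nodes - x, axis=1)
    ranks = np.argsort(np.argsort(dists))      # 0 = closest node
    nodes += (eps * np.exp(-ranks / lam))[:, None] * (x - nodes)
    error[np.argmin(dists)] += dists.min() ** 2  # charge the winner
    return nodes, error

for _ in range(500):
    x = rng.uniform(size=2)         # noisy environmental input
    nodes, error = adapt(x, nodes, error)

# Toy probabilistic growth: insert a node near the worst-performing one,
# with a probability that shrinks as the network grows (shared resources).
p_grow = (error.max() / (error.sum() + 1e-9)) * (10.0 / len(nodes)) * 0.1
if rng.random() < p_grow:
    worst = np.argmax(error)
    nodes = np.vstack([nodes, nodes[worst] + rng.normal(scale=0.05, size=2)])
```

Making insertion and removal probabilistic, and conditioning them on local error and network dimension, is what lets such a population of nodes behave like an ecosystem competing for resources rather than following a fixed deterministic schedule.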
Combating catastrophic forgetting with developmental compression
Generally intelligent agents exhibit successful behavior across problems in
several settings. Endemic in approaches to realize such intelligence in
machines is catastrophic forgetting: sequential learning corrupts knowledge
obtained earlier in the sequence, or tasks antagonistically compete for system
resources. Methods for obviating catastrophic forgetting have sought to
identify and preserve features of the system necessary to solve one problem
when learning to solve another, or to enforce modularity such that minimally
overlapping sub-functions contain task specific knowledge. While successful,
both approaches scale poorly because they require larger architectures as the
number of training instances grows, causing different parts of the system to
specialize for separate subsets of the data. Here we present a method for
addressing catastrophic forgetting called developmental compression. It
exploits the mild impacts of developmental mutations to lessen adverse changes
to previously-evolved capabilities and `compresses' specialized neural networks
into a generalized one. In the absence of domain knowledge, developmental
compression produces systems that avoid overt specialization, alleviating the
need to engineer a bespoke system for every task permutation and suggesting
better scalability than existing approaches. We validate this method on a robot
control problem and hope to extend this approach to other machine learning
domains in the future.
Natural Variation and Neuromechanical Systems
Natural variation plays an important but subtle and often ignored role in neuromechanical systems. This is especially important when designing for living or hybrid systems which involve a biological or self-assembling component. Accounting for natural variation can be accomplished by taking a population phenomics approach to modeling and analyzing such systems. I will advocate the position that noise in neuromechanical systems is partially represented by natural variation inherent in user physiology. Furthermore, this noise can be augmentative in systems that couple physiological systems with technology. There are several tools and approaches that can be borrowed from computational biology to characterize populations of users as they interact with the technology. In addition to these transplanted approaches, natural variation can be understood as having a range of effects on both the individual's physiology and the function of the living/hybrid system over time. Finally, accounting for natural variation can be put to good use in human-machine system design, and three prescriptions for exploiting variation in design are proposed.
Brain architecture: A design for natural computation
Fifty years ago, John von Neumann compared the architecture of the brain with
that of the computers he invented, which are still in use today. In those
days, the organisation of computers was based on concepts of brain
organisation. Here, we give an update on current results on the global
organisation of neural systems. For neural systems, we outline how the spatial
and topological architecture of neuronal and cortical networks facilitates
robustness against failures, fast processing, and balanced network activation.
Finally, we discuss mechanisms of self-organization for such architectures.
After all, the organization of the brain might once again inspire computer
architecture.