A multi-population hybrid Genetic Programming System
Dissertation presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics.
In recent years, geometric semantic genetic programming has increased in
popularity, obtaining interesting results on several real-life applications. Nevertheless,
the large size of the solutions generated by geometric semantic genetic
programming is still an issue, in particular for those applications in which reading
and interpreting the final solution is desirable. In this thesis, a new parallel
and distributed genetic programming system is introduced with the objective of
mitigating this drawback. The proposed system (called MPHGP, which stands for
Multi-Population Hybrid Genetic Programming) is composed of two types of subpopulations,
one of which runs geometric semantic genetic programming, while
the other runs a standard multi-objective genetic programming algorithm that
simultaneously optimizes the fitness and the size of solutions. The two subpopulations
evolve independently and in parallel, exchanging individuals at prefixed synchronization
instants. The presented experimental results, obtained on five real-life
symbolic regression applications, suggest that MPHGP is able to find solutions
that are comparable to, or even better than, those found by geometric semantic
genetic programming, both on training and on unseen test data. At the same
time, MPHGP also finds solutions that are significantly smaller than those
found by geometric semantic genetic programming.
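The multi-population scheme described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the two evolution steps are passed in as opaque functions standing in for GSGP and the multi-objective GP, individuals are stand-in numeric values to be minimized, and all names and parameters (`sync_every`, `k_migrants`) are hypothetical.

```python
import random

def mphgp_sketch(gsgp_step, moo_step, n_gens=50, sync_every=10, k_migrants=2):
    # Hypothetical sketch of the scheme above: one subpopulation is evolved
    # by a GSGP-like step, the other by a multi-objective (fitness + size)
    # step; they evolve independently and exchange individuals at prefixed
    # synchronization instants. Individuals here are just numbers
    # (lower = fitter) so the migration logic is easy to follow.
    gsgp_pop = [random.random() for _ in range(20)]
    moo_pop = [random.random() for _ in range(20)]
    for gen in range(1, n_gens + 1):
        gsgp_pop = gsgp_step(gsgp_pop)   # the two subpopulations evolve
        moo_pop = moo_step(moo_pop)      # independently, "in parallel"
        if gen % sync_every == 0:        # prefixed synchronization instant
            # exchange the k best individuals in each direction,
            # dropping the k worst of each receiving population
            gsgp_best = sorted(gsgp_pop)[:k_migrants]
            moo_best = sorted(moo_pop)[:k_migrants]
            gsgp_pop = sorted(gsgp_pop)[:-k_migrants] + moo_best
            moo_pop = sorted(moo_pop)[:-k_migrants] + gsgp_best
    return gsgp_pop, moo_pop
```

With identity functions as placeholder evolution steps, the migration machinery alone can be exercised: `mphgp_sketch(lambda p: p, lambda p: p)` returns two populations of unchanged size.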
Digital Ecosystems: Ecosystem-Oriented Architectures
We view Digital Ecosystems to be the digital counterparts of biological
ecosystems. Here, we are concerned with the creation of these Digital
Ecosystems, exploiting the self-organising properties of biological ecosystems
to evolve high-level software applications. Therefore, we created the Digital
Ecosystem, a novel optimisation technique inspired by biological ecosystems,
where optimisation works at two levels: a first optimisation, the migration of
agents distributed in a decentralised peer-to-peer network, operating
continuously in time; this process feeds a second optimisation, based on
evolutionary computing, that operates locally on single peers and aims at
finding solutions that satisfy locally relevant constraints. The Digital
Ecosystem was then measured experimentally through simulations, with measures
originating from theoretical ecology, evaluating its likeness to biological
ecosystems. This included its responsiveness to requests for applications from
the user base, as a measure of the ecological succession (ecosystem maturity).
Overall, we have advanced the understanding of Digital Ecosystems, creating
Ecosystem-Oriented Architectures where the word ecosystem is more than just a
metaphor.
Comment: 39 pages, 26 figures
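The two-level optimisation described above can be sketched in miniature. This is purely illustrative and not the thesis's actual model: agents are stand-in numeric values, each peer's "locally relevant constraint" is a hypothetical numeric target, and migration picks random neighbours rather than modelling a real peer-to-peer topology.

```python
import random

def digital_ecosystem_sketch(n_peers=5, agents_per_peer=6, rounds=20):
    # Illustrative two-level optimisation, loosely following the scheme
    # above. Level 1: agents migrate between peers of a decentralised
    # network, continuously over time. Level 2: each peer runs a local
    # evolutionary step over the agents it currently holds, favouring
    # those that best satisfy its local constraint (a numeric target).
    targets = [random.random() for _ in range(n_peers)]
    peers = [[random.random() for _ in range(agents_per_peer)]
             for _ in range(n_peers)]
    for _ in range(rounds):
        # Level 1: each peer forwards one agent to a random peer
        for i in range(n_peers):
            if peers[i]:
                j = random.randrange(n_peers)
                peers[j].append(peers[i].pop())
        # Level 2: local evolutionary computing on each peer
        for i, pool in enumerate(peers):
            if len(pool) > 1:
                pool.sort(key=lambda a: abs(a - targets[i]))  # local fitness
                parent = pool[0]
                pool[-1] = parent + random.gauss(0, 0.05)  # mutated offspring
    return peers
```

Note the design point this illustrates: migration continuously reshuffles which agents each local evolutionary process can draw on, so the two optimisations feed each other without any central coordinator.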
Combating catastrophic forgetting with developmental compression
Generally intelligent agents exhibit successful behavior across problems in
several settings. Endemic in approaches to realize such intelligence in
machines is catastrophic forgetting: sequential learning corrupts knowledge
obtained earlier in the sequence, or tasks antagonistically compete for system
resources. Methods for obviating catastrophic forgetting have sought to
identify and preserve features of the system necessary to solve one problem
when learning to solve another, or to enforce modularity such that minimally
overlapping sub-functions contain task specific knowledge. While successful,
both approaches scale poorly because they require larger architectures as the
number of training instances grows, causing different parts of the system to
specialize for separate subsets of the data. Here we present a method for
addressing catastrophic forgetting called developmental compression. It
exploits the mild impacts of developmental mutations to lessen adverse changes
to previously-evolved capabilities and `compresses' specialized neural networks
into a generalized one. In the absence of domain knowledge, developmental
compression produces systems that avoid overt specialization, alleviating the
need to engineer a bespoke system for every task permutation and suggesting
better scalability than existing approaches. We validate this method on a robot
control problem and hope to extend this approach to other machine learning
domains in the future.
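The core idea above, using small developmental steps to merge specialized networks into a generalized one, can be sketched abstractly. This is a loose illustration under stated assumptions, not the authors' actual algorithm: the "networks" are flat parameter vectors, and a "mild developmental mutation" is modelled as a small step of each vector toward their shared midpoint rather than an abrupt replacement.

```python
def developmental_compression_sketch(net_a, net_b, steps=10, rate=0.1):
    # Hypothetical sketch: gradually pull two task-specialized parameter
    # vectors toward a shared (generalized) vector in many mild steps,
    # so neither network's previously-acquired capability is disrupted
    # by a single large change. Purely illustrative.
    a, b = list(net_a), list(net_b)
    for _ in range(steps):
        for i in range(len(a)):
            mid = (a[i] + b[i]) / 2.0    # shared target parameter
            a[i] += rate * (mid - a[i])  # mild change toward it...
            b[i] += rate * (mid - b[i])  # ...applied to both specialists
    # the 'compressed' generalized network
    return [(x + y) / 2.0 for x, y in zip(a, b)]
```

Because each update moves both vectors symmetrically toward their midpoint, the sum of each parameter pair is preserved, and the final generalized parameters land at the midpoint of the original specialists.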