5,055 research outputs found
Variety in evolutionary strategies favours biodiversity in habitats of moderate productivity
The mechanism whereby biodiversity varies between habitats differing in productivity is a missing link between ecological and evolutionary theory, with vital implications for biodiversity conservation, management and the assessment of ecosystem services. A unimodal, humped-back relationship, with biodiversity greatest at intermediate productivities, is evident when plant, animal and microbial communities are compared across productivities in nature. However, the mechanistic, evolutionary basis of this observation remains enigmatic. We show, for natural and semi-natural plant communities across a range of bioclimatic zones, that biodiversity is greatest where communities include species with widely divergent values for phenotypic traits involved in resource economics and reproductive timing, coinciding with intermediate biomass production, whilst each productivity extreme is associated with small numbers of specialised species with similar trait values. Our data demonstrate that evolution can generate a greater range of phenotypes where large, fast-growing species are prevented from attaining dominance and extreme adaptation to a harsh abiotic environment is not a prerequisite for survival.
Evolutionary Strategies for Data Mining
Learning classifier systems (LCS) have been successful in generating rules for solving classification problems in data mining. The rules are of the form IF condition THEN action. The condition encodes the features of the input space and the action encodes the class label. What is lacking in those systems is the ability to express each feature using a function that is appropriate for that feature. The genetic algorithm is capable of doing this, but cannot when only one type of membership function is provided. In that case, the genetic algorithm learns only the shape and placement of the membership function and, in some cases, the number of partitions generated by this function. The research conducted in this study employs a learning classifier system to generate the rules for solving classification problems, but also incorporates multiple types of membership functions, allowing the genetic algorithm to choose an appropriate one for each feature of the input space and to determine the number of partitions generated by each function. In addition, three membership functions were introduced. This paper describes the framework and implementation of this modified learning classifier system (M-LCS). Using the M-LCS model, classifiers were simulated for two benchmark classification problems and two additional real-world problems. The results of these four simulations indicate that the M-LCS model provides an alternative approach to designing a learning classifier system. The following contributions are made to the field of computing: 1) a framework for developing a learning classifier system that employs multiple types of membership functions, 2) a model, M-LCS, that was developed from the framework, and 3) the addition of three membership functions that have not been used in the design of learning classifier systems.
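The rule format the abstract describes, IF condition THEN action with a selectable membership function per input feature, can be illustrated with a small sketch. The three function shapes and all parameters below are my own illustration of the general technique, not the M-LCS implementation:

```python
import math

# Three common fuzzy membership function shapes; the abstract's M-LCS lets the
# genetic algorithm pick a shape per input feature.
def triangular(x, a, b, c):
    # Peaks at b, zero outside (a, c).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    # Flat top between b and c, zero outside (a, d).
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def gaussian(x, mean, sigma):
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

# A rule of the form IF condition THEN action: the condition is the minimum
# (fuzzy AND) over per-feature memberships; the action would be a class label.
def rule_match(features, condition):
    return min(mf(x, *params) for x, (mf, params) in zip(features, condition))

# Each feature gets its own function type and parameters.
rule = [(triangular, (0.0, 0.5, 1.0)), (gaussian, (0.3, 0.1))]
strength = rule_match([0.5, 0.3], rule)  # both memberships peak, so 1.0
```

The point of letting the genetic algorithm choose the shape is that a feature with a sharp threshold and a feature with a smooth gradient need not share one membership template.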
Contextual covariance matrix adaptation evolutionary strategies
Many stochastic search algorithms are designed to optimize a fixed objective function to learn a task; if the objective function changes slightly, for example due to a change in the situation or context of the task, relearning is required to adapt to the new context. For instance, if we want to learn a kicking movement for a soccer robot, we have to relearn the movement for different ball locations. Such relearning is undesirable as it is highly inefficient, and many applications require fast adaptation to a new context or situation. Therefore, we investigate contextual stochastic search algorithms that can learn multiple, similar tasks simultaneously. Current contextual stochastic search methods are based on policy search algorithms and suffer from premature convergence and the need for parameter tuning. In this paper, we extend the well-known CMA-ES algorithm to the contextual setting and illustrate its performance on several contextual tasks. Our new algorithm, called contextual CMA-ES, leverages contextual learning while preserving all the features of standard CMA-ES, such as stability, avoidance of premature convergence, step-size control and a minimal amount of parameter tuning.
This research was funded by the European Union's FP7 under EuRoC grant agreement CP-IP 608849, LIACC (UID/CEC/00027/2015) and IEETA (UID/CEC/00127/2015), and was also partially funded by PARC.
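The contextual idea behind this abstract can be sketched with a simplified stand-in: instead of searching for one fixed solution, the mean of the search distribution is a linear function of a context vector, so one learned model covers many contexts. Everything below (the toy objective, elite fraction, and step-size decay) is my own simplification, not the contextual CMA-ES update rules, which additionally adapt a full covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(theta, context):
    # Toy task standing in for, e.g., a kick toward a context-dependent
    # ball location: the optimum is theta = 2 * context.
    return -np.sum((theta - 2.0 * context) ** 2)

dim_theta, dim_context, pop, iters = 2, 2, 50, 60
W = np.zeros((dim_theta, dim_context + 1))  # linear context-to-mean map
sigma = 1.0                                 # global step size

for _ in range(iters):
    contexts = rng.uniform(-1, 1, (pop, dim_context))
    feats = np.hstack([contexts, np.ones((pop, 1))])  # context + bias feature
    means = feats @ W.T
    samples = means + sigma * rng.standard_normal((pop, dim_theta))
    rewards = np.array([objective(t, s) for t, s in zip(samples, contexts)])
    elite = np.argsort(rewards)[-pop // 4:]           # keep the top 25%
    # Least squares over the elites refits the contextual mean; no relearning
    # is needed per context because the map generalizes across contexts.
    W = np.linalg.lstsq(feats[elite], samples[elite], rcond=None)[0].T
    sigma *= 0.95                                     # crude step-size decay

# The learned map should now predict the optimum for an unseen context.
theta_star = np.append(np.array([0.5, -0.5]), 1.0) @ W.T
```

For context (0.5, -0.5) the true optimum is (1.0, -1.0), and the learned linear map lands close to it without any per-context relearning.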
Evolutionary strategies in swarm robotics controllers
Nowadays, Unmanned Vehicles (UVs) are widespread around the world. Most of these vehicles require a high level of human control, and mission success depends on this dependency. It is therefore important to use machine learning techniques to train the robotic controllers and automate the control, making the process more efficient.
Evolutionary strategies may be the key to robust and adaptive learning in robotic systems. Many studies involving UV systems and evolutionary strategies have been conducted in recent years; however, research gaps remain, such as the reality gap, which occurs when controllers trained in simulated environments fail to transfer to real robots.
This work proposes an approach for solving robotic tasks using realistic simulation, with evolutionary strategies training the controllers. The chosen setup scales easily to multi-robot systems or robot swarms.
In this thesis, the simulation architecture and setup are presented, including the drone simulation model and software. The drone model chosen for the simulations is available in the real world and widely used, as are the software and flight control unit; this makes the transition to reality smoother and easier. Controllers based on behavior trees were evolved using a purpose-built evolutionary algorithm, and several experiments were conducted.
Results demonstrated that it is possible to evolve a robotic controller in realistic simulation environments, using a simulated drone model that exists in the real world together with the same flight control unit and operating system that are generally used in real-world experiments.
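The training loop the thesis describes, an evolutionary strategy evaluating candidate controllers in simulation, can be sketched in miniature. The fitness function below is a stand-in of my own for a simulator rollout of a drone behavior-tree controller; the (mu, lambda) selection scheme is a standard evolutionary strategy, not the thesis's specific algorithm:

```python
import random

MU, LAMBDA, DIM, GENS = 5, 20, 4, 100  # parents, offspring, params, generations

def fitness(params):
    # Stand-in for a simulated rollout: reward peaks at a known target
    # parameter vector (a real setup would fly the drone in the simulator).
    target = [0.2, -0.4, 0.8, 0.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

rng = random.Random(42)
parents = [[rng.uniform(-1, 1) for _ in range(DIM)] for _ in range(MU)]
step = 0.3  # mutation standard deviation

for _ in range(GENS):
    # Each offspring is a mutated copy of a random parent.
    offspring = [[p + rng.gauss(0, step) for p in rng.choice(parents)]
                 for _ in range(LAMBDA)]
    offspring.sort(key=fitness, reverse=True)
    parents = offspring[:MU]  # comma selection: old parents are discarded
    step *= 0.97              # simple step-size annealing

best = max(parents, key=fitness)
```

The same loop structure applies whether `params` encodes behavior-tree thresholds or any other controller parameterization; only the rollout inside `fitness` changes, which is what makes the setup easy to scale to multi-robot scenarios.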