    Novelty-driven cooperative coevolution

    Cooperative coevolutionary algorithms (CCEAs) rely on multiple coevolving populations for the evolution of solutions composed of coadapted components. CCEAs enable, for instance, the evolution of cooperative multiagent systems composed of heterogeneous agents, where each agent is modelled as a component of the solution. Previous works have, however, shown that CCEAs are biased toward stability: the evolutionary process tends to converge prematurely to stable states instead of (near-)optimal solutions. In this study, we show how novelty search can be used to avoid the counterproductive attraction to stable states in coevolution. Novelty search is an evolutionary technique that drives evolution toward behavioural novelty and diversity rather than exclusively pursuing a static objective. We evaluate three novelty-based approaches that rely on, respectively, (1) the novelty of the team as a whole, (2) the novelty of the agents' individual behaviour, and (3) the combination of the two. We compare the proposed approaches with traditional fitness-driven cooperative coevolution in three simulated multirobot tasks. Our results show that team-level novelty scoring is the most effective approach, significantly outperforming fitness-driven coevolution at multiple levels. Novelty-driven cooperative coevolution can substantially increase the potential of CCEAs while maintaining a computational complexity that scales well with the number of populations.
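    A minimal sketch of how a team-level novelty score could be computed, assuming each joint evaluation of a team yields a fixed-length behaviour characterisation vector; the characterisation and all names below are task-specific placeholders, not the authors' implementation:

```python
import numpy as np

def team_novelty(behaviour, archive, k=15):
    """Novelty = mean distance to the k nearest behaviours seen so far."""
    if not archive:
        return float("inf")  # the first team is maximally novel
    dists = sorted(np.linalg.norm(behaviour - b) for b in archive)
    return float(np.mean(dists[:k]))

archive = []  # behaviour vectors of previously evaluated teams

def score_team(behaviour):
    """Replace (or blend with) the fitness score during selection."""
    behaviour = np.asarray(behaviour, dtype=float)
    novelty = team_novelty(behaviour, archive)
    archive.append(behaviour)  # naive policy: archive every behaviour
    return novelty
```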

    Novel approaches to cooperative coevolution of heterogeneous multiagent systems

    Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2017.

    Heterogeneous multirobot systems are characterised by the morphological and/or behavioural heterogeneity of their constituent robots. These systems have a number of advantages over the more common homogeneous multirobot systems: they can leverage specialisation for increased efficiency, and they can solve tasks that are beyond the reach of any single type of robot, by combining the capabilities of different robots. Manually designing control for heterogeneous systems is a challenging endeavour, since the desired system behaviour has to be decomposed into behavioural rules for the individual robots, in such a way that the team as a whole cooperates and takes advantage of specialisation. Evolutionary robotics is a promising alternative that can be used to automate the synthesis of controllers for multirobot systems, but so far, research in the field has been mostly focused on homogeneous systems, such as swarm robotics systems. Cooperative coevolutionary algorithms (CCEAs) are a type of evolutionary algorithm that facilitates the evolution of control for heterogeneous systems by working over a decomposition of the problem. In a typical CCEA application, each agent evolves in a separate population, with the evaluation of each agent depending on the cooperation with agents from the other coevolving populations. A CCEA is thus capable of projecting the large search space into multiple smaller, and more manageable, search spaces. Unfortunately, the use of cooperative coevolutionary algorithms is associated with a number of challenges. Previous works have shown that CCEAs are not necessarily attracted to the global optimum, but often converge to mediocre stable states; they can be inefficient when applied to large teams; and they have not yet been demonstrated in real robotic systems, nor in morphologically heterogeneous multirobot systems. In this thesis, we propose novel methods for overcoming the fundamental challenges in cooperative coevolutionary algorithms mentioned above, and study them in multirobot domains: we propose novelty-driven cooperative coevolution, in which premature convergence is avoided by encouraging behavioural novelty; and we propose Hyb-CCEA, an extension of CCEAs that places the team heterogeneity under evolutionary control, significantly improving its scalability with respect to team size. These two approaches have in common that they take into account the exploration of the behaviour space by the evolutionary process. Besides relying on the fitness function for the evaluation of the candidate solutions, the evolutionary process analyses the behaviour of the evolving agents to improve the effectiveness of the evolutionary search. The ultimate goal of our research is to achieve general methods that can effectively synthesise controllers for heterogeneous multirobot systems, and therefore help to realise the full potential of such systems. To this end, we demonstrate the proposed approaches in a variety of multirobot domains used in previous works, and we study the application of CCEAs to new robotics domains, including a morphologically heterogeneous system and a real robotic system.

    Fundação para a Ciência e a Tecnologia (FCT, PEst-OE/EEI/LA0008/2011).
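    As a rough illustration of the cooperative coevolutionary scheme described above, the following sketch evaluates each individual by teaming it with representatives (here, the current best) of the other populations; `evaluate_team`, the genome representation, and the dict layout are hypothetical placeholders, and per-population selection and variation are elided:

```python
def ccea_evaluate(populations, evaluate_team):
    """One CCEA evaluation round over a list of populations.

    Each population holds dicts {"genome": ..., "fitness": ...};
    fitness values are assumed to exist from a previous round
    (or a random bootstrap).
    """
    # Pick one representative per population (greedy choice: current best).
    reps = [max(pop, key=lambda ind: ind["fitness"]) for pop in populations]
    for i, pop in enumerate(populations):
        for ind in pop:
            team = [r["genome"] for r in reps]
            team[i] = ind["genome"]  # substitute the individual under test
            ind["fitness"] = evaluate_team(team)
```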

    Multiagent Learning Through Indirect Encoding

    Designing a system of multiple, heterogeneous agents that cooperate to achieve a common goal is a difficult task, but it is also a common real-world problem. Multiagent learning addresses this problem by training the team to cooperate through a learning algorithm. However, most traditional approaches treat multiagent learning as a combination of multiple single-agent learning problems. This perspective leads to many inefficiencies in learning, such as the problem of reinvention, whereby fundamental skills and policies that all agents should possess must be rediscovered independently for each team member. For example, in soccer, all the players know how to pass and kick the ball, but a traditional algorithm has no way to share such vital information because it has no way to relate the policies of agents to each other. In this dissertation, a new approach to multiagent learning that seeks to address these issues is presented. This approach, called multiagent HyperNEAT, represents teams as a pattern of policies rather than as individual agents. The main idea is that an agent's location within a canonical team layout (such as a soccer team at the start of a game) tends to dictate its role within that team; this mapping from location to role is called the policy geometry. For example, as soccer positions move from goal to center they become more offensive and less defensive, a concept that is compactly represented as a pattern.

    The first major contribution of this dissertation is a new method for evolving neural network controllers called HyperNEAT, which forms the foundation of the second contribution and primary focus of this work, multiagent HyperNEAT. Multiagent learning in this dissertation is investigated in predator-prey, room-clearing, and patrol domains, providing a real-world context for the approach. Interestingly, because the teams in multiagent HyperNEAT are represented as patterns, they can scale up to an infinite number of multiagent policies that can be sampled from the policy geometry as needed. Thus the third contribution is a method for teams trained with multiagent HyperNEAT to dynamically scale their size without further learning. Fourth, the capabilities to both learn and scale in multiagent HyperNEAT are compared to the traditional multiagent SARSA(λ) approach in a comprehensive study. The fifth contribution is a method for efficiently learning and encoding multiple policies for each agent on a team to facilitate learning in multi-task domains. Finally, because there is significant interest in practical applications of multiagent learning, multiagent HyperNEAT is tested in a real-world military patrolling application with actual Khepera III robots. The ultimate goal is to provide a new perspective on multiagent learning and to demonstrate the practical benefits of training heterogeneous, scalable multiagent teams through generative encoding.
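    A loose sketch of the policy-geometry idea, assuming a single evolved generative pattern (standing in for the HyperNEAT substrate, which is not reproduced here) that maps an agent's position in the team layout to its policy:

```python
def make_team(pattern, positions):
    """Sample one policy per agent from a shared generative pattern.

    `pattern(pos, obs)` is a hypothetical stand-in for querying the
    evolved network at an agent's location in the canonical layout.
    """
    # Each policy closes over its own position, so nearby agents get
    # similar (but not identical) behaviour.
    return [lambda obs, pos=pos: pattern(pos, obs) for pos in positions]

# Because policies are drawn from a continuous pattern, a team can be
# scaled to more agents without further learning:
#   bigger_team = make_team(pattern, more_positions)
```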

    USING COEVOLUTION IN COMPLEX DOMAINS

    The genetic algorithm is a computational model inspired by Darwin's theory of evolution. It has a broad range of applications, from function optimization to solving robotic control problems. Coevolution is an extension of genetic algorithms in which more than one population is evolved at the same time. Coevolution can be done in two ways: cooperatively, in which populations jointly try to solve an evolutionary problem, or competitively. Coevolution has been shown to be useful in solving many problems, yet its application in complex domains still needs to be demonstrated.

    Robotic soccer is a complex domain with a dynamic and noisy environment. Many reinforcement learning techniques have been applied to the robotic soccer domain, since it is a great test bed for many machine learning methods. However, the success of reinforcement learning methods has been limited due to the huge state space of the domain. Evolutionary algorithms have also been used to tackle this domain; nevertheless, their application has been limited to a small subset of the domain, and no attempt has been shown to successfully solve the whole problem.

    This thesis tries to answer the question of whether coevolution can be applied successfully to complex domains. Three techniques are introduced to tackle the robotic soccer problem. First, an incremental learning algorithm is used to achieve desirable performance on some soccer tasks. Second, a hierarchical coevolution paradigm is introduced to allow coevolution to scale up in solving the problem. Third, an orchestration mechanism is utilized to manage the learning processes.
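    A minimal sketch of the incremental-learning idea, under the assumption (not detailed in the abstract) that it proceeds through a sequence of progressively harder soccer subtasks, each stage seeded with the previous stage's population:

```python
def incremental_evolve(population, stages, evolve):
    """Evolve through a curriculum of subtasks, easiest first.

    `stages` is a list of fitness functions; `evolve(population,
    fitness_fn)` is a placeholder for running the underlying
    evolutionary algorithm until the stage is solved.
    """
    for fitness_fn in stages:
        population = evolve(population, fitness_fn)  # seed next stage
    return population
```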

    Open-ended Search through Minimal Criterion Coevolution

    Search processes guided by objectives are ubiquitous in machine learning. They iteratively reward artifacts based on their proximity to an optimization target, and terminate upon solution space convergence. Some recent studies take a different approach, capitalizing on the disconnect between mainstream methods in artificial intelligence and the field's biological inspirations. Natural evolution has an unparalleled propensity for generating well-adapted artifacts, but these artifacts are decidedly non-convergent. This new class of non-objective algorithms induces a divergent search by rewarding solutions according to their novelty with respect to prior discoveries. While the diversity of the resulting innovations exhibits marked parallels to natural evolution, the methods by which search is driven remain unnatural. In particular, nature has no need to characterize and enforce novelty; rather, it is guided by a single, simple constraint: survive long enough to reproduce. The key insight is that such a constraint, called the minimal criterion, can be harnessed in a coevolutionary context where two populations interact, finding novel ways to satisfy their reproductive constraint with respect to each other. This approach, called minimal criterion coevolution (MCC), is the primary contribution of this dissertation (1). MCC is initially demonstrated in a maze domain (2), where it evolves increasingly complex mazes and solutions. An enhancement to the initial domain (3) is then introduced, allowing mazes to expand unboundedly and validating MCC's propensity for open-ended discovery. A more natural method of diversity preservation through resource limitation (4) is introduced and shown to maintain population diversity without comparing genetic distance. Finally, MCC is demonstrated in an evolutionary robotics domain (5) where it coevolves increasingly complex bodies with brain controllers to achieve principled locomotion. The overall benefit of these contributions is a novel, general, algorithmic framework for the continual production of open-ended dynamics without the need for a characterization of behavioral novelty.
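    A minimal sketch of one MCC step in the maze domain, assuming a boolean `solves(solver, maze)` predicate and a `mutate` operator (both hypothetical placeholders): offspring are admitted only if they satisfy the minimal criterion with respect to the other population:

```python
import random

def mcc_step(mazes, solvers, solves, mutate, capacity=250):
    """One reproduction attempt per population under the minimal criterion."""
    child_maze = mutate(random.choice(mazes))
    if any(solves(s, child_maze) for s in solvers):  # maze criterion:
        mazes.append(child_maze)                     # solvable by some solver
    child_solver = mutate(random.choice(solvers))
    if any(solves(child_solver, m) for m in mazes):  # solver criterion:
        solvers.append(child_solver)                 # solves some maze
    # Bound population size by discarding the oldest individuals.
    del mazes[:-capacity], solvers[:-capacity]
```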

    Learning Collaborative Foraging in a Swarm of Robots using Embodied Evolution

    In this paper, we study how a swarm of robots adapts over time to solve a collaborative task using a distributed Embodied Evolution approach, where each robot runs an evolutionary algorithm and the robots locally exchange genomes and fitness values. In particular, we study a collaborative foraging task, where the robots are rewarded for collecting food items that are too heavy to be collected individually and require at least two robots. Further, the robots also need to display a signal matching the color of the item with an additional effector. Our experiments show that the distributed algorithm is able to evolve swarm behavior to collect items cooperatively. The experiments also reveal that effective cooperation evolves mostly from the robots' ability to jointly reach food items, while learning to display the right color to match the item is done suboptimally. However, a closer analysis shows that, even without a mechanism to avoid neglecting any kind of item, the robots collect all of them, which means that there is some degree of learning to choose the right value for the color effector depending on the situation.
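    A minimal sketch of the distributed scheme, assuming each robot object exposes simple (hypothetical) radio and reward hooks; every robot runs these steps independently, so there is no central evolutionary process:

```python
def lifetime_step(robot):
    """Executed continuously while the current controller is active."""
    robot.fitness += robot.collect_rewards()        # e.g. items foraged
    robot.broadcast((robot.genome, robot.fitness))  # local genome exchange
    robot.inbox.extend(robot.receive())             # genomes from neighbours

def new_generation(robot, mutate):
    """Executed when the controller's lifetime expires."""
    if robot.inbox:
        parent_genome, _ = max(robot.inbox, key=lambda gf: gf[1])
        robot.genome = mutate(parent_genome)        # local selection + variation
    robot.inbox.clear()
    robot.fitness = 0.0
```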

    Evolving team compositions by agent swapping

    Optimizing collective behavior in multiagent systems requires algorithms to find not only appropriate individual behaviors but also a suitable composition of agents within a team. Over the last two decades, evolutionary methods have emerged as a promising approach for the design of agents and their compositions into teams. The choice of a crossover operator that facilitates the evolution of optimal team composition is recognized to be crucial, but its effect has so far never been thoroughly quantified. Here, we highlight the limitations of two different crossover operators that exchange entire agents between teams: restricted agent swapping (RAS), which exchanges only corresponding agents between teams, and free agent swapping (FAS), which allows an arbitrary exchange of agents. Our results show that RAS suffers from premature convergence, whereas FAS entails insufficient convergence. Consequently, in both cases, the exploration and exploitation aspects of the evolutionary algorithm are not well balanced, resulting in the evolution of suboptimal team compositions. To overcome this problem, we propose combining the two methods. Our approach first applies FAS to explore the search space and then RAS to exploit it. This mixed approach is a much more efficient strategy for the evolution of team compositions compared to either strategy on its own. Our results suggest that such a mixed agent-swapping algorithm should always be preferred whenever the optimal composition of individuals in a multiagent system is unknown.
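    The two operators can be sketched as follows for teams represented as lists of agent genomes (the per-position swap probability `p` is an assumption, not a value from the paper); the mixed strategy would simply apply `free_agent_swap` in early generations and `restricted_agent_swap` afterwards:

```python
import random

def restricted_agent_swap(team_a, team_b, p=0.5):
    """RAS: only agents at the same position may be exchanged."""
    for i in range(len(team_a)):
        if random.random() < p:
            team_a[i], team_b[i] = team_b[i], team_a[i]

def free_agent_swap(team_a, team_b, p=0.5):
    """FAS: an agent may be exchanged with any agent of the other team."""
    for i in range(len(team_a)):
        if random.random() < p:
            j = random.randrange(len(team_b))
            team_a[i], team_b[j] = team_b[j], team_a[i]
```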

    A Comparative Evaluation of Methods for Evolving a Cooperative Team
