Balancing Selection Pressures, Multiple Objectives, and Neural Modularity to Coevolve Cooperative Agent Behavior
Previous research using evolutionary computation in Multi-Agent Systems
indicates that assigning fitness based on team vs. individual behavior has a
strong impact on the ability of evolved teams of artificial agents to exhibit
teamwork in challenging tasks. However, such research only made use of
single-objective evolution. In contrast, when a multiobjective evolutionary
algorithm is used, populations can be subject to individual-level objectives,
team-level objectives, or combinations of the two. This paper explores the
performance of cooperatively coevolved teams of agents controlled by artificial
neural networks subject to these types of objectives. Specifically, predator
agents are evolved to capture scripted prey agents in a torus-shaped grid
world. Because of the tension between individual and team behaviors, multiple
modes of behavior can be useful, and thus the effect of modular neural networks
is also explored. Results demonstrate that fitness rewarding individual
behavior is superior to fitness rewarding team behavior, despite being applied
to a cooperative task. However, the use of networks with multiple modules
allows predators to discover intelligent behavior, regardless of which type of
objectives are used.
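The individual-level vs. team-level objective combinations described above can be handled by any Pareto-based multiobjective EA. As a minimal sketch (the scoring values and dictionary layout are illustrative assumptions, not the paper's actual setup), here is how a non-dominated front would be extracted from predator genomes scored on both an individual and a team objective:

```python
# Hedged sketch: Pareto non-domination over a two-objective fitness
# vector (individual_score, team_score), as used by generic
# multiobjective EAs such as NSGA-II. Values are illustrative.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization):
    a is at least as good everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated_front(population):
    """Return members whose objective vectors are not dominated by
    any other member of the population."""
    return [p for p in population
            if not any(dominates(q["obj"], p["obj"])
                       for q in population if q is not p)]

pop = [
    {"id": 0, "obj": (3.0, 1.0)},  # strong individually, weak as a team
    {"id": 1, "obj": (1.0, 3.0)},  # the reverse trade-off
    {"id": 2, "obj": (2.0, 2.0)},  # balanced
    {"id": 3, "obj": (1.0, 1.0)},  # dominated by all of the above
]
front = nondominated_front(pop)
```

Because the first three members represent genuine trade-offs between the individual and team objectives, all three survive on the front, while the uniformly worse member is discarded.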
Individual-based artificial ecosystems for design and optimization
Individual-based modeling has gained popularity over the last decade, mainly due to the paradigm's proven ability to address a variety of problems seen in many disciplines, including modeling complex systems from the bottom up, providing relationships between component-level and system-level parameters, and discovering the emergence of system-level behaviors from simple component-level interactions. The availability of computational power to run simulation models with thousands to millions of agents is another driving force in the widespread adoption of individual-based modeling. This thesis proposes an individual-based modeling approach for solving engineering design and optimization problems using artificial ecosystems --Abstract, page iii
Evolution of swarming behavior is shaped by how predators attack
Animal grouping behaviors have been widely studied due to their implications
for understanding social intelligence, collective cognition, and potential
applications in engineering, artificial intelligence, and robotics. An
important biological aspect of these studies is discerning which selection
pressures favor the evolution of grouping behavior. In the past decade,
researchers have begun using evolutionary computation to study the evolutionary
effects of these selection pressures in predator-prey models. The selfish herd
hypothesis states that concentrated groups arise because prey selfishly attempt
to place their conspecifics between themselves and the predator, thus causing
an endless cycle of movement toward the center of the group. Using an
evolutionary model of a predator-prey system, we show that how predators attack
is critical to the evolution of the selfish herd. Following this discovery, we
show that density-dependent predation provides an abstraction of Hamilton's
original formulation of "domains of danger." Finally, we verify that
density-dependent predation provides a sufficient selective advantage for prey
to evolve the selfish herd in response to predation by coevolving predators.
Thus, our work corroborates Hamilton's selfish herd hypothesis in a digital
evolutionary model, refines the assumptions of the selfish herd hypothesis, and
generalizes the domain of danger concept to density-dependent predation.
Comment: 25 pages, 11 figures, 5 tables, including 2 Supplementary Figures.
Version to appear in "Artificial Life".
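The density-dependent predation idea above can be sketched in a few lines: a prey's per-capita capture risk is diluted by the number of conspecifics nearby, which is the abstraction of Hamilton's "domain of danger" the abstract refers to. The neighborhood radius and the 1/(1+n) dilution form below are illustrative choices, not the paper's exact model:

```python
import math

def capture_risk(prey, herd, radius=2.0):
    """Per-capita capture risk under density-dependent predation:
    risk is diluted by the number of conspecifics within `radius`.
    Central herd members with many neighbors are safer than
    stragglers, so selection favors moving toward the group."""
    neighbors = sum(1 for other in herd
                    if other is not prey and math.dist(other, prey) <= radius)
    return 1.0 / (1.0 + neighbors)

herd = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
central = capture_risk(herd[0], herd)    # two neighbors in range
straggler = capture_risk(herd[3], herd)  # no neighbors in range
```

Under this abstraction a prey always lowers its risk by acquiring neighbors, which is exactly the selective gradient that drives the "endless cycle of movement toward the center of the group."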
An artificial life approach to evolutionary computation: from mobile cellular algorithms to artificial ecosystems
This thesis presents a new class of evolutionary algorithms called mobile cellular evolutionary algorithms (mcEAs). These algorithms are characterized by individuals moving around on a spatial population structure. As a primary objective, this thesis aims to show that by controlling the population density and mobility in mcEAs, it is possible to achieve much better control over the rate of convergence than what is already possible in existing cellular EAs. Using the observations and results from this investigation into selection pressure in mcEAs, a general architecture for developing agent-based evolutionary algorithms called Artificial Ecosystems (AES) is presented. A simple agent-based EA developed within the scope of AES is presented, with two individual-based bottom-up schemes to achieve dynamic population sizing. Experiments with a test suite of optimization problems show that both mcEAs and the agent-based EA produced results comparable to the best solutions found by cellular EAs --Abstract, page iii
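One generation of an mcEA can be sketched as a random walk on a torus grid followed by purely local competition, so that mobility and density jointly set the selection pressure. The grid size, move set, and winner-takes-cell selection rule below are illustrative assumptions, not the thesis's exact model:

```python
import random

def step(population, width=8, height=8, rng=random):
    """One hedged mcEA generation: individuals random-walk one cell on
    a torus, then the fittest individual in each occupied cell
    overwrites the genomes of its cell-mates (local selection)."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]
    for ind in population:
        dx, dy = rng.choice(moves)
        ind["x"] = (ind["x"] + dx) % width   # torus wrap-around
        ind["y"] = (ind["y"] + dy) % height
    cells = {}
    for ind in population:
        cells.setdefault((ind["x"], ind["y"]), []).append(ind)
    for mates in cells.values():
        best = max(mates, key=lambda i: i["fitness"])
        for ind in mates:
            ind["genome"] = best["genome"][:]
    return population

pop = [{"x": i % 8, "y": i // 8, "fitness": float(i), "genome": [i]}
       for i in range(10)]
pop = step(pop, rng=random.Random(1))
```

Lowering density (a larger grid for the same population) means fewer encounters per generation and hence weaker selection pressure, which is the convergence-rate knob the thesis investigates.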
Learning to Coordinate with Anyone
In open multi-agent environments, the agents may encounter unexpected
teammates. Classical multi-agent learning approaches train agents that can only
coordinate with seen teammates. Recent studies attempted to generate diverse
teammates to enhance the generalizable coordination ability, but were
restricted by pre-defined teammates. In this work, our aim is to train agents
with strong coordination ability by generating teammates that fully cover the
teammate policy space, so that agents can coordinate with any teammates. Since
the teammate policy space is too vast to enumerate, we find only dissimilar
teammates that are incompatible with the controllable agents, which greatly
reduces the number of teammates that need to be trained with. However, it is hard to
determine the number of such incompatible teammates beforehand. We therefore
introduce a continual multi-agent learning process, in which the agent learns
to coordinate with different teammates until no more incompatible teammates can
be found. The above idea is implemented in the proposed Macop (Multi-agent
compatible policy learning) algorithm. We conduct experiments in 8 scenarios
from 4 environments that have distinct coordination patterns. Experiments show
that Macop generates training teammates with much lower compatibility than
previous methods. As a result, in all scenarios Macop achieves the best overall
coordination ability while never being significantly worse than the baselines,
showing strong generalization ability.
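The continual loop at the heart of the approach described above can be sketched abstractly: keep searching for a teammate the current agent is incompatible with, retrain against it, and stop when none can be found. The enumerable policy list and set-based "compatibility" below are toy stand-ins for the paper's learned policies and its return-based compatibility measure:

```python
# Hedged toy sketch of Macop's continual multi-agent learning loop.

def find_incompatible(teammate_space, mastered):
    """Return a teammate policy the agent cannot yet coordinate with,
    or None when the agent covers the whole teammate space."""
    for policy in teammate_space:
        if policy not in mastered:
            return policy
    return None

def continual_loop(teammate_space):
    """Train until no incompatible teammate can be found; the returned
    curriculum is the sequence of teammates trained against."""
    mastered, curriculum = set(), []
    while (teammate := find_incompatible(teammate_space, mastered)) is not None:
        curriculum.append(teammate)  # stand-in for a training round
        mastered.add(teammate)       # after training, it is compatible
    return curriculum

curriculum = continual_loop(["rush", "camp", "flank"])
```

The termination condition ("no more incompatible teammates can be found") is what removes the need to fix the number of incompatible teammates beforehand, which the abstract identifies as the hard part.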
An improved marine predators algorithm tuned data-driven multiple-node hormone regulation neuroendocrine-PID controller for multi-input–multi-output gantry crane system
Conventionally, researchers have favored model-based control schemes for controlling gantry crane systems. However, this approach necessitates a substantial investment of time and resources to develop an accurate mathematical model of the complex crane system. Recognizing this challenge, the current paper introduces a novel data-driven control scheme that relies exclusively on input and output data. Through a couple of modifications to the conventional marine predators algorithm (MPA), a random average marine predators algorithm (RAMPA) with a tunable adaptive coefficient controlling the step size (CF) is proposed in this paper as an enhanced alternative for fine-tuning data-driven multiple-node hormone regulation neuroendocrine-PID (MnHR-NEPID) controller parameters for the multi-input–multi-output (MIMO) gantry crane system. The first modification introduces a random average location calculation into the algorithm's updating mechanism to address the local optima issue. The second modification then introduces a tunable CF that enhances search capacity by letting users balance the exploration and exploitation phases. The effectiveness of the proposed method is evaluated based on the convergence curve and statistical analysis of the fitness function, the total norms of error and input, Wilcoxon's rank test, time response analysis, and robustness analysis under the influence of external disturbance. Comparative findings alongside other existing metaheuristic-based algorithms confirm the excellence of the proposed method through its superior performance against the conventional MPA, particle swarm optimization (PSO), grey wolf optimizer (GWO), moth-flame optimization (MFO), multi-verse optimizer (MVO), sine-cosine algorithm (SCA), salp-swarm algorithm (SSA), slime mould algorithm (SMA), flow direction algorithm (FDA), and the formerly published adaptive safe experimentation dynamics (ASED)-based methods.
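The two RAMPA modifications can be sketched in isolation. The standard MPA step-size coefficient is CF = (1 - t/T)^(2 t/T); the sketch below exposes the exponent's scale as the user-tunable knob (named `alpha` here, an illustrative assumption), and implements the random-average-location idea as a mean over randomly sampled candidates. The paper's exact update equations are not reproduced:

```python
import random

def cf(iteration, max_iter, alpha=2.0):
    """Adaptive step-size coefficient. Standard MPA uses
    CF = (1 - t/T)**(2*t/T); making the exponent scale `alpha`
    tunable lets the user trade exploration for exploitation."""
    t = iteration / max_iter
    return (1.0 - t) ** (alpha * t)

def random_average_location(population, k=3, rng=random):
    """Modification 1 (hedged): instead of moving relative to a single
    elite position, average k randomly chosen candidates, which helps
    the search escape local optima."""
    sample = rng.sample(population, k)
    dim = len(sample[0])
    return [sum(p[d] for p in sample) / k for d in range(dim)]
```

CF starts at 1 (large exploratory steps) and decays to 0 (fine exploitation) over the run, and a smaller `alpha` keeps steps large for longer.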
Co-Evolutionary Multi-Agent System with Speciation and Resource Sharing Mechanisms
Niching techniques for evolutionary algorithms are used to locate the basins of attraction of the local minima of multi-modal fitness functions. Co-evolutionary techniques are aimed at overcoming the limited adaptive capabilities of evolutionary algorithms resulting from the loss of useful population diversity. In this paper, the idea of a niching co-evolutionary multi-agent system (NCoEMAS) is introduced. In such a system, the species formation phenomenon occurs within one of the pre-existing species as a result of co-evolutionary interactions. The results of experiments with the Rastrigin and Schwefel multi-modal test functions, aimed at comparing NCoEMAS to other niching techniques, are presented. Also, the influence of the resource sharing mechanism's parameters on the quality of speciation processes in NCoEMAS is investigated.
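The resource sharing mechanism referred to above can be sketched in its classic fitness-sharing form: an individual's raw fitness is divided by a niche count so that crowded basins are penalized and the population spreads across multiple optima. The parameters `sigma_share` and `alpha`, and the toy 1-D population, are illustrative values, not the paper's settings:

```python
def sharing(d, sigma_share=1.0, alpha=1.0):
    """Sharing kernel: full overlap at d = 0, none beyond sigma_share."""
    return 1.0 - (d / sigma_share) ** alpha if d < sigma_share else 0.0

def shared_fitness(i, population, fitness, distance, sigma_share=1.0):
    """Classic fitness sharing: divide raw fitness by the niche count.
    The count is at least 1 because distance(i, i) = 0."""
    niche_count = sum(sharing(distance(i, j), sigma_share) for j in population)
    return fitness(i) / niche_count

# toy 1-D population: three members crowd one basin, one sits alone
pop = [0.0, 0.1, 0.2, 5.0]
dist = lambda a, b: abs(a - b)
fit = lambda x: 1.0  # equal raw fitness for all members
crowded = shared_fitness(0.1, pop, fit, dist)   # three close members
isolated = shared_fitness(5.0, pop, fit, dist)  # alone in its niche
```

With equal raw fitness, the isolated member keeps its full fitness while the crowded ones are discounted, which is the pressure that maintains separate species in a niching EA; tuning `sigma_share` is exactly the kind of parameter study the abstract describes.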