
    Individual-based artificial ecosystems for design and optimization

    Individual-based modeling has gained popularity over the last decade, mainly due to the paradigm's proven ability to address a variety of problems seen in many disciplines, including modeling complex systems from the bottom up, providing relationships between component-level and system-level parameters, and discovering the emergence of system-level behaviors from simple component-level interactions. The availability of computational power to run simulation models with thousands to millions of agents is another driving force in the widespread adoption of individual-based modeling. This thesis proposes an individual-based modeling approach for solving engineering design and optimization problems using artificial ecosystems --Abstract, page iii
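
    To make the bottom-up idea concrete, the sketch below is a minimal individual-based model of our own, not the thesis's actual artificial-ecosystem model: agents follow only local move/eat/reproduce/die rules on a resource grid, yet a stable population size emerges at the system level without being coded anywhere.

```python
import random

# Illustrative parameters (assumptions, not values from the thesis)
GRID, REGROWTH, MOVE_COST, BIRTH_AT = 20, 0.1, 0.2, 5.0

def step(agents, resources):
    """Advance the ecosystem by one tick using purely local rules."""
    random.shuffle(agents)
    survivors = []
    for a in agents:
        a["x"] = (a["x"] + random.choice([-1, 0, 1])) % GRID    # random walk
        a["y"] = (a["y"] + random.choice([-1, 0, 1])) % GRID
        a["energy"] += resources[a["x"]][a["y"]] - MOVE_COST    # eat local resource
        resources[a["x"]][a["y"]] = 0.0
        if a["energy"] >= BIRTH_AT:                             # local reproduction
            a["energy"] /= 2
            survivors.append(dict(a))                           # offspring copy
        if a["energy"] > 0:                                     # starvation death
            survivors.append(a)
    for row in resources:                                       # resources regrow
        for y in range(GRID):
            row[y] = min(1.0, row[y] + REGROWTH)
    return survivors

resources = [[1.0] * GRID for _ in range(GRID)]
agents = [{"x": random.randrange(GRID), "y": random.randrange(GRID), "energy": 2.0}
          for _ in range(50)]
for t in range(300):
    agents = step(agents, resources)
# The equilibrium size is an emergent, system-level property of the local rules.
print("emergent population size after 300 steps:", len(agents))
```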

    An artificial life approach to evolutionary computation: from mobile cellular algorithms to artificial ecosystems

    This thesis presents a new class of evolutionary algorithms called mobile cellular evolutionary algorithms (mcEAs). These algorithms are characterized by individuals moving around on a spatial population structure. As a primary objective, this thesis aims to show that by controlling the population density and mobility in mcEAs, it is possible to achieve much better control over the rate of convergence than is already possible in existing cellular EAs. Using the observations and results from this investigation into selection pressure in mcEAs, a general architecture for developing agent-based evolutionary algorithms called Artificial Ecosystems (AES) is presented. A simple agent-based EA developed within the scope of AES is presented, together with two individual-based bottom-up schemes for achieving dynamic population sizing. Experiments with a test suite of optimization problems show that both mcEAs and the agent-based EA produced results comparable to the best solutions found by cellular EAs --Abstract, page iii
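
    The following sketch illustrates, under our own assumptions, the kind of step an mcEA performs: individuals occupy cells of a toroidal grid, recombine only with neighbours, and take a random-walk step with some probability, so population density and mobility jointly shape how fast good genotypes spread (the selection pressure). The OneMax objective and all parameter names are placeholders, not the thesis's benchmarks.

```python
import random

GRID, N_BITS, MOBILITY, DENSITY = 16, 32, 0.3, 0.4   # illustrative values

def fitness(genome):
    return sum(genome)  # OneMax: number of 1-bits (max = N_BITS)

def neighbour_genomes(pos, occupied):
    """Genomes found in the 8 cells surrounding pos on the toroidal grid."""
    x, y = pos
    cells = [((x + dx) % GRID, (y + dy) % GRID)
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    return [occupied[c] for c in cells if c in occupied]

# scatter individuals over the grid at the target population density
occupied = {}
while len(occupied) < int(DENSITY * GRID * GRID):
    pos = (random.randrange(GRID), random.randrange(GRID))
    occupied[pos] = [random.randint(0, 1) for _ in range(N_BITS)]

for generation in range(100):
    new_occupied = {}
    for pos, genome in occupied.items():
        mates = neighbour_genomes(pos, occupied)
        if mates:  # cellular step: recombine with the fittest neighbour only
            mate = max(mates, key=fitness)
            child = [random.choice(bits) for bits in zip(genome, mate)]
            if random.random() < 0.5:                     # light mutation
                child[random.randrange(N_BITS)] ^= 1
            genome = max(genome, child, key=fitness)      # keep the better one
        if random.random() < MOBILITY:                    # mobility: random walk
            x, y = pos
            target = ((x + random.choice([-1, 0, 1])) % GRID,
                      (y + random.choice([-1, 0, 1])) % GRID)
            if target not in new_occupied:                # move only to a free cell
                pos = target
        if pos not in new_occupied:                       # rare collisions drop one
            new_occupied[pos] = genome                    # individual (density cap)
    occupied = new_occupied

print("best OneMax fitness:", max(map(fitness, occupied.values())), "/", N_BITS)
```

    Raising MOBILITY mixes genotypes across the grid faster (higher selection pressure), while lowering DENSITY leaves more cells empty and slows the spread, which is the convergence-rate control the abstract refers to.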

    Many-agent Reinforcement Learning

    Multi-agent reinforcement learning (RL) solves the problem of how each agent should behave optimally in a stochastic environment in which multiple agents are learning simultaneously. It is an interdisciplinary domain with a long history that lies at the intersection of psychology, control theory, game theory, reinforcement learning, and deep learning. Following the remarkable success of the AlphaGo series in single-agent RL, 2019 was a booming year that witnessed significant advances in multi-agent RL techniques; impressive breakthroughs have been made on developing AIs that outperform humans on many challenging tasks, especially multi-player video games. Nonetheless, one of the key challenges of multi-agent RL techniques is scalability; it is still non-trivial to design efficient learning algorithms that can solve tasks involving far more than two agents (N ≫ 2), which I name many-agent reinforcement learning (MARL; I use the word "MARL" to denote multi-agent reinforcement learning with a particular focus on the case of many agents; otherwise it is denoted "Multi-Agent RL" by default) problems. In this thesis, I contribute to tackling MARL problems from four aspects. Firstly, I offer a self-contained overview of multi-agent RL techniques from a game-theoretical perspective. This overview fills the research gap that most of the existing work either fails to cover the recent advances since 2010 or does not pay adequate attention to game theory, which I believe is the cornerstone of solving many-agent learning problems. Secondly, I develop a tractable policy evaluation algorithm -- α^α-Rank -- for many-agent systems. The critical advantage of α^α-Rank is that it can compute the solution concept of α-Rank tractably in multi-player general-sum games with no need to store the entire pay-off matrix. This is in contrast to classic solution concepts such as the Nash equilibrium, which is known to be PPAD-hard to compute even in two-player cases. α^α-Rank allows us, for the first time, to practically conduct large-scale multi-agent evaluations. Thirdly, I introduce a scalable policy learning algorithm -- mean-field MARL -- for many-agent systems. The mean-field MARL method takes advantage of the mean-field approximation from physics, and it is the first provably convergent algorithm that tries to break the curse of dimensionality for MARL tasks. With the proposed algorithm, I report the first result of solving the Ising model and multi-agent battle games through a MARL approach. Fourthly, I investigate the many-agent learning problem in open-ended meta-games (i.e., the game of a game in the policy space). Specifically, I focus on modelling the behavioural diversity in meta-games and on developing algorithms that are guaranteed to enlarge diversity during training. The proposed metric, based on determinantal point processes, serves as the first mathematically rigorous definition of diversity. Importantly, the diversity-aware learning algorithms beat the existing state-of-the-art game solvers in terms of exploitability by a large margin. On top of the algorithmic developments, I also contribute two real-world applications of MARL techniques. Specifically, I demonstrate the great potential of applying MARL to studying emergent population dynamics in nature and to modelling diverse, realistic interactions in autonomous driving. Both applications embody the prospect that MARL techniques could achieve huge impacts in the real physical world, beyond purely video games
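
    As a rough illustration of the mean-field idea mentioned above, the sketch below shows a single agent's view of a mean-field Q-learning update: the Q-function is conditioned on a discretised summary of the neighbours' mean action instead of the full joint action, which is what breaks the exponential growth in the number of agents. The environment, reward, and discretisation are toy stand-ins of our own, not the thesis's Ising or battle-game tasks.

```python
import numpy as np

N_STATES, N_ACTIONS, N_MEAN_BINS = 10, 4, 5   # mean neighbour action is discretised
ALPHA, GAMMA, TEMP = 0.1, 0.95, 1.0

# Q(state, own action, discretised mean action of neighbours)
Q = np.zeros((N_STATES, N_ACTIONS, N_MEAN_BINS))

def mean_action_bin(neighbour_actions):
    """Summarise the neighbours' actions as a single discretised mean-action bin."""
    m = float(np.mean(neighbour_actions)) / (N_ACTIONS - 1)     # normalised to [0, 1]
    return min(int(m * N_MEAN_BINS), N_MEAN_BINS - 1)

def boltzmann_policy(state, mean_bin):
    """Softmax over own actions, conditioned on the mean-field statistic."""
    prefs = Q[state, :, mean_bin] / TEMP
    prefs -= prefs.max()
    p = np.exp(prefs)
    return p / p.sum()

def mean_field_update(s, a, r, s_next, mean_bin, mean_bin_next):
    """One-step update: bootstrap with the mean-field value of the next state."""
    pi_next = boltzmann_policy(s_next, mean_bin_next)
    v_next = float(pi_next @ Q[s_next, :, mean_bin_next])
    Q[s, a, mean_bin] += ALPHA * (r + GAMMA * v_next - Q[s, a, mean_bin])

# toy usage: random transitions and 8 fake neighbours stand in for a real
# many-agent environment
rng = np.random.default_rng(0)
s, mb = 0, 0
for t in range(5000):
    a = int(rng.choice(N_ACTIONS, p=boltzmann_policy(s, mb)))
    s_next = int(rng.integers(N_STATES))
    mb_next = mean_action_bin(rng.integers(N_ACTIONS, size=8))
    r = 1.0 if a == s % N_ACTIONS else 0.0        # toy reward favouring one action
    mean_field_update(s, a, r, s_next, mb, mb_next)
    s, mb = s_next, mb_next
print("greedy action per state:", Q.max(axis=2).argmax(axis=1))
```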

    Variational Autoencoder Based Estimation Of Distribution Algorithms And Applications To Individual Based Ecosystem Modeling Using EcoSim

    Individual-based modeling provides a bottom-up approach wherein interactions give rise to high-level phenomena in patterns equivalent to those found in nature. This method generates an immense amount of data through artificial simulation and can be made tractable by machine learning, where multidimensional data are optimized and transformed. Using the individual-based modeling platform known as EcoSim, we modeled elitist sexual selection and the communication of fear. Data from these experiments were reduced in dimensionality using a novel algorithm we propose: Variational Autoencoder based Estimation of Distribution Algorithms with Population Queue and Adaptive Variance Scaling (VAE-EDA-Q AVS). We constructed a novel Estimation of Distribution Algorithm (EDA) by extending generative models known as variational autoencoders (VAEs). VAE-EDA-Q, proposed by us, smooths the data generation process using an iteratively updated queue (Q) of populations. Adaptive Variance Scaling (AVS) dynamically updates the variance at which models are sampled based on fitness. The combination of VAE-EDA-Q with AVS demonstrates high computational efficiency and requires few fitness evaluations. We extended VAE-EDA-Q AVS to act as a feature-reducing wrapper method in conjunction with C4.5 decision trees to reduce the dimensionality of data. The relationship between sexual selection, random selection, and speciation is a contested topic: supporting evidence suggests that sexual selection drives speciation, while opposing evidence contends that the correlation is negative or absent. We utilized EcoSim to model elitist and random mate selection. Our results demonstrated a significantly lower speciation rate, a significantly lower extinction rate, and a significantly higher turnover rate for sexual selection groups. Species diversification was found to display no significant difference. The relationship between communication and foraging behavior similarly features opposing hypotheses, claiming both increases and decreases in foraging behavior in response to alarm communication. Through modeling with EcoSim, we found alarm communication to decrease foraging activity in most cases, yet gradually increase foraging activity in some other cases. Furthermore, we found that both outcomes of alarm communication increased fitness compared to non-communication
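
    The outer loop of VAE-EDA-Q AVS can be sketched as follows. For brevity, a diagonal Gaussian model stands in for the variational autoencoder; the population queue (Q) and the fitness-driven adaptive variance scaling (AVS) follow the abstract's description, but all hyper-parameter names and values are our own illustrative choices.

```python
import numpy as np

DIM, POP, ELITE_FRAC, QUEUE_LEN, GENS = 20, 100, 0.3, 5, 200   # assumptions

def sphere(x):                       # toy minimisation problem
    return float(np.sum(x ** 2))

rng = np.random.default_rng(1)
population = rng.normal(0.0, 3.0, size=(POP, DIM))
queue, variance_scale, best_so_far = [], 1.0, np.inf

for gen in range(GENS):
    fitness = np.array([sphere(x) for x in population])
    elites = population[np.argsort(fitness)[: int(ELITE_FRAC * POP)]]

    # population queue: the model is fit to several recent elite sets,
    # which smooths the data the generative model sees between generations
    queue.append(elites)
    queue = queue[-QUEUE_LEN:]
    data = np.vstack(queue)

    # model-fitting step (a VAE would be trained on `data` here instead)
    mu, sigma = data.mean(axis=0), data.std(axis=0) + 1e-6

    # adaptive variance scaling: shrink the sampling variance while the best
    # fitness keeps improving, widen it when the search stalls
    if fitness.min() < best_so_far:
        best_so_far, variance_scale = fitness.min(), max(0.5, variance_scale * 0.95)
    else:
        variance_scale = min(4.0, variance_scale * 1.05)

    # sample the next population from the (scaled) generative model
    population = rng.normal(mu, sigma * variance_scale, size=(POP, DIM))

print("best objective found:", best_so_far)
```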

    Co-Evolutionary Multi-Agent System with Speciation and Resource Sharing Mechanisms

    Niching techniques for evolutionary algorithms are used to locate the basins of attraction of the local minima of multi-modal fitness functions. Co-evolutionary techniques aim to overcome the limited adaptive capabilities of evolutionary algorithms that result from the loss of useful population diversity. In this work, the idea of a niching co-evolutionary multi-agent system (NCoEMAS) is introduced. In such a system, the species-formation phenomenon occurs within one of the pre-existing species as a result of co-evolutionary interactions. The results of experiments with the Rastrigin and Schwefel multi-modal test functions, aimed at comparing NCoEMAS to other niching techniques, are presented. Also, the influence of the resource sharing mechanism's parameters on the quality of the speciation processes in NCoEMAS is investigated
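
    A minimal sketch of the resource (fitness) sharing mechanism behind such niching, under our own assumed parameters: each individual's raw fitness is divided by its niche count, so crowded basins pay off less and selection keeps the population spread across several optima of a multi-modal function such as Rastrigin.

```python
import numpy as np

SIGMA_SHARE, ALPHA_SHARE = 0.5, 1.0     # sharing radius and exponent (illustrative)

def rastrigin(x):
    return 10 * x.size + float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def shared_fitness(population, raw_fitness):
    """Divide each raw fitness by the niche count within SIGMA_SHARE."""
    shared = np.empty_like(raw_fitness)
    for i, xi in enumerate(population):
        d = np.linalg.norm(population - xi, axis=1)
        sh = np.where(d < SIGMA_SHARE, 1.0 - (d / SIGMA_SHARE) ** ALPHA_SHARE, 0.0)
        shared[i] = raw_fitness[i] / sh.sum()   # sh.sum() >= 1 (includes self)
    return shared

# usage: crowding around one basin is penalised, so selection acting on the
# shared fitness favours individuals sitting in less populated niches
rng = np.random.default_rng(2)
pop = rng.uniform(-5.12, 5.12, size=(50, 2))
raw = np.array([1.0 / (1.0 + rastrigin(x)) for x in pop])   # positive, higher is better
print("raw best:   ", pop[int(np.argmax(raw))])
print("shared best:", pop[int(np.argmax(shared_fitness(pop, raw)))])
```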