4 research outputs found

    Morphogenetic Engineering For Evolving Ant Colony Pheromone Communication

    This research investigates methods for evolving swarm communication in a simulated colony of ants that use pheromone while foraging for food. Neuroevolution was implemented, enabling the colony to learn pheromone communication autonomously. Building on previous literature on pheromone communication, the research applies evolution to adjust the topology and weights of an artificial neural network that controls ant behaviour. Performance is compared between a hard-coded benchmark algorithm, a fixed-topology ANN, and neuroevolution of both the ANN topology and weights. The resulting neuroevolution produced a neural network that successfully achieved the task objective: to collect food and return it to the nest.
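
    As a rough illustration of the approach described above, the sketch below runs a simple (mu + lambda) neuroevolution loop over the weights of a small ant controller. The population size, mutation scheme, and the simulate_foraging stub are assumptions made for the example; the paper's actual simulator and its topology-evolving variant are not reproduced here.

        # Minimal (mu + lambda) neuroevolution sketch for an ant-foraging controller.
        # All names and parameters here are illustrative assumptions, not the paper's.
        import random

        POP_SIZE, ELITE, GENERATIONS = 20, 5, 50
        N_WEIGHTS = 24          # weights of a small fixed-topology ANN controller

        def simulate_foraging(weights):
            """Stub fitness: stands in for a full colony simulation that would
            return the amount of food retrieved to the nest."""
            return -sum(w * w for w in weights)   # placeholder objective

        def mutate(weights, sigma=0.1):
            return [w + random.gauss(0.0, sigma) for w in weights]

        population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
                      for _ in range(POP_SIZE)]

        for gen in range(GENERATIONS):
            scored = sorted(population, key=simulate_foraging, reverse=True)
            elites = scored[:ELITE]
            # Refill the population with mutated copies of the elite controllers.
            population = elites + [mutate(random.choice(elites))
                                   for _ in range(POP_SIZE - ELITE)]

        best = max(population, key=simulate_foraging)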

    Swarm Communication by Evolutionary Algorithms

    This research applied evolutionary algorithms to evolve swarm communication. Controllers were evolved for colonies of simulated artificial ants that communicate using pheromone during a food-foraging task. Neuroevolution enables both the weights and the topology of the artificial neural networks to be optimized for food foraging. The developed model results in the evolution of ants that communicate using pheromone trails. The ants successfully collect food and return it to the nest. The controller evolved to adjust the strength of the pheromone, which provides a signal that guides the direction of other ants in the colony via a hill-climbing strategy. A single ANN controller for ant direction was successfully evolved that exhibits several distinct skills, including food search, pheromone following, food collection, and retrieval to the nest.
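
    The hill-climbing behaviour mentioned above can be pictured as each ant sampling the pheromone in neighbouring grid cells and stepping toward the strongest concentration. The sketch below is a minimal illustration under that assumption; the grid layout and helper names are not taken from the paper.

        # Illustrative hill-climbing step on a pheromone grid: the ant samples its
        # neighbouring cells and moves toward the strongest concentration.
        import random

        def hill_climb_step(grid, x, y):
            """Return the neighbouring cell with the highest pheromone level."""
            h, w = len(grid), len(grid[0])
            neighbours = [(x + dx, y + dy)
                          for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                          if (dx or dy) and 0 <= x + dx < w and 0 <= y + dy < h]
            best = max(neighbours, key=lambda p: grid[p[1]][p[0]])
            # On a flat or downhill neighbourhood, take a random step so the
            # ant keeps exploring instead of getting stuck.
            if grid[best[1]][best[0]] <= grid[y][x]:
                return random.choice(neighbours)
            return best

        pheromone = [[0.0] * 10 for _ in range(10)]
        pheromone[5][7] = 1.0                  # a trail deposit to climb toward
        ant_x, ant_y = 2, 2
        for _ in range(8):
            ant_x, ant_y = hill_climb_step(pheromone, ant_x, ant_y)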

    Biologically inspired computational structures and processes for autonomous agents and robots

    Recent years have seen a proliferation of intelligent agent applications: from robots for space exploration to software agents for information filtering and electronic commerce on the Internet. Although the scope of these agent applications has blossomed tremendously since the advent of compact, affordable computing (and the recent emergence of the World Wide Web), the design of such agents for specific applications remains a daunting engineering problem.

    Rather than approach the design of artificial agents from a purely engineering standpoint, this dissertation views animals as biological agents, and considers artificial analogs of biological structures and processes in the design of effective agent behaviors. In particular, it explores behaviors generated by artificial neural structures appropriately shaped by the processes of evolution and spatial learning.

    The first part of this dissertation deals with the evolution of artificial neural controllers for a box-pushing robot task. We show that evolution discovers high-fitness structures using little domain-specific knowledge, even in feedback-impoverished environments. Through a careful analysis of the evolved designs we also show how evolution exploits the environmental constraints and properties to produce designs of superior adaptive value. By modifying the task constraints in controlled ways, we also show the ability of evolution to quickly adapt to these changes and exploit them to obtain significant performance gains. We also use evolution to design the sensory systems of the box-pushing robots, particularly the number, placement, and ranges of their sensors. We find that evolution automatically discards unnecessary sensors, retaining only the ones that appear to significantly affect the performance of the robot. This optimization of design across multiple dimensions (performance, number of sensors, size of neural controller, etc.) is implicitly achieved by the evolutionary algorithm without any external pressure (e.g., a penalty on the use of more sensors or neurocontroller units). When used in the design of robots with limited battery capacities, evolution produces energy-efficient robot designs that use minimal numbers of components and yet perform reasonably well. The performance as well as the complexity of robot designs increase when the robots have access to a spatial learning mechanism that allows them to learn, remember, and navigate to power sources in the environment.

    The second part of this dissertation develops a computational characterization of the hippocampal formation, which is known to play a significant role in animal spatial learning. The model is based on neuroscientific and behavioral data, and learns place maps based on interactions of sensory and dead-reckoning information streams. Using an estimation mechanism known as Kalman filtering, the model explicitly deals with uncertainties in the two information streams, allowing the robot to effectively learn and localize even in the presence of sensing and motion errors. Additionally, the model has mechanisms to handle perceptual aliasing problems (where multiple places in the environment appear sensorily identical), incrementally learn and integrate local place maps, and learn and remember multiple goal locations in the environment. We show a number of properties of this spatial learning model, including computational replication of several behavioral experiments performed with rodents. Not only does this model make significant contributions to robot localization, but it also offers a number of predictions and suggestions that can be validated (or refuted) through systematic neurobiological and behavioral experiments with animals.
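
    The Kalman-filtering idea in the second part can be illustrated with a one-dimensional example that fuses dead-reckoning motion updates with noisy sensory position estimates, weighting each by its uncertainty. The noise parameters and update structure below are illustrative assumptions, not the dissertation's actual hippocampal model.

        # Minimal 1-D Kalman-filter sketch: dead-reckoning predictions are fused
        # with noisy sensory position estimates. Parameter values are illustrative.
        Q = 0.05   # variance added by each dead-reckoning (motion) step
        R = 0.50   # variance of the sensory position estimate

        def predict(mean, var, motion):
            """Dead-reckoning update: shift the estimate, grow its uncertainty."""
            return mean + motion, var + Q

        def correct(mean, var, measurement):
            """Sensory update: blend prediction and measurement by the Kalman gain."""
            gain = var / (var + R)
            return mean + gain * (measurement - mean), (1.0 - gain) * var

        mean, var = 0.0, 1.0
        for motion, sensed in [(1.0, 1.2), (1.0, 1.9), (1.0, 3.1)]:
            mean, var = predict(mean, var, motion)
            mean, var = correct(mean, var, sensed)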

    An Artificial Neural Network Representation for Artificial Organisms

    We introduce an artificial neural network (ANN) representation that supports the evolution of complex behaviors in artificial organisms. The strength and location of each connection in the network is specified by a connection descriptor. The connection descriptors are mapped directly into a bit-string to which a genetic algorithm is applied. We empirically compare this representation to other ANN-based representations in the complex AntFarm task.

    1 Introduction
    The behavior of artificial organisms can be evolved with a genetic algorithm. The function that converts an organism's sensory inputs (gathered from the simulated environment) to motor outputs (behavior) is encoded as a bit-string, and the genetic algorithm is applied to the population of bit-strings. The genetic algorithm evaluates the fitness of each string by translating the string into the corresponding behavior function, and placing the resulting organism in the environment in order to see how "successful" it is (e.g. ..
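
    A connection-descriptor genome of this kind can be pictured as fixed-width fields packed into a bit-string. The sketch below decodes such a string into (source, target, weight) triples; the field widths and weight scaling are assumptions for illustration and may differ from the representation used in the paper.

        # Sketch of decoding a genome bit-string into connection descriptors
        # (source unit, target unit, weight). Field widths and weight scaling
        # are assumptions for illustration only.
        SRC_BITS, DST_BITS, WEIGHT_BITS = 4, 4, 8
        DESC_BITS = SRC_BITS + DST_BITS + WEIGHT_BITS

        def decode(bits):
            """Split a bit-string into (source, target, weight) descriptors."""
            descriptors = []
            for i in range(0, len(bits) - DESC_BITS + 1, DESC_BITS):
                chunk = bits[i:i + DESC_BITS]
                src = int(chunk[:SRC_BITS], 2)
                dst = int(chunk[SRC_BITS:SRC_BITS + DST_BITS], 2)
                raw = int(chunk[SRC_BITS + DST_BITS:], 2)
                weight = (raw / (2 ** WEIGHT_BITS - 1)) * 2.0 - 1.0  # map to [-1, 1]
                descriptors.append((src, dst, weight))
            return descriptors

        genome = "0010" "0101" "11001010" + "0111" "0001" "00011111"
        connections = decode(genome)   # [(2, 5, ~0.58), (7, 1, ~-0.76)]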