52,181 research outputs found

    How nouns and verbs differentially affect the behavior of artificial organisms

    This paper presents an Artificial Life and Neural Network (ALNN) model for the evolution of syntax. The simulation methodology provides a unifying approach to studying the evolution of language and its interaction with other behavioral and neural factors. The model uses an object-manipulation task to simulate the evolution of language based on a simple verb-noun rule. The analysis of the results focuses on the interaction between language and other non-linguistic abilities, and on the neural control of linguistic abilities. The model shows that the beneficial effects of language on non-linguistic behavior are explained by the emergence of distinct internal representation patterns for the processing of verbs and nouns.

    Cultural Learning in a Dynamic Environment: an Analysis of Both Fitness and Diversity in Populations of Neural Network Agents

    Evolutionary learning is a learning model that can be described as the iterative Darwinian process of fitness-based selection and genetic transfer of information, leading to populations of higher fitness. Cultural learning describes the process of information transfer between individuals in a population through non-genetic means. Cultural learning has been simulated by combining genetic algorithms and neural networks in a teacher/pupil scenario, where highly fit individuals are selected as teachers and instruct the next generation. This paper examines the effects of cultural learning on the evolutionary process of a population of neural networks. In particular, it examines the genotypic and phenotypic diversity of a population as well as its fitness. Using these measurements, it is possible to examine the effects of cultural learning on the population's genetic makeup. Furthermore, the paper examines whether cultural learning provides a more robust learning mechanism in the face of environmental changes. Three benchmark tasks have been chosen as the evolutionary tasks for the population: the bit-parity problem, the game of tic-tac-toe, and the game of connect-four. Experiments are conducted with populations employing evolutionary learning alone and populations combining evolutionary and cultural learning in an environment that changes dramatically.
    Keywords: Cultural Learning, Dynamic Environments, Diversity, Multi-Agent Systems, Artificial Life
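The teacher/pupil scheme described above can be sketched in a few lines. This is an illustrative toy only: the paper's network sizes, mutation rates, and teaching mechanism are not given in the abstract, so every parameter below is an assumption. It evolves a small fixed-topology network on the 3-bit parity task, with an optional cultural phase in which offspring imitate the fittest individual (the teacher).

```python
# Illustrative sketch (all parameters assumed): evolutionary learning on
# 3-bit parity, with an optional cultural (teacher/pupil) imitation phase.
import math
import random

random.seed(0)

BITS = 3
CASES = [[(i >> b) & 1 for b in range(BITS)] for i in range(2 ** BITS)]
H = 4                            # hidden units (assumed)
N_W = H * (BITS + 1) + (H + 1)   # weights of a 3-H-1 network, with biases

def forward(w, x):
    k, hidden = 0, []
    for _ in range(H):
        s = w[k + BITS]          # hidden-unit bias
        s += sum(w[k + j] * x[j] for j in range(BITS))
        hidden.append(math.tanh(s))
        k += BITS + 1
    return w[-1] + sum(w[k + j] * hidden[j] for j in range(H))

def fitness(w):
    # fraction of input cases whose parity is classified correctly
    return sum((forward(w, x) > 0) == (sum(x) % 2 == 1) for x in CASES) / len(CASES)

def mutate(w, sigma=0.3):
    return [wi + random.gauss(0, sigma) for wi in w]

def imitate(pupil, teacher, rate=0.5):
    # cultural learning: the pupil moves its weights toward the teacher's
    return [p + rate * (t - p) for p, t in zip(pupil, teacher)]

def evolve(cultural, pop_size=30, gens=40):
    pop = [[random.gauss(0, 1) for _ in range(N_W)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        teacher, elite = pop[0], pop[: pop_size // 2]
        pop = [mutate(random.choice(elite)) for _ in range(pop_size)]
        if cultural:
            pop = [imitate(child, teacher) for child in pop]
    return max(fitness(w) for w in pop)
```

Running `evolve(cultural=True)` and `evolve(cultural=False)` side by side gives the kind of fitness comparison the paper performs, though measuring genotypic and phenotypic diversity would require additional bookkeeping over the population.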

    Digital Ecosystems: Ecosystem-Oriented Architectures

    We view Digital Ecosystems as the digital counterparts of biological ecosystems. Here, we are concerned with the creation of these Digital Ecosystems, exploiting the self-organising properties of biological ecosystems to evolve high-level software applications. We therefore created the Digital Ecosystem, a novel optimisation technique inspired by biological ecosystems, in which the optimisation works at two levels: a first optimisation, the migration of agents distributed in a decentralised peer-to-peer network, operating continuously in time; this process feeds a second optimisation, based on evolutionary computing, that operates locally on single peers and aims to find solutions satisfying locally relevant constraints. The Digital Ecosystem was then measured experimentally through simulations, using measures originating from theoretical ecology to evaluate its likeness to biological ecosystems. This included its responsiveness to requests for applications from the user base, as a measure of ecological succession (ecosystem maturity). Overall, we have advanced the understanding of Digital Ecosystems, creating Ecosystem-Oriented Architectures in which the word ecosystem is more than just a metaphor.
    Comment: 39 pages, 26 figures, journal

    Evolving Neural Networks for the Capture Game


    Endogenous Networks in Random Population Games

    Population learning in dynamic economies has traditionally been studied in over-simplified settings where payoff landscapes are very smooth. Indeed, in these models, all agents play the same bilateral stage-game against any opponent, and stage-game payoffs reflect very simple strategic situations (e.g. coordination). In this paper, we present a preliminary investigation of dynamic population games over `rugged' landscapes, where agents face strong uncertainty about the expected payoffs from bilateral interactions. We propose a simple model in which an individual's payoff from playing a binary action against everyone else is distributed as an i.i.d. U[0,1] random variable. We call this setting a `random population game' and study population adaptation over time when agents can update both actions and partners using deterministic, myopic, best-reply rules. We assume that agents evaluate payoffs associated with networks in which an agent is not linked with everyone else by using simple rules (i.e. statistics) computed on the distributions of payoffs associated with all possible action combinations performed by agents outside the interaction set. We investigate the long-run properties of the system by means of computer simulations. We show that: (i) allowing for endogenous networks implies higher average payoffs compared to "frozen" networks; (ii) the statistics employed to evaluate payoffs strongly affect the efficiency of the system, i.e. whether it converges to a unique steady-state, to multiple steady-states, or not at all; (iii) for some classes of statistics (e.g. MIN or MAX), the likelihood of efficient population learning depends strongly on whether agents are change-averse when discriminating between options delivering the same expected payoff.
    Keywords: Dynamic Population Games, Bounded Rationality, Endogenous Networks, Fitness Landscapes, Evolutionary Environments, Adaptive Expectations
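The core of the random population game is easy to state in code. The sketch below is an illustrative reduction of the setup described above, restricted to fixed (fully connected) interaction and action updating only; the population size, revision protocol, and round-robin order are assumptions, and the endogenous-network and statistics machinery of the paper is omitted.

```python
# Minimal sketch of a 'random population game' with myopic best replies.
# N, the revision order, and the stopping rule are illustrative assumptions.
import random

random.seed(1)

N = 6  # agents; kept small so each agent's full payoff table fits in memory

def make_game():
    # each agent's payoff for every full action profile is an i.i.d. U[0,1]
    # draw, giving a maximally rugged payoff landscape
    profiles = [tuple((p >> j) & 1 for j in range(N)) for p in range(2 ** N)]
    return [{prof: random.random() for prof in profiles} for _ in range(N)]

def best_reply_dynamics(payoff, max_rounds=100):
    state = tuple(random.randint(0, 1) for _ in range(N))
    for _ in range(max_rounds):
        changed = False
        for i in range(N):  # deterministic, myopic best reply, in turn
            alt = list(state)
            alt[i] = 1 - alt[i]
            alt = tuple(alt)
            # change-averse: an agent switches only if strictly better off
            if payoff[i][alt] > payoff[i][state]:
                state, changed = alt, True
        if not changed:
            return state, True   # no agent wants to deviate: a pure equilibrium
    return state, False          # best replies cycled without settling
```

Because payoffs are i.i.d. draws over full action profiles, the dynamics need not converge, which is why the sketch returns a convergence flag rather than assuming a steady state is reached.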

    A molecular approach to complex adaptive systems

    Complex Adaptive Systems (CAS) are dynamical networks of interacting agents which, as a whole, determine the behavior, adaptivity and cognitive ability of the system. CAS are ubiquitous and occur in a variety of natural and artificial systems (e.g., cells, societies, stock markets). To study CAS, Holland proposed employing an agent-based system in which Learning Classifier Systems (LCS) determine the agents' behavior and adaptivity. We argue that LCS are limited for the study of CAS: first, the rule-discovery mechanism is pre-specified and may limit the evolvability of CAS; second, LCS impose a demarcation between messages and rules, whereas operations in CAS are reflexive, e.g., in a cell, an agent (a molecule) may act both as a message (substrate) and as a catalyst (rule). To address these issues, we proposed the Molecular Classifier Systems (MCS.b), a string-based Artificial Chemistry based on Holland's broadcast language. In the MCS.b, no explicit fitness function or rule-discovery mechanism is specified; moreover, no distinction is made between messages and rules. In the context of the ESIGNET project, we employ the MCS.b to study a subclass of CAS: Cell Signaling Networks (CSNs), complex biochemical networks responsible for coordinating cellular activities. As CSNs occur in cells, these networks must replicate themselves prior to cell division. In this paper we present a series of experiments focusing on the self-replication ability of these CAS. The results are counter-intuitive compared with what the literature would suggest. This work highlights the current lack of a theoretical framework for the study of Artificial Chemistries.
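The reflexivity point above, that the same molecule can serve as both rule and substrate, can be made concrete with a toy string chemistry. The following is purely illustrative and is not the MCS.b or Holland's broadcast language: the `"X>Y"` rule syntax and the reaction scheme are invented for this sketch.

```python
# Toy string chemistry (illustration of reflexivity, not the MCS.b itself):
# every molecule is a string, and a molecule of the invented form "X>Y" can
# also act as a rule rewriting X to Y in any other molecule -- including in
# other rule molecules, so there is no message/rule demarcation.
import random

random.seed(2)

def react(catalyst, substrate):
    if len(catalyst) == 3 and catalyst[1] == ">":
        return substrate.replace(catalyst[0], catalyst[2])
    return substrate  # molecules that are not rules leave the substrate as-is

# a small reaction vessel: pick random catalyst/substrate pairs and react
soup = ["a>b", "aa", "b>a", "ab"]
for _ in range(20):
    cat, sub = random.sample(range(len(soup)), 2)
    soup[sub] = react(soup[cat], soup[sub])
```

Note that `react("a>b", "b>a")` yields `"b>b"`: a rule has been rewritten by another rule, exactly the kind of reflexive operation that a fixed message/rule split in an LCS rules out.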

    Open Problems in the Emergence and Evolution of Linguistic Communication: A Road-Map for Research


    Combating catastrophic forgetting with developmental compression

    Generally intelligent agents exhibit successful behavior across problems in several settings. Endemic in approaches to realize such intelligence in machines is catastrophic forgetting: sequential learning corrupts knowledge obtained earlier in the sequence, or tasks antagonistically compete for system resources. Methods for obviating catastrophic forgetting have sought to identify and preserve features of the system necessary to solve one problem when learning to solve another, or to enforce modularity such that minimally overlapping sub-functions contain task-specific knowledge. While successful, both approaches scale poorly because they require larger architectures as the number of training instances grows, causing different parts of the system to specialize for separate subsets of the data. Here we present a method for addressing catastrophic forgetting called developmental compression. It exploits the mild impact of developmental mutations to lessen adverse changes to previously-evolved capabilities and `compresses' specialized neural networks into a generalized one. In the absence of domain knowledge, developmental compression produces systems that avoid overt specialization, alleviating the need to engineer a bespoke system for every task permutation and suggesting better scalability than existing approaches. We validate this method on a robot control problem and hope to extend this approach to other machine learning domains in the future.

    Largenet2: an object-oriented programming library for simulating large adaptive networks

    The largenet2 C++ library provides an infrastructure for simulating large dynamic and adaptive networks with discrete node and link states. The library is released as free software and is available at http://rincedd.github.com/largenet2. Largenet2 is licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported License.
    Comment: 2 pages, 1 figure
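To illustrate what "adaptive networks with discrete node and link states" means, here is a minimal Python sketch of a standard adaptive-network model, an SIS epidemic with rewiring, in which node states and topology co-evolve. This is not the largenet2 API (the library is C++ and its interface differs); the model choice and all rates below are assumptions made for illustration.

```python
# Illustrative adaptive-network sketch (not the largenet2 API): an SIS
# epidemic in which susceptible nodes rewire links away from infected
# neighbours, so discrete node states and topology co-evolve. All rates
# and sizes are assumed values.
import random

random.seed(3)

N, P_INFECT, P_RECOVER, P_REWIRE = 50, 0.1, 0.05, 0.3
state = {v: "I" if random.random() < 0.1 else "S" for v in range(N)}
edges = {tuple(sorted(random.sample(range(N), 2))) for _ in range(150)}

def step():
    global edges
    new_edges = set(edges)
    for u, v in edges:
        for a, b in ((u, v), (v, u)):      # consider both ends of the link
            if state[a] == "S" and state[b] == "I":
                if random.random() < P_REWIRE:
                    # the susceptible end cuts the S-I link and reattaches
                    # to a uniformly chosen other node
                    c = random.choice([w for w in range(N) if w not in (a, b)])
                    new_edges.discard((min(u, v), max(u, v)))
                    new_edges.add((min(a, c), max(a, c)))
                elif random.random() < P_INFECT:
                    state[a] = "I"         # infection spreads along the link
    for v in range(N):
        if state[v] == "I" and random.random() < P_RECOVER:
            state[v] = "S"
    edges = new_edges

for _ in range(10):
    step()
```

A library like largenet2 exists precisely because, at large N, tracking which links connect which state classes (S-S, S-I, I-I) efficiently is the hard part; the naive set-based bookkeeping above does not scale.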