
    Robustness in artificial life

    Finding robust explanations of behaviours in Alife and related fields is made difficult by the lack of any formalised definition of robustness. A concerted effort is needed to develop a framework that allows robust explanations of those behaviours to be developed, along with a discussion of what constitutes a potentially useful definition of behavioural robustness. To this end, we describe two senses of robustness: robustness in systems, and robustness in explanation. We then propose a framework for developing robust explanations using linked sets of models, and describe a programme of research, incorporating both robotics and chemical experiments, designed to investigate robustness in systems.

    Kriging Models That Are Robust With Respect to Simulation Errors

    In the field of the Design and Analysis of Computer Experiments (DACE), meta-models are used to approximate time-consuming simulations. These simulations often contain simulation-model errors in the output variables. In the construction of meta-models, these errors are often ignored. Simulation-model errors may be magnified by the meta-model. Therefore, in this paper, we study the construction of Kriging models that are robust with respect to simulation-model errors. We introduce a robustness criterion to quantify the robustness of a Kriging model. Based on this robustness criterion, two new methods to find robust Kriging models are introduced. We illustrate these methods with the approximation of the six-hump camel back function and a real-life example. Furthermore, we validate the two methods by simulating artificial perturbations. Finally, we consider the influence of the Design of Computer Experiments (DoCE) on the robustness of Kriging models.

    Keywords: Kriging; robustness; simulation-model error
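    As a rough illustration of the idea, not the paper's method: the sketch below fits a minimal Kriging-style interpolator with a squared-exponential correlation, then measures how far its predictions move when the simulation outputs are artificially perturbed, a simple stand-in for a robustness criterion. All function names, the test function, and the parameter values are invented for this sketch.

```python
import numpy as np

def gaussian_kernel(X1, X2, theta=10.0):
    # Squared-exponential (Gaussian) correlation, common in Kriging.
    d = X1[:, None] - X2[None, :]
    return np.exp(-theta * d**2)

def kriging_predict(X, y, Xnew, theta=10.0, nugget=1e-8):
    # Simple (zero-trend) Kriging: solve for weights, predict at new points.
    K = gaussian_kernel(X, X, theta) + nugget * np.eye(len(X))
    w = np.linalg.solve(K, y)
    return gaussian_kernel(Xnew, X, theta) @ w

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * X)          # stand-in for an expensive simulation
Xnew = np.linspace(0.0, 1.0, 50)
base = kriging_predict(X, y, Xnew)

# Robustness check: perturb the simulation outputs and record how much
# the meta-model's predictions shift in response.
shifts = []
for _ in range(100):
    y_pert = y + rng.normal(scale=0.05, size=y.shape)
    shifts.append(np.max(np.abs(kriging_predict(X, y_pert, Xnew) - base)))
print(f"max prediction shift under output noise: {max(shifts):.3f}")
```

    A less robust meta-model (e.g. a smaller nugget or a rougher correlation) would show larger prediction shifts under the same perturbations.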

    Evolving spiking neural networks for temporal pattern recognition in the presence of noise

    Nervous systems of biological organisms use temporal patterns of spikes to encode sensory input, but the mechanisms that underlie the recognition of such patterns are unclear. In the present work, we explore how networks of spiking neurons can be evolved to recognize temporal input patterns without being able to adjust signal conduction delays. We evolve the networks with GReaNs, an artificial life platform that encodes the topology of the network (and the weights of connections) in a fashion inspired by the encoding of gene regulatory networks in biological genomes. The number of computational nodes or connections is not limited in GReaNs, but here we limit the size of the networks to analyze their functioning and the effect of network size on the evolvability of robustness to noise. Our results show that even very small networks of spiking neurons can perform temporal pattern recognition in the presence of input noise.
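    To illustrate the kind of computation involved, and not the evolved networks or the GReaNs encoding themselves: a single leaky integrate-and-fire neuron can distinguish a tight burst of input spikes from the same number of spikes spread out in time, even when the spike times are jittered by noise. All parameter values and patterns below are invented for this sketch.

```python
import numpy as np

def lif_response(spike_times, jitter, rng, tau=20.0, dt=1.0, t_end=100.0, thresh=1.5):
    # Leaky integrate-and-fire neuron driven by a (jittered) input spike
    # pattern through one excitatory synapse; returns output spike count.
    times = np.asarray(spike_times, dtype=float)
    times = times + rng.normal(scale=jitter, size=times.shape)
    v, out = 0.0, 0
    for t in np.arange(0.0, t_end, dt):
        v *= np.exp(-dt / tau)                         # membrane leak
        v += 0.6 * np.sum(np.abs(times - t) < dt / 2)  # input spikes in this bin
        if v >= thresh:                                # threshold crossing
            out += 1
            v = 0.0                                    # reset after firing
    return out

rng = np.random.default_rng(0)
target = [30, 32, 34, 36]      # tight burst: temporal summation crosses threshold
distractor = [10, 35, 60, 85]  # same spike count, spread out in time
print(lif_response(target, jitter=1.0, rng=rng),
      lif_response(distractor, jitter=1.0, rng=rng))
```

    Because the membrane potential decays between inputs, only spikes arriving within roughly one membrane time constant of each other sum to threshold, which is what makes the response selective for the temporal pattern rather than the mere number of spikes.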

    Social learning in a multi-agent system

    In a persistent multi-agent system, it should be possible for new agents to benefit from the accumulated learning of more experienced agents. Parallel reasoning can be applied to the case of newborn animals, and thus the biological literature on social learning may aid in the construction of effective multi-agent systems. Biologists have looked at both the functions of social learning and the mechanisms that enable it. Many researchers have focused on the cognitively complex mechanism of imitation; we will also consider a range of simpler mechanisms that could more easily be implemented in robotic or software agents. Research in artificial life shows that complex global phenomena can arise from simple local rules. Similarly, complex information sharing at the system level may result from quite simple individual learning rules. We demonstrate in simulation that simple mechanisms can outperform imitation in a multi-agent system, and that the effectiveness of any social learning strategy will depend on the agents' environment. Our simple mechanisms have obvious advantages in terms of robustness and design costs.
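    A toy illustration of the contrast the abstract draws, not the authors' simulation: agents that bias their choices toward options where the group has already found payoff (a simple stimulus-enhancement rule) versus purely individual learners. Every name, rule, and parameter value here is invented for the sketch.

```python
import random

def run(strategy, episodes=200, n_actions=5, seed=1):
    # Toy multi-agent foraging: each of 10 agents keeps payoff estimates
    # for n_actions options. "individual" agents learn only from their own
    # trials; "enhanced" agents also get a social hint about where the
    # group's estimates are highest (stimulus enhancement, not imitation).
    rng = random.Random(seed)
    payoffs = [rng.random() for _ in range(n_actions)]   # true option values
    agents = [[0.0] * n_actions for _ in range(10)]
    total = 0.0
    for _ in range(episodes):
        best_seen = max(range(n_actions),
                        key=lambda a: sum(est[a] for est in agents))
        for est in agents:
            if strategy == "enhanced" and rng.random() < 0.5:
                a = best_seen                    # socially biased choice
            else:
                a = rng.randrange(n_actions)     # individual exploration
            r = payoffs[a] + rng.gauss(0, 0.1)   # noisy payoff
            est[a] += 0.2 * (r - est[a])         # simple local learning rule
            total += r
    return total / (episodes * 10)               # mean payoff per choice

print(run("individual"), run("enhanced"))
```

    The point of the sketch is that the social mechanism is barely more complex than the individual one, yet it changes system-level information flow; how much it helps (or hurts) depends on the environment, as the abstract notes.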