
    Fitness landscape of the cellular automata majority problem: View from the Olympus

    Get PDF
    In this paper we study cellular automata (CAs) that perform the computational Majority task. This task is a good example of the phenomenon of emergence in complex systems. We take an interest in the reasons that make this particular fitness landscape a difficult one. The first goal is to study the landscape as such, independently of the actual heuristics used to search the space. A second goal, however, is to understand the features a good search technique for this particular problem space should possess. We statistically quantify in various ways the degree of difficulty of searching this landscape. Due to neutrality, investigations based on sampling techniques over the whole landscape are difficult to conduct, so we explore the landscape from the top. Although it has been proved that no CA can perform the task perfectly, several efficient CAs for this task have been found. Exploiting similarities between these CAs and symmetries in the landscape, we define the Olympus landscape, regarded as the "heavenly home" of the best local optima known (blok). We then measure several properties of this subspace. Although it is easier to find relevant CAs in this subspace than in the overall landscape, there are structural reasons that prevent a searcher from finding overfitted CAs in the Olympus. Finally, we study the dynamics and performance of genetic algorithms on the Olympus in order to confirm our analysis and to find efficient CAs for the Majority problem at low computational cost.
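
    To make the Majority task concrete, the sketch below evaluates a candidate radius-3 CA rule on the density-classification task: fitness is the fraction of random initial configurations that the rule drives to the correct all-0 or all-1 state. The lattice size, number of steps, and number of test configurations are illustrative defaults, not the settings used in the paper.

```python
import random

N, RADIUS, STEPS = 149, 3, 200           # illustrative: a common Majority-task setup
NEIGH = 2 * RADIUS + 1                   # 7-cell neighbourhood -> 2**7 = 128 rule bits

def step(config, rule):
    """Apply one synchronous CA update with periodic boundaries."""
    n = len(config)
    out = []
    for i in range(n):
        idx = 0
        for j in range(-RADIUS, RADIUS + 1):
            idx = (idx << 1) | config[(i + j) % n]
        out.append(rule[idx])
    return out

def classified_correctly(config, rule, steps=STEPS):
    """Run the CA and check whether it settled on the majority state."""
    majority = 1 if sum(config) * 2 > len(config) else 0
    for _ in range(steps):
        config = step(config, rule)
    return all(c == majority for c in config)

def fitness(rule, trials=50):
    """Fraction of random initial configurations classified correctly."""
    random_config = lambda: [random.randint(0, 1) for _ in range(N)]
    return sum(classified_correctly(random_config(), rule) for _ in range(trials)) / trials

if __name__ == "__main__":
    random.seed(0)
    rule = [random.randint(0, 1) for _ in range(2 ** NEIGH)]   # a random 128-bit rule table
    print(f"fitness of a random rule: {fitness(rule):.2f}")
```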

    High-speed detection of emergent market clustering via an unsupervised parallel genetic algorithm

    Full text link
    We implement a master-slave parallel genetic algorithm (PGA) with a bespoke log-likelihood fitness function to identify emergent clusters within price evolutions. We use graphics processing units (GPUs) to implement the PGA and visualise the results using disjoint minimal spanning trees (MSTs). We demonstrate that our GPU PGA, implemented on a commercially available general-purpose GPU, is able to recover stock clusters in sub-second speed, based on a subset of stocks in the South African market. This represents a pragmatic choice for low-cost, scalable parallel computing and is significantly faster than a prototype serial implementation in an optimised C-based fourth-generation programming language, although the results are not directly comparable due to compiler differences. Combined with fast online intraday correlation matrix estimation from high-frequency data for cluster identification, the proposed implementation offers cost-effective, near-real-time risk assessment for financial practitioners. Comment: 10 pages, 5 figures, 4 tables. More thorough discussion of implementation
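
    The following sketch illustrates the master-slave pattern the abstract describes, using CPU worker processes in place of the GPU and a simple within-cluster-correlation score as a stand-in for the paper's bespoke log-likelihood fitness; the data, population size, and mutation scheme are all illustrative.

```python
import numpy as np
from multiprocessing import Pool

N_STOCKS, N_CLUSTERS, POP, GENS = 60, 5, 40, 30
rng = np.random.default_rng(0)

# Toy "correlation matrix": symmetric, with a planted block structure to recover.
blocks = rng.integers(0, N_CLUSTERS, size=N_STOCKS)
CORR = 0.1 * rng.standard_normal((N_STOCKS, N_STOCKS))
CORR += 0.6 * (blocks[:, None] == blocks[None, :])
CORR = np.clip((CORR + CORR.T) / 2, -1, 1)
np.fill_diagonal(CORR, 1.0)

def fitness(assignment):
    """Score a cluster assignment by summed within-cluster correlation
    (a simple placeholder for a likelihood-based objective)."""
    score = 0.0
    for c in range(N_CLUSTERS):
        members = np.flatnonzero(assignment == c)
        if len(members) > 1:
            sub = CORR[np.ix_(members, members)]
            score += (sub.sum() - len(members)) / 2   # sum of off-diagonal pairs
    return score

def mutate(assignment):
    """Reassign one randomly chosen stock to a random cluster."""
    child = assignment.copy()
    child[rng.integers(N_STOCKS)] = rng.integers(N_CLUSTERS)
    return child

if __name__ == "__main__":
    pop = [rng.integers(0, N_CLUSTERS, size=N_STOCKS) for _ in range(POP)]
    with Pool() as workers:                         # "slaves" evaluate fitness in parallel
        for gen in range(GENS):
            scores = workers.map(fitness, pop)      # master scatters individuals, gathers scores
            order = np.argsort(scores)[::-1]
            elite = [pop[i] for i in order[: POP // 2]]
            pop = elite + [mutate(elite[rng.integers(len(elite))])
                           for _ in range(POP - len(elite))]
        print("best within-cluster score:", max(map(fitness, pop)))
```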

    The Emergence of Canalization and Evolvability in an Open-Ended, Interactive Evolutionary System

    Full text link
    Natural evolution has produced a tremendous diversity of functional organisms. Many believe an essential component of this process was the evolution of evolvability, whereby evolution speeds up its ability to innovate by generating a more adaptive pool of offspring. One hypothesized mechanism for evolvability is developmental canalization, wherein certain dimensions of variation become more likely to be traversed and others are prevented from being explored (e.g. offspring tend to have similarly sized legs, and mutations affect the length of both legs, not each leg individually). While ubiquitous in nature, canalization almost never evolves in computational simulations of evolution. Not only does that deprive us of in silico models in which to study the evolution of evolvability, but it also raises the question of which conditions give rise to this form of evolvability. Answering this question would shed light on why such evolvability emerged naturally and could accelerate engineering efforts to harness evolution to solve important engineering challenges. In this paper we reveal a unique system in which canalization did emerge in computational evolution. We document that genomes entrench certain dimensions of variation that were frequently explored during their evolutionary history. The genetic representation of these organisms also evolved to be highly modular and hierarchical, and we show that these organizational properties correlate with increased fitness. Interestingly, the type of computational evolutionary experiment that produced this evolvability was very different from traditional digital evolution in that there was no objective, suggesting that open-ended, divergent evolutionary processes may be necessary for the evolution of evolvability. Comment: SI can be found at: http://www.evolvingai.org/files/SI_0.zi
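
    As a toy illustration of the canalization idea (not the paper's system), the snippet below compares a genome in which a single gene sets the length of both legs with one in which each leg has its own gene: mutation in the canalized encoding never explores the leg-mismatch dimension.

```python
import random
random.seed(1)

def mutate(genome):
    """Perturb one randomly chosen gene by a small Gaussian amount."""
    g = dict(genome)
    key = random.choice(list(g))
    g[key] += random.gauss(0, 0.1)
    return g

canalized = {"leg_length": 1.0}                     # one gene controls both legs
uncanalized = {"left_leg": 1.0, "right_leg": 1.0}   # independent gene per leg

def phenotype(genome):
    """Return (left leg, right leg) lengths implied by the genome."""
    if "leg_length" in genome:
        return genome["leg_length"], genome["leg_length"]
    return genome["left_leg"], genome["right_leg"]

for name, genome in [("canalized", canalized), ("uncanalized", uncanalized)]:
    mismatches = []
    for _ in range(1000):
        left, right = phenotype(mutate(genome))
        mismatches.append(abs(left - right))
    print(f"{name:12s} mean leg mismatch after one mutation: "
          f"{sum(mismatches) / len(mismatches):.3f}")
```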

    Evolutionary-based sparse regression for the experimental identification of Duffing oscillator

    Get PDF
    In this paper, an evolutionary-based sparse regression algorithm is proposed and applied to experimental data collected from a Duffing oscillator setup, as well as to numerical simulation data. Our purpose is to identify the Coulomb friction terms as part of the ordinary differential equation of the system. Correct identification of this nonlinear system using sparse identification depends heavily on selecting the correct form of nonlinearity to include in the function library. Consequently, in this work, evolutionary-based sparse identification replaces the need for user knowledge when constructing the library. Constructing the library with a data-driven evolutionary approach is an effective way to extend the space of nonlinear functions, allowing sparse regression to be applied over an extensive space of functions. The results show that the method provides an effective algorithm for unveiling the physical nature of the Duffing oscillator. In addition, the robustness of the identification algorithm is investigated for various levels of noise in simulation. The proposed method has possible applications to other nonlinear dynamic systems in mechatronics, robotics, and electronics.
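
    A minimal sketch of the sparse-identification step is given below: a hand-built candidate library (including a sign(v) term for Coulomb friction) and sequentially thresholded least squares recover the coefficients of a simulated unforced Duffing oscillator. In the paper the library is constructed by the evolutionary algorithm rather than fixed by hand, and the coefficients here are illustrative.

```python
import numpy as np
rng = np.random.default_rng(0)

# "True" unforced Duffing oscillator with Coulomb friction (illustrative coefficients):
#   a = -0.2 v - 1.0 x - 5.0 x^3 - 0.3 sign(v)
x = rng.uniform(-1, 1, 2000)
v = rng.uniform(-1, 1, 2000)
a = (-0.2 * v - 1.0 * x - 5.0 * x**3 - 0.3 * np.sign(v)
     + 0.01 * rng.standard_normal(2000))            # small measurement noise

# Candidate library of nonlinear terms evaluated on the data.
names = ["x", "v", "x^2", "x*v", "v^2", "x^3", "sign(v)"]
Theta = np.column_stack([x, v, x**2, x * v, v**2, x**3, np.sign(v)])

def stlsq(Theta, y, threshold=0.05, iters=10):
    """Sequentially thresholded least squares: small coefficients are zeroed out."""
    xi = np.linalg.lstsq(Theta, y, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], y, rcond=None)[0]
    return xi

xi = stlsq(Theta, a)
for name, coef in zip(names, xi):
    if coef != 0.0:
        print(f"{name:8s} {coef:+.3f}")   # only the terms that survive thresholding
```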

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Get PDF
    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.
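
    As a minimal illustration of metaheuristic FNN training, the sketch below uses a (1+λ) evolution strategy to search the flattened weight vector of a tiny network on XOR in place of gradient descent; the architecture and hyper-parameters are illustrative and not drawn from the review.

```python
import numpy as np
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([0, 1, 1, 0], float)
H = 4                                    # hidden units
N_W = 2 * H + H + H + 1                  # W1 (2xH), b1 (H), W2 (H), b2 (1)

def forward(w, X):
    """Evaluate the 2-H-1 network encoded by the flat weight vector w."""
    W1 = w[: 2 * H].reshape(2, H)
    b1 = w[2 * H : 3 * H]
    W2 = w[3 * H : 4 * H]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def loss(w):
    return np.mean((forward(w, X) - Y) ** 2)  # mean squared error on XOR

best = rng.standard_normal(N_W)
for gen in range(500):
    offspring = best + 0.3 * rng.standard_normal((20, N_W))   # lambda = 20 mutants
    losses = [loss(o) for o in offspring]
    i = int(np.argmin(losses))
    if losses[i] < loss(best):                                # elitist replacement
        best = offspring[i]

print("final MSE:", round(loss(best), 4), "outputs:", np.round(forward(best, X), 2))
```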

    On linear genetic programming

    Get PDF
    The thesis is about linear genetic programming (LGP), a machine learning approach that evolves computer programs as sequences of imperative instructions. Two fundamental differences to the more common tree-based variant (TGP) may be identified: the graph-based functional structure of linear genetic programs, on the one hand, and the existence of structurally noneffective code, on the other. The two major objectives of this work comprise (1) the development of more advanced methods and variation operators to produce better and more compact program solutions and (2) the analysis of general EA/GP phenomena in linear GP, including intron code, neutral variations, and code growth, among others.
    First, we introduce efficient algorithms for extracting features of the imperative and functional structure of linear genetic programs. In doing so, the detection and elimination of noneffective code at runtime in particular turns out to be a powerful tool for accelerating the time-consuming step of fitness evaluation in GP. Variation operators are discussed systematically for the linear program representation. We demonstrate that so-called effective instruction mutations achieve the best performance in terms of solution quality. These mutations operate only on the (structurally) effective code and restrict the mutation step size to one instruction. One possibility to further improve their performance is to explicitly increase the probability of neutral variations. As a second, more time-efficient alternative, we explicitly control the mutation step size on the effective code (effective step size). Minimum steps do not allow more than one effective instruction to change its effectiveness status; that is, only a single node may be connected to or disconnected from the effective graph component. It is an interesting phenomenon that, to some extent, the effective code already becomes implicitly more robust against destructions over the generations.
    A special concern of this thesis is to convince the reader that there are serious arguments for using a linear representation. In a crossover-based comparison, LGP has been found superior to TGP over a set of benchmark problems. Furthermore, linear solutions turned out to be more compact than tree solutions due to (1) multiple usage of subgraph results and (2) implicit parsimony pressure by structurally noneffective code. The phenomenon of code growth is analyzed for different linear genetic operators. When applying instruction mutations exclusively, almost only neutral variations may be held responsible for the emergence and propagation of intron code. It is noteworthy that linear genetic programs may not grow if all neutral variation effects are rejected and the variation step size is minimum. For the same reasons, effective instruction mutations realize an implicit complexity control in linear GP which reduces a possible negative effect of code growth to a minimum. Another noteworthy result in this context is that program size is strongly increased by crossover, while it is hardly influenced by mutation, even if step sizes are not explicitly restricted.
    Finally, we investigate program teams as one possibility to increase the dimension of genetic programs. It is demonstrated that much more powerful solutions may be found by teams than by individuals. Moreover, the complexity of team solutions remains surprisingly small compared to individual programs. Both are the result of specialization and cooperation of team members.
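
    The detection of structurally noneffective code mentioned above can be done with a single backward pass over the instruction list, as in the sketch below; the register-based instruction format is illustrative.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

# (dest, op, src1, src2): r[dest] = r[src1] op r[src2]; r0 is the output register.
program = [
    (1, "+", 0, 2),   # r1 = r0 + r2
    (2, "*", 1, 1),   # r2 = r1 * r1
    (3, "-", 2, 0),   # r3 = r2 - r0   <- structural intron: r3 never reaches r0
    (0, "+", 2, 1),   # r0 = r2 + r1
]

def effective(program, output_reg=0):
    """Mark structurally effective instructions by a single backward pass."""
    needed = {output_reg}
    keep = []
    for dest, op, s1, s2 in reversed(program):
        if dest in needed:
            keep.append((dest, op, s1, s2))
            needed.discard(dest)          # this write satisfies the later read
            needed.update({s1, s2})       # its sources are now needed instead
    return list(reversed(keep))

def run(program, registers):
    """Interpret the instruction list and return the output register r0."""
    r = list(registers)
    for dest, op, s1, s2 in program:
        r[dest] = OPS[op](r[s1], r[s2])
    return r[0]

inputs = [1.0, 0.0, 2.0, 0.0]
eff = effective(program)
print("effective instructions:", eff)
print("full:", run(program, inputs), "effective only:", run(eff, inputs))
```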

    A Genetic Programming Approach to Geometrical Digital Content Modeling in Web Oriented Applications

    Get PDF
    The paper presents the advantages of using genetic techniques in web-oriented problems. The specific area of genetic programming application that the paper addresses is content modeling. The analyzed digital content is formed through the accumulation of targeted geometrical structured entities that have specific characteristics and behavior. The accumulated digital content is analyzed and specific features are extracted in order to develop an analysis system through the use of genetic programming. An experiment is presented which evolves a model based on specific features of each geometrical structured entity in the digital content base. The results are promising, showing a low error rate that provides fair approximations of the analyzed geometrical structured entities.
    Keywords: Genetic Algorithm, Genetic Programming, Fitness, Geometrical Structured Entities, Analysis

    Symbolic regression of generative network models

    Full text link
    Networks are a powerful abstraction with applicability to a variety of scientific fields. Models explaining their morphology and growth processes permit a wide range of phenomena to be more systematically analysed and understood. At the same time, creating such models is often challenging and requires insights that may be counter-intuitive. Yet there currently exists no general method to arrive at better models. We have developed an approach to automatically detect realistic decentralised network growth models from empirical data, employing a machine learning technique inspired by natural selection and defining a unified formalism to describe such models as computer programs. As the proposed method is completely general and does not assume any pre-existing models, it can be applied "out of the box" to any given network. To validate our approach empirically, we systematically rediscover pre-defined growth laws underlying several canonical network generation models and credible laws for diverse real-world networks. We were able to find programs that are simple enough to lead to an actual understanding of the mechanisms proposed, namely for a simple brain and a social network.
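
    The sketch below illustrates the general idea under simplified assumptions: candidate generators are tiny programs that score how attractive an existing node is to a newcomer, networks grown from each candidate are summarized by a degree statistic, and candidates are ranked by their distance to a target network. The expression set, growth loop, and distance measure are illustrative, not the paper's formalism.

```python
import random
from collections import Counter
random.seed(0)

def grow(weight_fn, n=300, m=2):
    """Grow a network: each new node attaches to m existing nodes drawn with
    probability proportional to weight_fn(degree of the candidate node)."""
    degree = Counter({0: 1, 1: 1})        # start from a single edge (0, 1)
    for new in range(2, n):
        nodes = list(degree)
        weights = [max(weight_fn(degree[u]), 1e-9) for u in nodes]
        targets = set(random.choices(nodes, weights=weights, k=m))
        for t in targets:
            degree[t] += 1
            degree[new] += 1
    return degree

def tail_share(degree, k=10):
    """Fraction of nodes with degree >= k -- a crude summary statistic."""
    return sum(d >= k for d in degree.values()) / len(degree)

# Target: a preferential-attachment network (attachment weight proportional to degree).
target = tail_share(grow(lambda k: k))

# Candidate "programs" over the candidate node's degree k.
candidates = {"k": lambda k: k, "1": lambda k: 1.0, "k*k": lambda k: k * k}
for name, fn in candidates.items():
    score = abs(tail_share(grow(fn)) - target)
    print(f"rule w(k) = {name:4s} distance to target: {score:.3f}")
```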