
    Towards the Evolution of Multi-Layered Neural Networks: A Dynamic Structured Grammatical Evolution Approach

    Current grammar-based NeuroEvolution approaches have several shortcomings. On the one hand, they do not allow the generation of Artificial Neural Networks (ANNs) composed of more than one hidden layer. On the other hand, there is no way to evolve networks with more than one output neuron. To properly evolve ANNs with more than one hidden layer and multiple output nodes, the number of neurons available in previous layers must be known. In this paper we introduce Dynamic Structured Grammatical Evolution (DSGE): a new genotypic representation that overcomes the aforementioned limitations. By enabling the creation of dynamic rules that specify the connection possibilities of each neuron, the methodology enables the evolution of multi-layered ANNs with more than one output neuron. Results on different classification problems show that DSGE evolves effective single- and multi-layered ANNs with a varying number of output neurons.
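
    A minimal sketch of the dynamic-rule idea, assuming a toy representation (the function names and network shapes below are hypothetical, not the paper's grammar): each layer's connection rule is generated on the fly from the size of the previously generated layer, which is exactly the information the abstract says is needed to evolve multi-layered, multi-output networks.

```python
import random

def dynamic_connection_rule(prev_layer_size):
    """Build a grammar rule on the fly: one production per neuron in the
    previous layer, so connections always stay within the valid range.
    (Hypothetical illustration, not the paper's exact grammar.)"""
    return [f"x{i}" for i in range(prev_layer_size)]

def sample_neuron(prev_layer_size, rng):
    """Sample one neuron's genotype: weights over a random subset of the
    previous layer's outputs, chosen via the dynamic rule."""
    inputs = dynamic_connection_rule(prev_layer_size)
    chosen = rng.sample(inputs, k=rng.randint(1, prev_layer_size))
    return {inp: rng.uniform(-1.0, 1.0) for inp in chosen}

def sample_network(layer_sizes, seed=0):
    """Sample a multi-layer ANN genotype layer by layer; each layer's
    rules depend on the size of the layer already generated."""
    rng = random.Random(seed)
    net = []
    for prev, size in zip(layer_sizes, layer_sizes[1:]):
        net.append([sample_neuron(prev, rng) for _ in range(size)])
    return net

# Two hidden layers and three output neurons -- the multi-layer,
# multi-output case the paper targets.
print(sample_network([4, 5, 5, 3]))
```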

    Self-adaptive exploration in evolutionary search

    We address a primary question of computational as well as biological research on evolution: how can an exploration strategy adapt so as to exploit the information gained about the problem at hand? We first introduce an integrated formalism of evolutionary search which provides a unified view of different specific approaches. On this basis we discuss the implications of indirect modeling (via a "genotype-phenotype mapping") for the exploration strategy. Notions such as modularity, pleiotropy, and functional phenotypic complexes are discussed as implications. Then, rigorously reflecting the notion of self-adaptability, we introduce a new definition that captures self-adaptability of exploration: different genotypes that map to the same phenotype may represent (also topologically) different exploration strategies; self-adaptability requires a variation of exploration strategies along such a "neutral space". By this definition, the concept of neutrality becomes a central concern of this paper. Finally, we present examples of these concepts: for a specific grammar-type encoding, we observe a large variability of exploration strategies for a fixed phenotype, and a self-adaptive drift towards short representations with a highly structured exploration strategy that matches the problem's structure.
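
    The definition of self-adaptability via neutral spaces can be illustrated with the classic strategy-parameter encoding from evolution strategies; the sketch below uses that stand-in, not the paper's grammar-type encoding. A step size sigma is phenotypically neutral (fitness never sees it), yet it shapes exploration, so drift along the neutral sigma-dimension adapts the search strategy.

```python
import random

def mutate(ind, rng):
    """Genotype = (x, sigma). Varying sigma is a neutral move that
    reshapes exploration; varying x is the phenotypic move."""
    x, sigma = ind
    sigma *= 2.0 ** rng.choice([-1, 1])   # drift along the neutral space
    x += rng.gauss(0.0, sigma)            # exploration at scale sigma
    return (x, sigma)

def fitness(ind):
    x, _ = ind
    return -(x - 3.0) ** 2                # sigma is invisible to fitness

rng = random.Random(0)
pop = [(0.0, 1.0)] * 20
for gen in range(50):
    # Duplicate parents, mutate all, keep the 20 fittest offspring.
    pop = sorted((mutate(p, rng) for p in pop * 2), key=fitness)[-20:]

best_x, best_sigma = max(pop, key=fitness)
print(f"best x={best_x:.3f}, self-adapted sigma={best_sigma:.3g}")
```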

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain generalized FNNs for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future work to cope with the present information-processing era.
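
    As a concrete instance of one direction such a review covers, here is a minimal sketch of optimizing an FNN's weights with a simple population-based metaheuristic instead of backpropagation; the network shape, task, and hyperparameters are illustrative assumptions, not taken from the article.

```python
import numpy as np

def forward(params, X):
    """Single-hidden-layer FNN: tanh hidden units, linear output."""
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def mse(params, X, y):
    return float(np.mean((forward(params, X) - y) ** 2))

def perturb(params, rng, scale=0.1):
    """Metaheuristic move: add Gaussian noise to every weight array."""
    return [p + rng.normal(0.0, scale, p.shape) for p in params]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))
y = X[:, :1] * X[:, 1:]               # toy target: product of inputs
best = [rng.normal(0, 0.5, s) for s in [(2, 8), (8,), (8, 1), (1,)]]

for step in range(2000):
    # Gradient-free search: sample 8 candidates, keep any improvement.
    cand = min((perturb(best, rng) for _ in range(8)),
               key=lambda p: mse(p, X, y))
    if mse(cand, X, y) < mse(best, X, y):
        best = cand

print("final MSE:", mse(best, X, y))
```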

    A Neurogenetic Algorithm Based on Rational Agents

    Lately, much research has been conducted on the automatic design of artificial neural networks (ADANNs) using evolutionary algorithms, in the so-called neuro-evolutive algorithms (NEAs). Many of the published proposals are not biologically inspired and cannot generate modular, hierarchical, and recurrent neural structures, such as those often found in living beings capable of solving intricate survival problems. Bearing in mind that a nervous system's design and organization is a constructive process carried out by genetic information encoded in DNA, this paper proposes a biologically inspired NEA that evolves ANNs using these ideas as computational design techniques. To do this, we propose a Lindenmayer system with memory that implements the principles of organization, modularity, repetition (multiple use of the same sub-structure), and hierarchy (recursive composition of sub-structures), minimizing the scalability problem of other methods. In our method, the basic neural codification is integrated with a genetic algorithm (GA) that implements the constructive approach found in the evolutionary process, bringing it closer to biological processes. The proposed method is thus a decision-making (DM) process in which the fitness function of the NEA rewards economical artificial neural networks (ANNs) that are easily implemented; in other words, the penalty approach implemented through the fitness function automatically rewards economical ANNs with stronger generalization and extrapolation capacities. Our method was initially tested on a simple, but non-trivial, XOR problem. We also applied it to two other problems of increasing complexity: time-series prediction of the consumer price index, and prediction of the effect of a new drug on breast cancer. In most cases, our NEA outperformed the other methods, delivering the most accurate classification. These superior results are attributed to the improved effectiveness and efficiency of the NEA in the decision-making process. The result is an optimized neural network architecture for solving classification problems.
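
    A toy bracketed L-system in the spirit of this constructive encoding (the rules and symbols are hypothetical, chosen only to show repetition and hierarchy): rewriting rules grow a string, and a stack, acting as the "memory", interprets the brackets while building a network graph.

```python
# Rewriting rules: a neuron spawns a child branch (hierarchy); the same
# rule is reused at every scale (repetition). Brackets are preserved.
RULES = {"N": "N[N]", "[": "[", "]": "]"}

def rewrite(axiom, steps):
    """Apply the L-system rules in parallel for a number of steps."""
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def interpret(string):
    """Turn the string into graph edges. The stack is the 'memory':
    '[' saves the current parent neuron, ']' restores it."""
    edges, stack = [], []
    parent, next_id = 0, 1
    for ch in string:
        if ch == "N":
            edges.append((parent, next_id))
            parent, next_id = next_id, next_id + 1
        elif ch == "[":
            stack.append(parent)
        elif ch == "]":
            parent = stack.pop()
    return edges

# Three rewriting steps yield a small tree-structured network.
print(interpret(rewrite("N", steps=3)))
```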