
    An adaptive and modular framework for evolving deep neural networks

    Santos, F. J. J. B., Gonçalves, I., & Castelli, M. (2023). Neuroevolution with box mutation: An adaptive and modular framework for evolving deep neural networks. Applied Soft Computing, 147(November), 1-15. [110767]. https://doi.org/10.1016/j.asoc.2023.110767

    Funding: This work is funded by national funds through the FCT - Foundation for Science and Technology, I.P., within the scope of the projects CISUC - UID/CEC/00326/2020 and UIDB/04152/2020 - Centro de Investigação em Gestão de Informação (MagIC)/NOVA IMS, and by the European Social Fund through the Regional Operational Program Centro 2020.

    The pursuit of self-evolving neural networks has driven the emerging field of Evolutionary Deep Learning, which combines the strengths of Deep Learning and Evolutionary Computation. This work presents a novel method for evolving deep neural networks by adapting the principles of Geometric Semantic Genetic Programming, a subfield of Genetic Programming, and the Semantic Learning Machine. Our approach seamlessly integrates evolution through natural selection with the optimization power of backpropagation, enabling the incremental growth of a network's neurons across generations. By evolving neural networks that achieve nearly 89% accuracy on the CIFAR-10 dataset with relatively few parameters, our method demonstrates remarkable efficiency, evolving in GPU minutes rather than the field standard of GPU days.
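    As a rough illustration of the incremental-growth idea described in this abstract, the sketch below runs a minimal mutate-and-select loop in which each mutation appends one hidden neuron. It is an assumed simplification: the paper's actual box mutation and Semantic Learning Machine operators constrain where new weights attach and are combined with backpropagation, none of which is modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(net, X):
    """Evaluate a grown network: tanh hidden layer, linear readout."""
    W_in, w_out = net                       # W_in: (n_hidden, n_features)
    return np.tanh(X @ W_in.T) @ w_out

def mutate_add_neuron(net, n_features, scale=0.1):
    """Growth mutation: append one hidden neuron with small random weights."""
    W_in, w_out = net
    W_in = np.vstack([W_in, scale * rng.standard_normal(n_features)])
    w_out = np.append(w_out, scale * rng.standard_normal())
    return W_in, w_out

def evolve(X, y, generations=200):
    n = X.shape[1]
    net = (0.1 * rng.standard_normal((1, n)), 0.1 * rng.standard_normal(1))
    best_err = np.mean((forward(net, X) - y) ** 2)
    for _ in range(generations):
        child = mutate_add_neuron(net, n)
        err = np.mean((forward(child, X) - y) ** 2)
        if err < best_err:                  # natural selection: keep improvements
            net, best_err = child, err
    return net, best_err

# toy regression target, purely illustrative
X = rng.standard_normal((128, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
net, err = evolve(X, y)
print(f"hidden units: {net[0].shape[0]}, MSE: {err:.4f}")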

    Integrating Evolutionary Computation with Neural Networks

    There is tremendous interest in the development of evolutionary computation techniques, as they are well suited to optimizing functions containing a large number of variables. This paper presents a brief review of evolutionary computing techniques. It also briefly discusses the hybridization of evolutionary computation and neural networks, and presents a solution to a classical problem using combined neural and evolutionary computing techniques.
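    As a concrete example of such hybridization, the sketch below evolves the weights of a tiny feedforward network with a genetic algorithm. XOR stands in for the unnamed "classical problem", and the population size, truncation selection, and mutation scale are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR as an illustrative "classical problem"
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def predict(w, X):
    """2-2-1 feedforward net, weights packed in a flat vector of length 9."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)    # higher is better

pop = rng.standard_normal((50, 9))
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]      # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(9) < 0.5               # uniform crossover
        children.append(np.where(mask, a, b) + 0.1 * rng.standard_normal(9))
    pop = np.array(children)

best = max(pop, key=fitness)
print(np.round(predict(best, X), 2))             # should approach [0, 1, 1, 0]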

    Forcing neurocontrollers to exploit sensory symmetry through hard-wired modularity in the game of Cellz

    Several attempts have been made in the past to construct encoding schemes that allow modularity to emerge in evolving systems, but success has been limited. We believe that in order to create successful and scalable encodings for emergent modularity, we first need to explore the benefits of different types of modularity by hard-wiring them into evolvable systems. In this paper we explore different ways of exploiting the sensory symmetry inherent in the agent in the simple game Cellz by evolving symmetrically identical modules. It is concluded that significant increases in both speed of evolution and final fitness can be achieved relative to monolithic controllers. Furthermore, we show that a simple function-approximation task that exhibits sensory symmetry can be used as a quick approximate measure of an encoding scheme's utility for the more complex game-playing task.
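    A minimal sketch of the hard-wired symmetry idea, assuming a Cellz-like agent whose sensors are grouped into sectors: one module's weights are reused for every sector, so the evolvable genome size is independent of the number of sectors. The sector count, module size, and summed force readout are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

class SymmetricController:
    """One evolvable module, replicated across all sensor sectors."""

    def __init__(self, n_sectors=8, n_inputs=3, n_hidden=4):
        self.n_sectors = n_sectors
        self.W1 = 0.5 * rng.standard_normal((n_inputs, n_hidden))
        self.W2 = 0.5 * rng.standard_normal((n_hidden, 2))  # 2-D force output

    def act(self, sensors):
        """sensors: (n_sectors, n_inputs) array, one row per sector."""
        force = np.zeros(2)
        for s in range(self.n_sectors):
            h = np.tanh(sensors[s] @ self.W1)   # identical module per sector
            force += h @ self.W2
        return force

    def mutated(self, sigma=0.05):
        """Gaussian weight mutation; only the shared module is perturbed."""
        child = SymmetricController.__new__(SymmetricController)
        child.n_sectors = self.n_sectors
        child.W1 = self.W1 + sigma * rng.standard_normal(self.W1.shape)
        child.W2 = self.W2 + sigma * rng.standard_normal(self.W2.shape)
        return child

ctrl = SymmetricController()
print(ctrl.act(rng.standard_normal((8, 3))))    # a 2-D action vector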

    Feature selection for modular GA-based classification

    Genetic algorithms (GAs) have been used as conventional methods for classifiers to adaptively evolve solutions to classification problems. Feature selection plays an important role in finding relevant features for classification. In this paper, feature selection is explored with modular GA-based classification. A new feature selection technique, the Relative Importance Factor (RIF), is proposed to find less relevant features in the input domain of each class module. By removing these features, the aim is to reduce both the classification error and the dimensionality of the classification problem. Benchmark classification data sets are used to evaluate the proposed approach. The experimental results show that RIF can be used to find less relevant features and helps achieve lower classification error with a reduced feature-space dimension.
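    The abstract does not reproduce the RIF formula, so the sketch below substitutes a generic standardized mean-difference score as an illustrative per-class relevance measure. It shows only the overall workflow of scoring features per class module and pruning the low-scoring ones; the score itself and the keep ratio are assumptions.

```python
import numpy as np

def per_class_relevance(X, y):
    """Illustrative stand-in for per-class feature relevance (NOT the
    paper's RIF): rank features per class by a standardized mean
    difference, so low scorers become candidates for removal from
    that class module."""
    scores = {}
    for c in np.unique(y):
        in_c, out_c = X[y == c], X[y != c]
        sep = np.abs(in_c.mean(axis=0) - out_c.mean(axis=0))
        scores[c] = sep / (X.std(axis=0) + 1e-9)
    return scores

def prune_features(scores, c, keep_ratio=0.75):
    """Keep the top fraction of features for class module c."""
    s = scores[c]
    k = max(1, int(keep_ratio * s.size))
    return np.argsort(s)[-k:]               # indices of retained features

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # only features 0 and 3 matter
scores = per_class_relevance(X, y)
print(sorted(prune_features(scores, 1, keep_ratio=0.3)))  # likely [0, 3, ...]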

    Combating catastrophic forgetting with developmental compression

    Generally intelligent agents exhibit successful behavior across problems in several settings. Endemic in approaches to realizing such intelligence in machines is catastrophic forgetting: sequential learning corrupts knowledge obtained earlier in the sequence, or tasks antagonistically compete for system resources. Methods for obviating catastrophic forgetting have sought either to identify and preserve features of the system necessary to solve one problem when learning to solve another, or to enforce modularity such that minimally overlapping sub-functions contain task-specific knowledge. While successful, both approaches scale poorly because they require larger architectures as the number of training instances grows, causing different parts of the system to specialize for separate subsets of the data. Here we present a method for addressing catastrophic forgetting called developmental compression. It exploits the mild impact of developmental mutations to lessen adverse changes to previously evolved capabilities and 'compresses' specialized neural networks into a generalized one. In the absence of domain knowledge, developmental compression produces systems that avoid overt specialization, alleviating the need to engineer a bespoke system for every task permutation and suggesting better scalability than existing approaches. We validate this method on a robot control problem and hope to extend the approach to other machine learning domains in the future.
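    The abstract leaves the compression operator unspecified, so the toy sketch below is one assumed reading: merge per-task weight vectors wherever they already agree, and flag contested weights for further (developmental) adjustment. It illustrates the goal of folding specialists into a generalist, not the authors' algorithm; the tolerance and merging rule are invented for the example.

```python
import numpy as np

def compress(specialists, tol=0.1):
    """Toy reading of 'compressing' specialists into one generalist:
    weights on which the per-task vectors agree to within tol are
    treated as shared; disagreeing weights are averaged and flagged
    so later mutations can keep nudging them together."""
    W = np.stack(specialists)               # (n_tasks, n_weights)
    spread = W.max(axis=0) - W.min(axis=0)
    merged = W.mean(axis=0)
    unresolved = spread > tol               # weights the tasks still contest
    return merged, unresolved

rng = np.random.default_rng(4)
w_task_a = rng.standard_normal(16)
w_task_b = w_task_a + np.where(rng.random(16) < 0.25,
                               rng.standard_normal(16), 0.0)
generalist, conflict = compress([w_task_a, w_task_b])
print(f"{conflict.sum()} of {conflict.size} weights still task-specific")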
