18 research outputs found

    Unsupervised Feature Learning through Divergent Discriminative Feature Accumulation

    Unlike unsupervised approaches such as autoencoders, which learn to reconstruct their inputs, this paper introduces an alternative approach to unsupervised feature learning called divergent discriminative feature accumulation (DDFA), which instead continually accumulates features that make novel discriminations among the training set. DDFA features are thus inherently discriminative from the start, even though they are trained without knowledge of the ultimate classification problem. Interestingly, DDFA also continues to add new features indefinitely (so it does not depend on a hidden layer size), is not based on minimizing error, and is inherently divergent instead of convergent, thereby providing a unique direction of research for unsupervised feature learning. In this paper the quality of its learned features is demonstrated on the MNIST dataset, where its performance confirms that DDFA is indeed a viable technique for learning useful features. Comment: Corrected citation formatting.
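    The abstract describes the accumulation loop only at a high level; below is a minimal sketch of what such a loop might look like, assuming a feature's "behavior" is its binary response pattern over a sample of inputs and novelty is distance to the nearest accumulated behaviors. The candidate generator (random here, where the paper evolves candidates), the threshold, and all names are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 784))   # stand-in for a sample of MNIST vectors

    archive = []     # behaviors of accumulated features
    features = []    # weight vectors of accumulated features

    def behavior(w):
        """Binary discrimination pattern of feature w across the sample."""
        return (X @ w > 0).astype(np.float32)

    def novelty(b, archive, k=5):
        """Mean distance to the k nearest behaviors accumulated so far."""
        if not archive:
            return np.inf
        dists = sorted(np.linalg.norm(b - a) for a in archive)
        return float(np.mean(dists[:k]))

    THRESHOLD = 4.0   # assumed novelty cutoff
    for _ in range(1000):
        w = rng.standard_normal(784)      # candidate feature (the paper evolves these)
        b = behavior(w)
        if novelty(b, archive) > THRESHOLD:
            archive.append(b)             # accumulate: no fixed layer size,
            features.append(w)            # no error minimization
    ```

    Because acceptance depends only on making a novel discrimination, the feature set grows open-endedly, which is the divergent behavior the abstract emphasizes.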

    The Evolution of Neural Network-Based Chart Patterns: A Preliminary Study

    A neural network-based chart pattern represents adaptive parametric features, including non-linear transformations, and a template that can be applied in the feature space. The search for neural network-based chart patterns has remained unexplored despite their potential expressiveness. In this paper, we formulate a general chart pattern search problem to enable cross-representational quantitative comparison of various search schemes. We suggest a HyperNEAT framework that applies state-of-the-art deep neural network techniques to find attractive neural network-based chart patterns; these techniques enable fast evaluation and search of robust patterns, as well as a performance gain. The proposed framework successfully found attractive patterns on the Korean stock market. We compared newly found patterns with those found by different search schemes, showing that the proposed approach has potential. Comment: 8 pages, in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2017), Berlin, Germany.
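    To make "a template that can be applied in the feature space" concrete, the sketch below scores sliding windows of a price series with a small fixed network and flags high-scoring windows as pattern occurrences. The architecture, the per-window normalization, and the 0.9 threshold are assumptions for illustration, not the evolved networks from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    prices = np.cumsum(rng.standard_normal(500)) + 100.0  # synthetic price series
    WIN = 20                                              # window length

    W1, b1 = rng.standard_normal((WIN, 8)), np.zeros(8)   # stand-in for evolved weights
    W2, b2 = rng.standard_normal(8), 0.0

    def score(window):
        """Network output in (0, 1): how strongly the window matches the pattern."""
        x = (window - window.mean()) / (window.std() + 1e-8)   # scale-invariant input
        h = np.tanh(x @ W1 + b1)                               # non-linear features
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))            # template match score

    matches = [t for t in range(len(prices) - WIN)
               if score(prices[t:t + WIN]) > 0.9]              # assumed cutoff
    ```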

    Unshackling evolution: evolving soft robots with multiple materials and a powerful generative encoding

    In 1994 Karl Sims showed that computational evolution can produce interesting morphologies that resemble natural organisms. Despite nearly two decades of work since, evolved morphologies are not obviously more complex or natural, and the field seems to have hit a complexity ceiling. One hypothesis for the lack of increased complexity is that most work, including Sims', evolves morphologies composed of rigid elements, such as solid cubes and cylinders, limiting the design space. A second hypothesis is that the encodings of previous work have been overly regular, not allowing complex regularities with variation. Here we test both hypotheses by evolving soft robots with multiple materials and a powerful generative encoding called a compositional pattern-producing network (CPPN). Robots are selected for locomotion speed. We find that CPPNs evolve faster robots than a direct encoding and that the CPPN morphologies appear more natural. We also find that locomotion performance increases as more materials are added, that diversity of form and behavior can be increased with different cost functions without stifling performance, and that organisms can be evolved at different levels of resolution. These findings suggest the ability of generative soft-voxel systems to scale towards evolving a large diversity of complex, natural, multi-material creatures. Our results suggest that future work that combines the evolution of CPPN-encoded soft, multi-material robots with modern diversity-encouraging techniques could finally enable the creation of creatures far more complex and interesting than those produced by Sims nearly twenty years ago.
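    As a rough illustration of how a CPPN encodes a soft robot, the sketch below queries a network once per voxel coordinate; its outputs decide whether a voxel exists and which material fills it. The tiny closed-form "network" stands in for an evolved CPPN, and the four-material mapping in the comment is an assumption about the setup.

    ```python
    import numpy as np

    def cppn(x, y, z, d):
        """Stand-in CPPN: inputs are voxel coordinates plus distance from center."""
        presence = np.sin(3 * x) * np.cos(3 * y) + 0.5 - d   # output 1: voxel present?
        material = np.tanh(x * y + z)                        # output 2: which material?
        return presence, material

    N = 10                                        # 10x10x10 voxel grid
    grid = np.zeros((N, N, N), dtype=int)         # 0 = empty
    for i in range(N):
        for j in range(N):
            for k in range(N):
                x, y, z = (np.array([i, j, k]) / (N - 1)) * 2 - 1   # map to [-1, 1]
                d = np.sqrt(x**2 + y**2 + z**2)
                presence, material = cppn(x, y, z, d)
                if presence > 0:
                    # map the second output to one of several materials,
                    # e.g. soft passive, stiff passive, two muscle phases
                    grid[i, j, k] = 1 + int((material + 1) / 2 * 3.999)
    ```

    Because nearby coordinates receive similar network outputs, the encoding naturally produces the regular, organic-looking material patterns the abstract describes.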

    Does Aligning Phenotypic and Genotypic Modularity Improve the Evolution of Neural Networks?

    Many argue that to evolve artificial intelligence that rivals that of natural animals, we need to evolve neural networks that are structurally organized, in that they exhibit modularity, regularity, and hierarchy. It was recently shown that a cost for network connections, which encourages the evolution of modularity, can be combined with an indirect encoding, which encourages the evolution of regularity, to evolve networks that are both modular and regular. However, the bias towards regularity from indirect encodings may prevent evolution from independently optimizing different modules to perform different functions, unless modularity in the phenotype is aligned with modularity in the genotype. We test this hypothesis on two multi-modal problems (a pattern recognition task and a robotics task) that each require different phenotypic modules. In general, we find that performance is improved only when genotypic and phenotypic modularity are encouraged simultaneously, though the role of alignment remains unclear. In addition, intuitive manual decompositions fail to provide the performance benefits of automatic methods on the more challenging robotics problem, emphasizing the importance of automatic, rather than manual, decomposition methods. These results suggest that encouraging modularity in both the genotype and the phenotype is an important step towards solving large-scale multi-modal problems, but they also indicate that more research is required before we can evolve structurally organized networks to solve tasks that require multiple, different neural modules.
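    For concreteness, the connection cost mentioned above can be sketched as follows. The cited work treats performance and cost as separate objectives under Pareto-based selection, so the weighted sum here is a simplifying assumption, and both function names are hypothetical.

    ```python
    import numpy as np

    def connection_cost(adjacency):
        """Number of connections in a network's adjacency matrix; variants also
        weight each connection by its wiring length."""
        return int(np.count_nonzero(adjacency))

    def combined_score(task_performance, adjacency, alpha=0.01):
        """Simplified scalarization of the performance/cost trade-off (the
        original work uses true multi-objective selection instead)."""
        return task_performance - alpha * connection_cost(adjacency)
    ```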

    Guiding Neuroevolution with Structural Objectives

    The structure and performance of neural networks are intimately connected, and by use of evolutionary algorithms, neural network structures optimally adapted to a given task can be explored. Guiding such neuroevolution with additional objectives related to network structure has been shown to improve performance in some cases, especially when modular neural networks are beneficial. However, apart from objectives aiming to make networks more modular, such structural objectives have not been widely explored. We propose two new structural objectives and test their ability to guide evolving neural networks on two problems that can benefit from decomposition into subtasks. The first structural objective guides evolution to align neural networks with a user-recommended decomposition pattern. Intuitively, this should be a powerful guiding target for problems where human users can easily identify a structure. The second structural objective guides evolution towards a population with a high diversity of decomposition patterns. This results in the exploration of many different ways to decompose a problem, allowing evolution to find good decompositions faster. Tests on our target problems reveal that both methods perform well on a problem with a very clear and decomposable structure. However, on a problem where the optimal decomposition is less obvious, the structural diversity objective outcompetes the other structural objectives, and this technique can even increase performance on problems without any decomposable structure at all.
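    A hedged sketch of what the structural diversity objective could look like: each network's decomposition is summarized as a signature vector, and diversity is the distance to the nearest signatures in the population, in the spirit of novelty search. The signature choice and all names are assumptions, not the paper's definitions.

    ```python
    import numpy as np

    def decomposition_signature(module_assignments, n_modules):
        """Fraction of neurons assigned to each module: a crude summary of how
        a network decomposes the task."""
        counts = np.bincount(module_assignments, minlength=n_modules)
        return counts / counts.sum()

    def structural_diversity(signature, population_signatures, k=3):
        """Mean distance to the k nearest decomposition signatures in the
        population; higher means a rarer decomposition."""
        if not population_signatures:
            return 0.0
        dists = sorted(np.linalg.norm(signature - s) for s in population_signatures)
        return float(np.mean(dists[:k]))
    ```

    Maximized alongside task fitness, a score like this pushes the population to try many decompositions at once rather than converging on the first workable one.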

    A novel generative encoding for evolving modular, regular and scalable networks

    In this paper we introduce the Developmental Symbolic Encoding (DSE), a new generative encoding for evolving networks (e.g. neural or boolean). DSE combines elements of two powerful generative encodings, Cellular Encoding and HyperNEAT, in order to evolve networks that are modular, regular, scale-free, and scalable. Generating networks with these properties is important because they can enhance performance and evolvability. We test DSE's ability to generate scale-free and modular networks by explicitly rewarding these properties and seeing whether evolution can produce networks that possess them. We compare the networks DSE evolves to those of HyperNEAT. The results show that both encodings can produce scale-free networks, although DSE performs slightly, but significantly, better on this objective. DSE networks are far more modular than HyperNEAT networks. Both encodings produce regular networks. We further demonstrate that individual DSE genomes during development can scale up a network pattern to accommodate different numbers of inputs. We also compare DSE to HyperNEAT on a pattern recognition problem. DSE significantly outperforms HyperNEAT, suggesting that its potential lies not just in the properties of the networks it produces, but also in its ability to compete with leading encodings at solving challenging problems. These preliminary results imply that DSE is an interesting new encoding worthy of additional study. The results also raise questions about which network properties are more likely to be produced by different types of generative encodings.
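    The experiments reward modularity and scale-freeness explicitly; below is a sketch of how such properties might be scored with networkx. The paper's exact metrics may differ, so treat both functions as illustrative assumptions.

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities, modularity

    def modularity_score(G):
        """Newman modularity Q of a greedy best partition of the network."""
        communities = greedy_modularity_communities(G)
        return modularity(G, communities)

    def high_degree_tail(G):
        """Crude scale-free proxy: fraction of nodes whose degree exceeds twice
        the mean (rigorous tests fit a power law to the degree distribution)."""
        degrees = [d for _, d in G.degree()]
        mean = sum(degrees) / len(degrees)
        return sum(d > 2 * mean for d in degrees) / len(degrees)
    ```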

    Training Neural Networks Through the Integration of Evolution and Gradient Descent

    Neural networks have achieved widespread adoption due to both their applicability to a wide range of problems and their success relative to other machine learning algorithms. The training of neural networks is achieved through any of several paradigms, most prominently gradient-based approaches (including deep learning), but also through up-and-coming approaches like neuroevolution. However, while both of these neural network training paradigms have seen major improvements over the past decade, little work has been invested in developing algorithms that incorporate the advances from both deep learning and neuroevolution. This dissertation introduces two new algorithms that are steps towards the integration of gradient descent and neuroevolution for training neural networks. The first is (1) the Limited Evaluation Evolutionary Algorithm (LEEA), which implements a novel form of evolution where individuals are partially evaluated, allowing rapid learning and enabling the evolutionary algorithm to behave more like gradient descent. This conception provides a critical stepping stone to future algorithms that more tightly couple evolutionary and gradient descent components. The second major algorithm is (2) Divergent Discriminative Feature Accumulation (DDFA), which combines a neuroevolution phase, where features are collected in an unsupervised manner, with a gradient descent phase for fine-tuning the neural network weights. The neuroevolution phase of DDFA utilizes an indirect encoding and novelty search, which are sophisticated neuroevolution components rarely incorporated into gradient descent-based systems. Further contributions of this work that build on DDFA include (3) an empirical analysis to identify an effective distance function for novelty search in high dimensions and (4) the extension of DDFA for the purpose of discovering convolutional features. The results of these DDFA experiments together show that DDFA discovers features that are effective as a starting point for gradient descent, with significant improvement over gradient descent alone. Additionally, the method of collecting features in an unsupervised manner allows DDFA to be applied to domains with abundant unlabeled data and relatively sparse labeled data. This ability is highlighted in the STL-10 domain, where DDFA is shown to make effective use of unlabeled data.
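    A minimal sketch of the LEEA idea of partial evaluation follows, assuming a toy linear model and a simple fitness-inheritance blend; the population size, decay factor, batch size, and mutation scheme are illustrative assumptions rather than the dissertation's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X, y = rng.standard_normal((10_000, 16)), rng.integers(0, 2, 10_000)

    def batch_fitness(w, idx):
        """Accuracy of a linear model on one mini-batch (stand-in for a network)."""
        preds = (X[idx] @ w > 0).astype(int)
        return float((preds == y[idx]).mean())

    POP, DECAY, BATCH = 50, 0.8, 64
    population = [rng.standard_normal(16) for _ in range(POP)]
    fitness = np.zeros(POP)

    for gen in range(100):
        idx = rng.choice(len(X), BATCH, replace=False)   # new mini-batch per generation
        for i, w in enumerate(population):
            # partial evaluation: inherited fitness decays, batch score blends in
            fitness[i] = DECAY * fitness[i] + (1 - DECAY) * batch_fitness(w, idx)
        order = np.argsort(fitness)[::-1]
        parents = [population[i] for i in order[:POP // 2]]
        children = [p + 0.1 * rng.standard_normal(16) for p in parents]
        population = parents + children
        fitness = np.concatenate([fitness[order[:POP // 2]],
                                  fitness[order[:POP // 2]]])  # children inherit fitness
    ```

    Because each generation touches only a mini-batch, the per-generation cost is closer to that of a gradient-descent step than to a full evolutionary evaluation, which is what lets the algorithm "behave more like gradient descent."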