
    Semantic variation operators for multidimensional genetic programming

    Multidimensional genetic programming represents candidate solutions as sets of programs, and thereby provides an interesting framework for exploiting building block identification. Towards this goal, we investigate the use of machine learning as a way to bias which components of programs are promoted, and propose two semantic operators to choose where useful building blocks are placed during crossover. A forward stagewise crossover operator we propose leads to significant improvements on a set of regression problems, and produces state-of-the-art results in a large benchmark study. We discuss this architecture and others in terms of their propensity for allowing heuristic search to utilize information during the evolutionary process. Finally, we look at the collinearity and complexity of the data representations that result from these architectures, with a view towards disentangling factors of variation in application. Comment: 9 pages, 8 figures, GECCO 201
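
    To make the forward stagewise idea concrete, here is a minimal Python sketch (not the paper's exact operator): it ranks candidate building blocks, i.e. the output vectors of programs, by how strongly forward stagewise regression steps toward them. The semantics matrix `S`, the step size, and the ranking heuristic are all illustrative assumptions.

```python
import numpy as np

def forward_stagewise_ranking(semantics, y, steps=100, eps=0.01):
    """Rank candidate building blocks (columns of `semantics`, one per
    program) by how much weight forward stagewise regression gives them.

    semantics : (n_samples, n_programs) matrix of program outputs
    y         : (n_samples,) regression target
    """
    X = (semantics - semantics.mean(0)) / (semantics.std(0) + 1e-12)
    residual = y - y.mean()
    weights = np.zeros(X.shape[1])
    for _ in range(steps):
        corr = X.T @ residual                 # correlation with the residual
        j = np.argmax(np.abs(corr))           # most useful building block now
        weights[j] += eps * np.sign(corr[j])  # small step in its direction
        residual -= eps * np.sign(corr[j]) * X[:, j]
    return np.argsort(-np.abs(weights))       # best building blocks first

# toy usage: 5 random "programs", target depends mostly on program 2
rng = np.random.default_rng(0)
S = rng.normal(size=(50, 5))
y = 3 * S[:, 2] + 0.1 * rng.normal(size=50)
print(forward_stagewise_ranking(S, y)[:2])    # program 2 should rank first
```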

    Unveiling evolutionary algorithm representation with DU maps

    Evolutionary algorithms (EAs) have proven to be effective in tackling problems in many different domains. However, users are often required to spend a significant amount of effort in fine-tuning the EA parameters in order to make the algorithm work. In principle, visualization tools may be of great help in this laborious task, but current visualization tools are either EA-specific, and hence hardly available to all users, or too general to convey detailed information. In this work, we study the Diversity and Usage map (DU map), a compact visualization for analyzing a key component of every EA, the representation of solutions. In a single heat map, the DU map visualizes, for entire runs, how diverse the genotype is across the population and to which degree each gene in the genotype contributes to the solution. We demonstrate the generality of the DU map concept by applying it to six EAs that use different representations (bit and integer strings, trees, ensembles of trees, and neural networks). We present the results of an online user study on the usability of the DU map, which confirm the suitability of the proposed tool and provide important insights on our design choices. By providing a visualization tool that can be easily tailored by specifying the diversity (D) and usage (U) functions, the DU map aims at being a powerful analysis tool for EA practitioners, making EAs more transparent and hence lowering the barrier to their use.
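
    As a rough illustration (not the authors' implementation), the sketch below builds a DU-map-like image for a bit-string EA: per-gene diversity is the bitwise entropy across the population, per-gene usage is averaged over individuals, and the two are combined into a single heat map. The `history` and `usage` arrays and the RGB channel encoding are assumptions made for the sketch.

```python
import numpy as np
import matplotlib.pyplot as plt

def du_map(history, usage):
    """Render a DU-map-like image from a run of a bit-string EA.

    history : (n_gens, pop_size, n_genes) array of genotypes (0/1)
    usage   : (n_gens, pop_size, n_genes) array, 1 where a gene
              contributes to its individual's phenotype
    """
    p = history.mean(axis=1)                   # per-gene frequency of ones
    diversity = -(p * np.log2(p + 1e-12)       # bitwise entropy, in [0, 1]
                  + (1 - p) * np.log2(1 - p + 1e-12))
    mean_usage = usage.mean(axis=1)            # in [0, 1]
    img = np.stack([diversity, mean_usage, np.zeros_like(diversity)], axis=-1)
    plt.imshow(img, aspect="auto", origin="lower")
    plt.xlabel("gene"); plt.ylabel("generation")
    plt.show()

# toy run: diversity collapses over generations, usage stays random
rng = np.random.default_rng(1)
gens, pop, genes = 60, 30, 40
hist = (rng.random((gens, pop, genes))
        < np.linspace(0.5, 0.02, gens)[:, None, None]).astype(float)
use = (rng.random((gens, pop, genes)) < 0.7).astype(float)
du_map(hist, use)
```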

    Digital Ecosystems: Ecosystem-Oriented Architectures

    We view Digital Ecosystems as the digital counterparts of biological ecosystems. Here, we are concerned with the creation of these Digital Ecosystems, exploiting the self-organising properties of biological ecosystems to evolve high-level software applications. Therefore, we created the Digital Ecosystem, a novel optimisation technique inspired by biological ecosystems, in which the optimisation works at two levels: a first level, the migration of agents distributed in a decentralised peer-to-peer network, operating continuously in time; this process feeds a second level, based on evolutionary computing, that operates locally on single peers and aims at finding solutions that satisfy locally relevant constraints. The Digital Ecosystem was then measured experimentally through simulations, with measures originating from theoretical ecology, evaluating its likeness to biological ecosystems. This included its responsiveness to requests for applications from the user base, as a measure of ecological succession (ecosystem maturity). Overall, we have advanced the understanding of Digital Ecosystems, creating Ecosystem-Oriented Architectures where the word ecosystem is more than just a metaphor. Comment: 39 pages, 26 figures, journal
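
    A toy rendering of the two-level idea, under heavy simplifying assumptions (a ring of peers instead of a real decentralised peer-to-peer network, and one-bit-flip hill climbing instead of the paper's evolutionary machinery):

```python
import random

random.seed(42)

# two-level optimisation sketch: each peer evolves bit strings locally
# (level 2) while agents migrate between peers over a ring network (level 1)
N_PEERS, POP, GENES = 8, 10, 20
fitness = lambda g: sum(g)   # stand-in for "satisfies local constraints"
peers = [[[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
         for _ in range(N_PEERS)]

for step in range(100):
    for pop in peers:                          # local evolutionary step
        pop.sort(key=fitness, reverse=True)
        parent = pop[0]
        child = [b if random.random() > 0.05 else 1 - b for b in parent]
        pop[-1] = child                        # replace the worst agent
    if step % 10 == 0:                         # migration along the ring
        for i, pop in enumerate(peers):
            neighbour = peers[(i + 1) % N_PEERS]
            neighbour[-1] = max(pop, key=fitness)[:]

print(max(fitness(ind) for pop in peers for ind in pop))
```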

    Towards The Deep Semantic Learning Machine Neuroevolution Algorithm: An exploration on the CIFAR-10 problem task

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics. Selecting the topology and parameters of a Convolutional Neural Network (CNN) for a given supervised machine learning task is a non-trivial problem. The Deep Semantic Learning Machine (Deep-SLM) deals with this problem by automatically constructing CNNs without the use of the backpropagation algorithm. The Deep-SLM is a novel neuroevolution technique that functions as a stochastic semantic hill-climbing algorithm searching over the space of CNN topologies and parameters. The geometric semantic properties of the Deep-SLM induce a unimodal error space and eliminate the existence of locally optimal solutions. This makes the Deep-SLM potentially favorable in terms of search efficiency and effectiveness. This thesis provides an exploration of a variant of the Deep-SLM algorithm on the CIFAR-10 problem task, and a validation of its proof of concept. This specific variant only forms mutation-node → mutation-node connections in the non-convolutional part of the constructed CNNs. Furthermore, a comparative study between the Deep-SLM and the Semantic Learning Machine (SLM) algorithms was conducted. It was observed that sparse connections can be an effective way to prevent overfitting. Additionally, it was shown that a single 2D convolution layer initialized with random weights does not directly result in well-generalizing features for the Deep-SLM, but, in combination with a 2D max-pooling downsampling layer, effective improvements in the performance and generalization of the Deep-SLM could be achieved. These results support the hypothesis that convolution and pooling layers can improve the performance and generalization of the Deep-SLM, provided the components are properly optimized.
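
    The unimodality argument can be illustrated with a minimal sketch that hill-climbs directly on a program's semantics, i.e. its vector of outputs on the training set. The bounded random vectors standing in for random-tree semantics and the mutation step are assumptions, not the Deep-SLM's actual operators.

```python
import numpy as np

rng = np.random.default_rng(0)

def semantic_hill_climb(y, n_iters=2000, ms=0.1):
    """Geometric-semantic hill climber on a semantics vector: each accepted
    mutation moves the outputs inside a small ball around the parent, so
    training error towards the target `y` can only shrink (no local optima
    in semantic space)."""
    n = len(y)
    s = rng.normal(size=n)                  # initial program's outputs
    err = np.mean((s - y) ** 2)
    for _ in range(n_iters):
        r1, r2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
        cand = s + ms * (r1 - r2)           # geometric semantic mutation
        cand_err = np.mean((cand - y) ** 2)
        if cand_err < err:                  # hill climbing: keep if better
            s, err = cand, cand_err
    return err

y = rng.normal(size=100)
print(semantic_hill_climb(y))               # training MSE shrinks toward 0
```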

    Deep Semantic Learning Machine Initial design and experiments

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics. Computer vision is an interdisciplinary scientific field that allows the digital world to interact with the real world. It is one of the fastest-growing and most important areas of data science, and its applications are endless, given the variety of tasks that can be solved thanks to advances in the field: image analysis, object detection, image transformation, and image generation, among others. With so many applications, it is vital to provide models with the best possible performance. Although many years have passed since backpropagation was invented, it is still the most commonly used approach to training neural networks. A satisfactory performance can be achieved with this approach, but is it the best it can get? A fixed network topology that needs to be defined before any training begins seems to be a significant limitation, as the performance of a network is highly dependent on its topology. Since there are no studies that precisely guide scientists in selecting a proper network structure, the ability to adjust the topology to a given problem seems highly promising. Initial ideas on the evolution of neural networks involving heuristic search methods have provided encouragingly good results on various reinforcement learning tasks. This thesis presents initial experiments on using a similar approach to solve image classification tasks. A new model, called the Deep Semantic Learning Machine, is introduced, with a new mutation method specially designed for computer vision problems. The Deep Semantic Learning Machine allows a topology to evolve from a small network and adjust to a given problem. The initial results are quite promising, especially on the training dataset. However, in this thesis the Deep Semantic Learning Machine was developed only as a proof of concept, and further improvements to the approach can be made.
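
    Loosely in the spirit of the Semantic Learning Machine family, and only as a sketch rather than the thesis's actual mutation operator, the following grows a network one hidden unit at a time and fits just the new unit's output weight against the current residual, so backpropagation is never used. The ReLU units and the closed-form weight fit are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_network(X, y, n_mutations=200):
    """Topology-growing mutation sketch: each step adds one hidden ReLU
    unit with random input weights and fits only its output weight
    (closed form) against the current residual; earlier units are never
    retrained, so no backpropagation is involved."""
    residual = y - y.mean()
    for _ in range(n_mutations):
        w = rng.normal(size=X.shape[1])
        h = np.maximum(X @ w + rng.normal(), 0.0)  # new unit's activations
        denom = h @ h
        if denom < 1e-12:
            continue                               # dead unit, skip it
        out_w = (h @ residual) / denom             # least-squares output weight
        residual -= out_w * h
    return np.mean(residual ** 2)

X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
print(grow_network(X, y))                          # training MSE after growth
```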

    The influence of population size in geometric semantic GP

    In this work, we study the influence of the population size on the learning ability of Geometric Semantic Genetic Programming for the task of symbolic regression. A large set of experiments, considering different population size values on different regression problems, has been performed. Results show that, on real-life problems, small populations achieve a better training fitness than large populations after the same number of fitness evaluations. However, performance on the test instances varies among the different problems: in datasets with a high number of features, models obtained with large populations present better performance on unseen data, while in datasets characterized by a relatively small number of variables, better generalization is achieved with small population size values. When synthetic problems are taken into account, large population size values represent the best option for achieving good-quality solutions on both training and test instances.
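
    A toy version of the experimental setup, with geometric semantic crossover applied directly to semantics vectors and a fixed evaluation budget shared between population size and generation count (all details here are illustrative assumptions, not the study's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def gsgp_regression(y, pop_size, eval_budget=10_000):
    """Toy GSGP on semantics vectors under a fixed evaluation budget, so
    small populations get proportionally more generations
    (generations = eval_budget // pop_size)."""
    n = len(y)
    pop = [rng.normal(size=n) for _ in range(pop_size)]
    mse = lambda s: float(np.mean((s - y) ** 2))
    for _ in range(eval_budget // pop_size):
        pop.sort(key=mse)
        parents = pop[: max(2, pop_size // 2)]     # truncation selection
        children = []
        for _ in range(pop_size):
            p1, p2 = rng.choice(len(parents), 2, replace=False)
            r = rng.random()                       # geometric semantic crossover:
            children.append(r * parents[p1] + (1 - r) * parents[p2])
        pop = children
    return min(mse(s) for s in pop)

y = rng.normal(size=50)
for ps in (10, 100):
    print(ps, gsgp_regression(y, ps))              # same budget, two pop sizes
```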

    Genetic Programming is Naturally Suited to Evolve Bagging Ensembles

    Learning ensembles by bagging can substantially improve the generalization performance of low-bias, high-variance estimators, including those evolved by Genetic Programming (GP). To be efficient, modern GP algorithms for evolving (bagging) ensembles typically rely on several (often inter-connected) mechanisms and respective hyper-parameters, ultimately compromising ease of use. In this paper, we provide experimental evidence that such complexity might not be warranted. We show that minor changes to fitness evaluation and selection are sufficient to make a simple and otherwise-traditional GP algorithm evolve ensembles efficiently. The key to our proposal is to exploit the way bagging works to compute, for each individual in the population, multiple fitness values (instead of one) at a cost that is only marginally higher than that of a normal fitness evaluation. Experimental comparisons on classification and regression tasks taken and reproduced from prior studies show that our algorithm fares very well against state-of-the-art ensemble and non-ensemble GP algorithms. We further provide insights into the proposed approach by (i) scaling the ensemble size, (ii) ablating the changes to selection, and (iii) observing the evolvability induced by traditional subtree variation. Code: https://github.com/marcovirgolin/2SEGP. Comment: Added interquartile range in Tables 1, 2, and 3; improved Fig. 3 and its analysis; improved experiment design of Section 7.
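
    The core trick, evaluating once on the full training set and then deriving one fitness per bootstrap bag from per-sample multiplicities, can be sketched as a single matrix-vector product. This is a simplified reading of the idea, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_bags = 100, 10
# each bag is a bootstrap resample, stored as per-sample multiplicities
bag_counts = np.stack([
    np.bincount(rng.integers(0, n_samples, n_samples), minlength=n_samples)
    for _ in range(n_bags)
])                                            # shape: (n_bags, n_samples)

def bag_fitnesses(squared_errors, bag_counts):
    """One evaluation on the full training set yields `squared_errors`
    (n_samples,); the n_bags bagged fitness values are then just
    count-weighted means -- a single matrix-vector product."""
    return bag_counts @ squared_errors / bag_counts.sum(axis=1)

y = rng.normal(size=n_samples)
pred = y + 0.3 * rng.normal(size=n_samples)   # some individual's outputs
print(bag_fitnesses((pred - y) ** 2, bag_counts))  # 10 fitnesses, ~1 eval cost
```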

    Improving Tree-based Pipeline Optimization Tool with Geometric Semantic Genetic Programming

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. Machine Learning (ML) is becoming part of our lives, from face recognition to the sensors of the latest cars. However, the construction of ML pipelines is a time-consuming and expensive process, even for experts with knowledge of ML algorithms, due to the several options available for each step. To overcome this issue, Automated ML (AutoML) was introduced, automating some steps of this process. One of its recent algorithms is the Tree-Based Pipeline Optimization Tool (TPOT), an Evolutionary Algorithm (EA) that automatically designs and optimizes ML pipelines using Genetic Programming (GP). Another recent algorithm is Geometric Semantic Genetic Programming (GSGP), an EA characterized by the use of semantics, the vector of outputs of a program on the different training data, and by searching directly in the space of semantics of the program through geometric semantic operators, leading to a unimodal fitness landscape. In this work, a new version of TPOT was created, called TPOT-GSGP, where GSGP is one of the options for model selection. This new algorithm was implemented in Python, only for regression problems, using Negative Mean Absolute Error as the error measure. Five case studies were used to compare the performance of three algorithms: TPOT-GSGP, the original TPOT, and GSGP. Additionally, the statistical significance of the difference in the last generation's score for each pair of algorithms was checked with Wilcoxon tests. No single algorithm outperformed the others on all datasets: sometimes TPOT-GSGP performed best and sometimes TPOT, depending on the case study and on the score analysed (learning or test). It was concluded that whenever GSGP was chosen as the root 50% of the time or more, TPOT-GSGP outperformed TPOT on the test set. Therefore, the advantages of this new algorithm could become substantial with further development and tuning in future work.
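
    A hedged sketch of what plugging a GSGP model into TPOT's search space could look like, using TPOT's real `config_dict` mechanism but a hypothetical `gsgp.GSGPRegressor` wrapper and hypothetical hyper-parameter names; the thesis's actual integration may differ.

```python
from tpot import TPOTRegressor

# hypothetical sklearn-compatible wrapper; TPOT resolves config keys as
# import paths, so `gsgp.GSGPRegressor` would need to exist and implement
# fit/predict for this to run end to end
tpot_gsgp_config = {
    "gsgp.GSGPRegressor": {                  # hypothetical estimator
        "pop_size": [50, 100, 200],
        "max_generations": [50, 100],
        "mutation_step": [0.1, 1.0],
    },
    "sklearn.linear_model.ElasticNetCV": {   # keep some stock options too
        "l1_ratio": [0.25, 0.5, 0.75],
    },
}

automl = TPOTRegressor(
    generations=20,
    population_size=50,
    scoring="neg_mean_absolute_error",       # error measure used in the thesis
    config_dict=tpot_gsgp_config,
    random_state=0,
)
# automl.fit(X_train, y_train); automl.score(X_test, y_test)
```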

    Studying elements of genetic programming for multiclass classification

    Master's thesis, Informatics Engineering (Interaction and Knowledge), Universidade de Lisboa, Faculdade de Ciências, 2018. Although Genetic Programming (GP) has been very successful in both symbolic regression and binary classification, solving many difficult problems from various domains, it requires improvements in multiclass classification, which, due to the high complexity of this kind of problem, requires specialized classifiers. In this project, we explored a multiclass classification GP-based algorithm, the M3GP [4]. Individuals in standard GP have only one node at their root, which means that their output space is in R. Unlike standard GP, M3GP allows each individual to have n nodes at its root. This variation changes the output space to R^n, allowing individuals to construct clusters of samples and use cluster-based classification. Although M3GP is capable of creating interpretable models while achieving results competitive with state-of-the-art classifiers, such as Random Forests and Neural Networks, it has downsides. The focus of this project is to improve the algorithm by exploring two components: the fitness function and the genetic operators' selection method. The original fitness function was accuracy-based. Since this kind of function does not allow a smooth evolution of the output space, we tried to improve the algorithm by exploring two distance-based fitness functions, in an attempt to separate the clusters while bringing the samples closer to their respective centroids. Until now, the genetic operators in M3GP were selected with a fixed probability. Since some operators have a better effect on the fitness at different stages of the evolution, fixed probabilities allow operators to be selected at the wrong stages of the evolution, slowing down the learning process. In this project, we evolve, over the generations, the probability of each genetic operator being chosen. At a later stage, we proposed a new crossover genetic operator that uses three individuals for the M3GP algorithm. The results show significantly better training results on half the datasets, while improving test accuracy on two datasets.
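
    To make the cluster-based classification concrete, here is a small sketch operating on samples already mapped into R^n by the n root branches. It uses Euclidean nearest-centroid classification rather than M3GP's Mahalanobis variant, and an illustrative distance-based fitness of the kind the project explores; both simplifications are assumptions.

```python
import numpy as np

def centroid_classify(Z_train, y_train, Z_test):
    """Cluster-based classification on an M3GP-style mapping: each class
    gets a centroid in R^n, and test samples take the label of the
    nearest centroid (Euclidean here, for brevity)."""
    classes = np.unique(y_train)
    centroids = np.stack([Z_train[y_train == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(Z_test[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

def centroid_fitness(Z, y):
    """Distance-based fitness: mean distance of each sample to its own
    class centroid (lower is better), rewarding tight clusters."""
    classes = np.unique(y)
    centroids = {c: Z[y == c].mean(axis=0) for c in classes}
    return np.mean([np.linalg.norm(z - centroids[c]) for z, c in zip(Z, y)])

rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(0, 1, (30, 3)), rng.normal(4, 1, (30, 3))])
y = np.array([0] * 30 + [1] * 30)
print(centroid_classify(Z, y, Z[:5]), centroid_fitness(Z, y))
```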