
    Novelty grammar swarms

    Master's thesis in Informatics Engineering (Information Systems), Universidade de Lisboa, Faculdade de Ciências, 2015.
    Particle Swarm Optimization (PSO) is a well-known population-based optimization algorithm. Most often it is applied to optimize fitness functions that specify the goal of reaching a desired objective or behavior; as a result, the search focuses on higher-fitness areas. In problems with many local optima, the search often becomes stuck in regions of high fitness that are not the intended objective. To remedy this problem in certain kinds of domains, this thesis introduces Novelty-driven Particle Swarm Optimization (NdPSO). Taking its motivation from the novelty search algorithm in evolutionary computation, the method drives the search only towards finding instances significantly different from those found before. In this way, NdPSO completely ignores the objective in its pursuit of novelty, making it less susceptible to deception and local optima. Because novelty search has previously shown potential for solving tasks in Genetic Programming, particularly in Grammatical Evolution, this thesis implements NdPSO as an extension of the Grammatical Swarm method, which is in effect a combination of PSO and Genetic Programming. The resulting NdPSO implementation was tested in three different domains representative of those in which it might provide an advantage over objective-driven PSO, namely deceptive domains in which a meaningful high-level description of novel behavior is easy to derive. In each of the tested domains NdPSO outperforms standard PSO, one of its best-known variants (Barebones PSO), and random search, demonstrating its promise as a tool for solving deceptive problems. Since this is the first application of novelty search outside the evolutionary paradigm, an empirical comparison of the new algorithm with a standard novelty search Evolutionary Algorithm is also performed.
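    The abstract describes the general recipe but not the implementation details. The sketch below is a minimal, hypothetical illustration (in Python) of how novelty can replace objective fitness in a PSO loop: a particle's novelty is taken as the mean distance of its behaviour descriptor to its nearest neighbours in an archive plus the current swarm, and personal/global bests are selected by novelty rather than fitness. The behaviour() mapping, parameter values, and the archive rule are assumptions chosen for illustration, not taken from the thesis; the grammar-based mapping used by Grammatical Swarm is omitted.

        # Hypothetical sketch of novelty-driven PSO (NdPSO-style); not the thesis code.
        import numpy as np

        def novelty_score(b, pool, k=15):
            # Mean Euclidean distance from behaviour b to its k nearest neighbours in pool.
            d = np.sort(np.linalg.norm(pool - b, axis=1))
            return d[:k].mean()

        def ndpso(behaviour, dim, n_particles=30, iters=200,
                  w=0.72, c1=1.49, c2=1.49, archive_threshold=1.0, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
            v = np.zeros_like(x)                             # velocities
            pbest_x = x.copy()
            pbest_nov = np.full(n_particles, -np.inf)
            archive = []                                     # behaviours deemed novel so far

            for _ in range(iters):
                beh = np.array([behaviour(p) for p in x])    # behaviour descriptors
                pool = np.vstack([beh, np.array(archive)]) if archive else beh
                nov = np.array([novelty_score(b, pool) for b in beh])

                # Personal and global bests are chosen by novelty, not objective fitness.
                improved = nov > pbest_nov
                pbest_x[improved] = x[improved]
                pbest_nov[improved] = nov[improved]
                gbest = pbest_x[np.argmax(pbest_nov)]

                # Sufficiently novel behaviours enter the archive.
                archive.extend(beh[nov > archive_threshold].tolist())

                # Standard PSO velocity and position update.
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest - x)
                x = x + v

            return pbest_x, archive

        # Toy usage: the behaviour descriptor is simply the particle's position.
        if __name__ == "__main__":
            final_bests, archive = ndpso(behaviour=lambda p: p.copy(), dim=5)
            print(len(archive), "behaviours archived")

    In a Grammatical Swarm setting the particle positions would additionally be mapped through a grammar to candidate programs, and the behaviour descriptor would be derived from executing those programs; that mapping is domain-specific and left out of this sketch.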

    Complexity Theory for Discrete Black-Box Optimization Heuristics

    A predominant topic in the theory of evolutionary algorithms and, more generally, the theory of randomized black-box optimization techniques is running time analysis. Running time analysis aims at understanding the performance of a given heuristic on a given problem by bounding the number of function evaluations the heuristic needs to identify a solution of a desired quality. As in general algorithms theory, this running time perspective is most useful when it is complemented by a meaningful complexity theory that studies the limits of algorithmic solutions. In the context of discrete black-box optimization, several black-box complexity models have been developed to analyze the best possible performance that a black-box optimization algorithm can achieve on a given problem. The models differ in the classes of algorithms to which their lower bounds apply. This way, black-box complexity contributes to a better understanding of how certain algorithmic choices (such as the amount of memory used by a heuristic, its selective pressure, or properties of the strategies it uses to create new solution candidates) influence performance. In this chapter we review the different black-box complexity models that have been proposed in the literature, survey the bounds that have been obtained for them, and discuss how the interplay of running time analysis and black-box complexity can inspire new algorithmic solutions to well-researched problems in evolutionary computation. We also discuss several interesting open questions for future work.
    Comment: This survey article is to appear (in a slightly modified form) in the book "Theory of Randomized Search Heuristics in Discrete Search Spaces", to be published by Springer in 2018. The book is edited by Benjamin Doerr and Frank Neumann. Missing numbers of pointers to other chapters of this book will be added as soon as possible.
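    To make the flavour of such lower bounds concrete, the sketch below works through the classical information-theoretic argument that underlies many unrestricted black-box complexity bounds. It is a textbook-style illustration, not a statement quoted from this chapter; the notation ($\mathcal{F}$, $r$, $T$) is introduced here only for the example.

        % Decision-tree argument: a deterministic black-box algorithm that must
        % distinguish the functions in a class F, and whose queries each return
        % one of at most r possible answers, needs in the worst case at least
        \[
          T(\mathcal{F}) \;\ge\; \bigl\lceil \log_r |\mathcal{F}| \bigr\rceil
        \]
        % queries; randomized algorithms obey the same bound up to constant factors.
        % Example: for the 2^n OneMax-type functions f_z(x) = |{ i : x_i = z_i }|,
        % every query returns one of r = n+1 values, so
        \[
          T(\mathrm{OneMax}_n) \;\ge\; \frac{\log_2 2^n}{\log_2 (n+1)}
            \;=\; \frac{n}{\log_2 (n+1)} \;=\; \Omega\!\left(\frac{n}{\log n}\right).
        \]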

    OneMax in Black-Box Models with Several Restrictions

    Black-box complexity studies lower bounds for the efficiency of general-purpose black-box optimization algorithms such as evolutionary algorithms and other search heuristics. Different models exist, each one designed to analyze a different aspect of typical heuristics, such as the memory size or the variation operators in use. While most previous works focus on one particular such aspect, we consider in this work how the combination of several algorithmic restrictions influences the black-box complexity. Our testbed is the class of so-called OneMax functions, a classical set of test functions that is intimately related to classic coin-weighing problems and to the board game Mastermind. We analyze in particular the combined memory-restricted ranking-based black-box complexity of OneMax for different memory sizes. While its isolated memory-restricted as well as its ranking-based black-box complexity for bit strings of length $n$ is only of order $n/\log n$, the combined model does not allow for algorithms being faster than linear in $n$, as can be seen by standard information-theoretic considerations. We show that this linear bound is indeed asymptotically tight. Similar results are obtained for other memory and offspring sizes. Our results also apply to the (Monte Carlo) complexity of OneMax in the recently introduced elitist model, in which only the best-so-far solution can be kept in memory. Finally, we also provide improved lower bounds for the complexity of OneMax in the considered models. Our result enlivens the quest for natural evolutionary algorithms optimizing OneMax in $o(n \log n)$ iterations.
    Comment: This is the full version of a paper accepted to GECCO 201
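    As a rough illustration of the "standard information-theoretic considerations" mentioned above (a back-of-the-envelope sketch that glosses over the precise definition of the model and all constant factors): in a ranking-based model the algorithm never sees absolute OneMax values, only how new search points rank among the stored ones, so each query reveals far fewer bits of information about the hidden target.

        % With memory size mu and lambda offspring per iteration, the feedback of
        % one iteration is a ranking of mu + lambda points, i.e. at most
        % log_2((mu+lambda)!) bits, instead of the log_2(n+1) bits carried by an
        % absolute OneMax value. Identifying the hidden target among 2^n functions
        % requires n bits, so for constant mu and lambda
        \[
          T \;\ge\; \frac{\log_2 2^n}{\log_2\!\big((\mu+\lambda)!\big)}
            \;=\; \frac{n}{\log_2\!\big((\mu+\lambda)!\big)} \;=\; \Omega(n),
        \]
        % whereas each restriction on its own still admits algorithms of order
        % n / log n, as stated in the abstract.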