
    Novelty grammar swarms

    Master's thesis, Informatics Engineering (Information Systems), Universidade de Lisboa, Faculdade de Ciências, 2015.
    Particle Swarm Optimization (PSO) is a well-known population-based optimization algorithm. Most often it is applied to optimize fitness functions that specify the goal of reaching a desired objective or behavior, so that the search focuses on higher-fitness areas. In problems with many local optima, the search often becomes stuck and can fail to find the intended objective. To remedy this problem in certain kinds of domains, this thesis introduces Novelty-driven Particle Swarm Optimization (NdPSO). Taking its motivation from the novelty search algorithm in evolutionary computation, this method drives the search only towards finding instances significantly different from those found before. In this way, NdPSO completely ignores the objective in its pursuit of novelty, making it less susceptible to deception and local optima. Because novelty search has previously shown potential for solving tasks in Genetic Programming, particularly in Grammatical Evolution, this thesis implements NdPSO as an extension of the Grammatical Swarm method, which is in effect a combination of PSO and Genetic Programming. The resulting NdPSO implementation was tested in three different domains representative of those in which it might provide an advantage over objective-driven PSO: deceptive domains in which a meaningful high-level description of novel behavior is easy to derive. In each of the tested domains, NdPSO outperforms standard PSO, one of its best-known variants (Barebones PSO), and random search, demonstrating its promise as a tool for solving deceptive problems. Since this is the first application of the search for novelty outside the evolutionary paradigm, an empirical comparison of the new algorithm with a standard novelty search Evolutionary Algorithm is also performed.
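    The mechanism described in the abstract above, keeping the standard PSO velocity and position update but replacing the fitness score with a novelty score, can be illustrated roughly as follows. This is a minimal sketch, not the thesis implementation: the behaviour descriptor (the raw position), the k-nearest-neighbour novelty measure and the probabilistic archive are assumptions borrowed from how novelty search is commonly described.

        import math
        import random

        def behaviour(x):
            # Hypothetical behaviour descriptor: here simply the particle's position.
            return tuple(x)

        def novelty(b, archive, others, k=5):
            # Novelty = mean distance to the k nearest behaviours seen so far.
            dists = sorted(math.dist(b, o) for o in archive + others if o != b)
            nearest = dists[:k] if dists else [0.0]
            return sum(nearest) / len(nearest)

        def ndpso(dim=2, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
            vs = [[0.0] * dim for _ in range(swarm)]
            pbest = [list(x) for x in xs]
            pbest_nov = [0.0] * swarm
            archive = []
            for _ in range(iters):
                behs = [behaviour(x) for x in xs]
                for i, b in enumerate(behs):
                    nov = novelty(b, archive, behs)      # novelty score replaces fitness
                    if nov > pbest_nov[i]:
                        pbest[i], pbest_nov[i] = list(xs[i]), nov
                    if random.random() < 0.05:           # occasionally archive a behaviour
                        archive.append(b)
                g = pbest[max(range(swarm), key=lambda i: pbest_nov[i])]
                for i in range(swarm):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vs[i][d] = (w * vs[i][d]
                                    + c1 * r1 * (pbest[i][d] - xs[i][d])
                                    + c2 * r2 * (g[d] - xs[i][d]))
                        xs[i][d] += vs[i][d]
            return archive

        print(len(ndpso()), "behaviours archived")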

    Novelty-driven Particle Swarm Optimization

    Continuous Schemes for Program Evolution

    Evolutionary algorithms for financial trading

    Genetic programming (GP) is increasingly popular as a research tool for applications in finance and economics. One thread in this area is the use of GP to discover effective technical trading rules. In a seminal article, Allen & Karjalainen (1999) used GP to find rules that were profitable but were nevertheless outperformed by the simple "buy and hold" trading strategy. Many succeeding attempts have reported similar findings. This represents a clear example of a significant open issue in the field of GP, namely generalisation [78]: GP solutions may not be general enough, resulting in poor performance on unseen data. A small handful of studies have managed to find rules that outperform buy-and-hold, but these have tended to be difficult to replicate. Among previous studies, the work of Becker & Seshadri (2003) was the most promising, showing outperformance of buy-and-hold. Becker & Seshadri made several modifications to Allen & Karjalainen's approach, including the adoption of monthly rather than daily trading. This thesis provides a replicable account of Becker & Seshadri's study, and also shows how further modifications enabled fairly reliable outperformance of buy-and-hold, including the use of a train/test/validate methodology [41] to evolve trading rules with good generalisation properties, and the use of a dynamic form of GP [109] to improve performance in dynamic environments such as financial markets. In addition, we investigate and compare daily, weekly and monthly trading; we find that outperformance of buy-and-hold can be achieved even for daily trading, but as we move from monthly to daily trading the performance of evolved rules becomes increasingly dependent on prevailing market conditions. This clarifies that robust outperformance of buy-and-hold depends mainly on the adoption of a relatively infrequent trading strategy (e.g. monthly), as well as on a range of factors that amount to sound engineering of the GP grammar and the validation strategy. Moreover, we add a comprehensive study of multiobjective approaches to this investigation, and find that multiobjective strategies provide even more robustness in outperforming buy-and-hold, even in the context of more frequent (e.g. weekly) trading decisions. Finally, inspired by several beneficial aspects of grammatical evolution (GE) and reports on the successful performance of its various applications, we introduce a new approach to GE with a new suite of operators, resulting in an improvement in GE search compared with standard GE. An empirical test of this new GE approach on various kinds of test problems, including financial trading, is also provided in this thesis.
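    The train/test/validate methodology mentioned above can be illustrated with a small, hedged sketch: a fixed moving-average crossover rule stands in for a GP-evolved rule, the price series is synthetic, and the only point being made is that rule performance is compared against buy-and-hold separately on training, selection and held-out validation slices. None of the names or parameters below come from the thesis.

        import random

        def synthetic_prices(n=750, start=100.0):
            # Hypothetical daily price series; a real study would use market data.
            p, out = start, []
            for _ in range(n):
                p *= 1 + random.gauss(0.0003, 0.01)
                out.append(p)
            return out

        def ma_rule(prices, t, short=10, long=50):
            # "In the market" when the short moving average is above the long one.
            if t < long:
                return False
            s = sum(prices[t - short:t]) / short
            l = sum(prices[t - long:t]) / long
            return s > l

        def rule_return(prices, rule):
            wealth = 1.0
            for t in range(1, len(prices)):
                if rule(prices, t - 1):          # decision uses only prior data
                    wealth *= prices[t] / prices[t - 1]
            return wealth - 1.0

        def buy_and_hold(prices):
            return prices[-1] / prices[0] - 1.0

        prices = synthetic_prices()
        train, test, valid = prices[:250], prices[250:500], prices[500:]
        # A GP system would evolve rules on `train`, select on `test`,
        # and report out-of-sample performance only on `valid`.
        for name, seg in (("train", train), ("test", test), ("validate", valid)):
            print(name, round(rule_return(seg, ma_rule), 4),
                  "vs B&H", round(buy_and_hold(seg), 4))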

    Towards the Conceptualization of Refinement Typed Genetic Programming

    Master's thesis, Informatics Engineering (Software Engineering), Universidade de Lisboa, Faculdade de Ciências, 2020.
    Genetic Programming (GP) approaches typically have difficulty dealing with a search space that grows with the number of language components. The increasing number of components leads to a more extensive search space and lengthens the time required to find a fitting solution. Strongly Typed Genetic Programming (STGP) tries to reduce the search space using the programming language's type system, only allowing type-safe programs to be generated. Grammar-Guided Genetic Programming (GGGP) allows the user to specify the program's structure through a grammar, reducing the number of combinations between the language components. However, the STGP restriction of the search space still cannot cope with the increasing number of synthesis components, and the GGGP approach has limited usability, since it requires the user to create not only a parser and interpreter for the expressions generated from the grammar, but also all the functions existing in the grammar. This work proposes Refinement Typed Genetic Programming (RTGP), a hybrid approach between STGP and GGGP that uses refinement types to reduce the search space while maintaining the language's usability properties. This work introduces the ÆON programming language, which allows the partial or total synthesis of refinement-typed programs using genetic programming. The potential of RTGP is presented through usability arguments on two use cases against GGGP and through the creation of a prototype property-based verification tool, pyCheck, as a proof of the versatility of RTGP's components.
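    As a loose illustration of how a refinement type can prune a GP search space, the sketch below keeps only candidate expressions whose outputs satisfy a refinement predicate, checked here by random sampling in the spirit of property-based testing. The target type, candidate set and checking strategy are all hypothetical and are not drawn from the ÆON implementation.

        import random

        # Hypothetical target type: (x: Int) -> {y: Int | y >= 0 and y >= x}
        def refinement(x, y):
            return y >= 0 and y >= x

        CANDIDATES = [
            ("x", lambda x: x),
            ("x * x", lambda x: x * x),
            ("abs(x)", lambda x: abs(x)),
            ("x - 1", lambda x: x - 1),
        ]

        def satisfies(fn, trials=200):
            # Property-based check: sample inputs and test the refinement predicate.
            return all(refinement(x, fn(x)) for x in
                       (random.randint(-100, 100) for _ in range(trials)))

        well_typed = [src for src, fn in CANDIDATES if satisfies(fn)]
        print(well_typed)   # ['x * x', 'abs(x)'] under this refinement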

    Analytical Programming - a Novel Approach for Evolutionary Synthesis of Symbolic Structures

    This chapter discusses an alternative approach to the synthesis of symbolic structures and solutions and demonstrates a comparison with other methods, for example Genetic Programming (GP) and Grammatical Evolution (GE). Generally, there are two well-known methods that can be used for symbolic structure synthesis by means of computers: the first is GP and the other is GE. Other interesting research has been carried out with Artificial Immune Systems (AIS) and with systems that do not use tree structures, such as linear GP and similar algorithms like Multi Expression Programming (MEP). In this chapter, a different method called Analytic Programming (AP) is presented. AP is a grammar-free algorithmic superstructure that can be used with any programming language and with any arbitrary Evolutionary Algorithm (EA) or other class of numerical optimization method. This chapter describes not only the theoretical principles of AP but also a comparative study with selected well-known case examples from GP, as well as applications to the synthesis of controllers, systems of deterministic chaos, electronic circuits, etc. For simulation purposes, AP has been combined with EAs such as Differential Evolution (DE), the Self-Organising Migrating Algorithm (SOMA), Genetic Algorithms (GA) and Simulated Annealing (SA). All case studies have been carefully prepared and repeated in order to obtain valid statistical data for proper conclusions.
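    The core idea of Analytic Programming, mapping a purely numeric individual onto elements of a general function set so that any numerical optimiser can drive program synthesis, can be sketched roughly as below. The modulo indexing, the arity handling and the tiny function set are illustrative assumptions; the actual discrete set handling in AP differs in detail.

        # Tiny general function set: (name, arity); arity 0 marks terminals.
        GFS = [
            ("+", 2), ("*", 2), ("-", 2),
            ("sin", 1),
            ("x", 0), ("1.0", 0),
        ]

        def map_individual(genes):
            # Map a vector of integers to a symbolic expression, AP-style.
            pos = 0
            def build(depth):
                nonlocal pos
                if pos >= len(genes):
                    return "x"                  # out of genes: close with a terminal
                # Near the end of the vector (or too deep), restrict the choice to
                # terminals so the expression always closes.
                pool = (GFS if depth < 4 and pos < len(genes) - 2
                        else [g for g in GFS if g[1] == 0])
                name, arity = pool[genes[pos] % len(pool)]
                pos += 1
                if arity == 0:
                    return name
                args = [build(depth + 1) for _ in range(arity)]
                return f"{name}({args[0]})" if arity == 1 else f"({name} {' '.join(args)})"
            return build(0)

        print(map_individual([0, 3, 4, 5, 1, 4, 4]))   # prints (+ sin(x) 1.0)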

    Establishing Mechanisms for Self-Adaptation in Genetic Programming

    It has long been a desire of computer scientists to develop a computer system that is able to learn and improve without being explicitly programmed to do so; the idea of software that is able to analyse, update and alter itself has been discussed at length. This thesis is structured as follows. Firstly, we refine and improve the Tartarus problem, proposing it as a benchmark problem for use in GP. Secondly, we establish a mechanism for incorporating self-adaptation into a GP system in order to increase the performance of candidate solutions. Finally, we explore the impact of a fitness bias, inspired by the Dunning-Kruger effect, on the robustness of a GP system. The on-the-fly adaptation of parameter values at runtime can lead to improvements in performance. Self-adaptation aims at biasing the distribution of individuals in a population towards more appropriate and effective areas of the search space. We therefore propose, outline and evaluate a novel self-adaptive mechanism that provides a continuous opportunity for modifications to be made during an execution, as and when they are deemed appropriate. This creates a more flexible parameter-modification approach and leads to an increase in solution performance: approximately 15% for the Tartarus problem and 10% for the Santa Fe problem. Robustness refers to a characteristic of a candidate solution whose performance is not diminished despite perturbations in environmental parameters or constraints; a solution that does not lose utility or performance quality under such changes is said to be robust. The Dunning-Kruger effect (DK) is a form of cognitive bias observed in populations, first described by the psychologists Dunning and Kruger in 1999: individuals with a low level of ability mistakenly over-estimate their performance, while individuals with a high level of ability often under-estimate theirs. We propose that introducing a DK-style bias into the fitness distribution of the population enables a system to maintain a higher level of population diversity over time.
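    One standard way to realise the kind of self-adaptation discussed above is to let each individual carry its own mutation rate, perturb that rate before applying it, and let selection keep the rates that work, so the parameter is adjusted on the fly during the run. The bit-string toy problem and the log-normal rate update below are illustrative assumptions, not the thesis's mechanism or results.

        import math
        import random

        def make_individual(length=20):
            return {"genome": [random.randint(0, 1) for _ in range(length)],
                    "mut_rate": random.uniform(0.01, 0.2)}

        def fitness(ind):
            return sum(ind["genome"])                 # toy objective: maximise ones

        def mutate(ind):
            # Self-adaptation: perturb the inherited rate first, then apply it.
            rate = min(0.5, max(0.001, ind["mut_rate"] * math.exp(random.gauss(0, 0.2))))
            genome = [1 - g if random.random() < rate else g for g in ind["genome"]]
            return {"genome": genome, "mut_rate": rate}

        pop = [make_individual() for _ in range(50)]
        for _ in range(100):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:25]                        # truncation selection
            pop = parents + [mutate(random.choice(parents)) for _ in range(25)]

        best = max(pop, key=fitness)
        print(fitness(best), round(best["mut_rate"], 3))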

    Utilising restricted for-loops in genetic programming

    Genetic programming is an approach that utilises the power of evolution to allow computers to evolve programs. Although loops are natural components of most programming languages and appear in every reasonably sized application, they are rarely used in genetic programming. This work investigates a number of restricted looping constructs to determine whether any significant benefits can be obtained in genetic programming. Possible benefits include: solving problems which cannot be solved without loops, evolving smaller solutions which can be more easily understood by human programmers, and solving existing problems more quickly by using fewer evaluations. In this thesis, a number of explicit restricted loop formats were formulated and tested on the Santa Fe ant problem, a modified ant problem, a sorting problem, a visit-every-square problem and a difficult object classification problem. The experimental results showed that these explicit loops can be used successfully in genetic programming: the evolutionary process can decide when, where and how to use them. Runs with these loops tended to generate smaller solutions in fewer evaluations, and solutions with loops were found for some problems that could not be solved without loops. The results and analysis of this thesis establish that there are significant benefits in using loops in genetic programming. Restricted loops can avoid the difficulties of evolving consistent programs and the problem of infinite iterations. Researchers and other users of genetic programming should not be afraid of loops.
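    The flavour of a restricted loop construct can be conveyed with a tiny interpreter in which the loop count is clamped to a small constant, so every evolved program terminates. The node set, the bound of 10 and the toy program below are illustrative assumptions rather than the constructs or domains used in the thesis.

        MAX_ITER = 10

        def evaluate(node, state):
            op = node[0]
            if op == "for":                       # ("for", times, body)
                times = min(MAX_ITER, max(0, evaluate(node[1], state)))
                for _ in range(int(times)):
                    evaluate(node[2], state)
                return 0
            if op == "seq":                       # ("seq", a, b)
                evaluate(node[1], state)
                return evaluate(node[2], state)
            if op == "inc":                       # ("inc",): add 1 to the accumulator
                state["acc"] += 1
                return state["acc"]
            if op == "const":
                return node[1]
            raise ValueError(f"unknown op {op}")

        # A tiny evolved-looking program: repeat (inc) 4 times, then once more.
        program = ("seq", ("for", ("const", 4), ("inc",)), ("inc",))
        state = {"acc": 0}
        evaluate(program, state)
        print(state["acc"])                       # 5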