17 research outputs found

    Web-Based Evolutionary and Adaptive Information Retrieval

    No full text

    Excluding Fitness Helps Improve Robustness of Evolutionary Algorithms

    No full text

    A New Schema Survival and Construction Theory for One-Point Crossover

    No full text

    Novelty-based Fitness: An Evaluation under the Santa Fe Trail

    No full text
    We present an empirical analysis of the effects of incorporating novelty-based fitness (phenotypic behavioral diversity) into Genetic Programming with respect to training, test and generalization performance. Three novelty-based approaches are considered: novelty comparison against a finite archive of behavioral archetypes, novelty comparison against all previously seen behaviors, and a simple linear combination of the first method with a standard fitness measure. Performance is evaluated on the Santa Fe Trail, a well-known GP benchmark selected for its deceptiveness and established generalization test procedures. Results are compared to a standard quality-based fitness function (the count of food eaten). Ultimately, the quality-based objective provided better overall performance; however, solutions identified under novelty-based fitness functions generally provided much better test performance than their corresponding training performance. We interpret this as indicating that novelty-based fitness functions call for layered learning or symbiosis in order to more quickly integrate diverse behaviors into a single cohesive strategy.
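    As a concrete illustration of the approaches described in the abstract above, the sketch below shows how an archive-based novelty score and a simple linear quality/novelty blend might be computed. This is a minimal sketch, not the paper's implementation: the behavior representation, the distance measure, the archive policy and the parameters k, alpha, max_size and p_add are assumptions made for illustration.

    import random

    def behavioral_distance(b1, b2):
        # Hamming-style distance between two behavior vectors
        # (e.g. per-step records of trail cells visited).
        return sum(x != y for x, y in zip(b1, b2))

    def novelty_score(behavior, archive, k=15):
        # Novelty as the mean distance to the k nearest behaviors in the archive.
        if not archive:
            return 0.0
        dists = sorted(behavioral_distance(behavior, other) for other in archive)
        nearest = dists[:k]
        return sum(nearest) / len(nearest)

    def combined_fitness(quality, novelty, alpha=0.5):
        # Linear blend of a quality objective (e.g. food eaten on the
        # Santa Fe Trail) and a novelty objective; alpha is illustrative.
        return alpha * quality + (1.0 - alpha) * novelty

    def maybe_add_to_archive(behavior, archive, max_size=500, p_add=0.05):
        # Finite archive of behavioral archetypes: add stochastically and
        # evict the oldest entry once the archive is full.
        if random.random() < p_add:
            archive.append(behavior)
            if len(archive) > max_size:
                archive.pop(0)

    Comparing against all previously seen behaviors, the second approach named in the abstract, corresponds to an unbounded archive to which every evaluated behavior is added.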

    The Tree-String Problem: An Artificial Domain for Structure and Content Search

    No full text
    This paper introduces the Tree-String problem for genetic programming and related search and optimisation methods. To improve the understanding of optimisation and search methods, we aim to capture the complex dynamic created by the interdependencies of solution structure and content. To this end, we created an artificial domain that is amenable to analysis, yet representative of a wide range of real-world applications.
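    The published problem definition is not reproduced in the abstract, but the interdependency of structure and content can be illustrated with a toy fitness of the following kind. This is a hypothetical sketch rather than the Tree-String scoring from the paper: the Node representation, the per-node structure and content scores, and the weights w_structure and w_content are all assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        label: str
        children: list = field(default_factory=list)

    def structure_score(candidate, target):
        # Count corresponding nodes at which the candidate matches the
        # target tree's shape (same number of children).
        score = 1 if len(candidate.children) == len(target.children) else 0
        for c, t in zip(candidate.children, target.children):
            score += structure_score(c, t)
        return score

    def content_score(candidate, target):
        # Count corresponding nodes whose labels (the "string" content) match.
        score = 1 if candidate.label == target.label else 0
        for c, t in zip(candidate.children, target.children):
            score += content_score(c, t)
        return score

    def tree_string_fitness(candidate, target, w_structure=0.5, w_content=0.5):
        # Content is only rewarded where the surrounding structure lines the
        # nodes up with the target, which is the kind of coupling between
        # structure search and content search that the domain is meant to expose.
        return (w_structure * structure_score(candidate, target)
                + w_content * content_score(candidate, target))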

    Open issues in genetic programming

    Get PDF
    It is approximately 50 years since the first computational experiments were conducted in what has become known today as the field of Genetic Programming (GP), twenty years since John Koza named and popularised the method, and ten years since the first issue of the Genetic Programming & Evolvable Machines journal appeared. In particular, during the past two decades there has been a significant range and volume of development in the theory and application of GP, and in recent years the field has become increasingly applied. Despite the successful application of GP to a number of challenging real-world problem domains, and progress in the development of a theory explaining the behavior and dynamics of GP, a number of significant open issues remain. These issues must be addressed for GP to realise its full potential and to become a trusted mainstream member of the computational problem-solving toolkit. In this paper we outline some of the challenges and open issues that face researchers and practitioners of GP. We hope this overview will stimulate debate, focus the direction of future research to deepen our understanding of GP, and further the development of more powerful problem-solving algorithms.

    Tackling overfitting in evolutionary-driven financial model induction

    No full text
    This chapter explores the issue of overfitting in grammar-based Genetic Programming. Tools such as Genetic Programming are well suited to problems in finance where we seek to learn or induce a model from data. Models that overfit the data upon which they are trained prevent model generalisation, which is an important goal of learning algorithms. Early stopping is a technique that is frequently used to counteract overfitting, but it often fails to identify the optimal point at which to stop training. In this chapter, we implement four classes of stopping criteria that attempt to stop training when the generalisation of the evolved model is maximised. We show promising results using, in particular, one novel class of criteria, which measures the correlation between the training and validation fitness at each generation. These criteria decide whether to stop training based on this correlation; they had a high probability of being the best among a suite of potential criteria applied during a run, and they often found the lowest validation-set error of the entire run faster than other criteria.
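    As an illustration of the correlation-based class of criteria mentioned in the abstract above, the sketch below tracks the best training and validation fitness of each generation and signals a stop once their correlation over a recent window falls below a threshold. This is a minimal sketch under assumptions: the chapter's exact stopping criteria are not specified here, so the window length, the threshold and the use of Pearson correlation are illustrative choices.

    import math

    def pearson(xs, ys):
        # Plain Pearson correlation coefficient.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        if sx == 0 or sy == 0:
            return 0.0
        return cov / (sx * sy)

    class CorrelationStopping:
        # Stop training when improvements on the training set no longer
        # track improvements on the validation set.
        def __init__(self, window=10, threshold=0.5):
            self.window = window
            self.threshold = threshold
            self.train_history = []
            self.valid_history = []

        def update(self, train_fitness, valid_fitness):
            # Record this generation's fitness values and report whether
            # training should stop.
            self.train_history.append(train_fitness)
            self.valid_history.append(valid_fitness)
            if len(self.train_history) < self.window:
                return False
            r = pearson(self.train_history[-self.window:],
                        self.valid_history[-self.window:])
            return r < self.threshold

    A grammar-based GP run would call update() once per generation and halt, or record the current best model, as soon as it returns True.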