16 research outputs found

    Using particle swarm optimization to evolve two-player game agents

    Computer game-playing agents are almost as old as computers themselves; people have been developing such agents since the 1950s. Unfortunately, the techniques for game-playing agents have remained essentially the same for almost half a century -- an eternity in computer time. Recently developed approaches have shown that it is possible to build game-playing agents with the help of learning algorithms. This study is based on the concept of algorithms that learn how to play board games from zero initial knowledge about playing strategies. A coevolutionary approach, in which a neural network assesses the desirability of leaf nodes in a game tree and evolutionary algorithms train the neural networks in competition, is reviewed. The thesis then presents an alternative approach in which particle swarm optimization (PSO) is used to train the neural networks. Different variations of the PSO are implemented and compared, and the results of the PSO approaches are compared with those of an evolutionary programming approach. The performance of the PSO algorithms is investigated for different values of the PSO control parameters. This study shows that the PSO approach can be applied successfully to train game-playing agents. Dissertation (MSc), University of Pretoria, 2007.
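
    To make the training procedure concrete, below is a minimal sketch of a canonical gbest particle swarm optimizer over a flat weight vector; the swarm size, inertia weight and acceleration coefficients are common textbook defaults, and the toy fitness function is purely illustrative (in the thesis's setting, fitness would instead come from the game-playing performance of the neural network the weights encode).

        import numpy as np

        def pso_train(fitness, dim, swarm_size=20, iters=200,
                      w=0.72, c1=1.49, c2=1.49, seed=0):
            """Canonical gbest PSO: each particle is one flattened weight vector."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(-1.0, 1.0, (swarm_size, dim))   # positions (candidate weights)
            v = np.zeros_like(x)                            # velocities
            pbest = x.copy()
            pbest_f = np.array([fitness(p) for p in x])
            gbest = pbest[np.argmax(pbest_f)].copy()

            for _ in range(iters):
                r1 = rng.random((swarm_size, dim))
                r2 = rng.random((swarm_size, dim))
                # Velocity update: inertia + cognitive pull + social pull.
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                f = np.array([fitness(p) for p in x])
                improved = f > pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                gbest = pbest[np.argmax(pbest_f)].copy()
            return gbest

        # Toy usage: maximise -||w||^2.  In the thesis's setting the fitness would
        # come from tournament play between agents controlled by these weights.
        best_weights = pso_train(lambda wv: -float(np.sum(wv ** 2)), dim=10)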

    Unifying a Geometric Framework of Evolutionary Algorithms and Elementary Landscapes Theory

    Evolutionary algorithms (EAs) are randomised general-purpose strategies, inspired by natural evolution, often used for finding (near) optimal solutions to problems in combinatorial optimisation. Over the last 50 years, many theoretical approaches in evolutionary computation have been developed to analyse the performance of EAs, design EAs or measure problem difficulty via fitness landscape analysis. An open challenge is to formally explain why a general class of EAs performs better, or worse, than others on a class of combinatorial problems across representations. However, the lack of a general unified theory of EAs and fitness landscapes, across problems and representations, makes it harder to characterise pairs of general classes of EAs and combinatorial problems where good performance can be provably guaranteed. This thesis explores a unification between a geometric framework of EAs and elementary landscapes theory, tied to no specific representation or problem, with complementary strengths in the analysis of population-based EAs and combinatorial landscapes. This unification is organised around three essential aspects: the search space structure induced by crossovers, the search behaviour of population-based EAs, and the structure of fitness landscapes. First, this thesis builds a crossover classification to systematically compare crossovers in the geometric framework and elementary landscapes theory, revealing a shared general subclass of crossovers: geometric recombination P-structures, which covers well-known crossovers. The crossover classification is then extended to a general framework for axiomatically analysing the population behaviour induced by crossover classes on associated EAs. This shows that the shared general class of all EAs using geometric recombination P-structures, but no mutation, always performs the same abstract form of convex evolutionary search. Finally, this thesis characterises a class of globally convex combinatorial landscapes shared by the geometric framework and elementary landscapes theory: abstract convex elementary landscapes. It is formally explained why geometric recombination P-structure EAs can be expected to outperform random search on abstract convex elementary landscapes related to low-order graph Laplacian eigenvalues. Altogether, this thesis paves a way towards a general unified theory of EAs and combinatorial fitness landscapes.
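
    As background (a standard identity from elementary landscapes theory, not a result specific to this thesis): for a fitness function f over a d-regular neighbourhood graph with Laplacian L = dI - A and mean fitness over the search space written as \bar{f}, the landscape is elementary when f minus its mean is an eigenvector of L, which is equivalent to Grover's wave equation linking the average neighbour fitness to the global mean:

        % Elementary landscape condition (sign conventions vary between authors).
        \[
          L\,(f - \bar{f}\,\mathbf{1}) = \lambda\,(f - \bar{f}\,\mathbf{1})
          \quad\Longleftrightarrow\quad
          \frac{1}{d}\sum_{y \in N(x)} f(y) = f(x) + \frac{\lambda}{d}\bigl(\bar{f} - f(x)\bigr).
        \]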

    A case study of controlling crossover in a selection hyper-heuristic framework using the multidimensional knapsack problem

    Hyper-heuristics are high-level methodologies for solving complex problems that operate on a search space of heuristics. In a selection hyper-heuristic framework, a heuristic is chosen from an existing set of low-level heuristics and applied to the current solution to produce a new solution at each point in the search. The use of crossover low-level heuristics is possible in an increasing number of general-purpose hyper-heuristic tools such as HyFlex and Hyperion; however, little work has been undertaken to assess how best to utilise crossover. Since a single-point search hyper-heuristic operates on a single candidate solution, and two candidate solutions are required for crossover, a mechanism is required to control the choice of the other solution. The frameworks we propose maintain a list of potential solutions for use in crossover. We investigate the use of such lists at two conceptual levels. First, crossover is controlled at the hyper-heuristic level, where no problem-specific information is required. Second, it is controlled at the problem-domain level, where problem-specific information is used to produce good-quality solutions to use in crossover. A number of selection hyper-heuristics are compared using these frameworks over three benchmark libraries with varying properties for an NP-hard optimisation problem: the multidimensional 0-1 knapsack problem. It is shown that managing crossover at the domain level outperforms managing crossover at the hyper-heuristic level in this problem domain. © 2016 Massachusetts Institute of Technology.
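
    As a rough illustration of the idea (not the paper's actual framework), the sketch below shows a single-point selection hyper-heuristic that keeps a small list of candidate solutions at the hyper-heuristic level to serve as crossover partners; the heuristic-selection rule, crossover probability, acceptance criterion and list-update policy are all placeholder assumptions.

        import random

        def selection_hyper_heuristic(init_solution, unary_heuristics, crossover_ops,
                                      evaluate, accept, iters=10_000, memory_size=5):
            """Single-point selection hyper-heuristic with a crossover partner list.

            Illustrative sketch: heuristics are chosen uniformly at random and the
            crossover partner comes from a small list kept at the hyper-heuristic
            level.  Assumes evaluate() returns a value to maximise.
            """
            current, best = init_solution, init_solution
            partners = [init_solution] * memory_size        # potential crossover partners

            for _ in range(iters):
                if crossover_ops and random.random() < 0.3:  # pick a crossover heuristic
                    op = random.choice(crossover_ops)
                    candidate = op(current, random.choice(partners))
                else:                                        # pick a unary heuristic
                    op = random.choice(unary_heuristics)
                    candidate = op(current)

                if accept(evaluate(candidate), evaluate(current)):
                    current = candidate
                if evaluate(current) > evaluate(best):
                    best = current
                    partners[random.randrange(memory_size)] = best  # refresh the list
            return best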

    Angle modulated population based algorithms to solve binary problems

    Recently, continuous-valued optimization problems have received a great amount of attention, resulting in optimization algorithms that are very efficient within the continuous-valued space. Many optimization problems are, however, defined within the binary-valued problem space. Continuous-valued optimization algorithms cannot operate directly on a binary-valued problem representation without adaptation, because the mathematics used within these algorithms generally fails in a binary problem space. Unfortunately, such adaptations may alter the behavior of the algorithm, potentially degrading the performance of the original continuous-valued optimization algorithm. Additionally, binary representations present complications with respect to increasing problem dimensionality, interdependencies between dimensions, and a loss of precision. This research investigates the possibility of applying continuous-valued optimization algorithms to solve binary-valued problems without requiring algorithm adaptation. This is achieved through the application of a mapping technique known as angle modulation. Angle modulation effectively addresses most of the problems associated with the use of a binary representation by abstracting a binary problem into a four-dimensional continuous-valued space, from which a binary solution is then obtained. The abstraction takes the form of a bit-generating function produced by a continuous-valued algorithm; a binary solution is then obtained by sampling the bit-generating function. This thesis proposes a number of population-based angle-modulated continuous-valued algorithms to solve binary-valued problems. These algorithms are compared to their binary algorithm counterparts using a suite of benchmark functions. Empirical analysis shows that the angle-modulated continuous-valued algorithms are viable alternatives to binary optimization algorithms. Dissertation (MSc), University of Pretoria, 2012.
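
    The sketch below illustrates the angle-modulation decoding step, using the trigonometric bit-generating function with coefficients (a, b, c, d) commonly used in the angle-modulation literature; the sampling points and sign threshold are illustrative assumptions rather than the thesis's exact configuration.

        import math

        def angle_modulation_bits(coeffs, n_bits):
            """Decode a 4-D continuous vector (a, b, c, d) into an n-bit binary solution.

            Commonly used generating function (assumed here):
                g(x) = sin(2*pi*(x - a) * b * cos(2*pi*(x - a) * c)) + d,
            sampled at x = 0, 1, ..., n_bits - 1; bit_i = 1 if g(i) > 0 else 0.
            """
            a, b, c, d = coeffs
            bits = []
            for x in range(n_bits):
                g = math.sin(2 * math.pi * (x - a) * b
                             * math.cos(2 * math.pi * (x - a) * c)) + d
                bits.append(1 if g > 0 else 0)
            return bits

        # Any continuous optimiser (e.g. a PSO) can now search the 4-D coefficient
        # space; a candidate's fitness is the binary objective evaluated on
        # angle_modulation_bits(candidate, n_bits).
        print(angle_modulation_bits((0.0, 0.5, 0.8, 0.0), 16))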

    A Field Guide to Genetic Programming

    A Field Guide to Genetic Programming (ISBN 978-1-4092-0073-4) is an introduction to genetic programming (GP). GP is a systematic, domain-independent method for getting computers to solve problems automatically, starting from a high-level statement of what needs to be done. Using ideas from natural evolution, GP starts from an ooze of random computer programs and progressively refines them through processes of mutation and sexual recombination until solutions emerge, all without the user having to know or specify the form or structure of solutions in advance. GP has generated a plethora of human-competitive results and applications, including novel scientific discoveries and patentable inventions.
    Contents: Introduction -- Representation, initialisation and operators in tree-based GP -- Getting ready to run genetic programming -- Example genetic programming run -- Alternative initialisations and operators in tree-based GP -- Modular, grammatical and developmental tree-based GP -- Linear and graph genetic programming -- Probabilistic genetic programming -- Multi-objective genetic programming -- Fast and distributed genetic programming -- GP theory and its applications -- Applications -- Troubleshooting GP -- Conclusions -- Appendices: Resources; TinyGP.
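
    To give a concrete flavour of the basic recipe the book describes (representation, initialisation, fitness, selection, crossover and mutation), here is a minimal self-contained tree-based GP sketch for a symbolic regression problem, loosely in the spirit of the TinyGP appendix rather than the book's actual code; the function and terminal sets, population size and truncation selection are illustrative choices.

        import random, operator, math

        FUNCS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2)]
        TERMS = ['x'] + [round(random.uniform(-1, 1), 2) for _ in range(5)]

        def random_tree(depth=3):
            """Grow-style initialisation: tuple (function, children) or a terminal."""
            if depth == 0 or random.random() < 0.3:
                return random.choice(TERMS)
            f, arity = random.choice(FUNCS)
            return (f, [random_tree(depth - 1) for _ in range(arity)])

        def evaluate(tree, x):
            if isinstance(tree, tuple):
                f, children = tree
                return f(*(evaluate(c, x) for c in children))
            return x if tree == 'x' else tree

        def nodes(tree, path=()):
            """Yield (path, subtree) for every node, for subtree crossover/mutation."""
            yield path, tree
            if isinstance(tree, tuple):
                for i, c in enumerate(tree[1]):
                    yield from nodes(c, path + (i,))

        def replace(tree, path, sub):
            if not path:
                return sub
            f, children = tree
            return (f, [replace(c, path[1:], sub) if j == path[0] else c
                        for j, c in enumerate(children)])

        def crossover(a, b):
            pa, _ = random.choice(list(nodes(a)))   # crossover point in parent a
            _, sb = random.choice(list(nodes(b)))   # donated subtree from parent b
            return replace(a, pa, sb)

        def mutate(tree):
            p, _ = random.choice(list(nodes(tree)))
            return replace(tree, p, random_tree(2))

        def fitness(tree, target, xs):
            try:
                return -sum((evaluate(tree, x) - target(x)) ** 2 for x in xs)
            except OverflowError:
                return -math.inf

        # Toy run: evolve an expression approximating x^2 + x + 1 on [-1, 1].
        target = lambda x: x * x + x + 1
        xs = [i / 10 for i in range(-10, 11)]
        pop = [random_tree() for _ in range(200)]
        for gen in range(30):
            pop.sort(key=lambda t: fitness(t, target, xs), reverse=True)
            parents = pop[:50]                      # truncation selection
            pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                             for _ in range(150)]
        print(fitness(pop[0], target, xs))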

    Crossover control in selection hyper-heuristics: case studies using MKP and HyFlex

    Hyper-heuristics are a class of high-level search methodologies which operate over a search space of heuristics rather than a search space of solutions. Hyper-heuristic research has set out to develop methods which are more general than traditional search and optimisation techniques. In recent years, focus has shifted considerably towards cross-domain heuristic search: the intention is to develop methods which deliver an acceptable level of performance over a variety of different problem domains, given a set of low-level heuristics to work with. This thesis presents a body of work investigating the use of selection hyper-heuristics in a number of different problem domains. Specifically, the use of crossover operators, prevalent in many evolutionary algorithms, is explored within the context of single-point search hyper-heuristics. A number of traditional selection hyper-heuristics are applied to instances of a well-known NP-hard combinatorial optimisation problem, the multidimensional knapsack problem. This domain is chosen as a benchmark for the variety of existing problem instances and solution methods available. The results suggest that selection hyper-heuristics are a viable method for solving some instances of this problem domain. Following this, a framework is defined to describe the conceptual level at which crossover low-level heuristics are managed in single-point selection hyper-heuristics. HyFlex is an existing software framework which supports the design of heuristic search methods over multiple problem domains, i.e. cross-domain optimisation. A traditional heuristic selection mechanism is modified in order to improve results in the context of cross-domain optimisation. Finally, the effect of crossover use in cross-domain optimisation is explored.
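
    As a generic illustration of the kind of heuristic selection mechanism a selection hyper-heuristic can modify (not the mechanism modified in this thesis), the sketch below scores each low-level heuristic by its recent success and selects heuristics with probability proportional to those scores.

        import random

        class ScoreBasedSelector:
            """Illustrative reinforcement-style heuristic selector: each low-level
            heuristic keeps a score that grows when it improves the current solution
            and decays otherwise; heuristics are drawn proportionally to score."""

            def __init__(self, n_heuristics, reward=1.0, decay=0.95, floor=0.1):
                self.scores = [1.0] * n_heuristics
                self.reward, self.decay, self.floor = reward, decay, floor

            def select(self):
                r = random.uniform(0, sum(self.scores))
                acc = 0.0
                for i, s in enumerate(self.scores):
                    acc += s
                    if r <= acc:
                        return i
                return len(self.scores) - 1

            def feedback(self, i, improved):
                if improved:
                    self.scores[i] += self.reward
                else:
                    self.scores[i] = max(self.floor, self.scores[i] * self.decay)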

    Prediction of performance and problem difficulty in genetic programming

    Get PDF
    The estimation of problem difficulty is an open issue in Genetic Programming (GP). The goal of this work is to generate models that predict the expected performance of a GP-based classifier when it is applied to an unseen task. Classification problems are described using domain-specific features, some of which are proposed in this work, and these features are given as input to the predictive models. These models are referred to as predictors of expected performance (PEPs). We extend this approach by using an ensemble of specialized predictors (SPEPs), dividing classification problems into groups and choosing the corresponding SPEP. The proposed predictors are trained using 2D synthetic classification problems with balanced datasets. The models are then used to predict the performance of the GP classifier on unseen real-world datasets that are multidimensional and imbalanced. This work is the first to provide a performance prediction of a GP system on test data, whereas previous works focused on predicting training performance. Accurate predictive models are generated by posing a symbolic regression task and solving it with GP. These results are achieved by using highly descriptive features and including a dimensionality reduction stage that simplifies the learning and testing process. The proposed approach could be extended to other classification algorithms and used as the basis of an expert system for algorithm selection.
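
    A rough sketch of the prediction pipeline described above: problem-descriptor features pass through a dimensionality reduction stage and into a regression model that predicts classifier performance. The feature matrix below is synthetic placeholder data, and a random-forest regressor stands in for the GP-evolved symbolic regression models actually used in the work.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.pipeline import make_pipeline

        # Hypothetical meta-dataset: one row per training classification problem,
        # columns are problem-descriptor features; y is the GP classifier's
        # measured test performance on that problem.
        rng = np.random.default_rng(0)
        X_meta = rng.random((60, 12))          # 60 synthetic problems, 12 descriptors
        y_perf = rng.random(60)                # stand-in for measured accuracy

        # Dimensionality reduction followed by a regressor acts as the PEP.
        pep = make_pipeline(PCA(n_components=5), RandomForestRegressor(random_state=0))
        pep.fit(X_meta, y_perf)

        new_problem_features = rng.random((1, 12))   # descriptors of an unseen problem
        print(pep.predict(new_problem_features))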