
    Novel Hyper-heuristics Applied to the Domain of Bin Packing

    Central to the ideology behind hyper-heuristic research is the desire to increase the level of generality of heuristic procedures so that they can be easily applied to a wide variety of problems to produce solutions of adequate quality within practical timescales. This thesis examines hyper-heuristics within a single problem domain, that of Bin Packing, where the benefits to be gained from selecting or generating heuristics for large problem sets with widely differing characteristics are considered. Novel implementations of both selective and generative hyper-heuristics are proposed. The former approach attempts to map the characteristics of a problem to the heuristic that best solves it, while the latter uses Genetic Programming techniques to automate the heuristic design process. Results obtained using the selective approach show that solution quality was improved significantly when contrasted with the performance of the best single heuristic applied to large sets of diverse problem instances. As well as reinforcing the benefits to be gained by selecting from a range of heuristics, the study also highlighted the lack of diversity in human-designed algorithms. Using Genetic Programming techniques to automate the heuristic design process allowed both single heuristics and collectives of heuristics to be generated that were shown to perform significantly better than their human-designed counterparts. The thesis concludes by combining both selective and generative hyper-heuristic approaches into a novel immune-inspired system in which heuristics that cover distinct areas of the problem space are generated. The system is shown to have a number of advantages over similar cooperative approaches in terms of its plasticity, efficiency and long-term memory. Extensive testing of all of the hyper-heuristics developed, on large sets of both benchmark and newly generated problem instances, reinforces the utility of hyper-heuristics in their goal of producing fast, understandable procedures that give good-quality solutions for a range of problems with widely varying characteristics.
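    The selective idea above can be illustrated with a minimal sketch: two classical packing heuristics and a toy selector that maps a simple instance characteristic to the heuristic expected to do best. The heuristics (first fit, best fit) are standard; the selector rule and all names are illustrative assumptions, not taken from the thesis.

        # Minimal sketch of a selective hyper-heuristic for 1D bin packing.
        # The heuristics are standard; the feature-based selector rule is a toy.

        def first_fit(items, capacity):
            """Place each item in the first bin with enough remaining space."""
            bins = []
            for item in items:
                for b in bins:
                    if sum(b) + item <= capacity:
                        b.append(item)
                        break
                else:
                    bins.append([item])
            return bins

        def best_fit(items, capacity):
            """Place each item in the feasible bin with the least space left over."""
            bins = []
            for item in items:
                best = None
                for b in bins:
                    slack = capacity - sum(b) - item
                    if slack >= 0 and (best is None or slack < capacity - sum(best) - item):
                        best = b
                if best is None:
                    bins.append([item])
                else:
                    best.append(item)
            return bins

        def select_heuristic(items, capacity):
            """Toy selector: map a simple feature (mean item size) to a heuristic."""
            mean_size = sum(items) / len(items)
            return best_fit if mean_size > capacity / 2 else first_fit

        items, capacity = [4, 8, 1, 4, 2, 1], 10
        chosen = select_heuristic(items, capacity)
        print(chosen.__name__, chosen(items, capacity))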

    A Lifelong Learning Hyper-heuristic Method for Bin Packing.

    We describe a novel hyper-heuristic system which continuously learns over time to solve a combinatorial optimisation problem. The system continuously generates new heuristics and samples problems from its environment; representative problems and heuristics are incorporated into a self-sustaining network of interacting entities inspired by methods in Artificial Immune Systems. The network is plastic in both its structure and content, leading to the following properties: it exploits existing knowledge captured in the network to rapidly produce solutions; it can adapt to new problems with widely differing characteristics; and it is capable of generalising over the problem space. The system is tested on a large corpus of 3968 new instances of 1D bin packing problems as well as on 1370 existing problems from the literature; it shows excellent performance in terms of the quality of solutions obtained across the datasets and in adapting to dynamically changing sets of problem instances compared to previous approaches. As the network self-adapts to sustain a minimal repertoire of both problems and heuristics that form a representative map of the problem space, the system is further shown to be computationally efficient and therefore scalable.
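    The "self-sustaining network" behaviour described above can be read, in a very reduced form, as keeping only those heuristics that win on at least one retained problem. The sketch below, reusing the first_fit and best_fit heuristics from the previous sketch, shows that pruning step only; it is an illustration, not the authors' immune-inspired implementation, and the heuristic-generation and problem-sampling components are omitted.

        # Illustrative pruning step for a minimal repertoire: a heuristic survives
        # only if it is a (possibly tied) best performer on some retained problem.
        # Reuses first_fit and best_fit from the previous sketch.

        def fullness(bins, capacity):
            """Average bin fullness of a packing (list of lists of item sizes)."""
            return sum(sum(b) for b in bins) / (len(bins) * capacity)

        def prune_repertoire(heuristics, problems, capacity):
            winners = set()
            for items in problems:
                scores = {h: fullness(h(items, capacity), capacity) for h in heuristics}
                best = max(scores.values())
                winners |= {h for h, s in scores.items() if s == best}
            return [h for h in heuristics if h in winners]

        kept = prune_repertoire([first_fit, best_fit], [[4, 8, 1, 4, 2, 1], [5, 5, 5, 5]], 10)
        print([h.__name__ for h in kept])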

    QAL-BP: An Augmented Lagrangian Quantum Approach for Bin Packing Problem

    Bin packing is a well-known NP-hard problem in the domain of artificial intelligence, posing significant challenges in finding efficient solutions. Conversely, recent advancements in quantum technologies have shown promising potential for achieving substantial computational speedup, particularly in certain problem classes such as combinatorial optimization. In this study, we introduce QAL-BP, a novel Quadratic Unconstrained Binary Optimization (QUBO) formulation designed specifically for bin packing and suitable for quantum computation. QAL-BP utilizes the augmented Lagrangian method to incorporate the bin packing constraints into the objective function while also facilitating an analytical estimation of heuristic, but empirically robust, penalty multipliers. This approach leads to a more versatile and generalizable model that eliminates the need for empirically calculating instance-dependent Lagrangian coefficients, a requirement commonly encountered in alternative QUBO formulations for similar problems. To assess the effectiveness of our proposed approach, we conduct experiments on a set of bin-packing instances using a real Quantum Annealing device. Additionally, we compare the results with those obtained from two different classical solvers, namely simulated annealing and Gurobi. The experimental findings not only confirm the correctness of the proposed formulation but also demonstrate the potential of quantum computation in effectively solving the bin-packing problem, particularly as more reliable quantum technology becomes available.
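    The penalty pattern that QUBO formulations of bin packing typically follow can be sketched as below; this is the generic quadratic-penalty form, not the exact QAL-BP model, whose augmented-Lagrangian terms and analytically estimated multipliers are derived in the paper. Here x_{ij} = 1 if item i is placed in bin j, y_j = 1 if bin j is opened, w_i are item weights, C is the bin capacity, s_j is a slack term (binary-expanded in a true QUBO), and lambda, mu are penalty multipliers.

        % Generic penalty-based QUBO sketch for bin packing (not the exact QAL-BP model)
        \begin{equation}
          \min_{x_{ij},\, y_j \in \{0,1\}} \;
            \sum_{j=1}^{m} y_j
            \;+\; \lambda \sum_{i=1}^{n} \Bigl( 1 - \sum_{j=1}^{m} x_{ij} \Bigr)^{2}
            \;+\; \mu \sum_{j=1}^{m} \Bigl( \sum_{i=1}^{n} w_i x_{ij} - C\, y_j + s_j \Bigr)^{2}
        \end{equation}
        % The first penalty enforces that each item is assigned exactly once;
        % the second drives each open bin's load towards at most C.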

    CHAMP: Creating Heuristics via Many Parameters for online bin packing

    The online bin packing problem is a well-known bin packing variant which requires immediate decisions to be made for the placement of a lengthy sequence of arriving items of various sizes, one at a time, into fixed-capacity bins without any overflow. The overall goal is maximising the average bin fullness. We investigate a ‘policy matrix’ representation for one-dimensional online bin packing, which assigns a score to each decision option independently, and the option with the highest value is chosen. A policy matrix might also be considered as a heuristic with many parameters, where each parameter value is a score. We hence investigate a framework which can be used for creating heuristics via many parameters. The proposed framework combines a Genetic Algorithm optimiser, which searches the space of heuristics in policy matrix form, and an online bin packing simulator, which acts as the evaluation function. The empirical results indicate the success of the proposed approach, providing the best solutions for almost all item sequence generators used during the experiments. We also present a novel fitness landscape analysis on the search space of policies. This study hence gives evidence of the potential for automated discovery, by intelligent systems, of powerful heuristics for online problems, reducing the need for expensive use of human expertise.
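    The ‘policy matrix’ representation described above lends itself to a short sketch: a table of scores indexed by the bin's remaining capacity after placement and the arriving item's size, where the item goes to the feasible open bin whose resulting state scores highest. In CHAMP the matrix entries would be evolved by the Genetic Algorithm with the simulator's average bin fullness as fitness; here they are filled randomly, and all names are illustrative.

        # Sketch of a policy-matrix heuristic for online 1D bin packing.
        # policy[remaining_after][item_size] holds a score; entries are random here
        # but would be optimised by a GA against an online bin packing simulator.

        import random

        CAPACITY = 10
        random.seed(0)
        policy = [[random.randint(0, 9) for _ in range(CAPACITY + 1)]
                  for _ in range(CAPACITY + 1)]

        def pack_online(items, policy, capacity=CAPACITY):
            bins = []  # remaining capacity of each open bin
            for item in items:
                feasible = [i for i, r in enumerate(bins) if r >= item]
                if feasible:
                    best = max(feasible, key=lambda i: policy[bins[i] - item][item])
                    bins[best] -= item
                else:
                    bins.append(capacity - item)  # open a new bin
            return bins

        print(pack_online([4, 8, 1, 4, 2, 1], policy))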

    Metaheuristic Design Patterns: New Perspectives for Larger-Scale Search Architectures

    Design patterns capture the essentials of recurring best practice in an abstract form. Their merits are well established in domains as diverse as architecture and software development. They offer significant benefits, not least a common conceptual vocabulary for designers, enabling greater communication of high-level concerns and increased software reuse. Inspired by the success of software design patterns, this chapter seeks to promote the merits of a pattern-based method for the development of metaheuristic search software components. To achieve this, a catalog of patterns is presented, organized into the families of structural, behavioral, methodological and component-based patterns. As an alternative to the increasing specialization associated with individual metaheuristic search components, the authors encourage computer scientists to embrace the ‘cross-cutting’ benefits of a pattern-based perspective on optimization algorithms. Some ways in which the patterns might form the basis of further larger-scale metaheuristic component design automation are also discussed.

    Multi objective particle swarm optimization: algorithms and applications

    Ph.D. thesis (Doctor of Philosophy).

    A Field Guide to Genetic Programming

    xiv, 233 p. : ill. ; 23 cm. Electronic book. A Field Guide to Genetic Programming (ISBN 978-1-4092-0073-4) is an introduction to genetic programming (GP). GP is a systematic, domain-independent method for getting computers to solve problems automatically, starting from a high-level statement of what needs to be done. Using ideas from natural evolution, GP starts from an ooze of random computer programs and progressively refines them through processes of mutation and sexual recombination, until solutions emerge. All this without the user having to know or specify the form or structure of solutions in advance. GP has generated a plethora of human-competitive results and applications, including novel scientific discoveries and patentable inventions. Contents: Introduction -- Representation, initialisation and operators in Tree-based GP -- Getting ready to run genetic programming -- Example genetic programming run -- Alternative initialisations and operators in Tree-based GP -- Modular, grammatical and developmental Tree-based GP -- Linear and graph genetic programming -- Probabilistic genetic programming -- Multi-objective genetic programming -- Fast and distributed genetic programming -- GP theory and its applications -- Applications -- Troubleshooting GP -- Conclusions -- Tricks of the trade -- Resources -- TinyGP -- Bibliography -- Index.
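    The GP cycle the book describes can be sketched very compactly. The toy below evolves arithmetic expression trees towards f(x) = x*x + x using truncation selection and subtree mutation only; crossover, tournament selection and the many refinements covered in the book are omitted, and this is not TinyGP itself.

        # Toy tree-based GP for symbolic regression of f(x) = x*x + x.
        # Trees are tuples ((fn, symbol), left, right) or terminals ('x' or a constant).

        import random, operator
        random.seed(1)

        OPS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]
        TERMS = ['x', 1.0, 2.0]

        def rand_tree(depth=3):
            if depth == 0 or random.random() < 0.3:
                return random.choice(TERMS)
            return (random.choice(OPS), rand_tree(depth - 1), rand_tree(depth - 1))

        def evaluate(tree, x):
            if tree == 'x':
                return x
            if isinstance(tree, float):
                return tree
            (fn, _), left, right = tree
            return fn(evaluate(left, x), evaluate(right, x))

        def error(tree):
            return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

        def mutate(tree):
            if random.random() < 0.2 or not isinstance(tree, tuple):
                return rand_tree(2)  # replace this subtree with a fresh random one
            return (tree[0], mutate(tree[1]), mutate(tree[2]))

        pop = [rand_tree() for _ in range(200)]
        for gen in range(30):
            pop = sorted(pop, key=error)[:50]                        # truncation selection
            pop += [mutate(random.choice(pop)) for _ in range(150)]  # variation
        print('best squared error:', error(min(pop, key=error)))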

    A Polyhedral Study of Mixed 0-1 Set

    We consider a variant of the well-known single node fixed charge network flow set with constant capacities. This set arises from the relaxation of more general mixed integer sets such as lot-sizing problems with multiple suppliers. We provide a complete polyhedral characterization of the convex hull of the given set.
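    As background, the constant-capacity single node fixed charge network flow set is usually written as below; the precise mixed 0-1 set studied in the paper, arising from relaxations of lot-sizing problems with multiple suppliers, may differ in its details.

        % Standard single node fixed charge network flow set with constant capacity C
        \[
          X = \Bigl\{ (x, y) \in \mathbb{R}^{n}_{+} \times \{0,1\}^{n} \;:\;
              \sum_{j=1}^{n} x_j \le b, \quad x_j \le C\, y_j \;\; (j = 1, \dots, n) \Bigr\}
        \]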

    From algorithm selection to generation using deep learning

    Algorithm selection and generation techniques are two methods that can be used to exploit the performance complementarity of different algorithms when applied to large diverse sets of combinatorial problem instances. As there is no single algorithm that dominates all others on all problem instances, algorithm selection automatically selects the algorithm expected to perform best for each problem instance. Meanwhile, algorithm generation refers to combining different algorithms in a manner that allows the resulting method to improve the efficacy of a pool of algorithms. This thesis examines algorithm selection and generation within a single streaming problem domain, namely Bin-Packing, where novel approaches are proposed and evaluated on large problem sets. This research starts by presenting a novel feature-free approach that selects the best-performing heuristic by capturing the sequential information implicit in a streaming instance and using this as direct input to one of two Deep Learning (DL) models, a Long Short-Term Memory (LSTM) network or a Gated Recurrent Unit (GRU), to learn a mapping from an instance to an algorithm. Results obtained using the proposed approach show that the feature-free selectors significantly outperform both the single best solver and the classical feature-based approach using well-known Machine Learning (ML) classifiers when applied to large sets of diverse problem instances. Next, a more radical approach is proposed: bypassing algorithm selection altogether by training an encoder-decoder LSTM, using solutions obtained from a set of algorithms, to predict a solution directly from the instance data, thereby behaving as an automatically generated algorithm. Experiments conducted on large datasets using problem batches of varying sizes show that the generated algorithm is able to accurately predict solutions, particularly with small batch sizes. Finally, the thesis develops the proposed encoder-decoder approach by introducing a novel neural approach for generating algorithms, in which a neural network acts as an algorithm by generating decisions. Two architectures are evaluated, an encoder-decoder LSTM and a feed-forward Neural Network (NN), trained using the decisions output by existing algorithms on a large set of instances. Experiments show that the newly generated algorithms are capable of solving a subset of instances better than the well-known bin-packing algorithms, and hence they can significantly improve overall performance when added to a pool of algorithms.
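    The feature-free selection idea can be sketched in PyTorch: the raw stream of item sizes is read by an LSTM whose final hidden state is mapped to a choice of heuristic. Layer sizes, the number of candidate heuristics and all names are illustrative assumptions rather than the settings used in the thesis; training would fit the logits, with a cross-entropy loss, against labels recording the best-performing heuristic for each instance.

        # Sketch of a feature-free algorithm selector: an LSTM reads the raw item-size
        # sequence of a streaming instance and outputs logits over candidate heuristics.
        # All sizes and names are illustrative.

        import torch
        import torch.nn as nn

        class HeuristicSelector(nn.Module):
            def __init__(self, hidden=64, n_heuristics=4):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_heuristics)

            def forward(self, item_sizes):           # item_sizes: (batch, seq_len, 1)
                _, (h_n, _) = self.lstm(item_sizes)
                return self.head(h_n[-1])            # logits over candidate heuristics

        model = HeuristicSelector()
        batch = torch.rand(8, 100, 1)                # 8 streaming instances of 100 items
        chosen = model(batch).argmax(dim=1)          # predicted best heuristic per instance
        print(chosen.shape)                          # torch.Size([8])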