
    Representative scenario construction and preprocessing for robust combinatorial optimization problems

    In robust combinatorial optimization with discrete uncertainty, approximation algorithms based on constructing a single scenario representing the whole uncertainty set are frequently used. One is the midpoint method, which uses the average-case scenario. It is known to be an N-approximation, where N is the number of scenarios. In this paper, we present a linear program to construct a representative scenario for the uncertainty set, which gives an approximation guarantee that is at least as good as that of previous methods. We further employ hyper-heuristic techniques operating over a space of preprocessing and aggregation steps to evolve algorithms that construct alternative representative single scenarios for the uncertainty set. In numerical experiments on the selection problem, we demonstrate that our approaches can improve the approximation guarantee of the midpoint approach by more than 20%.
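
    As an illustration of the midpoint idea described in this abstract, here is a minimal Python sketch (not the paper's LP-based construction): it averages the discrete scenarios into a single representative scenario, solves the resulting deterministic selection problem by picking the p cheapest items, and then evaluates that solution against the true min-max objective. The function name, toy cost matrix, and parameters are illustrative assumptions.

```python
import numpy as np

def midpoint_selection(costs, p):
    """Midpoint heuristic for the robust selection problem.

    costs: (N, n) array, one row per scenario, one column per item.
    p:     number of items to select.
    Returns the chosen item indices and their worst-case (max-scenario) cost.
    """
    # Representative scenario = average over all N discrete scenarios.
    midpoint = costs.mean(axis=0)
    # Solve the deterministic selection problem for the midpoint scenario:
    # simply take the p items with the smallest midpoint cost.
    chosen = np.argsort(midpoint)[:p]
    # Evaluate the chosen solution against the robust (min-max) objective.
    worst_case = costs[:, chosen].sum(axis=1).max()
    return chosen, worst_case

# Hypothetical toy instance: 3 scenarios, 5 items, select 2.
costs = np.array([[4, 1, 6, 3, 9],
                  [2, 8, 5, 4, 1],
                  [7, 3, 2, 6, 5]], dtype=float)
print(midpoint_selection(costs, p=2))
```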

    Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data

    Constraint Programming (CP) has proved an effective paradigm for modelling and solving difficult combinatorial satisfaction and optimisation problems from disparate domains. Many such problems arising from the commercial world are permeated by data uncertainty. Existing CP approaches that accommodate uncertainty are less suited to uncertainty arising from incomplete and erroneous data, because they do not build reliable models and solutions guaranteed to address the user's genuine problem as she perceives it. Other fields such as reliable computation offer combinations of models and associated methods to handle these types of uncertain data, but lack an expressive framework characterising the resolution methodology independently of the model. We present a unifying framework that extends the CP formalism in both model and solutions to tackle ill-defined combinatorial problems with incomplete or erroneous data. The certainty closure framework brings together modelling and solving methodologies from different fields into the CP paradigm to provide reliable and efficient approaches for uncertain constraint problems. We demonstrate the applicability of the framework on a case study in network diagnosis. We define resolution forms that give generic templates, and their associated operational semantics, to derive practical solution methods for reliable solutions.
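
    The reliability criterion underlying this line of work, namely that a solution should satisfy the constraints for every realization of the uncertain data rather than for a single assumed value, can be illustrated with a small sketch. The following toy example is an assumption of ours and is not the certainty closure framework itself: a linear constraint whose coefficients are only known up to intervals is checked at its worst-case realization, which for a non-negative candidate solution is the upper endpoint of each interval.

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # (lower, upper) bounds of an uncertain coefficient

def holds_for_all_realizations(coeffs: List[Interval], x: List[float], b: float) -> bool:
    """Check a linear constraint  sum_i a_i * x_i <= b  where each a_i is only
    known to lie in an interval and x is a non-negative candidate solution.

    The constraint is reliable (holds for every realization of the data) iff it
    holds at the worst-case realization, which for non-negative x is the upper
    endpoint of every coefficient interval.
    """
    assert all(xi >= 0 for xi in x), "this worst-case argument assumes x >= 0"
    worst_case_lhs = sum(hi * xi for (lo, hi), xi in zip(coeffs, x))
    return worst_case_lhs <= b

# Hypothetical data: two uncertain coefficients and a fixed right-hand side.
coeffs = [(1.0, 1.5), (2.0, 2.5)]
print(holds_for_all_realizations(coeffs, x=[1.0, 1.0], b=4.5))  # True:  1.5 + 2.5 <= 4.5
print(holds_for_all_realizations(coeffs, x=[2.0, 1.0], b=4.5))  # False: 3.0 + 2.5 >  4.5
```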

    From Parameter Tuning to Dynamic Heuristic Selection

    The balance between exploration and exploitation plays a crucial role in solving combinatorial optimization problems. This balance is reached by two general techniques: using an appropriate problem solver and setting its parameters properly. Both problems were widely studied in the past, and research on them continues to this day. The latest studies in the field of automated machine learning propose merging both problems, solving them at design time and later strengthening the results at runtime. To the best of our knowledge, a generalized approach for solving the parameter setting problem in heuristic solvers has not yet been proposed, and therefore the concept of merging heuristic selection and parameter control has not been introduced. In this thesis, we propose an approach for generic parameter control in meta-heuristics by means of reinforcement learning (RL). Going a step further, we suggest a technique for merging the heuristic selection and parameter control problems and solving them at runtime using an RL-based hyper-heuristic. The evaluation of the proposed parameter control technique on the symmetric traveling salesman problem (TSP) revealed its applicability by reaching the performance of the underlying meta-heuristic tuned online and used in isolation. Our approach provides results on par with the best underlying heuristics with tuned parameters.

    Contents:
    1 Introduction
      1.1 Motivation
      1.2 Research objective
      1.3 Solution overview
    2 Background and Related Work Analysis
      2.1 Optimization Problems and their Solvers
      2.2 Heuristic Solvers for Optimization Problems
      2.3 Setting Algorithm Parameters
      2.4 Combined Algorithm Selection and Hyper-Parameter Tuning Problem
      2.5 Conclusion on Background and Related Work Analysis
    3 Online Selection Hyper-Heuristic with Generic Parameter Control
      3.1 Combined Parameter Control and Algorithm Selection Problem
      3.2 Search Space Structure
      3.3 Parameter Prediction Process
      3.4 Low-Level Heuristics
      3.5 Conclusion of Concept
    4 Implementation Details
      4.2 Search Space
      4.3 Prediction Process
      4.4 Low Level Heuristics
      4.5 Conclusion
    5 Evaluation
      5.1 Optimization Problem
      5.2 Environment Setup
      5.3 Meta-heuristics Tuning
      5.4 Concept Evaluation
      5.5 Analysis of HH-PC Settings
      5.6 Conclusion
    6 Conclusion
    7 Future Work
      7.1 Prediction Process
      7.2 Search Space
      7.3 Evaluations and Benchmarks
    Bibliography
    A Evaluation Results
      A.1 Results in Figures
      A.2 Results in numbers
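
    To make the combined heuristic-selection-and-parameter-control idea concrete, below is a heavily simplified sketch: an epsilon-greedy bandit (a very basic form of RL) that picks a (low-level heuristic, parameter value) pair at each step and uses the observed objective improvement as its reward. This is an illustrative assumption on our part, not the HH-PC method developed in the thesis, and all identifiers are hypothetical.

```python
import random

def eps_greedy_hyper_heuristic(arms, evaluate, initial, steps=200, eps=0.1):
    """Minimal epsilon-greedy selection hyper-heuristic.

    arms:     list of (low_level_heuristic, parameter_value) pairs; each
              heuristic is a callable  heuristic(solution, parameter) -> new solution.
    evaluate: callable returning the objective value (lower is better).
    initial:  starting solution.
    The learner keeps a running average reward (objective improvement) per arm,
    mostly replays the best arm, and explores a random arm with probability eps.
    """
    value = [0.0] * len(arms)   # average observed reward per arm
    pulls = [0] * len(arms)     # number of times each arm was applied
    current, current_cost = initial, evaluate(initial)

    for _ in range(steps):
        if random.random() < eps or not any(pulls):
            idx = random.randrange(len(arms))                     # explore
        else:
            idx = max(range(len(arms)), key=lambda i: value[i])   # exploit
        heuristic, param = arms[idx]
        candidate = heuristic(current, param)
        candidate_cost = evaluate(candidate)
        reward = current_cost - candidate_cost                    # improvement as reward
        pulls[idx] += 1
        value[idx] += (reward - value[idx]) / pulls[idx]          # incremental mean
        if candidate_cost <= current_cost:                        # accept non-worsening moves
            current, current_cost = candidate, candidate_cost
    return current, current_cost

# Hypothetical usage on a toy 1-D problem: minimize (x - 7)^2 over the integers,
# with two step-size parameters playing the role of heuristic parameters.
arms = [(lambda x, step: x + random.choice([-step, step]), s) for s in (1, 5)]
best, cost = eps_greedy_hyper_heuristic(arms, evaluate=lambda x: (x - 7) ** 2, initial=0)
print(best, cost)
```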