
    Algorithm Engineering in Robust Optimization

    Robust optimization is a young and emerging field of research that has received a considerable increase of interest over the last decade. In this paper, we argue that the algorithm engineering methodology fits the field of robust optimization very well and yields a rewarding new perspective on both the current state of research and open research directions. To this end we go through the algorithm engineering cycle of design and analysis of concepts, development and implementation of algorithms, and theoretical and experimental evaluation. We show that many ideas of algorithm engineering have already been applied in publications on robust optimization. Most work on robust optimization is devoted to the analysis of concepts and the development of algorithms, some papers deal with the evaluation of a particular concept in case studies, and work on the comparison of concepts is just starting. A drawback that remains in many papers on robustness is the missing link of feeding the results of the experiments back into the design.
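    To make the central concept concrete: in the common min-max model of robust optimization, a candidate solution is judged by its worst-case cost over an uncertainty set. The Python sketch below is a minimal illustration with invented data (the scenario costs and the pick-2-of-4 feasibility rule are ours, not the paper's); it scores solutions by their worst scenario and optimizes exhaustively.

        from itertools import product

        # Hypothetical toy instance: pick a 0-1 vector x of length n; each
        # scenario in the uncertainty set assigns different costs to the items.
        scenarios = [
            [3, 5, 2, 4],  # costs under scenario 1
            [4, 2, 6, 3],  # costs under scenario 2
            [5, 4, 3, 2],  # costs under scenario 3
        ]
        n = 4

        def worst_case_cost(x):
            # Min-max robustness: a solution is judged by its worst scenario.
            return max(sum(c * xi for c, xi in zip(costs, x)) for costs in scenarios)

        # Exhaustive search over all 0-1 vectors selecting exactly 2 items
        # (a stand-in for whatever combinatorial structure the real problem has).
        best = min(
            (x for x in product([0, 1], repeat=n) if sum(x) == 2),
            key=worst_case_cost,
        )
        print(best, worst_case_cost(best))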

    Incorporating Memory and Learning Mechanisms Into Meta-RaPS

    Due to the rapid increase in the dimensions and complexity of real-life problems, it has become more difficult to find optimal solutions using only exact mathematical methods. The need to find near-optimal solutions in an acceptable amount of time is a challenge when developing more sophisticated approaches. One answer to this challenge is the implementation of metaheuristic approaches. However, a more powerful answer might be reached by incorporating intelligence into metaheuristics. Meta-RaPS (Metaheuristic for Randomized Priority Search) is a metaheuristic that creates high-quality solutions for discrete optimization problems. It is proposed that incorporating memory and learning mechanisms into Meta-RaPS, which is currently classified as a memoryless metaheuristic, can help the algorithm produce higher-quality results. The proposed Meta-RaPS versions were created by taking different perspectives on learning. The first approach taken is Estimation of Distribution Algorithms (EDA), a stochastic learning technique that creates a probability distribution for each decision variable to generate new solutions. The second Meta-RaPS version was developed by utilizing a machine learning algorithm, Q-Learning, which has been successfully applied to optimization problems whose output is a sequence of actions. In the third Meta-RaPS version, Path Relinking (PR) was implemented as a post-optimization method in which the new algorithm learns good attributes by memorizing the best solutions and follows them to reach better solutions. The fourth proposed version of Meta-RaPS presented another form of learning through its ability to adaptively tune parameters. The efficiency of these approaches motivated us to redesign Meta-RaPS by removing the improvement phase and adding a more sophisticated Path Relinking method. The new Meta-RaPS could solve even the largest problems in much less time while maintaining solution quality. To evaluate their performance, all introduced versions were tested on the 0-1 Multidimensional Knapsack Problem (MKP). After comparing the proposed algorithms, Meta-RaPS PR and Meta-RaPS Q-Learning appeared to be the algorithms with the best and worst performance, respectively. Nevertheless, all of them showed performance superior to other approaches to the 0-1 MKP in the literature.
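    For readers unfamiliar with Meta-RaPS, the following sketch (Python; the knapsack instance and the parameter values p and r are assumptions, not taken from the dissertation) illustrates its characteristic randomized construction step: with probability p the item with the best priority value is inserted, otherwise an item is drawn at random from a candidate list of items whose priority lies within r percent of the best.

        import random

        # Hypothetical single-constraint 0-1 knapsack instance.
        values  = [10, 7, 4, 9, 6]
        weights = [ 5, 4, 2, 6, 3]
        capacity = 12
        p, r = 0.3, 0.15  # assumed priority and restriction percentages

        def construct(seed=None):
            rng = random.Random(seed)
            chosen, load = [], 0
            candidates = list(range(len(values)))
            while True:
                feasible = [i for i in candidates if load + weights[i] <= capacity]
                if not feasible:
                    break
                # Priority rule: value-to-weight ratio (a common greedy rule).
                prio = {i: values[i] / weights[i] for i in feasible}
                best = max(prio.values())
                if rng.random() < p:
                    pick = max(feasible, key=prio.get)  # greedy step
                else:
                    rcl = [i for i in feasible if prio[i] >= (1 - r) * best]
                    pick = rng.choice(rcl)              # randomized step
                chosen.append(pick)
                load += weights[pick]
                candidates.remove(pick)
            return chosen, sum(values[i] for i in chosen)

        print(construct(seed=1))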

    Matheuristics: survey and synthesis

    In integer programming and combinatorial optimisation, the term "matheuristics" refers to methods that are heuristic in nature but draw on concepts from the literature on exact methods. We survey the literature on this topic, with a particular emphasis on matheuristics that yield both primal and dual bounds (i.e., upper and lower bounds in the case of a minimisation problem). We also make some comments about possible future developments.
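    To make the primal/dual-bound idea concrete, here is a small self-contained sketch in that spirit (Python; the knapsack instance is invented): the LP relaxation of a 0-1 knapsack, solvable greedily in closed form, gives a dual bound for the maximisation problem, while truncating the fractional item yields a feasible solution and hence a primal bound.

        # Hypothetical 0-1 knapsack instance (maximisation).
        values  = [60, 100, 120, 40]
        weights = [10,  20,  30, 15]
        capacity = 50

        # Sort items by value density; the Dantzig LP-relaxation bound fills
        # the knapsack greedily and takes the last item fractionally.
        order = sorted(range(len(values)),
                       key=lambda i: values[i] / weights[i], reverse=True)

        dual_bound = 0.0   # optimum of the LP relaxation (upper bound)
        primal_value = 0   # value of a feasible integer solution (lower bound)
        remaining = capacity
        for i in order:
            if weights[i] <= remaining:
                remaining -= weights[i]
                dual_bound += values[i]
                primal_value += values[i]
            else:
                dual_bound += values[i] * remaining / weights[i]  # fractional part
                break

        print(f"primal bound {primal_value} <= optimum <= dual bound {dual_bound}")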

    Matheuristics: using mathematics for heuristic design

    Matheuristics are heuristic algorithms based on mathematical tools, such as those provided by mathematical programming, that are structurally general enough to be applied to different problems with little adaptation of their abstract structure. The result can be metaheuristic hybrids with components derived from the mathematical model of the problem of interest, but the mathematical techniques themselves can also define general heuristic solution frameworks. In this paper, we focus our attention on mathematical programming and its contributions to developing effective heuristics. We briefly describe the available mathematical tools and then some matheuristic approaches, reporting representative examples from the literature. We also take the opportunity to provide some ideas for possible future developments.
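    One widely used pattern of the kind surveyed here is fix-and-optimize: repeatedly fix most of the binary variables at their incumbent values and solve the small remaining subproblem exactly. The sketch below is a minimal illustration under our own assumptions (a toy 0-1 knapsack, with exhaustive enumeration standing in for the exact MIP-solver call), not an algorithm from the paper.

        import random
        from itertools import product

        # Hypothetical instance: maximise value under one capacity constraint.
        values  = [8, 11, 6, 4, 12, 9, 7, 5]
        weights = [4,  6, 3, 2,  7, 5, 4, 3]
        capacity = 15
        rng = random.Random(0)

        def objective(x):
            if sum(w * xi for w, xi in zip(weights, x)) > capacity:
                return -1  # infeasible
            return sum(v * xi for v, xi in zip(values, x))

        x = [0] * len(values)  # trivial feasible incumbent
        for _ in range(20):    # fix-and-optimize iterations
            free = rng.sample(range(len(values)), 3)  # unfix a small window
            best = list(x)
            # Exact solve of the subproblem over the free variables only.
            for combo in product([0, 1], repeat=len(free)):
                trial = list(x)
                for idx, bit in zip(free, combo):
                    trial[idx] = bit
                if objective(trial) > objective(best):
                    best = trial
            x = best

        print(x, objective(x))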

    Meta-RaPS: Parameter Setting and New Applications

    Meta-heuristics have recently become a popular solution methodology, in terms of both research and application, for solving combinatorial optimization problems. Meta-heuristic methods guide simple heuristics or priority rules designed to solve a particular problem, enhancing them through a higher-level strategy. The advantage of meta-heuristics over conventional optimization methods is that they are able to find good (near-optimal) solutions within a reasonable computation time. This line of research is justified because, in most practical cases with medium- to large-scale problems, meta-heuristics are necessary to find a solution in a reasonable time. The specific meta-heuristic studied in this research is Meta-RaPS (Meta-heuristic for Randomized Priority Search), developed by DePuy and Whitehouse in 2001. Meta-RaPS is a generic, high-level strategy used to modify greedy algorithms based on the insertion of a random element (Moraga, 2002). To date, Meta-RaPS has been applied to different types of combinatorial optimization problems and has achieved solution performance comparable to other meta-heuristic techniques. The specific problem studied in this dissertation is the parameter setting of Meta-RaPS. The topic of parameter setting for meta-heuristics has not been extensively studied in the literature. Although the parameter setting method devised in this dissertation is used primarily on Meta-RaPS, it is applicable to any meta-heuristic's parameter setting problem. This dissertation not only enhances the power of Meta-RaPS by parameter tuning but also introduces a robust parameter selection technique with widespread utility for many meta-heuristics. Because the distribution of solution values generated by meta-heuristics for combinatorial optimization problems is not normal, current parameter setting techniques, which employ a parametric approach based on the assumption of normality, may not be appropriate. The proposed method is Non-parametric Based Genetic Algorithms (NPGA). Based on statistical tests, NPGA is able to enhance the solution quality of Meta-RaPS more than any other parameter setting procedure benchmarked in this research. NPGA found the best parameter settings, of all the methods studied, for 38 of the 41 Early/Tardy Single Machine Scheduling with Common Due Date and Sequence-Dependent Setup Time (ETP) problems and 50 of the 54 0-1 Multidimensional Knapsack Problems (0-1 MKP). In addition to the parameter setting procedure, this dissertation provides two Meta-RaPS combinatorial optimization applications, the 0-1 MKP and the ETP. For the ETP, the Meta-RaPS application in this dissertation currently gives the best meta-heuristic solution performance in the literature for common ETP test sets. For the large ETP test set, Meta-RaPS provided better solution performance than Simulated Annealing (SA) for 55 of the 60 problems. For the small test set, in all four small problem sets, Meta-RaPS outperformed existing algorithms in terms of average percent deviation from the optimal solution value. For the 0-1 MKP, the present Meta-RaPS application performs better than earlier Meta-RaPS applications by other researchers on this problem. The Meta-RaPS 0-1 MKP application presented here has better solution quality than the existing Meta-RaPS application (Moraga, 2005) found in the literature. Meta-RaPS gives 0.75% average percent deviation from the best known solutions for the 270 0-1 MKP test problems.
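    The core idea of comparing candidate parameter settings non-parametrically, since metaheuristic solution values are not normally distributed, can be sketched compactly. The Python illustration below is hedged throughout: the noisy objective and all parameter names are invented, and a simple Mann-Whitney-style rank sum stands in for a full statistical test. Such a comparison can then serve as the selection operator of a genetic algorithm over parameter vectors, which is the role NPGA plays.

        import random

        rng = random.Random(42)

        def run_metaheuristic(p, r):
            # Stand-in for one run of a tuned metaheuristic (minimisation).
            # The exponential noise mimics the skewed, non-normal solution
            # values that motivate a non-parametric comparison.
            base = 100 - 30 * p * (1 - abs(r - 0.2))
            return base + 5 * rng.expovariate(1.0)

        def rank_sum_of_first(sample_a, sample_b):
            # Mann-Whitney-style statistic: pool both samples, rank them,
            # and return the rank sum of sample_a (smaller is better here).
            pooled = sorted([(v, 0) for v in sample_a] + [(v, 1) for v in sample_b])
            return sum(rank for rank, (_, who) in enumerate(pooled, start=1) if who == 0)

        def compare(setting_a, setting_b, runs=20):
            a = [run_metaheuristic(*setting_a) for _ in range(runs)]
            b = [run_metaheuristic(*setting_b) for _ in range(runs)]
            return setting_a if rank_sum_of_first(a, b) < rank_sum_of_first(b, a) else setting_b

        # Rank-based tournament between two candidate (p, r) settings.
        print(compare((0.8, 0.2), (0.2, 0.6)))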

    Co-evolutionary Hybrid Bi-level Optimization

    Multi-level optimization stems from the need to tackle complex problems involving multiple decision makers. Two-level optimization, referred to as "bi-level optimization", occurs when two decision makers each control only part of the decision variables but impact each other (e.g., objective value, feasibility). Bi-level problems are sequential by nature and can be represented as nested optimization problems in which one problem (the "upper level") is constrained by another one (the "lower level"). The nested structure is a real obstacle that can be highly time consuming when the lower level is NP-hard. Consequently, classical nested optimization should be avoided. Some surrogate-based approaches have been proposed to approximate the lower-level objective value function (or variables) in order to reduce the number of times the lower level is globally optimized. Unfortunately, such a methodology is not applicable to large-scale and combinatorial bi-level problems. After a deep study of theoretical properties and a survey of existing applications that are bi-level by nature, problems that can benefit from a bi-level reformulation are investigated. A first contribution of this work has been to propose a novel bi-level clustering approach. Extending the well-known "uncapacitated k-median problem", it has been shown that clustering can be easily modeled as a two-level optimization problem using decomposition techniques. The resulting two-level problem is then turned into a bi-level problem offering the possibility to combine distance metrics in a hierarchical manner. The novel bi-level clustering problem has a very interesting property that enables us to tackle it with classical nested approaches: its lower-level problem can be solved in polynomial time. In cooperation with the Luxembourg Centre for Systems Biomedicine (LCSB), this new clustering model has been applied to real datasets such as disease maps (e.g., Parkinson's, Alzheimer's). Using a novel hybrid and parallel genetic algorithm as the optimization approach, the results obtained after a campaign of experiments produce new knowledge compared to classical clustering techniques that combine distance metrics in the usual manner. The previous bi-level clustering model has the advantage that the lower level can be solved in polynomial time, although the global problem is by definition NP-hard. Therefore, further investigations have been undertaken to tackle more general bi-level problems in which the lower-level problem does not present any specific advantageous properties. Since the lower-level problem can be very expensive to solve, the focus has been turned to surrogate-based approaches and hyper-parameter optimization techniques with the aim of approximating the lower-level problem and reducing the number of global lower-level optimizations. By adapting the well-known Bayesian optimization algorithm to solve general bi-level problems, the number of expensive lower-level optimizations has been dramatically reduced while very accurate solutions are obtained. The resulting solutions and the number of spared lower-level optimizations have been compared to the results of the bi-level evolutionary algorithm based on quadratic approximations (BLEAQ) after a campaign of experiments on official bi-level benchmarks. Although both approaches are very accurate, the bi-level Bayesian version required fewer lower-level objective function calls.
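    As background for the nested structure described above, here is a deliberately naive Python sketch (all objectives and ranges invented): every upper-level candidate triggers a complete lower-level optimization before the upper-level objective can even be evaluated, which is exactly why nesting becomes prohibitive when the lower level is NP-hard.

        # Naive nested bi-level solve over small discrete decision spaces.
        upper_choices = range(10)    # leader's decision variable
        lower_choices = range(100)   # follower's decision variable

        def lower_objective(x, y):
            return (y - 3 * x) ** 2  # the follower reacts to the leader

        def upper_objective(x, y):
            return (x - 5) ** 2 + 0.1 * y

        def lower_level_response(x):
            # Global lower-level optimization, performed once per upper-level
            # candidate; this call is the bottleneck when the follower's
            # problem is NP-hard, motivating surrogates or trained heuristics.
            return min(lower_choices, key=lambda y: lower_objective(x, y))

        best_x = min(upper_choices,
                     key=lambda x: upper_objective(x, lower_level_response(x)))
        print(best_x, lower_level_response(best_x))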
    Surrogate-based approaches are restricted to small-scale and continuous bi-level problems, although many real applications are combinatorial by nature. As for continuous problems, a study has been performed to apply machine learning strategies. Instead of approximating the lower-level solution value, new approximation algorithms for the discrete/combinatorial case have been designed. Using the principle employed in GP hyper-heuristics, heuristics are trained to tackle efficiently the NP-hard lower level of bi-level problems. This automatic generation of heuristics makes it possible to break the nested structure into two separate phases: training lower-level heuristics, and solving the upper-level problem with the new heuristics. In this context, a second modeling contribution has been introduced through a novel large-scale and mixed-integer bi-level problem dealing with pricing in the cloud, i.e., the Bi-level Cloud Pricing Optimization Problem (BCPOP). After a series of experiments that consisted of training heuristics on various lower-level instances of the BCPOP and using them to tackle the bi-level problem itself, the obtained results were compared to those of the "cooperative coevolutionary algorithm for bi-level optimization" (COBRA). Although training heuristics makes it possible to break the nested structure, a two-phase optimization is still required. Therefore, the emphasis has been put on training heuristics while optimizing the upper-level problem using competitive co-evolution. Instead of adopting the classical decomposition scheme used by COBRA, which suffers from the strong epistatic links between lower-level and upper-level variables, co-evolving the solution and the means to reach it can cope with these epistatic-link issues. The "CARBON" algorithm developed in this thesis is a competitive and hybrid co-evolutionary algorithm designed for this purpose. In order to validate the potential of CARBON, numerical experiments have been designed and results have been compared to state-of-the-art algorithms. These results demonstrate that CARBON makes it possible to address nested optimization efficiently.
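    A minimal flavour of competitive co-evolution in this setting (Python; the toy objectives and the linear response rules are our own invention, and this is in no way the CARBON algorithm itself) is to evolve upper-level decisions and lower-level response heuristics against each other, so that leaders are evaluated through the best co-evolved heuristic instead of an exact lower-level solve.

        import random

        rng = random.Random(7)

        def lower_objective(x, y):
            # Follower cost; the exact best response would be y = 3 * x.
            return (y - 3 * x) ** 2

        def upper_objective(x, y):
            return (x - 5) ** 2 + 0.1 * y

        # Population A: upper-level decisions. Population B: lower-level
        # "heuristics", here linear response rules y = a * x + b standing in
        # for trained heuristics that replace exact follower optimization.
        leaders = [rng.uniform(0, 10) for _ in range(10)]
        heuristics = [(rng.uniform(0, 5), rng.uniform(-5, 5)) for _ in range(10)]

        def respond(h, x):
            a, b = h
            return a * x + b

        for gen in range(200):
            # Heuristic fitness: follower cost achieved on the current leaders
            # (this coupling makes the two populations co-evolve competitively).
            heuristics.sort(key=lambda h: sum(lower_objective(x, respond(h, x))
                                              for x in leaders))
            best_h = heuristics[0]
            # Leader fitness: evaluated through the best co-evolved heuristic
            # instead of an expensive exact lower-level solve.
            leaders.sort(key=lambda x: upper_objective(x, respond(best_h, x)))
            # Replace the worst half of each population with mutated elites.
            for i in range(5, 10):
                a, b = heuristics[i - 5]
                heuristics[i] = (a + rng.gauss(0, 0.1), b + rng.gauss(0, 0.1))
                leaders[i] = min(10.0, max(0.0, leaders[i - 5] + rng.gauss(0, 0.3)))

        print(round(leaders[0], 2), tuple(round(v, 2) for v in best_h))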