75 research outputs found

    Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.

    Get PDF
    Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to their theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed time budget. We follow this approach and present a fixed-budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed time budget. In particular, we analyze Manhattan and Euclidean TSP instances and the Randomized Local Search (RLS), (1+1) EA, and (1+λ) EA algorithms for the TSP in a smoothed complexity setting, and derive lower bounds on the expected fitness gain for a specified number of generations.
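    As a concrete point of reference for the kind of heuristic analyzed above, the sketch below (not taken from the paper) runs Randomized Local Search with 2-opt moves on a random Euclidean instance and reports the fitness gain, i.e. the reduction in tour length, achieved within a fixed budget of evaluations; instance size, budget, and seeds are illustrative assumptions.

```python
import math
import random

def tour_length(tour, pts):
    """Total Euclidean length of a closed tour."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def rls_2opt_gain(pts, budget, seed=0):
    """Randomized Local Search: one random 2-opt move per step, accepted if it
    does not worsen the tour; returns the fitness gain (reduction in tour
    length) achieved within the fixed budget of evaluations."""
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    start = best = tour_length(tour, pts)
    for _ in range(budget):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment (2-opt)
        cand_len = tour_length(cand, pts)
        if cand_len <= best:                                  # accept non-worsening moves
            tour, best = cand, cand_len
    return start - best

if __name__ == "__main__":
    rng = random.Random(1)
    cities = [(rng.random(), rng.random()) for _ in range(50)]
    print("fitness gain after 2000 evaluations:", rls_2opt_gain(cities, budget=2000))
```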

    Meta-raps: Parameter Setting And New Applications

    Get PDF
    Recently, meta-heuristics have become a popular solution methodology, in terms of both research and application, for solving combinatorial optimization problems. Meta-heuristic methods guide simple heuristics or priority rules designed to solve a particular problem. Meta-heuristics enhance these simple heuristics by using a higher-level strategy. The advantage of using meta-heuristics over conventional optimization methods is that they are able to find good (near-optimal) solutions within a reasonable computation time. Investigating this line of research is justified because, in most practical cases with medium- to large-scale problems, the use of meta-heuristics is necessary to find a solution in a reasonable time. The specific meta-heuristic studied in this research is Meta-RaPS (Meta-heuristic for Randomized Priority Search), developed by DePuy and Whitehouse in 2001. Meta-RaPS is a generic, high-level strategy used to modify greedy algorithms through the insertion of a random element (Moraga, 2002). To date, Meta-RaPS has been applied to different types of combinatorial optimization problems and has achieved solution performance comparable to other meta-heuristic techniques. The specific problem studied in this dissertation is the parameter setting of Meta-RaPS. The topic of parameter setting for meta-heuristics has not been extensively studied in the literature. Although the parameter setting method devised in this dissertation is used primarily on Meta-RaPS, it is applicable to any meta-heuristic's parameter setting problem. This dissertation not only enhances the power of Meta-RaPS through parameter tuning but also introduces a robust parameter selection technique with widespread utility for many meta-heuristics. Because the distribution of solution values generated by meta-heuristics for combinatorial optimization problems is not normal, current parameter setting techniques, which employ a parametric approach based on the assumption of normality, may not be appropriate. The proposed method is Non-parametric Based Genetic Algorithms (NPGA). Based on statistical tests, NPGA is able to enhance the solution quality of Meta-RaPS more than any other parameter setting procedure benchmarked in this research. Of all the methods studied, NPGA finds the best parameter settings for 38 of the 41 Early/Tardy Single Machine Scheduling with Common Due Date and Sequence-Dependent Setup Time (ETP) problems and 50 of the 54 0-1 Multidimensional Knapsack Problems (0-1 MKP). In addition to the parameter setting procedure, this dissertation provides two Meta-RaPS combinatorial optimization applications: the 0-1 MKP and the ETP. For the ETP, the Meta-RaPS application in this dissertation gives the best meta-heuristic solution performance reported so far in the literature on common ETP test sets. For the large ETP test set, Meta-RaPS provided better solution performance than Simulated Annealing (SA) for 55 of the 60 problems. For the small test set, in all four small problem sets, Meta-RaPS outperformed existing algorithms in terms of average percent deviation from the optimal solution value. For the 0-1 MKP, the Meta-RaPS application presented here performs better than the earlier Meta-RaPS application in the literature (Moraga, 2005), giving an average percent deviation of 0.75% from the best known solutions over the 270 0-1 MKP test problems.
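    To make the Meta-RaPS construction idea concrete, here is a minimal sketch that randomizes a greedy value-per-weight rule for a single-constraint knapsack; the priority percentage p and restriction percentage r follow the usual Meta-RaPS description, but the instance, parameter values, and function names are illustrative and not taken from the dissertation.

```python
import random

def metaraps_construct(values, weights, capacity, p=0.7, r=0.15, rng=random):
    """One Meta-RaPS style construction: greedy by value/weight ratio,
    randomized by occasionally picking from a restricted candidate list."""
    remaining = set(range(len(values)))
    load, solution = 0, []
    while True:
        feasible = [i for i in remaining if load + weights[i] <= capacity]
        if not feasible:
            break
        feasible.sort(key=lambda i: values[i] / weights[i], reverse=True)
        best_ratio = values[feasible[0]] / weights[feasible[0]]
        if rng.random() < p:                      # with probability p: pure greedy choice
            pick = feasible[0]
        else:                                     # otherwise: random pick within r% of best priority
            cl = [i for i in feasible
                  if values[i] / weights[i] >= (1 - r) * best_ratio]
            pick = rng.choice(cl)
        solution.append(pick)
        load += weights[pick]
        remaining.remove(pick)
    return solution, sum(values[i] for i in solution)

if __name__ == "__main__":
    rng = random.Random(42)
    vals = [rng.randint(10, 100) for _ in range(30)]
    wts = [rng.randint(5, 40) for _ in range(30)]
    best = max(metaraps_construct(vals, wts, capacity=200, rng=rng)[1] for _ in range(100))
    print("best value over 100 randomized constructions:", best)
```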

    Towards a Theory-Guided Benchmarking Suite for Discrete Black-Box Optimization Heuristics: Profiling (1+λ) EA Variants on OneMax and LeadingOnes

    Full text link
    Theoretical and empirical research on evolutionary computation methods complement each other by providing two fundamentally different approaches towards a better understanding of black-box optimization heuristics. In discrete optimization, both streams developed rather independently of each other, but we observe today an increasing interest in reconciling these two sub-branches. In continuous optimization, the COCO (COmparing Continuous Optimisers) benchmarking suite has established itself as an important platform that theoreticians and practitioners use to exchange research ideas and questions. No widely accepted equivalent exists in the research domain of discrete black-box optimization. Marking an important step towards filling this gap, we adjust the COCO software to pseudo-Boolean optimization problems, and obtain from this a benchmarking environment that allows a fine-grained empirical analysis of discrete black-box heuristics. In this documentation we demonstrate how this test bed can be used to profile the performance of evolutionary algorithms. More concretely, we study the optimization behavior of several (1+λ) EA variants on the two benchmark problems OneMax and LeadingOnes. This comparison motivates a refined analysis for the optimization time of the (1+λ) EA on LeadingOnes.
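    The sketch below is not the adjusted COCO/benchmarking software described above; it is only a plain implementation of the (1+λ) EA with standard bit mutation on OneMax, included to make the profiled algorithm and benchmark concrete (problem size, λ, and budget are arbitrary choices).

```python
import random

def one_max(x):
    """OneMax: number of ones in the bit string."""
    return sum(x)

def one_plus_lambda_ea(n, lam, f, max_evals, rng=random):
    """(1+lambda) EA with standard bit mutation (rate 1/n) and elitist
    selection; returns the number of evaluations spent until the optimum."""
    parent = [rng.randint(0, 1) for _ in range(n)]
    fp = f(parent)
    evals = 1
    while fp < n and evals < max_evals:
        offspring = []
        for _ in range(lam):
            child = [1 - b if rng.random() < 1.0 / n else b for b in parent]
            offspring.append((f(child), child))
            evals += 1
        fb, best = max(offspring, key=lambda t: t[0])
        if fb >= fp:                    # keep the best of parent and offspring
            parent, fp = best, fb
    return evals

if __name__ == "__main__":
    rng = random.Random(3)
    print("evaluations to optimum:",
          one_plus_lambda_ea(100, lam=8, f=one_max, max_evals=10**6, rng=rng))
```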

    Improved Fixed-Budget Results via Drift Analysis

    Full text link
    Fixed-budget theory is concerned with computing or bounding the fitness value achievable by randomized search heuristics within a given budget of fitness function evaluations. Despite recent progress in fixed-budget theory, there is a lack of general tools to derive such results. We transfer drift theory, the key tool to derive expected optimization times, to the fixed-budget perspective. A first and easy-to-use statement concerned with iterating drift in so-called greed-admitting scenarios immediately translates into bounds on the expected function value. Afterwards, we consider a more general tool based on the well-known variable drift theorem. Applications of this technique to the LeadingOnes benchmark function yield statements that are more precise than the previous state of the art. Comment: 25 pages. An extended abstract of this paper will be published in the proceedings of PPSN 2020.
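    The results above are analytical; as an empirical companion, the sketch below merely estimates the quantity fixed-budget theory bounds, namely the expected LeadingOnes value reached by the (1+1) EA after a fixed budget of evaluations, by averaging over independent runs (problem size, budget, and number of runs are arbitrary choices, not taken from the paper).

```python
import random

def leading_ones(x):
    """LeadingOnes: length of the longest prefix of ones."""
    k = 0
    for b in x:
        if b == 0:
            break
        k += 1
    return k

def fitness_after_budget(n, budget, runs=100, seed=0):
    """Empirical mean LeadingOnes value reached by the (1+1) EA
    after `budget` fitness evaluations."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        x = [rng.randint(0, 1) for _ in range(n)]
        fx = leading_ones(x)
        for _ in range(budget - 1):
            y = [1 - b if rng.random() < 1.0 / n else b for b in x]  # standard bit mutation
            fy = leading_ones(y)
            if fy >= fx:                                             # elitist acceptance
                x, fx = y, fy
        total += fx
    return total / runs

if __name__ == "__main__":
    print("mean LeadingOnes value after 2000 evaluations (n=50):",
          fitness_after_budget(n=50, budget=2000))
```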

    Quantum annealing for vehicle routing and scheduling problems

    Get PDF
    Metaheuristic approaches to solving combinatorial optimization problems have many attractions. They sidestep the issue of combinatorial explosion; they return good results; they are often conceptually simple and straightforward to implement. There are also shortcomings. Optimal solutions are not guaranteed; choosing the metaheuristic which best fits a problem is a matter of experimentation; and conceptual differences between metaheuristics make absolute comparisons of performance difficult. There is also the difficulty of configuring the algorithm: the process of identifying precise values for the parameters which control the optimization process. Quantum annealing is a metaheuristic which is the quantum counterpart of the well-known classical Simulated Annealing algorithm for combinatorial optimization problems. This research investigates the application of quantum annealing to the Vehicle Routing Problem, a difficult problem of practical significance within industries such as logistics and workforce scheduling. The work devises spin encoding schemes for routing and scheduling problem domains, enabling an effective quantum annealing algorithm which locates new solutions to widely used benchmarks. The performance of the metaheuristic is further improved by the development of an enhanced tuning approach using fitness clouds as behaviour models. The algorithm is shown to be further enhanced by taking advantage of multiprocessor environments, using threading techniques to parallelize the optimization workload. The work also shows quantum annealing applied successfully in an industrial setting to generate solutions to complex scheduling problems, results which created extra savings over an incumbent optimization technique. Components of the intellectual property rendered in this latter effort went on to secure a patent-protected status.
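    As an illustration of what a binary/spin encoding of a routing problem can look like, the sketch below builds a textbook one-hot QUBO for a small TSP instance (x[v, t] = 1 iff city v occupies tour position t); it is not the encoding scheme devised in this thesis, and the penalty weight is an illustrative assumption.

```python
import itertools
import math

def tsp_qubo(points, penalty):
    """Textbook one-hot QUBO for the TSP: binary x[v, t] = 1 iff city v
    occupies tour position t.  Returns {(i, j): coefficient} with i <= j;
    constant terms of the penalty expansion are dropped."""
    n = len(points)
    idx = lambda v, t: v * n + t
    Q = {}

    def add(i, j, w):
        key = (min(i, j), max(i, j))
        Q[key] = Q.get(key, 0.0) + w

    # One-hot penalties: (sum of group - 1)^2 contributes -x on the diagonal
    # and +2xy for each pair inside the group.
    for v in range(n):                       # each city in exactly one position
        for t in range(n):
            add(idx(v, t), idx(v, t), -penalty)
        for t1, t2 in itertools.combinations(range(n), 2):
            add(idx(v, t1), idx(v, t2), 2 * penalty)
    for t in range(n):                       # each position holds exactly one city
        for v in range(n):
            add(idx(v, t), idx(v, t), -penalty)
        for v1, v2 in itertools.combinations(range(n), 2):
            add(idx(v1, t), idx(v2, t), 2 * penalty)

    # Tour cost: distance between the cities placed in consecutive positions.
    for u in range(n):
        for v in range(n):
            if u != v:
                d = math.dist(points[u], points[v])
                for t in range(n):
                    add(idx(u, t), idx(v, (t + 1) % n), d)
    return Q

if __name__ == "__main__":
    pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
    Q = tsp_qubo(pts, penalty=10.0)
    print(len(Q), "non-zero QUBO terms over", len(pts) ** 2, "binary variables")
```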

    Preventing premature convergence and proving the optimality in evolutionary algorithms

    Get PDF
    http://ea2013.inria.fr//proceedings.pdf
    Evolutionary Algorithms (EA) usually carry out an efficient exploration of the search-space, but often get trapped in local minima and do not prove the optimality of the solution. Interval-based techniques, on the other hand, yield a numerical proof of optimality of the solution. However, they may fail to converge within a reasonable time due to their inability to quickly compute a good approximation of the global minimum and their exponential complexity. The contribution of this paper is a hybrid algorithm called Charibde in which a particular EA, Differential Evolution, cooperates with a Branch and Bound algorithm endowed with interval propagation techniques. It prevents premature convergence toward local optima and outperforms both deterministic and stochastic existing approaches. We demonstrate its efficiency on a benchmark of highly multimodal problems, for which we provide previously unknown global minima and certification of optimality.
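    Charibde hybridizes Differential Evolution with an interval branch-and-bound; the sketch below shows only the evolutionary half, a standard DE/rand/1/bin loop on the Rastrigin function, to make the cooperating component concrete (population size, F, CR, and the test function are illustrative choices, not Charibde's settings).

```python
import math
import random

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9,
                           generations=200, rng=random):
    """Standard DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, then greedy one-to-one replacement."""
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)          # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                          # greedy replacement
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

if __name__ == "__main__":
    rastrigin = lambda x: 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
    x, fx = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * 5,
                                   rng=random.Random(7))
    print("best value found:", round(fx, 4))
```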

    Complexity Theory for Discrete Black-Box Optimization Heuristics

    Full text link
    A predominant topic in the theory of evolutionary algorithms and, more generally, the theory of randomized black-box optimization techniques is running time analysis. Running time analysis aims at understanding the performance of a given heuristic on a given problem by bounding the number of function evaluations that are needed by the heuristic to identify a solution of a desired quality. As in general algorithms theory, this running time perspective is most useful when it is complemented by a meaningful complexity theory that studies the limits of algorithmic solutions. In the context of discrete black-box optimization, several black-box complexity models have been developed to analyze the best possible performance that a black-box optimization algorithm can achieve on a given problem. The models differ in the classes of algorithms to which these lower bounds apply. This way, black-box complexity contributes to a better understanding of how certain algorithmic choices (such as the amount of memory used by a heuristic, its selective pressure, or properties of the strategies that it uses to create new solution candidates) influence performance. In this chapter we review the different black-box complexity models that have been proposed in the literature, survey the bounds that have been obtained for these models, and discuss how the interplay of running time analysis and black-box complexity can inspire new algorithmic solutions to well-researched problems in evolutionary computation. We also discuss in this chapter several interesting open questions for future work. Comment: This survey article is to appear (in a slightly modified form) in the book "Theory of Randomized Search Heuristics in Discrete Search Spaces", which will be published by Springer in 2018. The book is edited by Benjamin Doerr and Frank Neumann. Missing numbers of pointers to other chapters of this book will be added as soon as possible.
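    Black-box complexity measures performance purely in oracle queries; the following sketch (illustrative, not from the chapter) wraps an objective in a query-counting oracle and counts the queries that plain random search spends on OneMax, making the "cost = number of function evaluations" viewpoint concrete. The class and function names are assumptions introduced here.

```python
import random

class BlackBoxOracle:
    """Wrap an objective so the algorithm sees only query results,
    while the complexity model sees only the number of queries."""
    def __init__(self, f):
        self._f = f
        self.queries = 0

    def __call__(self, x):
        self.queries += 1
        return self._f(x)

def random_search(oracle, n, rng, max_queries=10**6):
    """Memory-less black-box algorithm: sample uniformly until the optimum is hit."""
    best = -1
    while best < n and oracle.queries < max_queries:
        x = [rng.randint(0, 1) for _ in range(n)]
        best = max(best, oracle(x))
    return oracle.queries

if __name__ == "__main__":
    rng = random.Random(0)
    oracle = BlackBoxOracle(sum)          # OneMax: fitness = number of ones
    print("queries used by random search on OneMax, n=16:",
          random_search(oracle, n=16, rng=rng))
```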

    An analysis of combinatorial search spaces for a class of NP-hard problems

    Get PDF
    Given a finite but very large set of states X and a real-valued objective function ƒ defined on X, combinatorial optimization refers to the problem of finding elements of X that maximize (or minimize) ƒ. Many combinatorial search algorithms employ some perturbation operator to hill-climb in the search space. Such perturbative local search algorithms are state of the art for many classes of NP-hard combinatorial optimization problems such as maximum k-satisfiability, scheduling, and problems of graph theory. In this thesis we analyze combinatorial search spaces by expanding the objective function into a (sparse) series of basis functions. While most analyses of the distribution of function values in the search space must rely on empirical sampling, the basis function expansion allows us to directly study the distribution of function values across regions of states for combinatorial problems without the need for sampling. We concentrate on objective functions that can be expressed as bounded pseudo-Boolean functions, which are NP-hard to solve in general. We use the basis expansion to construct a polynomial-time algorithm for exactly computing constant-degree moments of the objective function ƒ over arbitrarily large regions of the search space. On functions with restricted codomains, these moments are related to the true distribution by a system of linear equations. Given low moments supplied by our algorithm, we construct bounds on the true distribution of ƒ over regions of the space using a linear programming approach. A straightforward relaxation allows us to efficiently approximate the distribution and hence quickly estimate the count of states in a given region that have certain values under the objective function. The analysis is also useful for characterizing properties of specific combinatorial problems. For instance, by connecting search space analysis to the theory of inapproximability, we prove that the bound specified by Grover's maximum principle for the Max-Ek-Lin-2 problem is sharp. Moreover, we use the framework to prove that certain configurations are forbidden in regions of the Max-3-Sat search space, supplying the first theoretical confirmation of empirical results by others. Finally, we show that theoretical results can be used to drive the design of algorithms in a principled manner by using the search space analysis developed in this thesis in algorithmic applications. First, information obtained from our moment-retrieving algorithm can be used to direct a hill-climbing search across plateaus in the Max-k-Sat search space. Second, the analysis can be used to control the mutation rate of a (1+1) evolutionary algorithm on bounded pseudo-Boolean functions so that the expected fitness of the offspring of each search point is maximized. For these applications, knowledge of the search space structure supplied by the analysis translates to significant gains in the performance of search.
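    As a small illustration of region-wise moment computation, the sketch below computes the exact first moment (mean) of a k-bounded pseudo-Boolean function over a subcube in which some bits are clamped, assuming the function is given as a sum of small subfunctions rather than via its basis expansion; the representation and names are illustrative simplifications, not the thesis's algorithm.

```python
import itertools

def region_mean(subfunctions, fixed):
    """Exact mean of f(x) = sum_j g_j(x restricted to S_j) over the subcube
    in which the bits listed in `fixed` are clamped and all remaining bits
    are uniform.  Each subfunction is (variable_indices, table); the function
    is k-bounded, so each table has at most 2**k entries."""
    total = 0.0
    for variables, table in subfunctions:
        free = [v for v in variables if v not in fixed]
        acc = 0.0
        for assignment in itertools.product((0, 1), repeat=len(free)):
            setting = dict(zip(free, assignment))
            setting.update(fixed)                     # clamped bits override
            acc += table[tuple(setting[v] for v in variables)]
        total += acc / (2 ** len(free))               # uniform average over the free bits
    return total

if __name__ == "__main__":
    # Toy 2-bounded function on 3 bits: f(x) = g0(x0, x1) + g1(x1, x2).
    g0 = {(0, 0): 1, (0, 1): 4, (1, 0): 0, (1, 1): 3}
    g1 = {(0, 0): 2, (0, 1): 2, (1, 0): 5, (1, 1): 1}
    f = [((0, 1), g0), ((1, 2), g1)]
    # Exact mean of f over the region where x0 = 1 (x1, x2 uniform):
    print("region mean:", region_mean(f, fixed={0: 1}))   # 1.5 + 2.5 = 4.0
```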