21,298 research outputs found

    Combination of Evolutionary Algorithms with Experimental Design, Traditional Optimization and Machine Learning

    Evolutionary algorithms alone cannot solve optimization problems very efficiently, since these algorithms make many random (not very rational) decisions. Combining evolutionary algorithms with other techniques has been proven to be an efficient optimization methodology. In this talk, I will explain the basic ideas of our three algorithms along this line: (1) the orthogonal genetic algorithm, which treats crossover/mutation as an experimental design problem; (2) the multiobjective evolutionary algorithm based on decomposition (MOEA/D), which uses decomposition techniques from traditional mathematical programming within a multiobjective evolutionary algorithm; and (3) the regularity model-based multiobjective estimation of distribution algorithm (RM-MEDA), which uses the regularity property of Pareto sets and machine learning methods to improve multiobjective evolutionary algorithms.
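    The decomposition idea behind MOEA/D can be made concrete with a small sketch. The snippet below is a minimal illustration, not the authors' implementation: it shows the weighted Tchebycheff scalarization, which turns a multiobjective problem into a set of scalar subproblems defined by weight vectors. The two-objective toy problem, the weight vectors and the candidate set are assumptions made only for illustration; MOEA/D additionally optimizes the subproblems collaboratively using neighboring weight vectors, which is omitted here.

```python
import numpy as np

def tchebycheff(f_vals, weights, z_star):
    """Weighted Tchebycheff scalarization used by decomposition-based MOEAs.

    f_vals : objective vector of a candidate solution
    weights: weight vector defining one scalar subproblem
    z_star : reference (ideal) point, the best value seen per objective
    """
    return np.max(weights * np.abs(f_vals - z_star))

# Hypothetical two-objective toy problem, for illustration only.
def evaluate(x):
    return np.array([x[0] ** 2, (x[0] - 2.0) ** 2])

# Evenly spread weight vectors, each defining one scalar subproblem.
n_sub = 5
weights = np.array([[i / (n_sub - 1), 1.0 - i / (n_sub - 1)] for i in range(n_sub)])
weights = np.clip(weights, 1e-6, None)  # avoid exactly zero weights

# Candidate solutions standing in for a population.
candidates = [np.array([v]) for v in np.linspace(0.0, 2.0, 21)]
objs = np.array([evaluate(x) for x in candidates])
z_star = objs.min(axis=0)  # ideal point estimated from the evaluated candidates

# Each subproblem keeps the candidate that minimizes its scalarized cost.
for w in weights:
    best = min(candidates, key=lambda x: tchebycheff(evaluate(x), w, z_star))
    print("weights", w, "-> solution", best, "objectives", evaluate(best))
```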

    Towards efficient multiobjective optimization: multiobjective statistical criterions

    The use of Surrogate Based Optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives leading to a set of equivalent solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of making those decisions upfront). Most of the work in multiobjective optimization is focused on MultiObjective Evolutionary Algorithms (MOEAs). While MOEAs are well suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as MultiObjective Surrogate-Based Optimization (MOSBO), may prove to be even more worthwhile than SBO methods to expedite the optimization process. In this paper, the authors propose the Efficient Multiobjective Optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the expected improvement and probability of improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II and SPEA2 multiobjective optimization methods, with promising results.
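    The multiobjective criteria used by EMO build on the classic single-objective expected improvement for Kriging models. The sketch below shows that single-objective building block under the usual assumption of a Gaussian predictive distribution; it is an illustration, not the EMO algorithm itself, and the numeric values are made up.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Classic single-objective expected improvement for minimization.

    mu, sigma : Kriging/GP predictive mean and standard deviation at a candidate point
    f_best    : best (lowest) objective value observed so far
    """
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# EI favours candidates with a low predicted mean and/or high predictive uncertainty.
print(expected_improvement(mu=0.8, sigma=0.3, f_best=1.0))
print(expected_improvement(mu=1.2, sigma=0.5, f_best=1.0))
```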

    Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization

    The use of surrogate based optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives leading to a set of competitive solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of weighting and aggregating the costs upfront). Most of the work in multiobjective optimization is focused on multiobjective evolutionary algorithms (MOEAs). While MOEAs are well suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as multiobjective surrogate-based optimization, may prove to be even more worthwhile than SBO methods to expedite the optimization of computationally expensive systems. In this paper, the authors propose the efficient multiobjective optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the probability of improvement and expected improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II, SPEA2 and SMS-EMOA multiobjective optimization methods.
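    For contrast with the fast analytical formulas developed in such work, a brute-force Monte Carlo estimate of a multiobjective probability of improvement is sketched below, assuming independent Gaussian Kriging predictions per objective and counting a sample as an improvement when no point of the current Pareto front dominates it. The function names and example data are illustrative assumptions, not the paper's method.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def mc_probability_of_improvement(mu, sigma, pareto_front, n_samples=10000, rng=None):
    """Monte Carlo estimate of a multiobjective probability of improvement.

    mu, sigma    : per-objective predictive means and standard deviations (1-D arrays)
    pareto_front : array of currently non-dominated objective vectors
    A sample counts as an improvement if no point on the current front dominates it.
    """
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    improved = [not any(dominates(p, s) for p in pareto_front) for s in samples]
    return np.mean(improved)

# Illustrative data: a small 2-objective Pareto front and one candidate prediction.
front = np.array([[0.2, 0.9], [0.5, 0.5], [0.9, 0.2]])
print(mc_probability_of_improvement(np.array([0.4, 0.4]), np.array([0.1, 0.1]), front))
```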

    The Hybridization of Branch and Bound with Metaheuristics for Nonconvex Multiobjective Optimization

    A hybrid framework combining the branch and bound method with multiobjective evolutionary algorithms is proposed for nonconvex multiobjective optimization. The hybridization exploits the complementary character of the two optimization strategies. A multiobjective evolutionary algorithm is used to induce tight lower and upper bounds during the branch and bound procedure. Tight bounds derived in this way can reduce the number of subproblems that have to be solved. The branch and bound method guarantees the global convergence of the framework and improves the search capability of the multiobjective evolutionary algorithm. An implementation of the hybrid framework using NSGA-II and MOEA/D-DE as the multiobjective evolutionary algorithms is presented. Numerical experiments verify that the hybrid algorithms benefit from the synergy of the branch and bound method and the multiobjective evolutionary algorithms.
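    A skeleton of such a hybrid loop might look as follows. This is only a sketch of the general idea under stated assumptions: lower_bound, sample_upper_bounds and split are hypothetical placeholders for components that the paper realizes with relaxations and the embedded MOEA (NSGA-II or MOEA/D-DE), and the pruning rule discards a box as soon as an archived upper-bound point dominates the box's lower bound.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def branch_and_bound(root_box, lower_bound, sample_upper_bounds, split, max_iter=1000):
    """Skeleton of a multiobjective branch and bound with MOEA-assisted bounds.

    root_box                 : initial box (e.g., lower/upper variable bounds)
    lower_bound(box)         : objective vector that lower-bounds every point in the box
    sample_upper_bounds(box) : objective vectors of feasible points found inside the box
                               (in the hybrid framework, this is where the MOEA contributes)
    split(box)               : partitions a box into sub-boxes
    """
    queue, archive = [root_box], []   # archive holds non-dominated upper-bound vectors
    for _ in range(max_iter):
        if not queue:
            break
        box = queue.pop()
        lb = lower_bound(box)
        # Prune: if an archived point dominates the box's lower bound, then it also
        # dominates every attainable point in the box, so the box can be discarded.
        if any(dominates(u, lb) for u in archive):
            continue
        for u in sample_upper_bounds(box):
            if not any(dominates(a, u) for a in archive):
                archive = [a for a in archive if not dominates(u, a)] + [u]
        queue.extend(split(box))
    return archive
```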

    Decomposition-Based Multiobjective Optimization for Constrained Evolutionary Optimization

    Pareto dominance-based multiobjective optimization has been successfully applied to constrained evolutionary optimization during the last two decades. However, decomposition-based multiobjective optimization, another well-known multiobjective optimization framework, has not received sufficient attention in constrained evolutionary optimization. In this paper, we make use of decomposition-based multiobjective optimization to solve constrained optimization problems (COPs). In our method, a COP is first transformed into a biobjective optimization problem (BOP). Afterward, the transformed BOP is decomposed into a number of scalar optimization subproblems. An offspring is generated for each subproblem by differential evolution, and the weighted sum method is then utilized for selection. In addition, to make decomposition-based multiobjective optimization suit the characteristics of constrained evolutionary optimization, the weight vectors are carefully adjusted. Moreover, for some extremely complicated COPs, a restart strategy is introduced to help the population escape a local optimum in the infeasible region. Extensive experiments on three sets of benchmark test functions, namely 24 test functions from IEEE CEC2006, 36 test functions from IEEE CEC2010, and 56 test functions from IEEE CEC2017, demonstrate that the proposed method shows better or at least competitive performance compared with other state-of-the-art methods.
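    A minimal sketch of the transformation described here is given below: the original objective and the overall degree of constraint violation form a biobjective vector, and a weight vector turns the parent/offspring comparison into a weighted-sum selection. The toy problem, constraint and weight values are assumptions for illustration, and the sketch omits the differential evolution operator, the weight adjustment and the restart mechanism.

```python
import numpy as np

def constraint_violation(x, inequality_constraints):
    """Overall degree of constraint violation: sum of positive parts of g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in inequality_constraints)

def biobjective(x, f, inequality_constraints):
    """Transform a constrained problem into the biobjective vector (f(x), G(x))."""
    return np.array([f(x), constraint_violation(x, inequality_constraints)])

def weighted_sum_select(parent, offspring, weight, f, cons):
    """Weighted-sum selection between a parent and its offspring on one subproblem."""
    fp, fo = biobjective(parent, f, cons), biobjective(offspring, f, cons)
    return offspring if np.dot(weight, fo) <= np.dot(weight, fp) else parent

# Illustrative problem (assumption): minimize f subject to g(x) <= 0.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2
cons = [lambda x: x[0] + x[1] - 1.0]          # feasible region: x0 + x1 <= 1

parent = np.array([0.9, 0.9])
offspring = np.array([0.4, 0.5])
weight = np.array([0.5, 0.5])                  # one of the (adjusted) weight vectors
print(weighted_sum_select(parent, offspring, weight, f, cons))
```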

    A Unified Model for Evolutionary Multiobjective Optimization and its Implementation in a General Purpose Software Framework: ParadisEO-MOEO

    This paper gives a concise overview of evolutionary algorithms for multiobjective optimization. A substantial number of evolutionary computation methods for multiobjective problem solving have been proposed so far, and an attempt to unify existing approaches is presented here. Based on a fine-grained decomposition that follows the main issues of fitness assignment, diversity preservation and elitism, a conceptual global model is proposed and validated by regarding a number of state-of-the-art algorithms as simple variants of the same structure. The presented model is then incorporated into a general-purpose software framework dedicated to the design and implementation of evolutionary multiobjective optimization techniques: ParadisEO-MOEO. This package has proven its validity and flexibility by enabling the resolution of many hard, real-world multiobjective optimization problems.
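    The fine-grained decomposition into fitness assignment, diversity preservation and elitism can be sketched as an evolutionary loop parameterized by interchangeable components. The snippet below is an illustrative Python rendering of that conceptual model, not the ParadisEO-MOEO C++ API; different state-of-the-art algorithms would correspond to different choices of the plugged-in callables (e.g., dominance ranking plus crowding distance, or strength-based fitness plus nearest-neighbor density).

```python
from dataclasses import dataclass, field
from typing import Callable, List, Sequence

@dataclass
class UnifiedMOEA:
    """Conceptual sketch of a unified multiobjective EA: an evolutionary loop
    parameterized by fitness-assignment, diversity-preservation and elitism
    components. Illustrative Python only, not the ParadisEO-MOEO C++ API."""
    evaluate: Callable[[object], Sequence[float]]      # objective evaluation
    assign_fitness: Callable[[List], List[float]]      # e.g. dominance rank or indicator value
    assign_diversity: Callable[[List], List[float]]    # e.g. crowding distance or k-NN density
    update_archive: Callable[[List, List], List]       # elitism: external archive update
    vary: Callable[[List], List]                       # selection + crossover + mutation
    archive: List = field(default_factory=list)

    def step(self, population: List) -> List:
        objs = [self.evaluate(ind) for ind in population]
        fitness = self.assign_fitness(objs)
        diversity = self.assign_diversity(objs)
        self.archive = self.update_archive(self.archive, list(zip(population, objs)))
        # Rank by fitness first (lower is better), break ties by higher diversity,
        # then let the variation operators produce the next population.
        order = sorted(range(len(population)), key=lambda i: (fitness[i], -diversity[i]))
        ranked = [population[i] for i in order]
        return self.vary(ranked)
```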

    Stochastic Fractal Based Multiobjective Fruit Fly Optimization

    The fruit fly optimization algorithm (FOA) is a global optimization algorithm inspired by the foraging behavior of a fruit fly swarm. In this study, a novel stochastic fractal model based fruit fly optimization algorithm is proposed for multiobjective optimization. A food source generating method based on a stochastic fractal with an adaptive parameter updating strategy is introduced to improve the convergence performance of the fruit fly optimization algorithm. To deal with multiobjective optimization problems, the Pareto dominance concept is integrated into the selection process of fruit fly optimization, and a novel multiobjective fruit fly optimization algorithm is then developed. As in most other multiobjective evolutionary algorithms (MOEAs), an external elitist archive is utilized to preserve the nondominated solutions found so far during the evolution, and a normalized nearest-neighbor-distance based density estimation strategy is adopted to maintain the diversity of the external elitist archive. Eighteen benchmarks are used to test the performance of the stochastic fractal based multiobjective fruit fly optimization algorithm (SFMOFOA). Numerical results show that the SFMOFOA converges well to the Pareto fronts of the test benchmarks with good distributions. Compared with four state-of-the-art methods, namely the non-dominated sorting genetic algorithm (NSGA-II), the strength Pareto evolutionary algorithm (SPEA2), multiobjective particle swarm optimization (MOPSO), and multiobjective self-adaptive differential evolution (MOSADE), the proposed SFMOFOA shows better or competitive multiobjective optimization performance.
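    The external elitist archive described here can be sketched as follows: candidates enter only if they are non-dominated, members they dominate are purged, and when the archive exceeds its capacity the most crowded member (the one with the smallest normalized nearest-neighbor distance) is removed. This is a minimal illustration of the general mechanism, assuming a fixed capacity and Euclidean distances in normalized objective space, not SFMOFOA's exact strategy.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def nearest_neighbor_distances(objs):
    """Normalized nearest-neighbor distance of each archive member in objective space."""
    objs = np.asarray(objs, dtype=float)
    span = np.ptp(objs, axis=0)
    span[span == 0] = 1.0                       # avoid division by zero on flat objectives
    norm = objs / span
    d = np.linalg.norm(norm[:, None, :] - norm[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

def update_archive(archive, candidate, capacity):
    """Insert a candidate objective vector into an elitist archive of bounded size."""
    if any(dominates(a, candidate) for a in archive):
        return archive                          # candidate is dominated: reject it
    archive = [a for a in archive if not dominates(candidate, a)] + [candidate]
    while len(archive) > capacity:
        crowded = int(np.argmin(nearest_neighbor_distances(archive)))
        archive.pop(crowded)                    # drop the most crowded member
    return archive

# Illustrative usage with 2-objective vectors.
arch = []
for point in [np.array([1.0, 3.0]), np.array([2.0, 2.0]),
              np.array([3.0, 1.0]), np.array([2.5, 2.5])]:
    arch = update_archive(arch, point, capacity=3)
print(arch)
```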
