
    Evolutionary Algorithms for Multiobjective Optimization

    Many real-world problems involve two types of problem difficulty: i) multiple, conflicting objectives and ii) a highly complex search space. On the one hand, instead of a single optimal solution, competing goals give rise to a set of compromise solutions, generally denoted as Pareto-optimal. In the absence of preference information, none of the corresponding trade-offs can be said to be better than the others. On the other hand, the search space can be too large and too complex to be handled by exact methods. Thus, efficient optimization strategies are required that are able to deal with both difficulties. Evolutionary algorithms possess several characteristics that are desirable for this kind of problem and make them preferable to classical optimization methods. In fact, various evolutionary approaches to multiobjective optimization have been proposed since 1985, capable of searching for multiple Pareto-optimal solutions concurrently in a single simulation run. However, in spite of this variety, there is a lack of extensive comparative studies in the literature. Therefore, it has remained open up to now
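
    Since Pareto dominance underlies every entry in this list, a minimal Python sketch of how a nondominated (Pareto-optimal) set is filtered from a finite set of objective vectors may help; it is illustrative only and not taken from the paper above.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated(points):
    """Return the nondominated (Pareto-optimal) subset of a set of objective vectors."""
    points = np.asarray(points, dtype=float)
    keep = [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
    return points[keep]

# Example: the third candidate is dominated by the second and is filtered out.
print(nondominated([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0]]))
```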

    Scalarizing Functions in Bayesian Multiobjective Optimization

    Scalarizing functions have been widely used to convert a multiobjective optimization problem into a single-objective optimization problem. However, their use in solving (computationally) expensive multi- and many-objective optimization problems in Bayesian multiobjective optimization is scarce. Scalarizing functions can play a crucial role in the quality and number of evaluations required during the optimization. In this article, we study and review 15 different scalarizing functions in the framework of Bayesian multiobjective optimization and build Gaussian process models (as surrogates, metamodels, or emulators) on them. We use expected improvement as the infill criterion (or acquisition function) to update the models. In particular, we compare different scalarizing functions and analyze their performance on several benchmark problems with different numbers of objectives to be optimized. The review and experiments provide useful insights for selecting a scalarizing function when using a Bayesian multiobjective optimization method.
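
    The 15 scalarizing functions studied in the article are not reproduced here; as an illustration of the general recipe, the sketch below (Python, with hypothetical names) scalarizes the evaluated objective vectors with an augmented Chebyshev function, fits a Gaussian process surrogate, and selects the next point by expected improvement.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def chebyshev(F, weights, ideal, rho=0.05):
    """Augmented Chebyshev scalarization of objective vectors F (minimization)."""
    d = weights * (F - ideal)
    return d.max(axis=1) + rho * d.sum(axis=1)

def expected_improvement(mu, sigma, best):
    """EI of Gaussian predictions (mu, sigma) over the best scalarized value so far."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def next_point(X, F, candidates, weights, ideal):
    """One Bayesian iteration: scalarize, fit a GP surrogate, maximize the infill criterion."""
    y = chebyshev(F, weights, ideal)
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
```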

    Towards efficient multiobjective optimization: multiobjective statistical criterions

    The use of surrogate-based optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives leading to a set of equivalent solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of making those decisions upfront). Most of the work in multiobjective optimization is focused on multiobjective evolutionary algorithms (MOEAs). While MOEAs are well suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as multiobjective surrogate-based optimization (MOSBO), may prove to be even more worthwhile than SBO methods to expedite the optimization process. In this paper, the authors propose the efficient multiobjective optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the expected improvement and probability of improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II and SPEA2 multiobjective optimization methods, with promising results.
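
    The exact EMO infill criteria are defined in the paper; the sketch below only illustrates the generic MOSBO loop the abstract describes, assuming one Gaussian-process (Kriging-type) surrogate per objective and a user-supplied infill function (all names are illustrative).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def mosbo_loop(objective, X, F, sample_candidates, infill, budget):
    """Generic surrogate-based multiobjective loop (a sketch, not the authors' exact EMO).

    objective(x) -> objective vector; X, F -> evaluated designs and their objectives;
    sample_candidates(n) -> candidate designs; infill(mu, sd, F) -> acquisition scores.
    """
    for _ in range(budget):
        # One Kriging-type surrogate per objective.
        models = [GaussianProcessRegressor(normalize_y=True).fit(X, F[:, m])
                  for m in range(F.shape[1])]
        cand = sample_candidates(1000)
        stats = [m.predict(cand, return_std=True) for m in models]
        mu = np.column_stack([s[0] for s in stats])
        sd = np.column_stack([s[1] for s in stats])
        x_new = cand[np.argmax(infill(mu, sd, F))]   # most promising unevaluated design
        X = np.vstack([X, x_new])
        F = np.vstack([F, objective(x_new)])
    return X, F
```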

    Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization

    The use of surrogate-based optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives leading to a set of competitive solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of weighting and aggregating the costs upfront). Most of the work in multiobjective optimization is focused on multiobjective evolutionary algorithms (MOEAs). While MOEAs are well suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as multiobjective surrogate-based optimization, may prove to be even more worthwhile than SBO methods to expedite the optimization of computationally expensive systems. In this paper, the authors propose the efficient multiobjective optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the probability of improvement and expected improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II, SPEA2, and SMS-EMOA multiobjective optimization methods.
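
    The paper's contribution is a fast exact calculation of these criteria; as a point of reference only, the quantity itself (the multiobjective probability of improvement, i.e. the probability that a candidate's predicted objectives fall in the region not dominated by the current Pareto front) can be sketched with a plain Monte Carlo estimate, assuming independent Gaussian predictions per objective (illustrative code, not the authors' algorithm).

```python
import numpy as np

def dominated_by_front(samples, front):
    """For each sampled objective vector, test whether some front point dominates it."""
    le = front[None, :, :] <= samples[:, None, :]     # (n_samples, n_front, n_obj)
    lt = front[None, :, :] <  samples[:, None, :]
    return (le.all(axis=2) & lt.any(axis=2)).any(axis=1)

def mc_probability_of_improvement(mu, sigma, front, n_samples=10_000, seed=None):
    """Monte Carlo estimate of the multiobjective probability of improvement."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    return 1.0 - dominated_by_front(samples, np.asarray(front, dtype=float)).mean()
```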

    A modified migration model biogeography evolutionary approach for electromagnetic device multiobjective optimization

    In this paper, we present an efficient and robust algorithm for multiobjective optimization of electromagnetic devices. The recently developed biogeography-based optimization (BBO) is modified by adapting its migration model function so as to improve its convergence. The proposed Modified Migration Model biogeography-based optimization (MMMBBO) algorithm is applied to the optimal geometrical design of an electromagnetic actuator. This multiobjective optimization problem is solved by maximizing the output force as well as minimizing the total weight of the actuator. The comparison between the optimization results using BBO and MMMBBO shows the superiority of the proposed approach.
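
    The abstract does not spell out the modified migration model, so the sketch below only contrasts the original linear BBO migration rates with a sinusoidal model, a commonly used nonlinear alternative, to show the kind of change being referred to (illustrative, not the authors' exact model).

```python
import numpy as np

def migration_rates(rank, n, model="sinusoidal", I=1.0, E=1.0):
    """Immigration (lam) and emigration (mu) rates for habitats ranked 0 (worst) .. n (best)."""
    k = np.asarray(rank, dtype=float)
    if model == "linear":                      # original BBO migration model
        lam = I * (1.0 - k / n)                # poor habitats accept more immigrants
        mu = E * k / n                         # good habitats share more features
    else:                                      # sinusoidal alternative (illustrative)
        lam = 0.5 * I * (np.cos(np.pi * k / n) + 1.0)
        mu = 0.5 * E * (1.0 - np.cos(np.pi * k / n))
    return lam, mu
```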

    Dynamic multiobjective optimization problems: test cases, approximations, and applications

    Now that the usefulness of evolutionary multiobjective optimization (EMO) algorithms in finding multiple Pareto-optimal solutions for static multiobjective optimization problems has been adequately demonstrated, there is a growing need for solving dynamic multiobjective optimization problems in a similar manner. In this paper, we focus on addressing this issue by developing a number of test problems and by suggesting a baseline algorithm. Since in a dynamic multiobjective optimization problem the resulting Pareto-optimal set is expected to change with time (or with the iterations of the optimization process), a suite of five test problems is presented, offering different patterns of such changes and different difficulties in tracking the dynamic Pareto-optimal front by a multiobjective optimization algorithm. Moreover, a simple example of a dynamic multiobjective optimization problem arising from a dynamic control loop is presented. An extension of a previously proposed direction-based search method is proposed for solving such problems and tested on the proposed test problems. The test problems introduced in this paper should encourage researchers interested in multiobjective optimization and dynamic optimization to develop more efficient algorithms in the near future.
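
    To make "the Pareto-optimal set changes with time" concrete, the sketch below implements an FDA1-style dynamic bi-objective test problem in the spirit of the suite described above: the optimum of the tail variables drifts with a discretized time index while the shape of the front stays fixed (parameter names are illustrative).

```python
import numpy as np

def fda1_like(x, tau, n_t=10, tau_t=5):
    """FDA1-style dynamic bi-objective test problem (minimization), x[0] in [0, 1]."""
    t = np.floor(tau / tau_t) / n_t            # discretized "time"
    G = np.sin(0.5 * np.pi * t)                # moving optimum for the tail variables
    f1 = x[0]
    g = 1.0 + np.sum((x[1:] - G) ** 2)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])
```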

    A dynamic gradient approach to Pareto optimization with nonsmooth convex objective functions

    In a general Hilbert framework, we consider continuous gradient-like dynamical systems for constrained multiobjective optimization involving nonsmooth convex objective functions. Our approach is in line with a previous work that considered the case of convex differentiable objective functions. Based on the Yosida regularization of the subdifferential operators involved in the system, we obtain the existence of strong global trajectories. We prove a descent property for each objective function and the convergence of trajectories toward weak Pareto minima. This approach provides a dynamical endogenous weighting of the objective functions. Applications are given to cooperative games, inverse problems, and numerical multiobjective optimization.
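
    For orientation, the smooth, unconstrained analogue of such a dynamic can be written as follows (illustrative notation, not the paper's exact constrained, nonsmooth system):

```latex
% Multiobjective steepest-descent dynamic, smooth unconstrained sketch:
% the velocity is minus the minimal-norm element of the convex hull of the gradients.
\dot{x}(t) = -\sum_{i=1}^{m} \theta_i(t)\,\nabla f_i\bigl(x(t)\bigr),
\qquad
\theta(t) \in \arg\min_{\theta \in \Delta_m}
  \Bigl\| \sum_{i=1}^{m} \theta_i \,\nabla f_i\bigl(x(t)\bigr) \Bigr\|^{2},
```

    where \Delta_m is the unit simplex; the paper's nonsmooth setting replaces gradients with subdifferentials (handled via Yosida regularization), and the weights \theta(t) correspond to the "dynamical endogenous weighting" mentioned in the abstract.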

    Box-constrained vector optimization: a steepest descent method without “a priori” scalarization

    In this paper, a notion of descent direction for a vector function defined on a box is introduced. This concept is based on an appropriate convex combination of the "projected" gradients of the components of the objective function. The proposed approach does not involve an "a priori" scalarization, since the coefficients of the convex combination of the projected gradients are the solutions of a suitable minimization problem that depends on the feasible point considered. Subsequently, these descent directions are used to formulate a first-order optimality condition for Pareto optimality in a box-constrained multiobjective optimization problem. Moreover, a computational method is proposed to solve box-constrained multiobjective optimization problems. This method determines the critical points of the box-constrained multiobjective optimization problem by following the trajectories defined through the descent directions mentioned above. The convergence of the method to the critical points is proved. The numerical experience shows that the computational method efficiently determines the whole local Pareto front.
    Keywords: multi-objective optimization problems, path-following methods, dynamical systems, minimal selection.
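
    The minimization problem that yields the convex-combination coefficients can be sketched directly (Python, illustrative names; the projection onto the box is omitted here): minimize the norm of the combined gradient over the unit simplex and descend along the resulting direction.

```python
import numpy as np
from scipy.optimize import minimize

def common_descent_direction(gradients):
    """Min-norm convex combination of objective gradients (box projection omitted).

    Solves  min_{lam in simplex} || sum_i lam_i * g_i ||^2  and returns the direction
    d = -sum_i lam_i * g_i together with the weights lam.
    """
    G = np.asarray(gradients, dtype=float)          # shape (m, n): one gradient per objective
    m = G.shape[0]
    obj = lambda lam: float(np.dot(lam @ G, lam @ G))
    cons = ({"type": "eq", "fun": lambda lam: lam.sum() - 1.0},)
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0.0, 1.0)] * m, constraints=cons)
    return -(res.x @ G), res.x

# Two conflicting gradients: the weights balance them and both objectives decrease along d.
d, lam = common_descent_direction([[1.0, 0.0], [0.0, 1.0]])
print(d, lam)   # approximately [-0.5, -0.5] and [0.5, 0.5]
```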

    Nonemptiness and Compactness of Solutions Set for Nondifferentiable Multiobjective Optimization Problems

    A nondifferentiable multiobjective optimization problem with nonempty set constraints is considered, and the equivalence of weakly efficient solutions, critical points of the nondifferentiable multiobjective optimization problem, and solutions of vector variational-like inequalities is established under suitable conditions. Nonemptiness and compactness of the solution set of the nondifferentiable multiobjective optimization problem are proved by using the FKKM theorem and a fixed-point theorem.