
    Comparing Decomposition-Based Evolutionary Algorithms for Multiobjective and Many-Objective Optimization Problems

    Many real-world problems can be mathematically modeled as Multiobjective Optimization Problems (MOPs), as they involve multiple conflicting objective functions that must be minimized simultaneously. MOPs with more than 3 objective functions are called Many-objective Optimization Problems (MaOPs). MOPs are typically solved with Multiobjective Evolutionary Algorithms (MOEAs), which can obtain a set of mutually non-dominated solutions, known as a Pareto front, in a single run. The MOEA Based on Decomposition (MOEA/D) is one of the most efficient: it divides an MOP into several single-objective subproblems and optimizes them simultaneously. This study evaluated the performance of MOEA/D and four state-of-the-art variants (MOEA/DD, MOEA/D-DE, MOEA/D-DU, and MOEA/D-AWA) on MOPs and MaOPs. Computational experiments were conducted on benchmark problems from the DTLZ suite with 3 and 5 objective functions. A statistical analysis, including the Wilcoxon test, was performed on the results for the IGD+ performance indicator. The Hypervolume indicator was also computed on the combined Pareto front, formed by all solutions obtained by each MOEA. The experiments revealed that MOEA/DD performed best on IGD+, and MOEA/D-AWA achieved the highest Hypervolume on the combined Pareto front, while MOEA/D-DE registered the worst result on this set of problems.
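    As a concrete illustration of the decomposition idea described above (not code from the paper), MOEA/D commonly scalarizes an MOP with the weighted Tchebycheff function g(x | w, z*) = max_i w_i |f_i(x) - z_i*|, where each weight vector w defines one single-objective subproblem and z* is the ideal point. A minimal sketch:

```python
import numpy as np

def tchebycheff(fx, weights, z_star):
    """Weighted Tchebycheff scalarization used in MOEA/D-style decomposition:
    g(x | w, z*) = max_i w_i * |f_i(x) - z_i*| (to be minimized)."""
    return np.max(weights * np.abs(fx - z_star))

# Comparing two objective vectors under one subproblem's weight vector.
z_star = np.array([0.0, 0.0])      # ideal point: per-objective minima seen so far
w = np.array([0.5, 0.5])           # weight vector defining this subproblem
a = np.array([0.2, 0.8])
b = np.array([0.4, 0.5])
print(tchebycheff(a, w, z_star))   # 0.4
print(tchebycheff(b, w, z_star))   # 0.25 -> b is better for this subproblem
```

    Optimizing one such scalar function per weight vector, with neighboring subproblems sharing solutions, is what lets MOEA/D approximate the whole Pareto front in a single run.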

    A survey on handling computationally expensive multiobjective optimization problems with evolutionary algorithms

    This is the author accepted manuscript; the final version is available from Springer Verlag via the DOI in this record.
    Evolutionary algorithms are widely used for solving multiobjective optimization problems but are often criticized for the large number of function evaluations they need. Approximations, especially function approximations, also referred to as surrogates or metamodels, are commonly used in the literature to reduce the computation time. This paper presents a survey of 45 recent algorithms proposed in the literature between 2008 and 2016 for handling computationally expensive multiobjective optimization problems. The algorithms are discussed according to the kind of approximation they use: problem, function, or fitness approximation, with most emphasis given to function approximation-based algorithms. We also compare these algorithms on criteria such as the metamodeling technique and evolutionary algorithm used, the type and dimensions of the problems solved, constraint handling, training time, and the type of evolution control. Furthermore, we identify and discuss promising elements and major issues among algorithms in the literature related to the use of approximations and the numerical settings employed. In addition, we discuss how to select an algorithm for a given computationally expensive multiobjective optimization problem based on the dimensions of both the objective and decision spaces and the available computation budget.
    The research of Tinkle Chugh was funded by the COMAS Doctoral Program (at the University of Jyväskylä) and the FiDiPro project DeCoMo (funded by Tekes, the Finnish Funding Agency for Innovation); the research of Dr. Karthik Sindhya was funded by the SIMPRO project, also funded by Tekes, as well as DeCoMo

    Efficient multiobjective optimization employing Gaussian processes, spectral sampling and a genetic algorithm

    Many engineering problems require the optimization of expensive, black-box functions involving multiple conflicting criteria, such that commonly used methods like multiobjective genetic algorithms are inadequate. To tackle this problem, several surrogate-based algorithms have been developed. However, these often have disadvantages such as requiring a priori knowledge of the output functions or a computational cost that scales exponentially with the number of objectives. In this paper a new algorithm, TSEMO, is proposed, which uses Gaussian processes as surrogates. The Gaussian processes are sampled with spectral sampling techniques so that Thompson sampling, in conjunction with the hypervolume quality indicator and NSGA-II, can choose a new evaluation point at each iteration. The reference point required for the hypervolume calculation is estimated within TSEMO. Further, a simple extension is proposed to carry out batch-sequential design. TSEMO was compared to ParEGO, an expected-hypervolume implementation, and NSGA-II on 9 test problems with a budget of 150 function evaluations. Overall, TSEMO shows promising performance while remaining a simple algorithm that requires no a priori knowledge, reduces hypervolume calculations to approach linear scaling with the number of objectives, can handle noise, and supports batch-sequential use
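    In the two-objective case, the hypervolume indicator that TSEMO relies on can be computed with a simple sweep over the sorted front. The sketch below is a generic illustration of that computation for a minimization problem, not TSEMO's implementation:

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective minimization front relative to a
    reference point `ref` (larger is better). Sort by f1 ascending and
    accumulate the rectangle each non-dominated point adds."""
    pts = sorted(front)              # ascending in f1, so f2 must decrease
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:             # skip points dominated within the front
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))  # 6.0
```

    Exact hypervolume computation scales poorly as objectives are added, which is why approximations such as the reduced calculation mentioned in the abstract matter in many-objective settings.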

    Choice function based hyper-heuristics for multi-objective optimization

    A selection hyper-heuristic is a high level search methodology which operates over a fixed set of low level heuristics. During the iterative search process, a heuristic is selected and applied to a candidate solution in hand, producing a new solution which is then accepted or rejected at each step. Selection hyper-heuristics have been increasingly, and successfully, applied to single-objective optimization problems, while work on multi-objective selection hyper-heuristics is limited. This work presents one of the initial studies on selection hyper-heuristics combining a choice function heuristic selection methodology with great deluge and late acceptance as non-deterministic move acceptance methods for multi-objective optimization. A well-known hypervolume metric is integrated into the move acceptance methods to enable the approaches to deal with multi-objective problems. The performance of the proposed hyper-heuristics is investigated on the Walking Fish Group test suite which is a common benchmark for multi-objective optimization. Additionally, they are applied to the vehicle crashworthiness design problem as a real-world multi-objective problem. The experimental results demonstrate the effectiveness of the non-deterministic move acceptance, particularly great deluge when used as a component of a choice function based selection hyper-heuristic
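    The choice-function idea can be sketched as a score per low-level heuristic that trades off recent performance against time since last use, selecting the argmax at each step. The weights `alpha`/`beta` and the update rule below are illustrative assumptions, not the paper's exact formulation:

```python
class ChoiceFunction:
    """Simplified choice-function heuristic selection (illustrative):
    score(h) = alpha * recent_improvement(h) + beta * steps_since_last_call(h).
    The second term guarantees that every heuristic is eventually retried."""

    def __init__(self, n_heuristics, alpha=0.7, beta=0.3):
        self.alpha, self.beta = alpha, beta
        self.improvement = [0.0] * n_heuristics  # recent performance per heuristic
        self.last_call = [0] * n_heuristics      # step at which h was last applied
        self.step = 0

    def select(self):
        scores = [self.alpha * self.improvement[h]
                  + self.beta * (self.step - self.last_call[h])
                  for h in range(len(self.improvement))]
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, h, delta):
        """Record the quality change `delta` produced by applying heuristic h."""
        self.step += 1
        self.improvement[h] = delta
        self.last_call[h] = self.step

cf = ChoiceFunction(n_heuristics=3)
cf.update(1, delta=0.5)   # heuristic 1 just improved the candidate solution
print(cf.select())        # 1
```

    In the multi-objective setting described above, the scalar `delta` would come from an indicator such as hypervolume change, and the produced solution would then pass through great deluge or late acceptance before being kept.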

    Accelerating Manufacturing Decisions using Bayesian Optimization: An Optimization and Prediction Perspective

    Manufacturing is a promising technique for producing complex and custom-made parts with a high degree of precision; it can also yield materials and products with specified properties. To achieve that, it is crucial to find the optimal process parameters, which have a significant impact on the properties and quality of the final product. Unfortunately, optimizing these parameters can be challenging due to the complex and nonlinear nature of the underlying process, which becomes more complicated when objectives conflict, sometimes with multiple goals at once. Furthermore, experiments are usually costly and time-consuming, requiring expensive materials as well as man- and machine-hours. Each experiment is therefore valuable, and it is critical to determine the optimal experiment location to gain the most comprehensive understanding of the process. Sequential learning is a promising approach to actively learn from ongoing experiments, iteratively update the underlying optimization routine, and adapt the data collection process on the go. This thesis presents a multi-objective Bayesian optimization framework for finding the optimum processing conditions of a manufacturing setup. It uses an acquisition function to collect data points sequentially and iteratively updates its understanding of the underlying design space through a Gaussian Process-based surrogate model. In manufacturing processes, the focus is often on obtaining a rough understanding of the design space with minimal experimentation rather than on finding the optimal parameters; this falls under approximating the underlying function rather than design optimization. Such an approach can give material scientists or manufacturing engineers a comprehensive view of the entire design space, increasing the likelihood of making discoveries or robust decisions. However, a good approximation requires a precise and reliable prediction model.
To meet this requirement, this thesis proposes an epsilon-greedy sequential prediction framework that is distinct from the optimization framework. The data acquisition strategy is refined to balance exploration and exploitation, with a threshold that determines when to switch between the two. The performance of the proposed optimization and prediction frameworks is evaluated on real-life datasets against traditional design of experiments. The proposed frameworks generate effective optimization and prediction results using fewer experiments
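    The exploration/exploitation balance in an epsilon-greedy acquisition can be sketched as follows; the candidate set, the GP posterior values, and the fixed `epsilon` threshold are illustrative assumptions rather than the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy_acquisition(candidates, predicted_mean, predicted_std,
                               epsilon=0.2):
    """Epsilon-greedy data acquisition (illustrative sketch): with probability
    `epsilon`, explore by sampling where the surrogate is most uncertain;
    otherwise exploit the best predicted objective value (minimization)."""
    if rng.random() < epsilon:
        idx = int(np.argmax(predicted_std))   # explore: most uncertain candidate
    else:
        idx = int(np.argmin(predicted_mean))  # exploit: best predicted value
    return candidates[idx]

X = np.array([[0.1], [0.5], [0.9]])   # candidate process settings (assumed)
mu = np.array([3.0, 1.0, 2.0])        # GP posterior means (assumed)
sd = np.array([0.2, 0.1, 0.9])        # GP posterior std devs (assumed)
print(epsilon_greedy_acquisition(X, mu, sd))
```

    Raising `epsilon` biases the campaign toward covering the design space (better function approximation); lowering it biases toward refining the current optimum.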

    Hybrid of memory and prediction strategies for dynamic multiobjective optimization

    The file attached to this record is the author's final peer-reviewed version; the publisher's final version can be found by following the DOI link.
    Dynamic multiobjective optimization problems (DMOPs) are characterized by a time-variant Pareto optimal front (PF) and/or Pareto optimal set (PS). To handle DMOPs, an algorithm should be able to track the movement of the PF/PS over time efficiently. In this paper, a novel dynamic multiobjective evolutionary algorithm (DMOEA) is proposed for solving DMOPs, which combines a hybrid of memory and prediction strategies (HMPS) with the multiobjective evolutionary algorithm based on decomposition (MOEA/D). The resultant algorithm (MOEA/D-HMPS) detects environmental changes and identifies each change's similarity to historical changes, based on which one of two response strategies is applied. If a detected change is dissimilar to all historical changes, a differential prediction based on the previous two consecutive population centers is used to relocate the population individuals in the new environment; otherwise, a memory-based technique devised to predict the new locations of the population members is applied. Both response mechanisms mix a portion of existing solutions with randomly generated solutions to mitigate prediction errors caused by sharp or irregular changes. MOEA/D-HMPS was tested on 14 benchmark problems and compared with state-of-the-art DMOEAs. The experimental results demonstrate the efficiency of MOEA/D-HMPS in solving various DMOPs
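    The differential-prediction response can be sketched as translating the population by the movement of the last two population centers, then reseeding a fraction of individuals at random to hedge against prediction error. The `noise_frac` parameter and the resampling bounds below are illustrative assumptions, not the authors' exact operator:

```python
import numpy as np

def predict_population(pop, center_prev, center_curr, noise_frac=0.2, rng=None):
    """Differential-prediction response (illustrative sketch): shift every
    individual by the estimated PS movement, then replace a fraction with
    random solutions to cope with sharp or irregular changes."""
    rng = rng or np.random.default_rng()
    step = center_curr - center_prev          # estimated movement of the PS
    moved = pop + step                        # relocate all individuals
    n_random = int(noise_frac * len(pop))
    lo, hi = moved.min(axis=0), moved.max(axis=0)
    idx = rng.choice(len(pop), size=n_random, replace=False)
    moved[idx] = rng.uniform(lo, hi, size=(n_random, pop.shape[1]))
    return moved

pop = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5], [0.5, 2.0], [1.5, 1.5]])
new_pop = predict_population(pop, center_prev=np.array([0.8, 0.8]),
                             center_curr=np.array([1.0, 1.1]))
print(new_pop.shape)  # (5, 2)
```

    The memory-based branch of the algorithm would instead reuse stored responses from the most similar historical change in place of the center-difference step.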

    A Data Driven Sequential Learning Framework to Accelerate and Optimize Multi-Objective Manufacturing Decisions

    Manufacturing advanced materials and products with a specific property or combination of properties is often warranted. To achieve that, it is crucial to find the optimum recipe or processing conditions that can generate the ideal combination of these properties. Generating a Pareto front usually requires a sufficient number of experiments, yet manufacturing experiments are costly and even a single experiment can be time-consuming. It is therefore critical to determine the optimal location for data collection to gain the most comprehensive understanding of the process. Sequential learning is a promising approach to actively learn from ongoing experiments, iteratively update the underlying optimization routine, and adapt the data collection process on the go. This paper presents a novel data-driven Bayesian optimization framework that utilizes sequential learning to efficiently optimize complex systems with multiple conflicting objectives. Additionally, it proposes a novel metric for evaluating multi-objective data-driven optimization approaches that considers both the quality of the Pareto front and the amount of data used to generate it. The proposed framework is particularly beneficial in practical applications where acquiring data is expensive and resource-intensive. To demonstrate the effectiveness of the proposed algorithm and metric, the algorithm is evaluated on a manufacturing dataset. The results indicate that it can recover the actual Pareto front while processing significantly less data, implying that the proposed data-driven framework can lead to similar manufacturing decisions with reduced costs and time
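    The sequential-learning loop the abstract describes follows a common skeleton: fit a surrogate to the data so far, let an acquisition rule propose the next experiment, run it, and repeat. The sketch below uses deliberately toy stand-ins (`fit_surrogate`, `propose_next`, a cheap `evaluate`) for the paper's Gaussian-process model and acquisition function:

```python
import random

random.seed(1)

def sequential_optimize(initial_X, initial_y, evaluate, fit_surrogate,
                        propose_next, budget):
    """Generic sequential-learning loop: each iteration refits the surrogate,
    proposes one new experiment, and appends its (costly) measurement."""
    X, y = list(initial_X), list(initial_y)
    for _ in range(budget):
        model = fit_surrogate(X, y)        # update belief about the process
        x_next = propose_next(model, X)    # acquisition picks the next experiment
        y.append(evaluate(x_next))         # run the expensive experiment
        X.append(x_next)
    return X, y

def evaluate(x):                 # stand-in for a costly manufacturing experiment
    return (x - 0.3) ** 2

def fit_surrogate(X, y):         # toy "model": just remember the data
    return list(zip(X, y))

def propose_next(model, X):      # toy acquisition: random search over [0, 1]
    return random.random()

X, y = sequential_optimize([0.0, 1.0], [evaluate(0.0), evaluate(1.0)],
                           evaluate, fit_surrogate, propose_next, budget=10)
print(min(y))
```

    In the paper's setting, `fit_surrogate` would train a GP on the multi-objective measurements and `propose_next` would maximize an acquisition function over candidate recipes; the loop structure itself is unchanged.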