    Towards efficient multiobjective optimization: multiobjective statistical criterions

    The use of Surrogate Based Optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives leading to a set of equivalent solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which the designer can use to make more informed design decisions (instead of making those decisions upfront). Most work in multiobjective optimization has focused on MultiObjective Evolutionary Algorithms (MOEAs). While MOEAs are well-suited to handling large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as MultiObjective Surrogate-Based Optimization (MOSBO), may prove even more worthwhile than single-objective SBO methods for expediting the optimization process. In this paper, the authors propose the Efficient Multiobjective Optimization (EMO) algorithm, which uses Kriging models together with multiobjective versions of the expected improvement and probability of improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II and SPEA2 multiobjective optimization methods, with promising results.
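    The expected improvement criterion the EMO algorithm builds on can be sketched as follows, shown here in its standard single-objective form for minimization (the multiobjective extension in the paper is more involved); the candidate values are invented for illustration:

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def expected_improvement(mu, sigma, f_best):
    """Expected amount by which a candidate with surrogate prediction
    (mu, sigma) improves on the best observed value f_best (minimization)."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    return (f_best - mu) * normal_cdf(z) + sigma * normal_pdf(z)

# the infill step: evaluate the candidate with the largest EI next
candidates = [(0.8, 0.05), (1.0, 0.5), (0.9, 0.2)]  # (mu, sigma) pairs
best = max(candidates, key=lambda c: expected_improvement(c[0], c[1], f_best=1.0))
```

    Note how the criterion trades off a good predicted mean against high predictive uncertainty, which is what lets the algorithm converge with few expensive simulations.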

    A constrained multi-objective surrogate-based optimization algorithm

    Surrogate models or metamodels are widely used in engineering design optimization to minimize the number of computationally expensive simulations. Most practical problems have conflicting objectives, leading to a set of competing solutions that form a Pareto front. Multi-objective surrogate-based optimization algorithms have been proposed in the literature, but handling constraints directly is a relatively new research area. Most algorithms proposed to deal directly with multi-objective optimization have been evolutionary algorithms (Multi-Objective Evolutionary Algorithms, MOEAs). MOEAs can handle large design spaces but require a large number of simulations, which might be infeasible in practice, especially if the constraints are expensive to evaluate. This paper presents a multi-objective constrained optimization algorithm that uses Kriging models in conjunction with multi-objective probability of improvement (PoI) and probability of feasibility (PoF) criteria to drive the sample selection process economically. The efficacy of the proposed algorithm is demonstrated on an analytical benchmark function, and the algorithm is then used to solve a microwave filter design optimization problem.
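    A minimal sketch of the PoI x PoF infill idea, for one objective and one constraint of the form g(x) <= 0 and assuming Gaussian surrogate predictions (the paper's multi-objective PoI is more general than this single-objective form):

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_improvement(mu, sigma, f_best):
    # probability the candidate beats the best feasible value (minimization)
    if sigma <= 0.0:
        return float(mu < f_best)
    return norm_cdf((f_best - mu) / sigma)

def prob_feasibility(g_mu, g_sigma):
    # probability that the predicted constraint value satisfies g(x) <= 0
    if g_sigma <= 0.0:
        return float(g_mu <= 0.0)
    return norm_cdf(-g_mu / g_sigma)

def constrained_infill(mu, sigma, f_best, g_mu, g_sigma):
    # product criterion PoI * PoF: favor points likely to improve AND be feasible
    return prob_improvement(mu, sigma, f_best) * prob_feasibility(g_mu, g_sigma)
```

    Multiplying the two probabilities drives samples away from regions the constraint surrogate predicts to be infeasible, which is what keeps the selection process economical when constraints are expensive.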

    Multiobjective Stochastic Optimization of Dividing-wall Distillation Columns Using a Surrogate Model Based on Neural Networks

    Surrogate models have been used for the modelling and optimization of conventional chemical processes; among them, neural networks have great potential to capture complex problems such as those found in chemical processes. However, the development of intensified processes has brought important challenges in modelling and optimization, due to the more complex interrelations among design variables. Among intensified processes, dividing-wall columns represent an interesting alternative for the separation of fluid mixtures, allowing savings in space requirements, energy and investment costs in comparison with conventional sequences. In this work, we propose the optimization of dividing-wall columns with a multiobjective genetic algorithm, using neural networks as surrogate models. The contribution of this work is focused on the evaluation of both the objective and constraint functions with neural networks. The results show a significant reduction in computational time and in the number of objective and constraint function evaluations required to reach the Pareto front.
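    The pattern of replacing expensive function evaluations with a trained network can be sketched as below; `simulate()` and the tiny one-hidden-layer network are illustrative stand-ins, not the authors' column model or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x):
    # hypothetical stand-in for one expensive column-simulation output
    return np.sin(3 * x) + x ** 2

# a modest number of expensive runs provides the training data
X = rng.uniform(-1, 1, size=(40, 1))
y = simulate(X[:, 0])

# one-hidden-layer network trained by plain gradient descent on squared error
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                    # hidden activations
    pred = (H @ W2 + b2)[:, 0]
    err = pred - y
    gW2 = H.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dH = err[:, None] @ W2.T * (1 - H ** 2)     # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

def surrogate(x):
    # cheap replacement the genetic algorithm calls instead of simulate()
    h = np.tanh(np.atleast_2d(x) @ W1 + b1)
    return (h @ W2 + b2)[:, 0]
```

    Once trained, the genetic algorithm evaluates `surrogate()` instead of the simulator for both objectives and constraints, which is where the reported reduction in evaluation count comes from.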

    Automatic surrogate model type selection during the optimization of expensive black-box problems

    The use of Surrogate Based Optimization (SBO) has become commonplace for optimizing expensive black-box simulation codes. A popular SBO method is the Efficient Global Optimization (EGO) approach. However, the performance of SBO methods critically depends on the quality of the guiding surrogate. In EGO the surrogate type is usually fixed to Kriging, even though this may not be optimal for all problems. In this paper the authors propose to extend the well-known EGO method with an automatic surrogate model type selection framework that is able to dynamically select the best model type (including hybrid ensembles) depending on the data available so far. Hence, the expected improvement criterion is always based on the best approximation available at each step of the optimization process. The approach is demonstrated on a structural optimization problem, i.e., reducing the stress on a truss-like structure. Results show that the proposed algorithm consistently finds better optima than traditional Kriging-based infill optimization.
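    The selection step can be sketched as choosing, at each iteration, the candidate surrogate type with the lowest cross-validation error before computing expected improvement on it; polynomial fits of different degrees stand in here for the Kriging, RBF and ensemble models actually considered, and the toy response is invented:

```python
import numpy as np

def cv_error(X, y, fit, k=5):
    """k-fold cross-validation error of a model-fitting routine `fit`."""
    idx = np.arange(len(X))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        model = fit(X[train], y[train])
        errs.append(np.mean((model(X[fold]) - y[fold]) ** 2))
    return float(np.mean(errs))

# stand-ins for the candidate surrogate model types
def poly_fit(degree):
    def fit(X, y):
        coef = np.polyfit(X, y, degree)
        return lambda x: np.polyval(coef, x)
    return fit

X = np.linspace(-1.0, 1.0, 20)
y = X ** 3 - X                        # toy expensive response
candidates = {d: poly_fit(d) for d in (1, 2, 3)}
best_type = min(candidates, key=lambda d: cv_error(X, y, candidates[d]))
# refit the selected type on all data; EI is then computed on this model
surrogate = candidates[best_type](X, y)
```

    Re-running the selection after every new sample is what makes the framework dynamic: as data accumulates, a different model type (or ensemble) may start winning the cross-validation comparison.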

    Optimization. An attempt at describing the State of the Art

    This paper is an attempt at describing the State of the Art of the vast field of continuous optimization. We survey deterministic and stochastic methods, as well as hybrid approaches, in their application to single objective and multiobjective optimization. We study the parameters of optimization algorithms and possibilities for tuning them. Finally, we discuss several methods for using approximate models for computationally expensive problems.

    Agent based simulation to optimise emergency departments

    Nowadays, many health care systems are large, complex and highly dynamic environments; this is especially true of Emergency Departments (EDs), which operate 24 hours a day throughout the year with limited resources and are frequently overcrowded. Simulating EDs is therefore essential to improve their performance both qualitatively and quantitatively. This improvement can be achieved by modelling and simulating EDs with an Agent-Based Model (ABM) and optimising over many different staff scenarios. This work optimises the staff configuration of an ED. To perform the optimisation, objective functions to minimise or maximise must be defined; one such objective is to find the staff configuration that minimises patient waiting time. The staff configuration comprises doctors, triage nurses and admissions personnel, both their number and type. Finding the best staff configuration is a combinatorial problem that can take a long time to solve. HPC is used to run the experiments, and encouraging results were obtained. However, even for the basic ED used in this work the search space is very large; as the problem size increases, more processing resources will be needed to obtain results in an acceptable time.
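    The combinatorial nature of the staffing problem can be illustrated with an exhaustive search over configurations; `simulated_waiting_time()` is a crude hypothetical stand-in for the agent-based ED simulation, and the per-role capacities are invented numbers:

```python
from itertools import product

def simulated_waiting_time(doctors, triage_nurses, admissions, arrivals=100):
    # crude stand-in for the agent-based simulation: waiting grows when any
    # stage becomes the bottleneck (capacities are invented numbers)
    capacity = min(doctors * 12, triage_nurses * 20, admissions * 25)
    return max(arrivals - capacity, 0) / capacity

def best_configuration(max_staff=5):
    # exhaustive search: the space grows as max_staff**3, which is why the
    # real experiments need HPC once the ED model gets bigger
    best, best_wait = None, float("inf")
    for d, t, a in product(range(1, max_staff + 1), repeat=3):
        w = simulated_waiting_time(d, t, a)
        if w < best_wait:
            best, best_wait = (d, t, a), w
    return best, best_wait
```

    With one expensive ABM run per configuration instead of a cheap formula, even this three-role search becomes costly, which motivates the parallel execution described above.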

    A portfolio approach to massively parallel Bayesian optimization

    One way to reduce the time of conducting optimization studies is to evaluate designs in parallel rather than just one at a time. For expensive-to-evaluate black boxes, batch versions of Bayesian optimization have been proposed. They work by building a surrogate model of the black box that can be used to select the designs to evaluate efficiently via an infill criterion. Still, with higher levels of parallelization becoming available, the strategies that work for a few tens of parallel evaluations become limiting, in particular due to the complexity of selecting more evaluations. Scalability is even more crucial when the black box is noisy, as noise necessitates more evaluations as well as repeated experiments. Here we propose a scalable strategy that can keep up with massive batching natively, based on the exploration/exploitation trade-off and a portfolio allocation. We compare the approach with related methods on deterministic and noisy functions, for single- and multiobjective optimization tasks. These experiments show similar or better performance than existing methods, while being orders of magnitude faster.
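    One way to read the portfolio idea is to split a large batch between exploiting the surrogate mean and exploring its uncertainty; this is a sketch under our own assumptions, not the paper's exact allocation rule, and the candidate values are invented:

```python
import numpy as np

def portfolio_batch(mu, sigma, batch_size, exploit_frac=0.5):
    """Allocate a batch between exploitation (lowest predicted mean) and
    exploration (highest predictive uncertainty). Selection is just a couple
    of sorts, so the cost stays low even for very large batches."""
    n_exploit = int(batch_size * exploit_frac)
    chosen = []
    for i in np.argsort(mu):                      # best predicted values first
        if len(chosen) >= n_exploit:
            break
        chosen.append(int(i))
    for i in np.argsort(-np.asarray(sigma)):      # most uncertain first
        if len(chosen) >= batch_size:
            break
        if int(i) not in chosen:
            chosen.append(int(i))
    return chosen

batch = portfolio_batch(mu=[0.0, 1.0, 2.0, 3.0],
                        sigma=[0.1, 0.1, 5.0, 0.1], batch_size=2)
```

    Because selection reduces to sorting the candidate predictions, the per-batch cost does not blow up with the batch size, unlike sequential acquisition-maximization schemes.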

    Multiobjective Design Optimization Of Gas Turbine Blade With Emphasis On Internal Cooling

    In the design of mechanical components, numerical simulations and experimental methods are commonly used for design creation (or modification) and design optimization. However, a major challenge of using simulation and experimental methods is that they are time-consuming and often cost-prohibitive for the designer. In addition, the simultaneous interactions between aerodynamic, thermodynamic and mechanical integrity objectives for a particular component or set of components are difficult to accurately characterize, even with the existing simulation tools and experimental methods. The current research and practice of using numerical simulations and experimental methods do little to address the simultaneous “satisficing” of multiple and often conflicting design objectives that influence the performance and geometry of a component. This is particularly the case for gas turbine systems that involve a large number of complex components with complicated geometries. Numerous experimental and numerical studies have demonstrated success in generating effective designs for mechanical components; however, their focus has been primarily on optimizing a single design objective based on a limited set of design variables and associated values. In this research, a multiobjective design optimization framework to solve a set of user-specified design objective functions for mechanical components is proposed. The framework integrates a numerical simulation and a nature-inspired optimization procedure that iteratively perturbs a set of design variables, eventually converging to a set of tradeoff design solutions. In this research, a gas turbine engine system is used as the test application for the proposed framework. More specifically, the optimization of the gas turbine blade internal cooling channel configuration is performed.
This test application is quite relevant as gas turbine engines serve a critical role in the design of the next-generation power generation facilities around the world. Furthermore, turbine blades require better cooling techniques to increase their cooling effectiveness, to cope with increasing engine operating temperatures and to extend the useful life of the blades. The performance of the proposed framework is evaluated via a computational study, where a set of common, real-world design objectives and a set of design variables that directly influence the set of objectives are considered. Specifically, three objectives are considered in this study: (1) cooling channel heat transfer coefficient, which measures the rate of heat transfer and the goal is to maximize this value; (2) cooling channel air pressure drop, where the goal is to minimize this value; and (3) cooling channel geometry, specifically the cooling channel cavity area, where the goal is to maximize this value. These objectives, which are conflicting, directly influence the cooling effectiveness of a gas turbine blade and the material usage in its design. The computational results show the proposed optimization framework is able to generate, evaluate and identify thousands of competitive tradeoff designs in a fraction of the time that it would take designers using the traditional simulation tools and experimental methods commonly used for mechanical component design generation. This is a significant step beyond the current research and applications of design optimization to gas turbine blades, specifically, and to mechanical components, in general.
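    Identifying the tradeoff designs reduces to Pareto dominance filtering over the three objectives (maximize heat transfer coefficient, minimize pressure drop, maximize cavity area); the design tuples below are invented for illustration:

```python
def dominates(a, b):
    """True if design a dominates b for objectives
    (maximize heat transfer h, minimize pressure drop dp, maximize area)."""
    h_a, dp_a, area_a = a
    h_b, dp_b, area_b = b
    no_worse = h_a >= h_b and dp_a <= dp_b and area_a >= area_b
    strictly_better = h_a > h_b or dp_a < dp_b or area_a > area_b
    return no_worse and strictly_better

def pareto_front(designs):
    # keep every design that no other design dominates
    return [d for d in designs if not any(dominates(o, d) for o in designs)]

designs = [
    (120.0, 1.5, 8.0),   # (h, dp, area) -- invented values
    (100.0, 1.0, 8.0),
    (90.0, 2.0, 7.0),    # dominated by the first design
]
front = pareto_front(designs)
```

    The surviving set is what the framework presents to the designer: no member can be improved in one objective without worsening another.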

    A Bayesian Approach to Computer Model Calibration and Model-Assisted Design

    Computer models of phenomena that are difficult or impossible to study directly are critical for enabling research and assisting design in many areas. In order to be effective, computer models must be calibrated so that they accurately represent the modeled phenomena. A rich variety of methods for computer model calibration has been developed in recent decades. Among the desiderata of such methods is a means of quantifying the uncertainty remaining after calibration regarding both the values of the calibrated model inputs and the model outputs. Bayesian approaches to calibration have met this need. However, limitations remain. Whereas in model calibration one finds point estimates or distributions of calibration inputs in order to induce the model to reflect reality accurately, interest in a computer model often centers primarily on its use for model-assisted design, in which the goal is to find values for design inputs that induce the modeled system to approximate some target outcome. Existing Bayesian approaches are limited to the first of these two tasks. The present work develops an approach that adapts Bayesian model calibration methods for application in model-assisted design. The approach retains the benefits of Bayesian calibration in accounting for and quantifying all sources of uncertainty. It is capable of generating a comprehensive assessment of the Pareto optimal inputs for a multi-objective optimization problem. The present work shows that this approach can serve as a method for model-assisted design using a previously calibrated system, and also as a method for model-assisted design using a model that still requires calibration, accomplishing both ends simultaneously.
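    The two tasks can be illustrated with a toy grid-based version of the workflow: calibrate a single input theta against hypothetical field observations, then pick the design input whose calibrated prediction is closest to a target. The model `eta()` and all numbers are invented for illustration, and a grid posterior stands in for the full Bayesian machinery:

```python
import math

def eta(x, theta):
    # toy computer model: design input x, calibration input theta
    return theta * x + 0.1 * x ** 2

# hypothetical field observations: (design input, observed output)
obs = [(1.0, 0.60), (2.0, 1.40), (3.0, 2.40)]
noise_sd = 0.05

# calibration: grid posterior over theta with a uniform prior on [0, 1]
grid = [i / 100 for i in range(101)]
weights = []
for th in grid:
    ll = sum(-0.5 * ((y - eta(x, th)) / noise_sd) ** 2 for x, y in obs)
    weights.append(math.exp(ll))
total = sum(weights)
posterior = [w / total for w in weights]
theta_mean = sum(th * p for th, p in zip(grid, posterior))

# model-assisted design: pick the design input whose calibrated prediction
# is closest to a target outcome
target = 1.0
design_grid = [i / 10 for i in range(1, 31)]
x_star = min(design_grid, key=lambda x: abs(eta(x, theta_mean) - target))
```

    In the full Bayesian treatment, the design step would propagate the entire posterior over theta (and the model discrepancy) rather than a point summary, which is how the approach keeps all sources of uncertainty in the design assessment.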