
    Sequential optimization of strip bending process using multiquadric radial basis function surrogate models

    Surrogate models are used within the sequential optimization strategy for forming processes. A sequential improvement (SI) scheme is used to refine the surrogate model in the optimal region. One of the popular surrogate modeling methods for SI is Kriging. However, the global response of Kriging models deteriorates in some cases due to local model refinement within SI. This may be problematic for multimodal optimization problems and for other applications where correct prediction of the global response is needed. In this paper, the deteriorating global behavior of the Kriging surrogate modeling technique is shown for a model of a strip bending process. It is shown that a Radial Basis Function (RBF) surrogate model with Multiquadric (MQ) basis functions performs equally well in terms of optimization efficiency and better in terms of global predictive accuracy. The local point density is taken into account in the model formulation.
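    A minimal sketch of an MQ RBF surrogate of the kind the abstract compares against Kriging. Everything here is illustrative: the shape parameter `c`, the toy sine objective, and the plain interpolation system are assumptions, not the paper's formulation (which additionally accounts for local point density).

```python
import numpy as np

def fit_mq_rbf(X, y, c=1.0):
    """Fit a multiquadric RBF interpolant: phi(r) = sqrt(r^2 + c^2).

    Weights w solve the linear system Phi @ w = y, where Phi[i, j]
    is the basis function evaluated at the distance between samples.
    """
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.sqrt(r**2 + c**2)
    return np.linalg.solve(Phi, y)

def predict_mq_rbf(Xq, X, w, c=1.0):
    """Evaluate the surrogate at query points Xq."""
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    return np.sqrt(r**2 + c**2) @ w

# Toy 1-D example: surrogate of f(x) = sin(x) from 8 samples.
X = np.linspace(0, 2 * np.pi, 8).reshape(-1, 1)
y = np.sin(X).ravel()
w = fit_mq_rbf(X, y)
pred = predict_mq_rbf(np.array([[1.0], [4.0]]), X, w)
```

    The MQ interpolation matrix is nonsingular for distinct sample points, so the surrogate reproduces the training data exactly and interpolates smoothly between them.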

    Metamodel variability analysis combining bootstrapping and validation techniques

    Research on metamodel-based optimization has received considerable and increasing interest in recent years, and has found successful applications in solving computationally expensive problems. The joint use of computer simulation experiments and metamodels introduces a source of uncertainty that we refer to as metamodel variability. To analyze and quantify this variability, we apply bootstrapping to residuals derived as prediction errors computed from cross-validation. The proposed method can be used with different types of metamodels, especially when limited knowledge of the parameters' distributions is available or when only a limited computational budget is allowed. Our preliminary experiments based on the robust version of the EOQ model show encouraging results.
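    The residual-bootstrap idea can be sketched as follows, assuming a leave-one-out scheme for the cross-validation residuals and a simple quadratic-polynomial metamodel as the stand-in (the abstract's method is metamodel-agnostic; the helper names and all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def loo_residuals(X, y, fit, predict):
    """Leave-one-out cross-validation residuals of a metamodel."""
    res = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = fit(X[mask], y[mask])
        res[i] = y[i] - predict(X[i:i + 1], model)[0]
    return res

def bootstrap_prediction_std(Xq, X, y, fit, predict, B=200):
    """Resample CV residuals onto the responses, refit B times, and
    report the spread of the predictions as metamodel variability."""
    res = loo_residuals(X, y, fit, predict)
    preds = np.empty((B, len(Xq)))
    for b in range(B):
        y_star = y + rng.choice(res, size=len(y), replace=True)
        preds[b] = predict(Xq, fit(X, y_star))
    return preds.std(axis=0)

# Toy metamodel: degree-2 polynomial least squares on noisy samples.
fit = lambda X, y: np.polyfit(X.ravel(), y, 2)
predict = lambda Xq, coef: np.polyval(coef, Xq.ravel())

X = np.linspace(0, 1, 12).reshape(-1, 1)
y = X.ravel() ** 2 + rng.normal(0, 0.05, 12)
std = bootstrap_prediction_std(np.array([[0.5]]), X, y, fit, predict)
```

    Because only the cheap metamodel is refit inside the bootstrap loop, no additional expensive simulation runs are needed, which is the point when the computational budget is limited.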

    Robust optimization of a 2D air conditioning duct using kriging

    The design of systems involving fluid flows is typically based on computationally intensive Computational Fluid Dynamics (CFD) simulations. Kriging-based optimization methods, especially the Efficient Global Optimization (EGO) algorithm, are now often used to solve deterministic optimization problems involving such expensive models. When the design accounts for uncertainties, the optimization is usually based on double-loop approaches where the uncertainty propagation (e.g., Monte Carlo simulations, reliability index calculation) is recursively performed inside the optimization iterations. We have proposed in a previous work a single-loop Kriging-based method for minimizing the mean of an objective function: simulation points are calculated in order to simultaneously propagate uncertainties, i.e., estimate the mean objective function, and optimize this mean. In this report this method has been applied to the shape optimization of a 2D air conditioning duct. For comparison purposes, deterministic designs were first obtained by the EGO algorithm. Very high performance designs were obtained, but they are also very sensitive to numerical model parameters such as mesh size, which suggests poor consistency between the physics and the numerical model. The 2D duct test case has then been reformulated by introducing shape uncertainties. The mean of the duct performance criteria with respect to shape uncertainties has been maximized with the simultaneous optimization and sampling method. The solutions found were not only robust to shape uncertainties but also to the numerical parameters of the CFD model. These designs show that the method is of practical interest in engineering tasks.
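    The effect the abstract describes, that a mean objective under shape uncertainty moves the optimum away from a sharp but fragile design, can be shown on a toy landscape. This sketches the inner Monte Carlo loop of a double-loop approach (the abstract's contribution is a single-loop Kriging method; the landscape, the perturbation standard deviation, and the grid search are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D performance landscape (minimization): a sharp optimum at
# x = 0 and a broad, slightly worse one at x = 1.
f = lambda x: -(1.2 * np.exp(-(x / 0.02) ** 2)
                + np.exp(-((x - 1) / 0.5) ** 2))

def mean_objective(f, xs, u):
    """Monte Carlo estimate of E_u[f(x + u)] for each design x, reusing
    the same perturbations u (common random numbers) so that the
    comparison across designs is less noisy."""
    return np.array([f(x + u).mean() for x in xs])

xs = np.linspace(-0.5, 1.5, 201)
u = rng.normal(0.0, 0.1, 500)            # shape uncertainty (assumed std)
x_det = xs[np.argmin(f(xs))]             # deterministic optimum: sharp peak
x_rob = xs[np.argmin(mean_objective(f, xs, u))]  # robust optimum: broad peak
```

    Averaging over the perturbations wipes out the narrow peak, so the robust optimum lands on the broad one, which is the behavior the robust duct designs exhibit with respect to both shape uncertainties and mesh parameters.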

    A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

    We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments---active user modelling with preferences, and hierarchical reinforcement learning---and a discussion of the pros and cons of Bayesian optimization based on our experiences.
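    The prior-posterior-selection loop described above can be sketched with a Gaussian process prior and expected improvement as the utility (one common choice that trades off exploration and exploitation). The toy objective, the squared-exponential kernel, and all hyperparameters below are illustrative assumptions, not the tutorial's setup:

```python
import numpy as np
from math import erf

def gp_posterior(Xs, X, y, length=0.3, noise=1e-6):
    """GP posterior mean/std with a unit-variance squared-exponential
    kernel; the prior over f is combined with the observations (X, y)."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    """EI for maximization: E[max(f - best, 0)] under the posterior.
    Large where mu is high (exploitation) or sd is high (exploration)."""
    z = (mu - best) / sd
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    cdf = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    return (mu - best) * cdf + sd * pdf

f = lambda x: np.sin(3 * x) * (1 - x)    # toy objective to maximize on [0, 2]
X = np.array([0.1, 1.0, 1.9])
y = f(X)
grid = np.linspace(0, 2, 400)
for _ in range(10):                      # sequential selection loop
    mu, sd = gp_posterior(grid, X, y)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))
best_x = X[np.argmax(y)]
```

    Each iteration spends one expensive evaluation where the acquisition function says it is most useful, which is why the approach suits cost functions too expensive for grid or gradient search.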