
    Parallel surrogate-assisted global optimization with expensive functions – a survey

    Surrogate-assisted global optimization is gaining popularity. At the same time, modern advances in computing power increasingly rely on parallelization rather than on faster processors. This paper examines some of the methods used to take advantage of parallelization in surrogate-based global optimization. A key issue in this review is how different algorithms balance exploration and exploitation. Most of the papers surveyed are adaptive samplers that employ Gaussian process or Kriging surrogates. These allow sophisticated approaches to balancing exploration and exploitation, and even make it possible to develop algorithms whose rate of convergence can be calculated as a function of the number of parallel processors. In addition to optimization based on adaptive sampling, surrogate-assisted parallel evolutionary algorithms are also surveyed. Beyond a review of the present state of the art, the paper argues that methods that provide easy parallelization, such as multiple parallel runs, and methods that rely on a population of designs for diversity, deserve more attention.

    Funding: United States Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, cooperative agreement under the Predictive Academic Alliance Program (DE-NA0002378).
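    A minimal sketch of one batch-selection pattern of the kind such surveys cover: the "constant liar" heuristic for choosing q points to evaluate in parallel under an expected-improvement (EI) acquisition. The GP library, candidate grid, and batch size are left to the caller; nothing here is taken from any specific surveyed paper.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expected_improvement(mu, sigma, y_best):
        # EI for minimization: rewards low predicted mean and high uncertainty.
        sigma = np.maximum(sigma, 1e-12)
        z = (y_best - mu) / sigma
        return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    def constant_liar_batch(X, y, candidates, q):
        # Select q points sequentially, pretending each pending evaluation
        # returned the incumbent best ("the lie"), so successive EI maxima
        # are pushed away from already-chosen points.
        X, y = X.copy(), y.copy()
        batch = []
        for _ in range(q):
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            mu, sigma = gp.predict(candidates, return_std=True)
            pick = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
            batch.append(pick)
            X = np.vstack([X, pick])
            y = np.append(y, y.min())
        return np.array(batch)  # evaluate these q points in parallel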

    Robust optimisation of computationally expensive models using adaptive multi-fidelity emulation

    Computationally expensive models are increasingly employed in the design process of engineering products and systems. Robust design in particular aims to obtain designs that exhibit near-optimal performance and low variability under uncertainty. Surrogate models, trained from a reduced number of samples of the expensive model, are often employed to imitate its behaviour. A crucial component of the performance of a surrogate is the quality of the training set. Problems occur when sampling fails to obtain points located in an area of interest, or when the computational budget only allows for a very limited number of runs of the expensive model. This paper employs a Gaussian process emulation approach to perform efficient single-loop robust optimisation of expensive models. The emulator is enhanced to propagate input uncertainty to the emulator output, allowing single-loop robust optimisation. Further, the emulator is trained with multi-fidelity data obtained via adaptive sampling to maximise the quality of the training set for the given computational budget. An illustrative example is presented to highlight how the method works before it is applied to two industrial case studies.
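    As a rough illustration of the single-loop idea (a sketch under assumptions, not the paper's method): once a GP emulator is trained on samples of the expensive model, input uncertainty can be propagated through the emulator by Monte Carlo, and a robust objective such as mean plus k standard deviations minimised directly. The noise level input_std and the weight k below are illustrative assumptions.

    import numpy as np

    def robust_objective(gp, x, input_std=0.05, k=2.0, n_mc=500, seed=0):
        # Mean + k*std of the emulator output under assumed Gaussian input
        # uncertainty; the cheap emulator stands in for the expensive model.
        rng = np.random.default_rng(seed)
        X_mc = x + input_std * rng.standard_normal((n_mc, x.size))
        y_mc = gp.predict(X_mc)
        return y_mc.mean() + k * y_mc.std()

    Minimising robust_objective over candidate designs with any off-the-shelf optimiser then gives a single-loop robust optimisation, with no inner uncertainty-propagation loop around the expensive model itself.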

    Uncertainty-Integrated Surrogate Modeling for Complex System Optimization

    Approximation models such as surrogate models provide a tractable substitute for expensive physical simulations and an effective solution to the potential lack of quantitative models of system behavior. These capabilities not only enable the efficient design of complex systems, but are also essential for the effective analysis of physical phenomena and characteristics in engineering, materials science, biomedical science, and various other disciplines. Since these models provide an abstraction of the real system behavior (often a low-fidelity representation), it is important to quantify the accuracy and reliability of such approximation models without investing additional expensive system evaluations (simulations or physical experiments). Standard error measures, such as the mean squared error, the cross-validation error, and Akaike's information criterion, however, provide limited (often inadequate) information regarding the accuracy of the final surrogate model, while other, more effective dedicated error measures are tailored to only one class of surrogate models. This lack of accuracy information, and of the ability to compare and test diverse surrogate models, reduces confidence in model application, restricts appropriate model selection, and undermines the effectiveness of surrogate-based optimization.

    A key contribution of this dissertation is the development of a new model-independent approach to quantify the fidelity of a trained surrogate model in a given region of the design domain, called the Predictive Estimation of Model Fidelity (PEMF). The PEMF method is derived from the hypothesis that the accuracy of an approximation model is related to the amount of data resources leveraged to train the model. In PEMF, intermediate surrogate models are iteratively constructed over heuristic subsets of sample points. The median and maximum errors estimated over the remaining points are used to determine the respective error distributions at each iteration. The estimated modes of the error distributions are represented as functions of the density of intermediate training points through nonlinear regression, assuming a smooth decreasing trend of errors with increasing sample density. These regression functions are then used to predict the expected median and maximum errors in the final surrogate model. The model fidelities estimated by PEMF are observed to be up to two orders of magnitude more accurate and statistically more stable than those based on the popular leave-one-out cross-validation method when applied to a variety of benchmark problems.

    By leveraging this new paradigm for quantifying the fidelity of surrogate models, a novel automated surrogate model selection framework is also developed: the Concurrent Surrogate Model Selection (COSMOS) framework. Unlike existing model selection methods, COSMOS coherently operates at all three levels necessary to facilitate optimal selection: (1) selecting the model type, (2) selecting the kernel function type, and (3) determining the optimal values of the typically user-prescribed parameters. The selection criteria that guide optimal model selection are determined by PEMF, and the search is performed using a MINLP solver.
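    A minimal sketch of the PEMF idea as described above (illustrative, not the dissertation's implementation: the power-law error model and the use of medians as a stand-in for the error-distribution mode are simplifying assumptions):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def pemf_predicted_error(X, y, sizes, n_repeats=20, seed=0):
        # Train intermediate surrogates on random subsets of growing size,
        # record the median held-out error at each size, fit a smooth
        # decreasing trend, and extrapolate it to the full sample size.
        # Each m in sizes must be smaller than len(X).
        rng = np.random.default_rng(seed)
        med_errors = []
        for m in sizes:
            errs = []
            for _ in range(n_repeats):
                idx = rng.permutation(len(X))
                train, test = idx[:m], idx[m:]
                gp = GaussianProcessRegressor(normalize_y=True).fit(X[train], y[train])
                errs.append(np.median(np.abs(gp.predict(X[test]) - y[test])))
            med_errors.append(np.median(errs))  # median as a crude mode estimate
        # Assumed error model e(m) = a * m**(-b), fitted in log-log space.
        slope, intercept = np.polyfit(np.log(sizes), np.log(med_errors), 1)
        return np.exp(intercept) * len(X) ** slope  # predicted error at full size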
The effectiveness of COSMOS is demonstrated by successfully applying it to different benchmark and practical engineering problems, where it offers a first-of-its-kind globally competitive model selection. The knowledge of surrogate accuracy estimated using PEMF is also applied to develop a novel model management approach for engineering optimization. This approach adaptively selects computational models (both physics-based models and surrogate models) of differing levels of fidelity and computational cost to be used during optimization, with the overall objective of yielding optimal designs with high-fidelity function estimates at a reasonable computational expense. In this technique, a new adaptive model switching (AMS) metric is defined to guide switching from one model to the next-higher-fidelity model during the optimization process. The switching criterion is based on whether the uncertainty associated with the current model output dominates the latest improvement of the relative fitness function, where both the model-output uncertainty and the function improvement (across the population) are expressed as probability distributions; a sketch of this test follows below. This adaptive model switching technique is applied, through Particle Swarm Optimization, to two practical problems to illustrate: (i) the computational advantage of this method over purely high-fidelity model-based optimization, and (ii) the accuracy advantage of this method over purely low-fidelity model-based optimization.

Motivated by the unique capabilities of the model switching concept, a new model refinement approach is also developed in this dissertation. The model refinement approach can be perceived as an adaptive sequential sampling approach applied in surrogate-based optimization. Decisions regarding when to perform additional system evaluations to refine the model are guided by the same model-uncertainty principles as in the adaptive model switching technique. The effectiveness of this new model refinement technique is illustrated through application to practical surrogate-based optimization in the area of energy sustainability.
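    As a rough sketch of the switching test mentioned above (an assumption-laden simplification, not the dissertation's AMS metric): given samples of the model-output error and of the latest fitness improvement across the population, "dominance" of one distribution over the other can be reduced, for illustration, to a quantile comparison.

    import numpy as np

    def should_switch(model_error_samples, fitness_improvements, q=0.5):
        # Switch to the next-higher-fidelity model once the typical model
        # error exceeds the typical improvement across the population.
        return np.quantile(model_error_samples, q) >= np.quantile(fitness_improvements, q)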

    Cross-validation based adaptive sampling for Gaussian process models

    In many real-world applications, we are interested in approximating black-box, costly functions as accurately as possible with the smallest number of function evaluations. A complex computer code is an example of such a function. In this work, a Gaussian process (GP) emulator is used to approximate the output of a complex computer code. We consider the problem of extending an initial experiment (set of model runs) sequentially to improve the emulator. A sequential sampling approach based on leave-one-out (LOO) cross-validation is proposed that can be easily extended to a batch mode. This is a desirable property since it saves the user time when parallel computing is available. After fitting a GP to the training data points, the expected squared LOO (ES-LOO) error is calculated at each design point. ES-LOO is used as a measure to identify important data points: when this quantity is large at a point, the quality of prediction depends a great deal on that point, and adding more samples nearby could improve the accuracy of the GP. As a result, it is reasonable to select the next sample where ES-LOO is maximised. However, ES-LOO is only known at the experimental design points and needs to be estimated at unobserved points. To do this, a second GP is fitted to the ES-LOO errors, and the location where a modified expected improvement (EI) criterion attains its maximum is chosen as the next sample. EI is a popular acquisition function in Bayesian optimisation, used to trade off between local and global search. However, it has a tendency towards exploitation, meaning that its maximum is close to the (current) "best" sample. To avoid clustering, a modified version of EI, called pseudo expected improvement, is employed; it is more explorative than EI and allows us to discover unexplored regions. Our results show that the proposed sampling method is promising.
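    A minimal sketch of the pipeline described in this abstract. Plain squared LOO errors stand in for the exact ES-LOO statistic, and a simple repulsion factor stands in for the paper's pseudo expected improvement; the kernel length scale and candidate grid are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def squared_loo_errors(X, y):
        # Leave-one-out squared errors: refit the GP without each point in turn.
        errs = np.empty(len(X))
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            gp = GaussianProcessRegressor(normalize_y=True).fit(X[mask], y[mask])
            errs[i] = (gp.predict(X[i:i + 1])[0] - y[i]) ** 2
        return errs

    def next_sample(X, y, candidates, length_scale=0.1):
        e = squared_loo_errors(X, y)        # proxy for ES-LOO at design points
        gp_e = GaussianProcessRegressor(normalize_y=True).fit(X, e)
        mu, sigma = gp_e.predict(candidates, return_std=True)
        sigma = np.maximum(sigma, 1e-12)
        z = (mu - e.max()) / sigma          # EI for *maximising* the error surface
        ei = (mu - e.max()) * norm.cdf(z) + sigma * norm.pdf(z)
        # Pseudo-EI-style repulsion: damp EI near existing design points
        # to avoid the clustering behaviour described above.
        d2 = ((candidates[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        repulsion = np.prod(1.0 - np.exp(-d2 / (2 * length_scale ** 2)), axis=1)
        return candidates[np.argmax(ei * repulsion)]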