    Data Driven Surrogate Based Optimization in the Problem Solving Environment WBCSim

    Large-scale, multidisciplinary engineering designs are difficult because of the complexity and dimensionality of the underlying problems. Direct coupling between the analysis codes and the optimization routines can be prohibitively time consuming due to the complexity of the underlying simulation codes. One way of tackling this problem is to construct computationally cheaper approximations of the expensive simulations that mimic the behavior of the simulation model as closely as possible. This paper presents a data-driven, surrogate-based optimization algorithm that uses a trust-region-based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiments (DOE) arrays. The algorithm is implemented using techniques from two packages, SURFPACK and SHEPPACK, which provide a collection of approximation algorithms for building the surrogates, and three different DOE techniques, full factorial (FF), Latin hypercube sampling (LHS), and central composite design (CCD), are used to train the surrogates. The results are compared with the optimization results obtained by directly coupling an optimizer with the simulation code. The biggest concern in using an SAO framework based on statistical sampling is the generation of the required database: as the number of design variables grows, the computational cost of generating it grows rapidly. A data-driven approach is proposed to tackle this situation, in which the expensive simulation is run if and only if a nearby data point does not exist in the cumulatively growing database. Over time the database matures and is enriched as more and more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation during the optimization process.
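
    The reuse rule at the heart of this data-driven approach is easy to state in code. The following is a minimal sketch under assumed interfaces (the `simulate` callable and a Euclidean `tol` threshold are placeholders); it is not WBCSim's actual implementation, which would also handle scaling of the design variables.

    ```python
    import numpy as np

    class SimulationDatabase:
        """Cumulatively growing database of (design point, response) pairs.
        The expensive simulation runs if and only if no stored point lies
        within `tol` of the query; otherwise the nearest stored response
        is reused."""

        def __init__(self, simulate, tol):
            self.simulate = simulate   # expensive black-box simulation code
            self.tol = tol             # what counts as "nearby" in design space
            self.X, self.y = [], []

        def evaluate(self, x):
            x = np.asarray(x, dtype=float)
            if self.X:
                dists = np.linalg.norm(np.asarray(self.X) - x, axis=1)
                nearest = int(np.argmin(dists))
                if dists[nearest] <= self.tol:   # nearby point exists: reuse it
                    return self.y[nearest]
            fx = self.simulate(x)                # pay the full simulation cost
            self.X.append(x)
            self.y.append(fx)
            return fx
    ```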

    State-of-the-art in aerodynamic shape optimisation methods

    Aerodynamic optimisation has become an indispensable component of aerodynamic design over the past 60 years, with applications to aircraft, cars, trains, bridges, wind turbines, internal pipe flows, and cavities, among others, and is thus relevant to many facets of technology. With advancements in computational power, automated design optimisation procedures have become more capable; however, there is ambiguity and bias throughout the literature with regard to the relative performance of optimisation architectures and the algorithms they employ. This paper provides a balanced critical review of the dominant optimisation approaches that have been integrated with aerodynamic theory for the purpose of shape optimisation. A total of 229 papers, published in more than 120 journals and conference proceedings, have been classified into six different optimisation algorithm approaches. The material cited includes some of the most well-established authors and publications in the field of aerodynamic optimisation. This paper aims to eliminate bias toward particular algorithms by analysing the limitations, drawbacks, and benefits of the most widely used optimisation approaches. The review provides comprehensive but accessible insight for non-specialists and a reference detailing the current state of the field for specialist practitioners.

    Automatic surrogate model type selection during the optimization of expensive black-box problems

    The use of Surrogate-Based Optimization (SBO) has become commonplace for optimizing expensive black-box simulation codes. A popular SBO method is the Efficient Global Optimization (EGO) approach. However, the performance of SBO methods depends critically on the quality of the guiding surrogate. In EGO the surrogate type is usually fixed to Kriging, even though this may not be optimal for all problems. In this paper the authors propose to extend the well-known EGO method with an automatic surrogate model type selection framework that is able to dynamically select the best model type (including hybrid ensembles) given the data available so far. Hence, the expected improvement criterion is always based on the best approximation available at each step of the optimization process. The approach is demonstrated on a structural optimization problem, namely reducing the stress on a truss-like structure. Results show that the proposed algorithm consistently finds better optima than traditional Kriging-based infill optimization.
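
    As a rough illustration of the idea (not the authors' implementation), the sketch below re-selects the surrogate at each iteration by cross-validation and then maximizes expected improvement. The candidates are narrowed here to Gaussian processes with different kernels so that the predictive variance needed by the EI criterion is available; the paper's framework spans heterogeneous model types and hybrid ensembles.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic
    from sklearn.model_selection import cross_val_score

    def ego_step(X, y, bounds, rng):
        """One EGO iteration: re-select the surrogate type on the data
        gathered so far, then maximize expected improvement (EI).
        X: (n, d) evaluated designs, y: (n,) responses, bounds: (d, 2)."""
        candidates = [GaussianProcessRegressor(kernel=k, normalize_y=True)
                      for k in (RBF(), Matern(nu=1.5), RationalQuadratic())]
        scores = [cross_val_score(m, X, y, cv=min(5, len(X)),
                                  scoring="neg_mean_squared_error").mean()
                  for m in candidates]
        model = candidates[int(np.argmax(scores))].fit(X, y)

        # EI over a random candidate set inside the box bounds.
        Xc = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2048, len(bounds)))
        mu, sd = model.predict(Xc, return_std=True)
        sd = np.maximum(sd, 1e-12)
        z = (y.min() - mu) / sd
        ei = sd * (z * norm.cdf(z) + norm.pdf(z))
        return Xc[int(np.argmax(ei))]   # next point for the expensive code
    ```

    An initial design of experiments of a handful of points is assumed before the loop starts, as in standard EGO.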

    Inverse modelling of an aneurysm's stiffness using surrogate-based optimization and fluid-structure interaction simulations

    Characterization of the mechanical properties of arterial tissues is highly relevant. In this work, we apply an inverse modelling approach to a model accounting for an aneurysm and the distal part of the circulation, which can be modified using two independent stiffness parameters. For given values of these parameters, the position of the arterial wall as a function of time is calculated using a forward simulation that takes the fluid-structure interaction (FSI) into account. Using this forward simulation, the correct values of the stiffness parameters are obtained by minimizing a cost function, defined as the difference between the forward simulation and a measurement. The minimization is performed by means of surrogate-based optimization using a Kriging model combined with the expected improvement infill criterion. The results show that the stiffness parameters converge to the correct values, both for a zero-dimensional and for a three-dimensional model of the aneurysm.
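
    The cost function being minimized admits a compact sketch; `forward_fsi` and the measured time series below are hypothetical interfaces, since the abstract does not fix them. A typical choice is a least-squares mismatch:

    ```python
    import numpy as np

    def cost(stiffness_params, forward_fsi, measured_wall_position):
        """Least-squares mismatch between the FSI forward simulation and a
        measured wall-position time series, as a function of the two
        stiffness parameters (hypothetical interfaces)."""
        simulated = forward_fsi(stiffness_params)   # wall position over time
        return np.sum((simulated - measured_wall_position) ** 2)
    ```

    This scalar cost is then treated as the expensive black box and minimized with the Kriging model and the expected improvement infill criterion, much like the EGO sketch above.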

    mfEGRA: Multifidelity Efficient Global Reliability Analysis through Active Learning for Failure Boundary Location

    This paper develops mfEGRA, a multifidelity active learning method that uses data-driven, adaptively refined surrogates for failure boundary location in reliability analysis. The work addresses the prohibitive cost of reliability analysis via Monte Carlo sampling for expensive-to-evaluate high-fidelity models by using cheaper-to-evaluate approximations of the high-fidelity model. The method builds on Efficient Global Reliability Analysis (EGRA), a surrogate-based method that uses adaptive sampling to refine Gaussian process surrogates for failure boundary location with a single-fidelity model. Our method introduces a two-stage adaptive sampling criterion that uses a multifidelity Gaussian process surrogate to leverage multiple information sources of different fidelities. The method combines the expected feasibility criterion from EGRA with a one-step lookahead information gain to refine the surrogate around the failure boundary. The computational savings from mfEGRA depend on the discrepancy between the different models and on the cost of evaluating them relative to the high-fidelity model. We show that accurate estimation of reliability using mfEGRA leads to computational savings of approximately 46% for an analytic multimodal test problem and 24% for a three-dimensional acoustic horn problem, compared to single-fidelity EGRA. We also show the effect of using a priori drawn Monte Carlo samples in the implementation for the acoustic horn problem, where mfEGRA leads to computational savings of 45% for the three-dimensional case and 48% for a rarer-event four-dimensional case, compared to single-fidelity EGRA.
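
    For reference, the single-fidelity expected feasibility function from EGRA, which mfEGRA combines with its one-step lookahead information-gain stage, can be sketched as below. The multifidelity criterion itself is not reproduced here, and the tolerance band eps = 2*sigma follows common EGRA practice.

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_feasibility(mu, sigma, z_bar=0.0):
        """Expected feasibility function (EFF) of single-fidelity EGRA.
        mu, sigma: GP posterior mean/std of the limit state G at a point;
        z_bar: threshold defining the failure boundary G(x) = z_bar.
        Large EFF flags points predicted to lie near the boundary with
        high surrogate uncertainty, so they are sampled next."""
        eps = 2.0 * sigma                   # tolerance band around z_bar
        t, tl, th = [(v - mu) / sigma
                     for v in (z_bar, z_bar - eps, z_bar + eps)]
        return ((mu - z_bar) * (2 * norm.cdf(t) - norm.cdf(tl) - norm.cdf(th))
                - sigma * (2 * norm.pdf(t) - norm.pdf(tl) - norm.pdf(th))
                + eps * (norm.cdf(th) - norm.cdf(tl)))
    ```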

    Multifidelity Uncertainty Propagation via Adaptive Surrogates in Coupled Multidisciplinary Systems

    Fixed point iteration is a common strategy to handle interdisciplinary coupling within a feedback-coupled multidisciplinary analysis. For each coupled analysis, this requires a large number of high-fidelity disciplinary simulations to resolve the interactions between the disciplines. When embedded within an uncertainty analysis loop (e.g., with Monte Carlo sampling over uncertain parameters), the number of high-fidelity disciplinary simulations quickly becomes prohibitive, because each sample requires a fixed point iteration and the uncertainty analysis typically involves thousands or even millions of samples. This paper develops a method for uncertainty quantification in feedback-coupled systems that leverages adaptive surrogates to reduce the number of samples for which fixed point iteration is needed. The multifidelity coupled uncertainty propagation method is an iterative process that uses surrogates to approximate the coupling variables and adaptive sampling strategies to refine the surrogates. The adaptive sampling strategies explored in this work are residual error, information gain, and weighted information gain. The surrogate models are adapted in a way that does not compromise the accuracy of the uncertainty analysis relative to the original coupled high-fidelity problem, as shown through a rigorous convergence analysis.

    United States Army Research Office, Multidisciplinary University Research Initiative (Award FA9550-15-1-0038).
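
    A bare-bones Gauss-Seidel fixed point iteration for a two-discipline feedback loop, the operation whose count the method reduces, looks as follows; the discipline solvers `d1` and `d2` are placeholders. In the paper's method, most Monte Carlo samples would instead evaluate surrogates of the coupling variables, with this loop run only where the adaptive sampling criteria demand it.

    ```python
    import numpy as np

    def fixed_point_coupled(d1, d2, u2_init, tol=1e-8, max_iter=100):
        """Resolve the feedback coupling u1 = d1(u2), u2 = d2(u1) by
        Gauss-Seidel fixed point iteration (hypothetical interfaces)."""
        u2 = np.asarray(u2_init, dtype=float)
        for _ in range(max_iter):
            u1 = d1(u2)                  # discipline 1 high-fidelity solve
            u2_new = d2(u1)              # discipline 2 high-fidelity solve
            if np.linalg.norm(u2_new - u2) < tol:   # coupling converged
                return u1, u2_new
            u2 = u2_new
        raise RuntimeError("fixed point iteration did not converge")
    ```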

    Performance study of multi-fidelity gradient enhanced kriging

    Multi-fidelity surrogate modelling offers an efficient way to approximate computationally expensive simulations. In particular, Kriging-based surrogate models are popular for approximating deterministic data. In this work, the performance of Kriging is investigated when multi-fidelity gradient data are introduced along with multi-fidelity function data to approximate computationally expensive black-box simulations. To achieve this, the recursive CoKriging formulation is extended by incorporating multi-fidelity gradient information. This approach, denoted Gradient-Enhanced recursive CoKriging (GECoK), is first applied to two analytical problems. As expected, results from the analytical benchmark problems show that additional gradient information of different fidelities can significantly improve the accuracy of the Kriging model. Moreover, GECoK provides a better approximation even when the gradient information is only partially available. A further comparison between CoKriging, Gradient-Enhanced Kriging (GEK), and GECoK highlights the respective advantages of employing single- and multi-fidelity gradient data. Finally, GECoK is applied to two real-life examples.
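
    For orientation, GECoK builds on the autoregressive structure underlying (recursive) CoKriging; the following is a sketch of that structure and of how gradients enter, not the paper's full recursive derivation.

    ```latex
    % Autoregressive two-fidelity structure: the high-fidelity process is a
    % scaled low-fidelity process plus an independent discrepancy GP.
    % Differentiating it shows how gradient observations at either fidelity
    % enter, since the derivative of a Gaussian process is again a GP.
    \begin{align}
      Z_{\mathrm{hi}}(x) &= \rho\, Z_{\mathrm{lo}}(x) + \delta(x),
        \qquad \delta \perp Z_{\mathrm{lo}}, \\
      \frac{\partial Z_{\mathrm{hi}}}{\partial x_i}(x)
        &= \rho\, \frac{\partial Z_{\mathrm{lo}}}{\partial x_i}(x)
         + \frac{\partial \delta}{\partial x_i}(x).
    \end{align}
    ```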