
    An Automatic Medium to High Fidelity Low-Thrust Global Trajectory Toolchain; EMTG-GMAT

    Solving the global optimization, low-thrust, multiple-flyby interplanetary trajectory problem with high-fidelity dynamical models requires an unreasonable amount of computational resources. A better approach, demonstrated in this paper, is a multi-step process whereby the problem is first solved at lower fidelity and that solution is used as an initial guess for a higher-fidelity solver. The framework presented in this work uses two tools developed by NASA Goddard Space Flight Center: the Evolutionary Mission Trajectory Generator (EMTG) and the General Mission Analysis Tool (GMAT). EMTG is a medium to medium-high fidelity low-thrust interplanetary global optimization solver, which now has the capability to automatically generate GMAT script files for seeding a high-fidelity solution using GMAT's local optimization capabilities. A discussion of the dynamical models as well as the thruster and power modeling for both EMTG and GMAT is given in this paper. Current capabilities are demonstrated with examples that highlight the toolchain's ability to efficiently solve the difficult low-thrust global optimization problem with little human intervention.
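    The seed-and-refine pattern this abstract describes generalizes beyond EMTG and GMAT. The sketch below shows the idea with generic SciPy optimizers; both objective functions are hypothetical stand-ins, not EMTG/GMAT dynamics, and the script is only a minimal illustration of the low-to-high fidelity handoff.

        # Stage 1: global search on a cheap low-fidelity model.
        # Stage 2: local refinement on the expensive high-fidelity model,
        # seeded with the coarse solution (as EMTG seeds GMAT).
        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        def low_fidelity(x):
            # cheap analytic stand-in for the real dynamics (placeholder)
            return np.sum((x - 1.0) ** 2) + 0.3 * np.sin(5.0 * x).sum()

        def high_fidelity(x):
            # "expensive" model: same landscape plus terms the cheap model omits
            return low_fidelity(x) + 0.1 * np.sum(x ** 4)

        bounds = [(-3.0, 3.0)] * 4
        coarse = differential_evolution(low_fidelity, bounds, seed=0)
        fine = minimize(high_fidelity, coarse.x, method="L-BFGS-B", bounds=bounds)
        print(fine.x, fine.fun)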

    A Generalized Method for Efficient Global Optimization of Antenna Design

    Efficiency improvement is of great significance for simulation-driven antenna design optimization methods based on evolutionary algorithms (EAs). The two main efficiency enhancement methods exploit data-driven surrogate models and/or multi-fidelity simulation models to assist EAs. However, optimization methods based on the latter either need ad hoc low-fidelity model setup or have difficulties handling problems with more than a few design variables, which is a main barrier for industrial applications. To address this issue, a generalized three-stage multi-fidelity-simulation-model-assisted antenna design optimization framework is proposed in this paper. The main ideas include the introduction of a novel data mining stage that handles the discrepancy between simulation models of different fidelities, and a surrogate-model-assisted combined global and local search stage for efficient optimization based on the high-fidelity simulation model. This framework is then applied to SADEA, a state-of-the-art surrogate-model-assisted antenna design optimization method, constructing SADEA-II. Experimental results indicate that SADEA-II successfully handles various discrepancies between simulation models and considerably outperforms SADEA in terms of computational efficiency while ensuring improved design quality.
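    The core mechanic of surrogate-model-assisted EAs like SADEA is prescreening: a cheap statistical model ranks offspring so only the most promising design is sent to the expensive simulation. The sketch below is a minimal single-fidelity illustration of that loop, not the published SADEA-II algorithm; the expensive_sim function and all problem sizes are hypothetical.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        rng = np.random.default_rng(0)

        def expensive_sim(x):
            # placeholder for a full-wave EM simulation of an antenna design
            return float(np.sum((x - 0.3) ** 2))

        dim, n_init = 5, 20
        X = rng.uniform(-1, 1, (n_init, dim))
        y = np.array([expensive_sim(x) for x in X])

        for gen in range(30):
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                          normalize_y=True).fit(X, y)
            best = X[np.argmin(y)]
            offspring = best + rng.normal(0, 0.2, (50, dim))    # mutate the best
            pick = offspring[np.argmin(gp.predict(offspring))]  # prescreen
            X = np.vstack([X, pick])
            y = np.append(y, expensive_sim(pick))               # one sim per gen

        print("best design:", X[np.argmin(y)], "value:", y.min())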

    An adaptive multi-fidelity optimization framework based on co-Kriging surrogate models and stochastic sampling with application to coastal aquifer management

    Surrogate modelling has been used successfully to alleviate the computational burden that results from high-fidelity numerical models of seawater intrusion in simulation-optimization routines. Nevertheless, little attention has been given to multi-fidelity modelling methods for cases where only a limited number of runs of a computationally expensive seawater intrusion model is considered affordable, a restriction that also limits single-fidelity surrogate-based optimization. In this work, a new adaptive multi-fidelity optimization framework is proposed, based on co-Kriging surrogate models and considering two model fidelities of seawater intrusion. The methodology is tailored to the needs of solving pumping optimization problems with computationally expensive constraint functions and utilizes only small high-fidelity training datasets. Results from both hypothetical and real-world optimization problems demonstrate the efficiency and practicality of the proposed framework in providing a steep improvement of the objective function, while outperforming a comprehensive single-fidelity surrogate-based optimization method. The method can also be used to locate optimal solutions in the region of the global optimum when larger high-fidelity training datasets are available.
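    The workflow the abstract describes, where the pumping objective is cheap but the seawater-intrusion constraint requires an expensive model run, can be caricatured as a surrogate-constrained infill loop. The sketch below uses a single-fidelity Gaussian process in place of the paper's co-Kriging model and an analytic intrusion function as a stand-in for the aquifer simulator; it only illustrates the shape of the loop.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(1)

        def intrusion(q):
            # stand-in for the expensive seawater-intrusion model;
            # the design is feasible when this value is <= 0
            return float(np.sum(q ** 2) - 1.0)

        def objective(q):
            # maximize total pumping = minimize its negative (cheap)
            return -np.sum(q)

        n_wells, bounds = 3, [(0.0, 1.0)] * 3
        Q = rng.uniform(0, 1, (10, n_wells))
        g = np.array([intrusion(q) for q in Q])

        for it in range(15):
            gp = GaussianProcessRegressor(normalize_y=True).fit(Q, g)
            cons = {"type": "ineq",
                    "fun": lambda q: -gp.predict(q.reshape(1, -1))[0]}
            res = minimize(objective, Q[np.argmin(g)],
                           bounds=bounds, constraints=cons)
            Q = np.vstack([Q, res.x])           # one expensive model run
            g = np.append(g, intrusion(res.x))  # per iteration

        feasible = Q[g <= 0]
        if feasible.size:
            print("best feasible pumping:", feasible.sum(axis=1).max())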

    A New Cokriging Method for Variable-Fidelity Surrogate Modeling of Aerodynamic Data

    Cokriging is a statistical interpolation method for the enhanced prediction of a less intensively sampled primary variable of interest with the assistance of intensively sampled auxiliary variables. In the geostatistics community it is referred to as two-variable or multi-variable kriging. In this paper, a new cokriging method is proposed and used for variable-fidelity surrogate modeling of aerodynamic data obtained with an expensive high-fidelity CFD code, assisted by data computed with cheaper lower-fidelity codes, by gradients computed with an adjoint version of the high-fidelity CFD code, or both. A self-contained derivation as well as the numerical implementation of this new cokriging method is presented, and a comparison with the autoregressive model of Kennedy and O’Hagan is discussed. The developed cokriging method is validated against an analytical problem and applied to construct global approximation models of the aerodynamic coefficients as well as the drag polar of an RAE 2822 airfoil based on sampled CFD data. The numerical examples show that it is efficient, robust, and practical for the surrogate modeling of aerodynamic data based on a set of CFD methods with varying degrees of fidelity and computational expense. It can potentially be applied in efficient CFD-based aerodynamic analysis and design optimization of aircraft.
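    For readers unfamiliar with the autoregressive model of Kennedy and O’Hagan that the paper compares against, the following is a minimal two-fidelity sketch: y_hi(x) is approximated as rho * y_lo(x) + delta(x), with delta fitted by a Gaussian process on the residuals at the high-fidelity sites. The Forrester test functions stand in for CFD codes of two fidelities; this is not the paper's own cokriging derivation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def f_hi(x):   # "expensive" analytic benchmark (Forrester function)
            return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

        def f_lo(x):   # biased cheap approximation of f_hi
            return 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

        X_lo = np.linspace(0, 1, 11).reshape(-1, 1)    # many cheap samples
        X_hi = np.array([[0.0], [0.4], [0.6], [1.0]])  # few expensive samples

        gp_lo = GaussianProcessRegressor(normalize_y=True)
        gp_lo.fit(X_lo, f_lo(X_lo).ravel())

        # scaling factor rho by least squares, then a GP on the discrepancy
        y_hi = f_hi(X_hi).ravel()
        y_lo_at_hi = gp_lo.predict(X_hi)
        rho = float(y_lo_at_hi @ y_hi / (y_lo_at_hi @ y_lo_at_hi))
        gp_delta = GaussianProcessRegressor(normalize_y=True)
        gp_delta.fit(X_hi, y_hi - rho * y_lo_at_hi)

        def predict_hi(X):
            return rho * gp_lo.predict(X) + gp_delta.predict(X)

        X_test = np.linspace(0, 1, 5).reshape(-1, 1)
        print(np.c_[f_hi(X_test).ravel(), predict_hi(X_test)])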

    Multifidelity modeling for the design of re-entry capsules

    The design and optimization of space systems present many challenges associated with the variety of physical domains involved and their coupling. A practical example is the case of satellites and space vehicles designed to re-enter the atmosphere upon completion of their mission [1]. For these systems, aerodynamic and thermodynamic phenomena are strongly coupled and interact with structural dynamics and vibrations, the chemical non-equilibrium of the atmosphere, the specific re-entry trajectory, and the geometrical shape of the body. Blunt bodies are common geometric configurations for planetary re-entry (e.g. the Apollo Command Module and the Mars Viking probe). These geometries provide the high aerodynamic drag needed to decelerate the vehicle from orbital speeds, together with enough aerodynamic lift for trajectory control. The large radius of curvature of the body's nose reduces the heat flux caused by high-temperature effects behind the shock wave. The design and optimization of these bodies would largely benefit from accurate analyses of the re-entry flow field through high-fidelity representations of the aerodynamic and aerothermodynamic phenomena. However, such high-fidelity representations usually take the form of computer models for the numerical solution of PDEs (e.g. the Navier-Stokes equations or heat equations), which require significant computational effort and are commonly excluded from preliminary multidisciplinary design and trade-off analysis. This work addresses the integration of high-fidelity computer-based simulations into the multidisciplinary design of space systems conceived for controlled re-entry into the atmosphere. In particular, we discuss the use of multifidelity methods to obtain efficient aerothermodynamic models of re-entering vehicles. Multifidelity approaches accelerate the exploration and evaluation of design alternatives through the use of different representations of a physical system or process, each characterized by a different level of fidelity and associated computational expense [2, 3]. By efficiently combining inexpensive information from low-fidelity models with a principled selection of a few expensive simulations, multifidelity methods make it possible to incorporate costly high-fidelity information into multidisciplinary design analysis and optimization [4-7]. This presentation proposes a multifidelity Bayesian optimization framework leveraging surrogate models in the form of Gaussian processes, which are progressively updated through acquisition functions based on expected improvement. We introduce a novel formulation of the multifidelity expected improvement including both data-driven and physics-informed utility functions, implemented for the design optimization of an Orion-like atmospheric re-entry vehicle. The results show that the proposed formulation gives better optimization results (a lower minimum) than single-fidelity Bayesian optimization based on low-fidelity simulations only. This outcome suggests that the multifidelity expected improvement algorithm effectively enriches the information content with the high-fidelity data. Moreover, the computational cost of 100 iterations of our multifidelity strategy is markedly lower than that of 6 iterations of a single-fidelity framework invoking the high-fidelity model.
    References
    [1] Gallais, P., Atmospheric Re-Entry Vehicle Mechanics, Springer Science and Business Media, 2007.
    [2] Peherstorfer, B., Willcox, K., and Gunzburger, M., "Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization," SIAM Review, Vol. 60, 2018, pp. 550-591.
    [3] Fernandez-Godino, G., Park, C., Kim, N., and Haftka, R., "Issues in Deciding Whether to Use Multifidelity Surrogates," AIAA Journal, 2019, p. 16.
    [4] Mainini, L., and Maggiore, P., "A Multifidelity Approach to Aerodynamic Analysis in an Integrated Design Environment," AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, AIAA, 2012.
    [5] Goertz, S., Zimmermann, R., and Han, Z. H., "Variable-Fidelity and Reduced-Order Models for Aero Data for Loads Predictions," Computational Flight Testing, 2013, pp. 99-112.
    [6] Meliani, M., Bartoli, N., Lefebvre, T., Bouhlel, M. A., Martins, J., and Morlier, J., "Multi-Fidelity Efficient Global Optimization: Methodology and Application to Airfoil Shape Design," AIAA Aviation 2019 Forum, AIAA, 2019.
    [7] Beran, P., Bryson, D., Thelen, A., Diez, M., and Serani, A., "Comparison of Multi-Fidelity Approaches for Military Vehicle Design," AIAA Aviation 2020 Forum, AIAA, 2020.
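    The acquisition function named in this abstract, expected improvement, has a standard closed form under a Gaussian process surrogate. The sketch below implements that single-fidelity textbook version in a toy 1-D loop; the paper's multifidelity, physics-informed variant is not reproduced here, and the objective is a hypothetical stand-in.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor

        def expected_improvement(gp, X, y_best):
            mu, sigma = gp.predict(X, return_std=True)
            sigma = np.maximum(sigma, 1e-12)    # guard against zero variance
            z = (y_best - mu) / sigma           # improvement in standard units
            return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        f = lambda x: np.sin(3 * x) + 0.5 * x   # stand-in for a simulation
        X = np.array([[0.2], [1.5], [2.8]])
        y = f(X).ravel()
        grid = np.linspace(0, 3, 301).reshape(-1, 1)

        for it in range(10):
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            x_next = grid[np.argmax(expected_improvement(gp, grid, y.min()))]
            X = np.vstack([X, x_next])
            y = np.append(y, f(x_next)[0])

        print("best x:", X[np.argmin(y)].item(), "f:", y.min())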

    Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis, subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct-evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration, or both. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive as dimensionality increases. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables grows.
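    The database-model coupling described here can be illustrated with an off-the-shelf interpolant. In the sketch below, a radial basis function surrogate built over previously computed simulation points stands in for the database approximation, and a global optimizer searches the cheap model instead of the simulation; the simulation function and sample sizes are hypothetical.

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(2)

        def simulation(x):
            # placeholder for a high-fidelity flow solver
            return float(np.sum(x ** 2) + np.prod(np.cos(x)))

        X_db = rng.uniform(-2, 2, (200, 3))     # the precomputed database
        y_db = np.array([simulation(x) for x in X_db])

        model = RBFInterpolator(X_db, y_db, smoothing=1e-8)

        # search the cheap database model instead of the simulation itself
        res = differential_evolution(lambda x: model(x.reshape(1, -1))[0],
                                     bounds=[(-2, 2)] * 3, seed=2)
        print("surrogate optimum:", res.x, "true value:", simulation(res.x))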

    Global Optimization: Software and Applications

    Mathematical models are a gateway into both theoretical and experimental understanding. Sometimes, however, these models need certain parameters to be established in order to obtain optimal behaviour or value. This is done with an optimization method that finds the parameters yielding optimal behaviour, as measured by an objective function to be minimized (or maximized). Global optimization is a branch of optimization that takes a model and determines the global minimum for a given domain. Global optimization can become extremely challenging when the domain yields multiple local minima. Moreover, the complexity of the mathematical model, and the consequent length of the calculations, increases the time required for the solver to find the solution. To address these challenges, two software packages were developed to aid a solver in optimizing a black-box objective function. The first, called Computefarm, is a distributed local-resource computing package that parallelizes the iteration step of a solver by distributing objective function evaluations to idle computers. The second is an Optimization Database used to monitor the global optimization process by storing information on each objective function evaluation along with any extra information about the objective function; it also prevents data from being lost when the optimization process fails. In this thesis, both Computefarm and the Optimization Database are used in the context of two particular applications. The first application is quantum error correction gate design. Quantum computers cannot rely on software to correct errors because of the quantum mechanical properties that allow non-deterministic behaviour in the quantum bit, meaning the quantum bits can change state between 0 and 1 at any point in time. There are various ways to stabilize the quantum bits; however, errors can still occur in the system of quantum bits and in the system that measures their states. Error correction gates are therefore designed to correct for these different types of errors and ensure a high fidelity in the overall circuit. A simulation of a quantum error correction gate is used to determine the properties of the components needed to correct errors in the circuit of the qubit system. The gate designs for the three-qubit and four-qubit systems are obtained by solving a feasibility problem for the intrinsic fidelity (error-correction percentage) to be above the prescribed 99.99% threshold. The Optimization Database is used with MATLAB's GlobalSearch algorithm to obtain the results for the three-qubit and four-qubit systems. The approach used in this thesis yields a faster high-fidelity (above 99.99%) three-qubit gate time than obtained previously, and obtains a solution for a fast high-fidelity four-qubit gate time. The second application is the rational design of materials, in which global optimization is used to find stable crystal structures of chemical compositions. To predict crystal structures, the enthalpy that determines the stability of a structure is minimized. The Optimization Database stores information on the obtained structures, later used to identify the crystal structures, and Computefarm speeds up the global optimization process. Ten crystal structures for carbon and five for silicon dioxide, including the stable structures graphite (carbon) and cristobalite (silicon dioxide), are obtained using Global Convergence Particle Swarm Optimization. These results enable further research on the stable and meta-stable crystal structures to understand properties such as hardness and thermal conductivity.
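    Computefarm's core idea, parallelizing a solver's iteration step by farming out objective evaluations, can be mimicked on one machine with a process pool. The sketch below is only a toy: multiprocessing stands in for the distributed pool of idle computers, the objective is a placeholder, and the evolutionary step is deliberately crude.

        import numpy as np
        from multiprocessing import Pool

        def objective(x):
            # placeholder black-box objective (e.g., a gate-fidelity simulation)
            return float(np.sum((x - 0.7) ** 2))

        def evolve(pop, scores, rng):
            # keep the better half, mutate it to refill the population
            keep = pop[np.argsort(scores)[: len(pop) // 2]]
            return np.vstack([keep, keep + rng.normal(0, 0.1, keep.shape)])

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            pop = rng.uniform(-1, 1, (16, 4))
            with Pool() as pool:
                for gen in range(20):
                    scores = np.array(pool.map(objective, list(pop)))  # farmed out
                    pop = evolve(pop, scores, rng)
                scores = np.array(pool.map(objective, list(pop)))
            print("best:", pop[np.argmin(scores)], scores.min())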

    A Latent Variable Approach for Non-Hierarchical Multi-Fidelity Adaptive Sampling

    Multi-fidelity (MF) methods are gaining popularity for enhancing surrogate modeling and design optimization by incorporating data from various low-fidelity (LF) models. While most existing MF methods assume a fixed dataset, adaptive sampling methods that dynamically allocate resources among fidelity models can achieve higher efficiency in exploring and exploiting the design space. However, most existing MF methods either rely on a hierarchical assumption about the fidelity levels or fail to capture the intercorrelation between fidelity levels and use it to quantify the value of future samples and guide the adaptive sampling. To address this hurdle, we propose a framework hinged on a latent embedding of the different fidelity models and the associated pre-posterior analysis to explicitly utilize their correlation for adaptive sampling. In this framework, each infill sampling iteration includes two steps: we first identify the location of interest with the greatest potential improvement using the high-fidelity (HF) model, then we search across all fidelity levels for the next sample that maximizes the improvement per unit cost at the location identified in the first step. This is made possible by a single Latent Variable Gaussian Process (LVGP) model that maps the different fidelity models into an interpretable latent space to capture their correlations without assuming hierarchical fidelity levels. The LVGP enables us to assess, via pre-posterior analysis, how LF sampling candidates will affect the HF response, and to determine the next sample with the best benefit-to-cost ratio. Through test cases, we demonstrate that the proposed method outperforms the benchmark methods in both MF global fitting (GF) and Bayesian optimization (BO) problems in convergence rate and robustness. Moreover, the method offers the flexibility to switch between GF and BO by simply changing the acquisition function.
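    As a rough schematic of the two-step infill the abstract outlines: step 1 picks the most promising location from the high-fidelity surrogate's expected improvement; step 2 picks the fidelity with the best estimated benefit per unit cost there. The correlation and cost numbers below are invented placeholders, and the scalar benefit model is a crude stand-in for the paper's LVGP pre-posterior analysis.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor

        def ei(gp, X, y_best):
            mu, s = gp.predict(X, return_std=True)
            s = np.maximum(s, 1e-12)
            z = (y_best - mu) / s
            return (y_best - mu) * norm.cdf(z) + s * norm.pdf(z)

        f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)
        cost = {"hi": 1.0, "lo": 0.2}   # assumed evaluation costs
        corr = {"hi": 1.0, "lo": 0.7}   # assumed HF/LF correlation

        X_hi = np.array([[0.1], [0.5], [0.9]])
        y_hi = f_hi(X_hi).ravel()
        grid = np.linspace(0, 1, 201).reshape(-1, 1)

        gp_hi = GaussianProcessRegressor(normalize_y=True).fit(X_hi, y_hi)
        x_star = grid[np.argmax(ei(gp_hi, grid, y_hi.min()))]  # step 1

        # step 2: a correlated LF sample buys roughly corr^2 of the HF
        # variance reduction at a fraction of the cost
        gain = {k: corr[k] ** 2 / cost[k] for k in cost}
        print("sample fidelity", max(gain, key=gain.get), "at x =", x_star.item())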

    A MATHEMATICAL AND COMPUTATIONAL FRAMEWORK FOR MULTIFIDELITY DESIGN AND ANALYSIS WITH COMPUTER MODELS

    A multifidelity approach to design and analysis for complex systems seeks to exploit optimally all available models and data. Existing multifidelity approaches generally attempt to calibrate low-fidelity models or replace low-fidelity analysis results using data from higher-fidelity analyses. This paper proposes a fundamentally different approach that uses the tools of estimation theory to fuse together information from multifidelity analyses, resulting in a Bayesian-based approach to mitigating risk in complex system design and analysis. This approach is combined with maximum entropy characterizations of model discrepancy to represent epistemic uncertainties due to modeling limitations and model assumptions. Mathematical interrogation of the uncertainty in system output quantities of interest is achieved via a variance-based global sensitivity analysis, which identifies the primary contributors to output uncertainty and thus provides guidance for adaptation of model fidelity. The methodology is applied to multidisciplinary design optimization and demonstrated on a wing-sizing problem for a high altitude, long endurance vehicle. (United States Air Force Office of Scientific Research, Small Business Technology Transfer Program, Contract FA9550-09-C-0128.)
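    The fusion step at the heart of this estimation-theoretic approach can be reduced, for a single scalar quantity of interest, to inverse-variance weighting of two unbiased estimates. The numbers below are purely illustrative; each variance is meant to include the model-discrepancy term the paper characterizes by maximum entropy.

        mu_lo, var_lo = 4.2, 0.8 ** 2   # low-fidelity estimate and its variance
        mu_hi, var_hi = 3.9, 0.3 ** 2   # high-fidelity estimate and its variance

        w_lo, w_hi = 1 / var_lo, 1 / var_hi
        mu_fused = (w_lo * mu_lo + w_hi * mu_hi) / (w_lo + w_hi)
        var_fused = 1 / (w_lo + w_hi)
        # the fused variance is below either input variance
        print(mu_fused, var_fused)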

    An adaptive sampling and weighted ensemble of surrogate models for high dimensional global optimization problems

    Modern engineering design optimization relies heavily on high-fidelity computer simulations. Even though the computing power available has increased drastically, design optimization based on high-fidelity simulations is still time consuming and often impractical. Surrogate modeling is a technique for replacing such high-fidelity simulations. This paper presents a novel approach, named weighted ensemble of surrogates (WESO), for computationally intensive optimization problems. The focus is on multi-modal functions, with the aim of identifying their global optima with relatively few function evaluations. WESO's search mechanism proceeds in two steps: explore and fit. The “explore” step covers the whole design region by generating sample points (agents) with Latin hypercube sampling (LHS) to gain prior knowledge about the function of interest (learning phase). The “fit” step trains and fits a weighted ensemble of surrogate models over the promising region (training phase) to mimic the computationally intensive true function and replace it with a cheap surrogate. The surrogates are then used to select candidate points in the decision space at which the true objective and constraint functions are evaluated. Weights are then determined and assigned, and an ensemble of surrogates is constructed from the candidate sample points, over which optimization is carried out. WESO has been evaluated on classical benchmark functions embedded in larger-dimensional spaces. It was also tested on the aerodynamic shape optimization of turbomachinery airfoils to demonstrate its ability to handle computationally intensive optimization problems. The results show to what extent combinations of models can perform better than single surrogate models, and provide insights into the scalability and robustness of the approach. WESO successfully identifies near-global solutions faster than other classical global optimization algorithms.
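    The weighting scheme in the “fit” step can be sketched with standard tooling: several surrogates are fitted to the same LHS sample and combined with weights inversely proportional to their cross-validation error. This is only a minimal illustration in the spirit of WESO, not the published algorithm; the test function and model choices are hypothetical.

        import numpy as np
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        def f(X):
            # placeholder expensive multi-modal function
            return np.sum(X ** 2, axis=1) + np.sin(5 * X).sum(axis=1)

        sampler = qmc.LatinHypercube(d=4, seed=4)
        X = qmc.scale(sampler.random(60), [-2] * 4, [2] * 4)  # "explore": LHS
        y = f(X)

        models = [GaussianProcessRegressor(normalize_y=True),
                  KNeighborsRegressor(n_neighbors=5),
                  SVR(C=10.0)]
        errors = [-cross_val_score(m, X, y, cv=5,
                                   scoring="neg_mean_squared_error").mean()
                  for m in models]
        w = np.array([1.0 / e for e in errors])
        w /= w.sum()
        for m in models:
            m.fit(X, y)

        def ensemble(Xq):
            # the cheap stand-in for the true function ("fit" step output)
            return sum(wi * m.predict(Xq) for wi, m in zip(w, models))

        print("model weights:", w)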