149 research outputs found
Evolutionary Multi-Objective Aerodynamic Design Optimization Using CFD Simulation Incorporating Deep Neural Network
An evolutionary multi-objective aerodynamic design optimization method using
computational fluid dynamics (CFD) simulations incorporating a deep neural
network (DNN) to reduce the required computational time is proposed. In this
approach, the DNN infers the flow field from the grid data of a design and the
CFD simulation starts from the inferred flow field to obtain the steady-state
flow field with a smaller number of time integration steps. To show the
effectiveness of the proposed method, a multi-objective aerodynamic airfoil
design optimization is demonstrated. The results indicate that the
computational time for design optimization is reduced to 57.9% of the baseline when running on 96 processor cores
Optimal Energy-Driven Aircraft Design Under Uncertainty
Aerodynamic shape design robust optimization is gaining popularity in the aeronautical industry, as it provides optimal solutions that do not deteriorate excessively in the presence of uncertainties. Several approaches exist to quantify uncertainty, and this dissertation deals with the use of risk measures, particularly the Value at Risk (VaR) and the Conditional Value at Risk (CVaR). The calculation of these measures relies on the construction of the Empirical Cumulative Distribution Function (ECDF). Estimating the ECDF with Monte Carlo sampling can require many samples, especially if good accuracy is needed on the tails of the probability distribution. Furthermore, suppose the quantity of interest (QoI) requires significant computational effort, as in this dissertation, where one has to resort to Computational Fluid Dynamics (CFD) methods. In that case, it becomes imperative to introduce techniques that reduce the number of samples needed or speed up the QoI evaluations while maintaining the same accuracy. Therefore, this dissertation focuses on investigating methods for reducing the computational cost required to perform optimization under uncertainty. Here, two cooperating approaches are introduced: speeding up the CFD evaluations and approximating the statistical measures.
Specifically, the CFD evaluation is sped up by employing a far-field approach, capable of providing better estimations of aerodynamic forces on coarse grids with respect to a classical near-field approach. The advantages and critical points of the implementation of this method are explored in viscous and inviscid test cases.
On the other hand, the approximation of the statistical measure is performed using either a gradient-based method or a surrogate-based approach. Notably, the gradient-based method uses adjoint field solutions to drastically reduce the time required to evaluate the gradients through CFD. Both methods are used to solve the shape optimization of the central section of a Blended Wing Body under uncertainty. Moreover, a multi-fidelity surrogate-based optimization is used for the robust design of a propeller blade.
Finally, additional research work documented in this dissertation focuses on utilizing an optimization algorithm that mixes integer and continuous variables for the robust optimization of High Lift Devices
Efficient use of partially converged simulations in evolutionary optimization
For many real-world optimization problems, evaluating a solution involves running a computationally expensive simulation model. This makes it challenging to use evolutionary algorithms which usually have to evaluate thousands of solutions before converging. On the other hand, in many cases, even a prematurely stopped run of the simulation may serve as a cheaper, albeit less accurate (low fidelity), estimate of the true fitness value. For evolutionary optimization, this opens up the opportunity to decide about the simulation run length for each individual. In this paper, we propose a mechanism that is capable of learning the appropriate simulation run length for each solution. To test our approach, we propose two new benchmark problems, one simple artificial benchmark function and one benchmark based on a computational fluid dynamics simulation scenario to design a toy submarine. As we demonstrate, our proposed algorithm finds good solutions much faster than always using the full computational fluid dynamics simulation and provides much better solution quality than a strategy of progressively increasing the fidelity level over the course of optimization
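As a rough illustration of this idea (a fixed heuristic, simpler than the learning mechanism the paper proposes), the sketch below treats a prematurely stopped run as a cheap, noisy fitness estimate and pays for the full run only when a solution might beat the incumbent. The toy objective, step counts, and margin are all invented for illustration.

```python
import random

def simulate(x, steps):
    """Toy stand-in for an expensive simulation: the running estimate of
    f(x) = x^2 gets more accurate as the number of time steps grows, so
    a short, partially converged run is a cheap low-fidelity estimate."""
    noise = 1.0 / steps            # error band shrinks with run length
    return x * x + random.uniform(-noise, noise)

def evaluate_adaptive(x, best, full_steps=1000, probe_steps=10, margin=1.0):
    """Run a cheap, partially converged probe first; pay for the full
    simulation only if the solution might beat the incumbent."""
    cheap = simulate(x, probe_steps)
    if cheap > best + margin:      # clearly worse: skip the full run
        return cheap, probe_steps
    return simulate(x, full_steps), probe_steps + full_steps

random.seed(1)
best, spent = float("inf"), 0
for _ in range(200):               # a plain random search, minimizing f
    x = random.uniform(-10, 10)
    fitness, cost = evaluate_adaptive(x, best)
    spent += cost
    best = min(best, fitness)

print(best, spent)  # spent stays well below 200 x 1000 full-run steps
```

The paper's contribution goes further by learning an appropriate run length per individual rather than using a fixed probe/full split.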
Multi-Fidelity Design Optimization of Francis Turbine Runner Blades
RÉSUMÉ
This PhD project proposes a Multi-Fidelity Design Optimization (MFDO) methodology that aims to improve the efficiency of the mechanical engineering design process. The methodology was developed to tackle the mechanical design problems of hydraulic turbine runners, but it can be used in other engineering optimization processes, especially costly ones. The MFDO approach splits the computational cost between two phases, one low-fidelity and one high-fidelity. This makes it possible to integrate the advantages of low- and high-fidelity evaluations and to balance the cost and accuracy required by each level of fidelity. While the low-fidelity phase contains the iterative optimization loop, the high-fidelity phase evaluates promising design candidates and calibrates the low-fidelity optimization. The new MFDO approach proposes a Territorial-Based Filtering Algorithm (TBFA) that connects the two levels of fidelity. This method addresses the problem that the low-fidelity optimization objective differs from that of the high-fidelity phase. This problem is common in physics-based surrogate optimizations (for example, using inviscid flow analyses instead of viscous flow evaluations). In fact, the true objective function cannot be evaluated in the low-fidelity phase because of the physics missing from those evaluations. Consequently, the dominant solutions of the low-fidelity optimization are not necessarily dominant from the perspective of the true objective. The TBFA was therefore developed to select a given number of promising candidates that are dominant within their own territories and sufficiently different from a geometric point of view.
While the objectives of the high-fidelity phase cannot be evaluated directly in the low-fidelity phase, some targets can be selected by seasoned designers from among design characteristics that are assessable and sufficiently well predicted by low-fidelity analyses. Experienced designers are accustomed to associating such low-level targets with good designs.
A large number of case studies were carried out in this project to evaluate the capabilities of the proposed MFDO methodology. To cover the different types of Francis turbine runners, three different runners were chosen. Each of them had its own design challenges that needed to be addressed. Consequently, different optimization problem formulations were investigated to find the most appropriate one for each problem at hand.
ABSTRACT
This PhD project proposes a Multi-Fidelity Design Optimization (MFDO) methodology that aims to improve the design process efficiency. This methodology has been developed to tackle hydraulic turbine runner design problems, but it can be employed in other engineering optimizations, which have costly computational design processes. The MFDO approach splits the computational burden between low- and high-fidelity phases to integrate the benefits of low- and high-fidelity evaluations, and to balance the cost and accuracy required by each level of fidelity. While the low-fidelity phase contains the iterative optimization loop, the high-fidelity phase evaluates promising design candidates and calibrates the low-fidelity optimization. The new MFDO approach proposes a flexible Territorial-Based Filtering Algorithm (TBFA) that connects the two levels of fidelity. This methodology addresses the problem that the low-fidelity optimization objective is different from the one in the high-fidelity phase. This problem is common in physics-based surrogate optimizations (e.g. using inviscid flow analyses instead of viscous flow evaluations). In fact, the real objective function is not assessable in the low-fidelity phase due to the lack of physics involved in the low-fidelity evaluations. Therefore, the dominant solutions of the low-fidelity optimization are not necessarily dominant from the real objective perspective. Hence, the TBFA has been developed to select a given number of promising candidates, which are dominant in their own territories and sufficiently different geometrically. While high-fidelity objectives cannot be directly evaluated in the low-fidelity phase, some targets can be set by experienced designers for a subset of the design characteristics, which are assessable and sufficiently well predicted by low-fidelity analyses. The designers are accustomed to informally mapping good low-level targets to overall satisfying designs.
A large number of case studies were performed in this project to evaluate the proposed MFDO capabilities. To cover different types of Francis turbine runners, three different runners were chosen. Each of them had its own special design challenges, which needed to be taken care of. Therefore, variant optimization problem formulations were investigated to find the most suitable for each problem at hand. Those formulations involved different optimization configurations built up from proper choices of objective functions, constraints, design variables, and other optimization features such as local or global exploration budgets and their portions of the overall computational resources
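The core filtering step can be sketched at toy scale. The snippet below is a simplified territorial filter inspired by, but not identical to, the TBFA described above: it greedily accepts the best low-fidelity candidates while rejecting any design that falls inside the design-space "territory" of one already selected. The objective, the radius, and all names are illustrative assumptions.

```python
import numpy as np

def territorial_filter(designs, objectives, n_select, radius):
    """Greedily pick up to n_select candidates with the best low-fidelity
    scores (lower is better), skipping any design closer than `radius`
    in design-variable space to an already selected one, so the chosen
    candidates are both promising and geometrically diverse."""
    order = np.argsort(objectives)
    selected = []
    for i in order:
        if all(np.linalg.norm(designs[i] - designs[j]) > radius
               for j in selected):
            selected.append(i)
        if len(selected) == n_select:
            break
    return selected

rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(200, 3))   # 200 candidate geometries
f = np.sum((X - 0.5) ** 2, axis=1)         # toy low-fidelity objective

picks = territorial_filter(X, f, n_select=5, radius=0.3)
print(picks)  # indices of good, mutually distant candidates
```

In the MFDO loop, only these few filtered candidates would be passed on to the expensive high-fidelity (e.g. viscous CFD) phase.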
Facilitating the Use of Optimisation in the Aerodynamic Design of Axial Compressors
There is commercial pressure to design axial compressors exhibiting high levels of performance more quickly. This is despite the performance of these machines approaching an asymptote in recent years, with further gains becoming increasingly difficult to achieve. One tool that can help is optimisation, effectively harnessing the speed of computational analysis to accelerate the design process and unlock additional performance improvements. The greatest potential for optimisation exists at the preliminary design stage; however, current methodologies struggle when applied at this early point in the design process due to inadequate problem formulations, an inability to fulfil the role of enhancing designer understanding, and a lack of high-fidelity analysis due to computational cost. The goal of this thesis is to facilitate the use of optimisation in the preliminary aerodynamic design of axial compressors by developing an improved methodology that overcomes these limitations.
The multiple dominance relations (MDR) formulation enables a larger number of performance parameters to be incorporated in a way that accurately reflects the desires of the designer. This is implemented within a Tabu Search (TS) that is capable of providing interpretable design development information to enhance designer understanding. The combined MDRTS algorithm, overcoming the limitations associated with formulation and understanding, outperforms existing methods when applied to analytic, aerofoil and six-stage axial compressor test cases, generating computational savings of up to 80%.
Multi-fidelity techniques are used to accelerate the search by conducting analysis on a "need-to-know" basis. Computational savings of over 70% are observed compared to the single-fidelity version of the algorithm across the analytic, aerofoil and six-stage axial compressor test cases, enabling high-fidelity analysis to be employed in a computationally efficient manner. The resultant methodology represents a novel and inherently flexible multi-level multi-fidelity optimisation technique.
Application to an N-stage axial compressor test case, in which the optimiser is given control over the number of stages in the machine, demonstrates the capabilities of the accelerated MDRTS approach. The complex design space is effectively navigated, generating computational savings of over 90% compared to existing methodologies and producing designs that are more likely to be of interest to the designer. Interpretable design development information is also provided for this problem to enhance designer understanding. These results show that the improved methodology successfully facilitates the use of optimisation in the preliminary aerodynamic design of axial compressors, overcoming the problems associated with formulation, understanding and speed that limit existing approaches
Multi-Fidelity Uncertainty Quantification and a Surrogate-Based Memetic Algorithm for Design Under Uncertainty
Degree type: Course-based doctorate. Examination committee: (Chair) Professor Takeshi Tsuchiya (The University of Tokyo), Professor Shinji Suzuki (The University of Tokyo), Professor Kenichi Rinoie (The University of Tokyo), Associate Professor Akira Oyama (The University of Tokyo), Associate Professor Koji Shimoyama (Tohoku University). University of Tokyo (東京大学)
A Bayesian Approach to Computer Model Calibration and Model-Assisted Design
Computer models of phenomena that are difficult or impossible to study directly are critical for enabling research and assisting design in many areas. In order to be effective, computer models must be calibrated so that they accurately represent the modeled phenomena. A rich variety of methods for computer model calibration has been developed in recent decades. Among the desiderata of such methods is a means of quantifying the uncertainty remaining after calibration, regarding both the values of the calibrated model inputs and the model outputs; Bayesian approaches to calibration have met this need. However, limitations remain. Whereas in model calibration one finds point estimates or distributions of calibration inputs in order to induce the model to reflect reality accurately, interest in a computer model often centers primarily on its use for model-assisted design, in which the goal is to find values for design inputs that induce the modeled system to approximate some target outcome. Existing Bayesian approaches are limited to the first of these two tasks. The present work develops an approach that adapts Bayesian model calibration methods for application in model-assisted design. The approach retains the benefits of Bayesian calibration in accounting for and quantifying all sources of uncertainty, and it is capable of generating a comprehensive assessment of the Pareto optimal inputs for a multi-objective optimization problem. The present work shows that this approach can serve as a method for model-assisted design using a previously calibrated system, and can also serve as a method for model-assisted design using a model that still requires calibration, accomplishing both ends simultaneously
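The two tasks can be illustrated on a deliberately tiny example (a grid-based Bayesian update on a linear toy model; this sketches the calibration-then-design idea only, not the dissertation's framework): first calibrate the model input theta from noisy observations, then search the design inputs for one whose posterior-averaged prediction hits a target outcome.

```python
import numpy as np

def model(design_x, theta):
    """Toy computer model: output is linear in the design input, with an
    unknown calibration input theta."""
    return theta * design_x

rng = np.random.default_rng(0)
true_theta, sigma = 2.0, 0.1
xs = np.linspace(0.5, 1.5, 8)
obs = model(xs, true_theta) + sigma * rng.standard_normal(xs.size)

# Calibration: posterior over theta on a grid (flat prior, Gaussian noise).
grid = np.linspace(0.0, 4.0, 2001)
loglik = np.array([-0.5 * np.sum((obs - model(xs, t)) ** 2) / sigma**2
                   for t in grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()
theta_mean = np.sum(grid * post)

# Model-assisted design: pick the design input whose prediction, averaged
# over the calibration posterior, is closest to the target outcome.
target = 3.0
cand_x = np.linspace(0.1, 3.0, 300)
pred = np.array([np.sum(post * model(x, grid)) for x in cand_x])
best_x = cand_x[np.argmin(np.abs(pred - target))]

print(theta_mean, best_x)  # calibrated mean near 2.0; design near target/theta
```

Propagating the full posterior (rather than a point estimate) into the design step is what carries the calibration uncertainty through to the design decision.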
Efficient Algorithms for Computationally Expensive Multifidelity Optimization Problems
Multifidelity optimization problems refer to a class of problems where one is presented with a physical system or mathematical model that can be represented at different levels of fidelity. The term "fidelity" refers to the accuracy of representation: higher fidelity estimates are more accurate and expensive, while lower fidelity estimates are inaccurate, albeit cheaper. Most common iterative solvers, such as those employed in computational fluid dynamics (CFD), finite element analysis (FEA), computational electromagnetics (CEM), etc., can be run with different fine/coarse meshes or residual error thresholds to yield estimates at various fidelities. In the event an optimization exercise requires their use, it is possible to invoke analysis at various fidelities for different solutions during the course of the search. Multifidelity optimization algorithms are the special class of algorithms that are able to deal with analysis at various levels of fidelity. In this thesis, two novel multifidelity optimization algorithms have been developed: the first deals with bilevel optimization problems and the second with robust optimization problems involving iterative solvers. Bilevel optimization problems are particularly challenging, as the optimum of an upper level (UL) problem is sought subject to the optimality of a nested lower level (LL) problem. Due to this inherent nesting, naive implementations consume a very significant number of UL and LL evaluations. The proposed multifidelity approach controls the rigour of the LL optimization exercise for any given UL solution during the course of the search, as opposed to undertaking an exhaustive LL optimization for every UL solution. Robust optimization problems are yet another class of problems where numerous solutions need to be assessed, since the intent is to identify solutions that both have good performance and are insensitive to unavoidable perturbations in the variable values.
Computing the latter metric requires evaluating numerous solutions in the vicinity of the given solution, and not all solutions are worthy of such computation. The proposed multifidelity approach treats partially converged simulations as lower fidelity estimates and uses them to reduce the computational overhead.
While multi-objective optimization problems have long been in existence, there have been limited attempts in the past to deal with problems where the objectives can be computed independently. For example, the weight of a structure and the maximum stress in the structure are two objectives that can be computed independently. For such classes of problems, an efficient algorithm should ideally evaluate either one or both objectives, as opposed to always evaluating both. A novel algorithm is introduced that is capable of selectively evaluating the objectives of the infill solutions. The approach exploits principles of non-dominance and sparse subset selection to facilitate decomposition and, through maximization of a probabilistic dominance (PD) measure, identifies the infill solutions. Thereafter, for each of these infill solutions, one or more objectives are evaluated based on the evaluation status of its closest neighbor and the probability of improvement along each objective.
Finally, there has been significant research interest in recent years in developing efficient algorithms for multimodal, multi-objective optimization problems (MMOPs). Such problems are particularly challenging, as there is a need to identify well distributed and well converged solutions in the objective space along with diverse solutions in the variable space. Existing algorithms for MMOPs still require a prohibitive number of function evaluations (often several thousand).
The algorithms are typically embedded with sophisticated, customized mechanisms that require additional parameters to manage the diversity and convergence in the variable and the objective spaces. A steady-state evolutionary algorithm is introduced in this thesis for solving MMOPs, with a simple design and no additional user-defined parameters that need tuning.
All the developments listed above have been studied using well established benchmarks and real-world examples. The results have been compared with existing state-of-the-art approaches to substantiate the benefits
Development of numerical procedures for turbomachinery optimization
This Doctoral Thesis deals with high-speed turbomachinery optimization and the tools employed in the optimization process, mainly the optimization algorithm, the parameterization framework and the automatic CFD-based optimization loop. Optimization itself is not just a means to improve the performance of a generic system; it can be a powerful instigator that helps gain insight into the physical phenomena behind the observed improvements.
As for the optimization engine, a novel surrogate-assisted (SA) genetic algorithm for multi-objective optimization problems, namely GeDEA-II-K, was developed. GeDEA-II-K is grounded on the cooperation between a genetic algorithm, namely GeDEA-II, and the Kriging methodology, with the aim of speeding up the optimization process by taking advantage of the surrogate model. The comparison over two- and three-objective test functions revealed the effectiveness of the GeDEA-II-K approach.
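The cooperation pattern behind such surrogate-assisted search can be sketched at toy scale. For brevity, the sketch below swaps Kriging for a Gaussian RBF interpolant and replaces the genetic operators with plain random sampling, so it shows only the generic pattern (cheap surrogate screening that spares true evaluations), not GeDEA-II-K itself; every name and constant is illustrative.

```python
import numpy as np

def expensive_f(x):
    """Stand-in for a costly CFD objective (2-D sphere function)."""
    return float(np.sum(x ** 2))

def rbf_fit(X, y, eps=1.0):
    """Gaussian RBF interpolant, used here as a simple stand-in for the
    Kriging surrogate. Returns a predictor for new query points."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = np.exp(-(eps * d) ** 2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)
    return lambda q: float(np.exp(-(eps * np.linalg.norm(X - q, axis=1)) ** 2) @ w)

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(10, 2))        # initial expensive samples
y = np.array([expensive_f(x) for x in X])

for _ in range(15):                          # surrogate-assisted loop
    surrogate = rbf_fit(X, y)
    cand = rng.uniform(-2, 2, size=(200, 2))  # cheap surrogate screening
    best = cand[np.argmin([surrogate(c) for c in cand])]
    X = np.vstack([X, best])                  # truly evaluate only the
    y = np.append(y, expensive_f(best))       # most promising candidate

print(y.min(), len(y))  # best objective found with only 25 true evaluations
```

The point of the pattern is the budget: 200 candidates are screened per generation, yet only one expensive evaluation is paid for.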
In order to carry out high-speed turbomachinery optimizations, an automatic CFD-based optimization loop built around GeDEA-II-K was constructed. The loop was realized for a UNIX/Linux cluster environment in order to exploit the computational resources of parallel computing. Among the tools, a dedicated parameterization framework for 2D airfoils and 3D blades has been designed based on the displacement field approach.
The effectiveness of both the CFD-based automatic loop and the parameterization was verified on two real-life multi-objective optimization problems: the 2D shape optimization of a supersonic compressor cascade and the 3D shape optimization of the NASA Rotor 67. To better understand the outcomes of the optimization process, a substantial section is dedicated to supersonic flows and their behavior when forced through compressor cascades.
The results obtained demonstrate the effectiveness of the optimization approach and, even more, give deep insight into the physics of supersonic flows in the high-speed turbomachinery applications that were studied
A Random Forest Assisted Evolutionary Algorithm for Data-Driven Constrained Multi-Objective Combinatorial Optimization of Trauma Systems
Many real-world optimization problems can be solved only by using a data-driven approach, simply because no analytic objective functions are available for evaluating candidate solutions. In this work, we address a class of expensive data-driven constrained multi-objective combinatorial optimization problems, where the objectives and constraints can be calculated only on the basis of a large amount of data. To solve this class of problems, we propose to use random forests and radial basis function networks as surrogates to approximate both objective and constraint functions. In addition, logistic regression models are introduced to rectify the surrogate-assisted fitness evaluations, and a stochastic ranking selection is adopted to further reduce the influence of the approximated constraint functions. Three variants of the proposed algorithm are empirically evaluated on multi-objective knapsack benchmark problems and two real-world trauma system design problems. Experimental results demonstrate that the variant using random forest models as the surrogates is effective and efficient in solving data-driven constrained multi-objective combinatorial optimization problems
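The pre-screening role of a random forest surrogate on a combinatorial problem can be sketched on a toy single-objective knapsack, using scikit-learn's RandomForestRegressor. The penalty constant, mutation rate, and instance are illustrative assumptions, and the actual algorithm above is multi-objective with RBF networks, logistic regression, and stochastic ranking on top; this shows only the surrogate-screening core.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n_items = 20
values = rng.uniform(1, 10, n_items)
weights = rng.uniform(1, 10, n_items)
capacity = 0.4 * weights.sum()

def true_fitness(x):
    """Stand-in for a data-driven objective: knapsack value, heavily
    penalised when the weight constraint is violated (maximized)."""
    v, w = values @ x, weights @ x
    return v - 100.0 * max(0.0, w - capacity)

# Archive of truly evaluated bitstrings trains the surrogate.
X = rng.integers(0, 2, size=(40, n_items)).astype(float)
y = np.array([true_fitness(x) for x in X])

for _ in range(10):
    rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    parent = X[np.argmax(y)]
    # Offspring via bit-flip mutation of the current best solution.
    kids = np.array([np.where(rng.random(n_items) < 0.1, 1 - parent, parent)
                     for _ in range(100)])
    # Surrogate pre-screening: truly evaluate only the most promising kid.
    best_kid = kids[np.argmax(rf.predict(kids))]
    X = np.vstack([X, best_kid])
    y = np.append(y, true_fitness(best_kid))

print(y.max(), len(y))  # best found, using only 50 true evaluations
```

As in the paper, the expensive (here, simulated) evaluation is reserved for candidates the surrogate ranks highest, while the mutation pool is screened for free.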