
    Multilevel decomposition approach to the preliminary sizing of a transport aircraft wing

    A multilevel/multidisciplinary optimization scheme for sizing an aircraft wing structure is described. A methodology using nonlinear programming in application to a very large engineering problem is presented, a capability made possible by the decomposition approach. Over 1300 design variables are considered for this nonlinear optimization task. In addition, a mathematical link is established coupling the detail of structural sizing to the overall system performance objective, such as fuel consumption. The scheme is implemented as a three-level system, analyzing aircraft mission performance at the top level, the total aircraft structure at the middle level, and individual stiffened wing skin cover panels at the bottom level. Numerical results show the effectiveness of the method and its good convergence characteristics.
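The three-level cascade described above can be caricatured in a few lines. In the sketch below, a top-level system search drives a middle-level structural model whose panels are sized in closed form at the bottom level; the load model, allowable stress, and fuel-burn trade are hypothetical toy quantities, not the paper's actual formulation:

```python
import math

SIGMA_ALLOW = 200.0  # hypothetical allowable stress


def size_panel(load):
    """Bottom level: smallest thickness satisfying the stress constraint
    sigma = load / t <= SIGMA_ALLOW, solved here in closed form."""
    return load / SIGMA_ALLOW


def wing_weight(span):
    """Middle level: total structural weight = sum of optimally sized panels.
    Panel loads grow with span under this toy load model (10 panels)."""
    loads = [span * (i + 1) for i in range(10)]
    return sum(size_panel(load) for load in loads)


def fuel_burn(span):
    """Top level: mission objective trading structural weight against
    induced drag, which falls with span (made-up coefficients)."""
    return 2.0 * wing_weight(span) + 50.0 / span


def top_level_search(lo=1.0, hi=10.0, n=200):
    """Coarse 1-D search over the system variable; each evaluation triggers
    the full middle/bottom-level sizing cascade."""
    return min((lo + (hi - lo) * k / n for k in range(n + 1)), key=fuel_burn)
```

Here the bottom level happens to be solvable analytically; in the actual scheme each level is itself a nonlinear programming problem, and the levels communicate through sensitivity information rather than a brute-force nested search.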

    Approximate Models and Robust Decisions

    Decisions based partly or solely on predictions from probabilistic models may be sensitive to model misspecification. Statisticians are taught from an early stage that "all models are wrong", but little formal guidance exists on how to assess the impact of model approximation on decision making, or how to proceed when optimal actions appear sensitive to model fidelity. This article presents an overview of recent developments across different disciplines to address this. We review diagnostic techniques, including graphical approaches and summary statistics, to help highlight decisions made through minimised expected loss that are sensitive to model misspecification. We then consider formal methods for decision making under model misspecification, quantifying the stability of optimal actions to perturbations of the model within a neighbourhood of model space. This neighbourhood is defined in one of two ways: first, in a strong sense, via an information (Kullback-Leibler) divergence around the approximating model; or second, via a nonparametric model extension, again centred at the approximating model, in order to `average out' over possible misspecifications. This is presented in the context of recent work in the robust control, macroeconomics and financial mathematics literature. We adopt a Bayesian approach throughout, although the methods are agnostic to this position.
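The "strong sense" neighbourhood admits a convenient dual form: the worst-case expected loss over all models within KL divergence eps of the approximating model P equals inf over t > 0 of t*log E_P[exp(L/t)] + t*eps. A minimal sketch for a discrete approximating model follows; the crude log-grid minimisation and the example values are illustrative, not from the article:

```python
import math


def worst_case_loss(losses, probs, eps):
    """Worst-case expected loss over a KL ball of radius eps around the
    discrete approximating model P = (losses, probs), computed via the dual
        inf_{t>0}  t * log E_P[exp(L / t)] + t * eps.
    At eps = 0 this recovers the nominal expected loss; as eps grows it
    moves toward the maximum loss."""
    def dual(t):
        mgf = sum(p * math.exp(l / t) for l, p in zip(losses, probs))
        return t * math.log(mgf) + t * eps

    # crude 1-D minimisation over t on a logarithmic grid (sketch only;
    # small t with large losses can overflow exp in general)
    return min(dual(math.exp(g / 10.0)) for g in range(-40, 60))
```

A diagnostic use is to plot `worst_case_loss` against `eps` for each candidate action: actions whose curves cross at small `eps` are the ones flagged as sensitive to model misspecification.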

    Quantification of airfoil geometry-induced aerodynamic uncertainties - comparison of approaches

    Uncertainty quantification in aerodynamic simulations calls for efficient numerical methods, since it is computationally expensive, especially for uncertainties caused by random geometry variations, which involve a large number of variables. This paper compares five methods, quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, and gradient-enhanced versions of Kriging, radial basis functions, and point-collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance under random perturbations of the airfoil geometry, which is parameterized by 9 independent Gaussian variables. The results show that the gradient-enhanced surrogate methods achieve better accuracy than the direct integration methods at the same computational cost.
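Of the five methods compared, quasi-Monte Carlo quadrature is the simplest to illustrate. The sketch below propagates 9 independent Gaussian variables through a hypothetical smooth response (the function and its coefficients are invented for illustration) using a Halton sequence mapped to N(0,1) with the standard-library inverse CDF:

```python
import statistics


def lift_coeff(xi):
    """Hypothetical aerodynamic response to 9 Gaussian geometry modes
    (coefficients made up for illustration; exact mean is 1.01)."""
    return 1.0 + sum(0.01 * (i + 1) * x for i, x in enumerate(xi)) + 0.01 * xi[0] ** 2


def halton(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r


def qmc_mean(n=2000):
    """Quasi-Monte Carlo estimate of E[lift_coeff]: 9-D Halton points
    (one prime base per variable) mapped to N(0,1) via the inverse CDF."""
    bases = [2, 3, 5, 7, 11, 13, 17, 19, 23]
    nd = statistics.NormalDist()
    total = 0.0
    for k in range(1, n + 1):
        xi = [nd.inv_cdf(min(max(halton(k, b), 1e-12), 1.0 - 1e-12)) for b in bases]
        total += lift_coeff(xi)
    return total / n
```

For smooth integrands like this toy response, the low-discrepancy points converge toward the exact mean (1.01 here) faster than plain Monte Carlo, which is why quasi-Monte Carlo serves as the direct-integration baseline in such comparisons.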

    Investigation of robust optimization and evidence theory with stochastic expansions for aerospace applications under mixed uncertainty

    One of the primary objectives of this research is to develop a method to model and propagate mixed (aleatory and epistemic) uncertainty in aerospace simulations using Dempster-Shafer Theory of Evidence (DSTE). In order to avoid the excessive computational cost associated with large-scale applications and the evaluation of Dempster-Shafer structures, stochastic expansions are implemented for efficient uncertainty quantification (UQ). The mixed UQ with DSTE approach was demonstrated on an analytical example and a high-fidelity computational fluid dynamics (CFD) study of transonic flow over a RAE 2822 airfoil. Another objective is to devise a DSTE-based performance assessment framework through the use of quantification of margins and uncertainties. Efficient uncertainty propagation in system design performance metrics and performance boundaries is achieved through the use of stochastic expansions. The technique is demonstrated on: (1) a model problem with non-linear analytical functions representing the outputs and performance boundaries of two coupled systems and (2) a multi-disciplinary analysis of a supersonic civil transport. Finally, the stochastic expansions are applied to aerodynamic shape optimization under uncertainty. A robust optimization algorithm is presented for computationally efficient airfoil design under mixed uncertainty using a multi-fidelity approach. This algorithm exploits stochastic expansions to create surrogate models utilized in the optimization process. To reduce the computational cost, an output space mapping technique is implemented to replace the high-fidelity CFD model by a suitably corrected low-fidelity one. The proposed algorithm is demonstrated on the robust optimization of NACA 4-digit airfoils under mixed uncertainties in transonic flow. --Abstract, page iii
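At the core of a DSTE evaluation is the conversion of a basic probability assignment (BPA) over focal intervals of an output quantity into belief and plausibility bounds on an event. A minimal sketch, assuming the output intervals for each focal element have already been computed (the BPA values in the test are hypothetical):

```python
def belief_plausibility(bpa, threshold):
    """Dempster-Shafer belief/plausibility that the output exceeds a
    threshold. `bpa` maps focal output intervals (lo, hi) to mass.
    Belief counts intervals lying entirely above the threshold;
    plausibility counts intervals that at least reach it."""
    bel = sum(m for (lo, hi), m in bpa.items() if lo >= threshold)
    pl = sum(m for (lo, hi), m in bpa.items() if hi >= threshold)
    return bel, pl
```

The gap between belief and plausibility is the epistemic contribution to the uncertainty; in the approach above, the expensive part is producing the output intervals, which is exactly what the stochastic-expansion surrogates accelerate.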

    Simplex stochastic collocation with ENO-type stencil selection for robust uncertainty quantification

    Multi-element uncertainty quantification approaches can robustly resolve the high sensitivities caused by discontinuities in parametric space by reducing the polynomial degree locally to a piecewise linear approximation. It is important to extend the higher-degree interpolation in the smooth regions up to a thin layer of linear elements containing the discontinuity, to maintain a highly accurate solution. This is achieved here by introducing Essentially Non-Oscillatory (ENO) type stencil selection into the Simplex Stochastic Collocation (SSC) method. For each simplex in the discretization of the parametric space, the stencil with the highest polynomial degree is selected from the set of candidate stencils to construct the local response surface approximation. The application of the resulting SSC–ENO method to a discontinuous test function shows a sharper resolution of the jumps and a higher-order approximation of the percentiles near the singularity. SSC–ENO is also applied to a chemical model problem and a shock tube problem to study the impact of uncertainty both on the formation of discontinuities in time and on the location of discontinuities in space.
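The stencil-selection idea is compact in one dimension. The sketch below picks, for the cell containing x, whichever 3-point stencil (left- or right-biased) has the smaller second divided difference, mimicking ENO's preference for the smoother side of a discontinuity; this is a 1-D caricature of the multi-dimensional simplex version in the paper:

```python
def divided_diff(pts):
    """Newton divided-difference coefficients for a list of (x, y) points."""
    xs = [p[0] for p in pts]
    c = [p[1] for p in pts]
    for j in range(1, len(pts)):
        for i in range(len(pts) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c


def eno_quadratic(xs, ys, x):
    """ENO-type stencil choice: around the cell containing x, take the
    left- or right-biased 3-point stencil with the smaller |2nd divided
    difference| (the smoother side), then evaluate its Newton quadratic."""
    i = next(k for k in range(len(xs) - 1) if xs[k + 1] >= x)
    i = max(1, min(len(xs) - 3, i))  # keep both candidate stencils in range
    left = list(zip(xs[i - 1:i + 2], ys[i - 1:i + 2]))
    right = list(zip(xs[i:i + 3], ys[i:i + 3]))
    pts = left if abs(divided_diff(left)[2]) <= abs(divided_diff(right)[2]) else right
    c = divided_diff(pts)
    x0, x1 = pts[0][0], pts[1][0]
    return c[0] + c[1] * (x - x0) + c[2] * (x - x0) * (x - x1)
```

Near a jump, the chosen stencil never straddles it, so the interpolant stays oscillation-free there while remaining a full quadratic in smooth regions.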

    Efficient uncertainty quantification in aerospace analysis and design

    The main purpose of this study is to apply a computationally efficient uncertainty quantification approach, stochastic expansions based on Non-Intrusive Polynomial Chaos (NIPC), to robust aerospace analysis and design under mixed (aleatory and epistemic) uncertainties, and to demonstrate this technique on model problems and robust aerodynamic optimization. The proposed optimization approach utilizes stochastic response surfaces obtained with NIPC methods to approximate the objective function and the constraints in the optimization formulation. The objective function includes the stochastic measures, which are minimized simultaneously to ensure the robustness of the final design to both aleatory and epistemic uncertainties. For model problems with mixed uncertainties, Quadrature-Based and Point-Collocation NIPC methods were used to create the response surfaces used in the optimization process. For the robust airfoil optimization under aleatory (Mach number) and epistemic (turbulence model) uncertainties, a combined Point-Collocation NIPC approach was utilized to create the response surfaces used as the surrogates in the optimization process. Two stochastic optimization formulations were studied: optimization under pure aleatory uncertainty and optimization under mixed uncertainty. As shown in this work for various problems, the NIPC method is computationally more efficient than Monte Carlo methods for a moderate number of uncertain variables and can give highly accurate estimates of the various metrics used in robust design optimization under mixed uncertainties. This study also introduces a new adaptive sampling approach to refine the Point-Collocation NIPC method for further improvement of the computational efficiency. Two numerical problems demonstrated that the adaptive approach can produce the same accuracy level as the response surface obtained with an oversampling ratio of 2, using fewer function evaluations. --Abstract, page iii
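Point-Collocation NIPC reduces to a least-squares fit of chaos coefficients at random collocation points, with the number of samples set by the oversampling ratio times the number of basis terms. The sketch below does this for a single Gaussian variable with probabilists' Hermite polynomials and an oversampling ratio of 2; the model, basis size, and seed are illustrative choices, not from the study:

```python
import math
import random


def hermite(k, x):
    """Probabilists' Hermite polynomials He_0..He_2 (orthogonal under N(0,1))."""
    return (1.0, x, x * x - 1.0)[k]


def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x


def nipc_mean_var(model, n_terms=3, oversampling=2, seed=0):
    """Point-Collocation NIPC: sample xi ~ N(0,1), least-squares-fit the
    chaos coefficients a_k, then read off mean = a_0 and
    variance = sum_k a_k^2 * ||He_k||^2, with ||He_k||^2 = k!."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(oversampling * n_terms)]
    Phi = [[hermite(k, x) for k in range(n_terms)] for x in xs]
    y = [model(x) for x in xs]
    # normal equations Phi^T Phi a = Phi^T y
    A = [[sum(row[i] * row[j] for row in Phi) for j in range(n_terms)]
         for i in range(n_terms)]
    b = [sum(row[i] * yi for row, yi in zip(Phi, y)) for i in range(n_terms)]
    a = solve(A, b)
    var = sum(a[k] ** 2 * math.factorial(k) for k in range(1, n_terms))
    return a[0], var
```

For a response that is itself a degree-2 polynomial, the fit is exact and the recovered mean and variance match the analytical values; the adaptive sampling contribution of the study targets exactly the `oversampling * n_terms` evaluation count above.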

    Developing Efficient Strategies For Global Sensitivity Analysis Of Complex Environmental Systems Models

    Complex Environmental Systems Models (CESMs) have been developed and applied as vital tools to tackle the ecological, water, food, and energy crises that humanity faces, and have been used widely to support decision-making about management of the quality and quantity of Earth's resources. CESMs are often controlled by many interacting and uncertain parameters, and typically integrate data from multiple sources at different spatio-temporal scales, which makes them highly complex. Global Sensitivity Analysis (GSA) techniques have proven promising for deepening our understanding of model complexity and interactions between various parameters, and for providing helpful recommendations for further model development and data acquisition. Aside from the complexity issue, the computationally expensive nature of CESMs precludes effective application of existing GSA techniques for quantifying the global influence of each parameter on the variability of the CESMs' outputs, because a comprehensive sensitivity analysis often requires a very large number of model runs. There is therefore a need to break down this barrier by developing more efficient strategies for sensitivity analysis. The research undertaken in this dissertation is mainly focused on alleviating the computational burden associated with GSA of computationally expensive CESMs through efficiency-increasing strategies for robust sensitivity analysis. This is accomplished by: (1) proposing an efficient sequential sampling strategy for robust sampling-based analysis of CESMs; (2) developing an automated parameter grouping strategy for high-dimensional CESMs; (3) introducing a new robustness measure for convergence assessment of GSA methods; and (4) investigating time-saving strategies for handling simulation failures/crashes during the sensitivity analysis of computationally expensive CESMs.
This dissertation provides a set of innovative numerical techniques that can be used in conjunction with any GSA algorithm and be integrated into model building and systems analysis procedures in any field where models are used. A range of analytical test functions and environmental models with varying complexity and dimensionality are utilized across this research to test the performance of the proposed methods. These methods, which are embedded in the VARS–TOOL software package, can also provide information useful for diagnostic testing, parameter identifiability analysis, model simplification, model calibration, and experimental design. They can be further applied to a range of decision-making problems, such as characterizing the main causes of risk in the context of probabilistic risk assessment and exploring the CESMs' sensitivity to a wide range of plausible future changes (e.g., hydrometeorological conditions) in the context of scenario analysis.
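A common GSA building block that such efficiency strategies wrap around is the Saltelli-type ("pick-and-freeze") Monte Carlo estimator of first-order Sobol indices. A bare-bones sketch follows; the estimator is standard, while the linear test model is made up for illustration:

```python
import random


def sobol_first_order(model, dim, n=20000, seed=1):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices.
    Draws two independent U(0,1)^dim sample matrices A and B; for each
    input i, forms B with column i taken from A, so that the covariance
    of the paired model outputs isolates input i's main-effect variance."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [model(x) for x in A]
    fB = [model(x) for x in B]
    mu = sum(fA) / n
    var = sum((v - mu) ** 2 for v in fA) / (n - 1)
    S = []
    for i in range(dim):
        ABi = [b[:i] + [a[i]] + b[i + 1:] for a, b in zip(A, B)]
        fABi = [model(x) for x in ABi]
        S.append(sum(fa * (fab - fb)
                     for fa, fb, fab in zip(fA, fB, fABi)) / (n * var))
    return S
```

The cost is n * (dim + 2) model runs, which is exactly why sequential sampling, parameter grouping (reducing `dim`), and crash handling matter when each run of a CESM is expensive.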

    Finite Strain Homogenization Using a Reduced Basis and Efficient Sampling

    The computational homogenization of hyperelastic solids in the geometrically nonlinear context has yet to be treated with sufficient efficiency to allow for real-world applications in true multiscale settings. This problem is addressed by a problem-specific surrogate model founded on a reduced basis approximation of the deformation gradient on the microscale. The setup phase is based upon a snapshot POD on deformation gradient fluctuations, in contrast to the widespread displacement-based approach. In order to reduce the computational offline costs, the space of relevant macroscopic stretch tensors is sampled efficiently by employing the Hencky strain. Numerical results show speed-up factors on the order of 5-100 and significantly improved robustness while retaining good accuracy. An open-source demonstrator tool with 50 lines of code emphasizes the simplicity and efficiency of the method.
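The snapshot POD step at the heart of the setup phase can be illustrated on toy data. The sketch below uses the method of snapshots (eigenvectors of the small Gram matrix of the snapshots, found here by power iteration with deflation) to extract dominant modes; the snapshot vectors in the test are hypothetical, and in the paper they would be deformation gradient fluctuation fields:

```python
import math
import random


def pod_modes(snapshots, n_modes=1, iters=200):
    """Snapshot POD via the method of snapshots: the Gram matrix
    C_ij = <s_i, s_j> is small (m x m for m snapshots), and its dominant
    eigenvectors give the reduced-basis modes as combinations of snapshots."""
    m = len(snapshots)
    C = [[sum(a * b for a, b in zip(si, sj)) for sj in snapshots]
         for si in snapshots]
    rng = random.Random(0)
    modes = []
    for _ in range(n_modes):
        # power iteration for the current dominant eigenpair of C
        v = [rng.random() for _ in range(m)]
        for _ in range(iters):
            w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
            nrm = math.sqrt(sum(x * x for x in w))
            v = [x / nrm for x in w]
        lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(m))
                  for i in range(m))
        # lift the eigenvector to physical space and normalize the mode
        phi = [sum(v[i] * snapshots[i][k] for i in range(m))
               for k in range(len(snapshots[0]))]
        nphi = math.sqrt(sum(x * x for x in phi))
        modes.append([x / nphi for x in phi])
        # deflate C to expose the next eigenpair
        C = [[C[i][j] - lam * v[i] * v[j] for j in range(m)]
             for i in range(m)]
    return modes
```

Working with the m x m Gram matrix instead of the full field covariance is what keeps the offline cost proportional to the number of snapshots rather than the microscale resolution.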