
    Surrogate-Assisted Unified Optimization Framework for Investigating Marine Structural Design Under Information Uncertainty.

    Structural decisions made in the early stages of marine systems design can have a large impact on future acquisition, maintenance, and life-cycle costs. However, owing to the unique nature of early-stage marine system design, these critical structural decisions are often made on the basis of incomplete information or knowledge about the design. When coupled with design optimization analysis, the complex, uncertain early-stage design environment makes it very difficult to deliver a quantified trade-off analysis for decision making. This work presents a novel decision support method that integrates design optimization, high-fidelity analysis, and modeling of information uncertainty for early-stage design and analysis. To support this method, the dissertation improves design optimization methods for marine structures by proposing several novel surrogate modeling techniques and strategies. The proposed work treats the uncertainties that stem from limited information in a non-statistical, interval form. This interval uncertainty is treated as an objective function in an optimization framework in order to explore the impact of information uncertainty on structural design performance. In this way, the potential structural weight penalty associated with information uncertainty can be identified quickly at the early stage, avoiding costly redesign later in the design process. The dissertation then explores the balance between computational fidelity and efficiency: a novel variable-fidelity approach is proposed to allocate expensive high-fidelity simulations judiciously. In achieving the proposed capabilities for design optimization, several surrogate modeling methods are developed concerning worst-case estimation, clustered multiple meta-modeling, and mixed-variable modeling techniques. These surrogate methods are demonstrated to significantly improve the efficiency of the optimizer in dealing with the challenges of early-stage marine structure design.
    PhD dissertation, Naval Architecture and Marine Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133365/1/yanliuch_1.pd
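
    To make the role of interval uncertainty concrete, the following minimal sketch treats an uncertain lateral load as an interval and reports how the worst-case-feasible design, and hence the structural weight penalty, grows with the interval width. The panel weight and stress formulas, the allowable stress, and all numbers are illustrative placeholders, not the dissertation's models or data.

```python
import numpy as np

# Illustrative stand-ins, not the dissertation's models: a steel plate whose
# weight grows with thickness and whose bending stress falls with it.
def panel_weight(t_mm):                       # kg, 2 m x 3 m steel plate
    return 7850.0 * 2.0 * 3.0 * (t_mm / 1000.0)

def panel_stress(t_mm, load_kpa):             # MPa, illustrative bending formula
    return 120.0 * load_kpa / t_mm**2

SIGMA_ALLOW = 235.0                           # MPa, yield-based allowable

def min_feasible_thickness(load_interval, t_grid):
    """Smallest thickness whose worst-case stress over the load interval is allowable."""
    lo, hi = load_interval
    for t in t_grid:
        worst = max(panel_stress(t, lo), panel_stress(t, hi))  # stress is monotone in load
        if worst <= SIGMA_ALLOW:
            return t
    raise ValueError("no feasible thickness in the grid")

t_grid = np.arange(5.0, 40.0, 0.5)
t_nom = min_feasible_thickness((40.0, 40.0), t_grid)           # nominal load, no uncertainty
for width in (0.0, 10.0, 20.0, 40.0):                          # interval width = information uncertainty
    t_req = min_feasible_thickness((40.0, 40.0 + width), t_grid)
    penalty = panel_weight(t_req) - panel_weight(t_nom)
    print(f"load interval width {width:5.1f} kPa -> thickness {t_req:4.1f} mm, "
          f"weight penalty {penalty:6.1f} kg")
```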

    A robust method for reliability updating with equality information using sequential adaptive importance sampling

    Reliability updating refers to a class of problems that integrate Bayesian updating with structural reliability analysis and that cannot be solved directly by structural reliability methods (SRMs) when equality information is involved. The state-of-the-art approaches transform equality information into inequality information by introducing an auxiliary standard normal parameter. These methods, however, suffer a loss of computational efficiency due to the difficulty of finding the maximum of the likelihood function, the large coefficient of variation (COV) associated with the posterior failure probability, and their inapplicability to dynamic updating problems where new information is constantly available. To overcome these limitations, this paper proposes an innovative method called RU-SAIS (reliability updating using sequential adaptive importance sampling), which combines elements of sequential importance sampling and K-means clustering to construct a series of importance sampling densities (ISDs) using Gaussian mixtures. The last ISD of the sequence is further adaptively modified through application of the cross-entropy method. The performance of RU-SAIS is demonstrated by three examples. Results show that RU-SAIS achieves a more accurate and robust estimator of the posterior failure probability than existing methods such as subset simulation.
    Comment: 38 pages, 6 tables, 9 figures
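
    The following sketch illustrates the core building block RU-SAIS relies on: an importance sampling density built as a Gaussian mixture whose components are seeded by K-means clustering of samples falling in the region of interest. The limit state, sample sizes, and two-component mixture are illustrative; the full method's sequential construction, cross-entropy refinement, and equality-information handling are not reproduced here.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Illustrative series-system limit state in standard normal space;
# "failure" when g(x) <= 0, i.e. x1 >= 3 or x2 >= 3 (two separate failure lobes).
def g(x):
    return 3.0 - np.maximum(x[:, 0], x[:, 1])

# Step 1: crude exploration to collect failure samples (a stand-in for the
# sequential tempering stage of RU-SAIS).
x_explore = rng.standard_normal((200_000, 2))
seeds = x_explore[g(x_explore) <= 0.0]

# Step 2: K-means on the failure seeds defines the Gaussian-mixture components.
k = 2
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(seeds)
weights, means, covs = [], [], []
for j in range(k):
    pts = seeds[labels == j]
    weights.append(len(pts) / len(seeds))
    means.append(pts.mean(axis=0))
    covs.append(np.cov(pts.T) + 1e-6 * np.eye(2))   # small jitter for stability

# Step 3: importance sampling with the Gaussian-mixture ISD.
n = 20_000
comp = rng.choice(k, size=n, p=weights)
x_is = np.empty((n, 2))
for j in range(k):
    idx = np.flatnonzero(comp == j)
    x_is[idx] = rng.multivariate_normal(means[j], covs[j], size=len(idx))

isd_pdf = sum(w * multivariate_normal(m, c).pdf(x_is)
              for w, m, c in zip(weights, means, covs))
target_pdf = multivariate_normal([0.0, 0.0], np.eye(2)).pdf(x_is)
pf = np.mean((g(x_is) <= 0.0) * target_pdf / isd_pdf)
print(f"importance-sampling estimate of P(g <= 0): {pf:.2e}")  # exact: 1 - Phi(3)^2 ~ 2.7e-3
```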

    State-of-the-art in aerodynamic shape optimisation methods

    Aerodynamic optimisation has become an indispensable component of any aerodynamic design over the past 60 years, with applications to aircraft, cars, trains, bridges, wind turbines, internal pipe flows, and cavities, among others, and is thus relevant to many facets of technology. With advancements in computational power, automated design optimisation procedures have become more capable; however, there is ambiguity and bias throughout the literature regarding the relative performance of optimisation architectures and the algorithms employed. This paper provides a well-balanced critical review of the dominant optimisation approaches that have been integrated with aerodynamic theory for the purpose of shape optimisation. A total of 229 papers, published in more than 120 journals and conference proceedings, have been classified into six optimisation algorithm approaches. The material cited includes some of the most well-established authors and publications in the field of aerodynamic optimisation. This paper aims to eliminate bias toward certain algorithms by analysing the limitations, drawbacks, and benefits of the most utilised optimisation approaches. The review provides comprehensive but straightforward insight for non-specialists and a reference detailing the current state of the field for specialist practitioners.

    A survey on handling computationally expensive multiobjective optimization problems with evolutionary algorithms

    Evolutionary algorithms are widely used for solving multiobjective optimization problems but are often criticized for the large number of function evaluations they require. Approximations, especially function approximations (also referred to as surrogates or metamodels), are commonly used in the literature to reduce the computation time. This paper presents a survey of 45 recent algorithms proposed in the literature between 2008 and 2016 to handle computationally expensive multiobjective optimization problems. The algorithms are discussed according to the kind of approximation they use, such as problem, function, or fitness approximation, with most emphasis given to function-approximation-based algorithms. We also compare these algorithms on criteria such as the metamodeling technique and evolutionary algorithm used, the type and dimensions of the problems solved, constraint handling, training time, and the type of evolution control. Furthermore, we identify and discuss some promising elements and major issues among algorithms in the literature related to the use of approximations and the numerical settings used. In addition, we discuss selecting an algorithm to solve a given computationally expensive multiobjective optimization problem based on the dimensions of both the objective and decision spaces and the available computation budget.
    The research of Tinkle Chugh was funded by the COMAS Doctoral Program (at the University of Jyväskylä) and the FiDiPro project DeCoMo (funded by Tekes, the Finnish Funding Agency for Innovation); the research of Dr. Karthik Sindhya was funded by the SIMPRO project, also funded by Tekes, as well as by DeCoMo.
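
    As a rough illustration of the pattern most function-approximation-based algorithms in the survey share, the sketch below trains one Gaussian-process metamodel per objective on all expensively evaluated solutions and uses the predictions to pre-screen offspring, so that only a handful of promising candidates receive true evaluations per generation. The ZDT1-like test problem, population sizes, mutation scheme, and pre-screening rule are illustrative choices, not taken from any specific surveyed algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

# Illustrative expensive bi-objective problem (ZDT1-like, 5 variables in [0, 1]).
def evaluate(X):
    f1 = X[:, 0]
    g = 1.0 + 9.0 * X[:, 1:].mean(axis=1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.column_stack([f1, f2])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

# Initial expensive evaluations.
X = rng.random((20, 5))
F = evaluate(X)

for gen in range(10):
    # Train one Gaussian-process metamodel per objective on all evaluated data.
    gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, F[:, i])
           for i in range(2)]
    # Generate many candidate offspring by mutating current solutions (cheap).
    parents = X[rng.integers(0, len(X), size=100)]
    cand = np.clip(parents + rng.normal(0.0, 0.1, parents.shape), 0.0, 1.0)
    Fhat = np.column_stack([gp.predict(cand) for gp in gps])
    # Pre-screen: keep the 5 candidates predicted to be dominated least often.
    dom_counts = [(sum(dominates(q, p) for q in Fhat), i) for i, p in enumerate(Fhat)]
    chosen = [i for _, i in sorted(dom_counts)[:5]]
    # Only the pre-screened candidates receive expensive evaluations.
    X = np.vstack([X, cand[chosen]])
    F = np.vstack([F, evaluate(cand[chosen])])

nd = [i for i, p in enumerate(F) if not any(dominates(q, p) for q in F)]
print(f"{len(X)} expensive evaluations, {len(nd)} non-dominated solutions found")
```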

    An adaptive multi-fidelity sampling framework for safety analysis of connected and automated vehicles

    Testing and evaluation are expensive but critical steps in the development of connected and automated vehicles (CAVs). In this paper, we develop an adaptive sampling framework to efficiently evaluate the accident rate of CAVs, particularly for scenario-based tests where the probability distribution of input parameters is known from naturalistic driving data. Our framework relies on a surrogate model to approximate the CAV performance and a novel acquisition function, formulated through information-theoretic considerations, to maximize the benefit (information gained about the accident rate) of the next sample. In addition to the standard application with only a single high-fidelity model of CAV performance, we also extend our approach to the bi-fidelity context, where an additional low-fidelity model can be used at lower computational cost to approximate the CAV performance. Accordingly, for the second case, our approach is formulated so that it chooses the next sample in terms of both fidelity level (i.e., which model to use) and sampling location to maximize the benefit per cost. Our framework is tested on a widely considered two-dimensional cut-in problem for CAVs, where the Intelligent Driving Model (IDM) with different time resolutions is used to construct the high- and low-fidelity models. We show that our single-fidelity method outperforms the existing approach for the same problem, and that the bi-fidelity method can further reduce the computational cost by about half while reaching similar accuracy in estimating the accident rate.
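
    The sketch below illustrates the benefit-per-cost idea in a bi-fidelity setting: a Gaussian process models the cheap low-fidelity response, a second Gaussian process models the discrepancy of the expensive high-fidelity response, and each new sample's location and fidelity are chosen by a simple density-weighted, boundary-focused variance-per-cost score. The two analytic "simulators", their costs, the input distribution, and the acquisition itself are stand-ins; the paper's actual acquisition is information-theoretic and its models are CAV simulations.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# Analytic stand-ins for the CAV simulators: an "accident" occurs when f(x) < 0.
def f_hi(x):                      # expensive model, cost 1.0
    return np.sin(3.0 * x) + 0.6 * x + 1.0

def f_lo(x):                      # cheap model, cost 0.1 (biased version of f_hi)
    return np.sin(3.0 * x) + 0.6 * x + 1.1

COST = {"lo": 0.1, "hi": 1.0}
px = norm(loc=0.0, scale=0.8)     # known input distribution (e.g. from driving data)

def fit_gp(x, y):
    kern = RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-6, noise_level_bounds="fixed")
    return GaussianProcessRegressor(kernel=kern, normalize_y=True).fit(x.reshape(-1, 1), y)

X_lo = rng.uniform(-2, 2, 8); y_lo = f_lo(X_lo)      # initial designs of experiments
X_hi = rng.uniform(-2, 2, 4); y_hi = f_hi(X_hi)
grid = np.linspace(-2, 2, 401)

for step in range(20):
    gp_lo = fit_gp(X_lo, y_lo)                                       # low-fidelity surrogate
    gp_d = fit_gp(X_hi, y_hi - gp_lo.predict(X_hi.reshape(-1, 1)))   # hi-lo discrepancy surrogate
    m_lo, s_lo = gp_lo.predict(grid.reshape(-1, 1), return_std=True)
    m_d, s_d = gp_d.predict(grid.reshape(-1, 1), return_std=True)
    mu, sigma = m_lo + m_d, np.sqrt(s_lo**2 + s_d**2)
    # Crude acquisition: input density times closeness to the accident boundary,
    # scaled by each fidelity's share of the predictive uncertainty over its cost.
    boundary = px.pdf(grid) * np.exp(-0.5 * (mu / np.maximum(sigma, 1e-9)) ** 2)
    score = {"lo": boundary * s_lo / COST["lo"], "hi": boundary * s_d / COST["hi"]}
    level = max(score, key=lambda k: score[k].max())
    x_new = grid[np.argmax(score[level])]
    if level == "lo":
        X_lo, y_lo = np.append(X_lo, x_new), np.append(y_lo, f_lo(x_new))
    else:
        X_hi, y_hi = np.append(X_hi, x_new), np.append(y_hi, f_hi(x_new))

# Refit on all data, then compare the surrogate-based accident-rate estimate
# with a brute-force reference that calls the expensive model directly.
gp_lo = fit_gp(X_lo, y_lo)
gp_d = fit_gp(X_hi, y_hi - gp_lo.predict(X_hi.reshape(-1, 1)))
xs = px.rvs(200_000, random_state=3)
pred = gp_lo.predict(xs.reshape(-1, 1)) + gp_d.predict(xs.reshape(-1, 1))
print(f"estimated accident rate: {np.mean(pred < 0):.4f}")
print(f"reference (direct f_hi): {np.mean(f_hi(xs) < 0):.4f}")
```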

    EM-driven miniaturization of high-frequency structures through constrained optimization

    Ongoing trends toward miniaturization of high-frequency electronic devices require the integration of active and passive high-frequency circuit elements within a single system. Achieving this not only calls for cutting-edge integration technology but also necessitates accommodating the corresponding circuit components within a restricted space in applications such as implantable devices, the internet of things (IoT), or 5G communication systems. At the same time, size reduction is not the only demand: the performance requirements of the abovementioned systems form a conjugate demand to size reduction, yet with a contrasting nature. A compromise can be achieved through constrained numerical optimization, in which two kinds of constraints may exist: equality and inequality constraints. Still, the high cost of electromagnetic-based (EM-based) constraint evaluations remains an obstacle. This issue can be partly mitigated by implicit constraint handling using the penalty function approach; nevertheless, securing its performance requires expensive, guesswork-based identification of the optimum setup of the penalty coefficients. An additional challenge lies in allocating the design within, or in the vicinity of, the thin feasible region corresponding to equality constraints. Furthermore, the multimodal nature of constrained miniaturization problems makes the optimization results dependent on the initial design. Regardless of the constraint type and the corresponding treatment techniques, the computational expense of optimization-based size reduction remains a principal challenge. This thesis addresses the abovementioned issues pertaining to optimization-driven miniaturization of high-frequency structures by developing the relevant algorithms in sequence. The first proposed approach, with automated adjustment of the penalty functions, is based on the concept of sufficient constraint violation improvement, thereby eliminating the costly initial trial-and-error stage for identifying the optimum setup of the penalty factors. Another introduced approach, correction-based treatment of the equality constraints, alleviates the difficulty of allocating the design within the thin feasible region where designs satisfying the equality constraints reside. The next developed technique allows for global size reduction of high-frequency components; it not only eliminates the aforementioned multimodality issues but also accelerates the overall global optimization process by constructing a dimensionality-reduced surrogate model over a pre-identified feasible region rather than the complete parameter search space. Building on the latter, an optimization framework employing multi-resolution EM-model management is proposed to address the high computational cost, providing nearly 50 percent average acceleration of the optimization-based miniaturization process. This technique pivots on a newly defined concept of model-fidelity control based on a combination of algorithmic metrics, namely the convergence status and the constraint violation level. Numerical validation of the abovementioned algorithms is provided using an extensive set of high-frequency benchmark structures. To the best of the author's knowledge, the presented study is the first investigation of this kind in the literature and can be considered a contribution to the state of the art of automated high-frequency design and miniaturization.
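
    The sketch below shows the generic penalty-function formulation the thesis starts from: the footprint of a hypothetical structure is minimized while an inequality (reflection) constraint and an equality (phase) constraint enter the objective through fixed penalty coefficients. The closed-form "EM model", constraint thresholds, and coefficient values are placeholders; the thesis's contributions concern, among other things, adjusting such coefficients automatically rather than fixing them by trial and error.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for an EM simulation: maps two geometry parameters
# (lengths in mm) to footprint area, |S11| at the design frequency (dB), and a
# phase error (deg) relative to a target. The closed forms are placeholders.
def em_model(x):
    l1, l2 = x
    area = l1 * l2                                                 # mm^2, to be minimized
    s11 = -25.0 + 0.2 * (l1 - 12.0) ** 2 + 0.3 * (l2 - 8.0) ** 2   # dB, inequality-constrained
    phase_err = 0.8 * (l1 + 0.5 * l2 - 16.0)                       # deg, equality target 0
    return area, s11, phase_err

S11_MAX = -20.0                      # inequality constraint: S11 <= -20 dB
BETA_INEQ, BETA_EQ = 50.0, 10.0      # fixed penalty coefficients (placeholders)

def penalized_objective(x):
    area, s11, phase_err = em_model(x)
    viol_ineq = max(0.0, s11 - S11_MAX)       # violation of the inequality constraint
    viol_eq = abs(phase_err)                  # violation of the equality constraint
    return area + BETA_INEQ * viol_ineq**2 + BETA_EQ * viol_eq**2

res = minimize(penalized_objective, x0=np.array([14.0, 9.0]), method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-6, "maxiter": 2000})
area, s11, phase_err = em_model(res.x)
print(f"l1 = {res.x[0]:.3f} mm, l2 = {res.x[1]:.3f} mm")
print(f"area = {area:.2f} mm^2, S11 = {s11:.2f} dB, phase error = {phase_err:.3f} deg")
```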

    UQ and AI: data fusion, inverse identification, and multiscale uncertainty propagation in aerospace components

    A key requirement for engineering designs is that they offer good performance across a range of uncertain conditions while exhibiting an admissibly low probability of failure. Designing components that meet this requirement means accounting for the effect of the uncertainties associated with a candidate design. Uncertainty Quantification (UQ) methods are statistical methods that may be used to quantify the effect of the uncertainties inherent in a system on its performance. This thesis expands the envelope of UQ methods for the design of aerospace components, supporting the integration of UQ methods into product development by addressing four industrial challenges. Firstly, a method for propagating uncertainty through computational models in a hierarchy of scales is described that is based on probabilistic equivalence and Non-Intrusive Polynomial Chaos (NIPC). This problem is relevant to the design of aerospace components because the computational models used to evaluate candidate designs are typically multiscale. This method was then extended into a formulation for inverse identification, where the probability distributions of the material properties of a coupon are deduced from measurements of its response. We demonstrate how probabilistic equivalence and the Maximum Entropy Principle (MEP) may be used to combine simulation data with scarce experimental data, with the intention of making this stage of product design less expensive and time consuming. The third contribution of this thesis is the development of two novel meta-modelling strategies to promote wider exploration of the design space during the conceptual design phase. Design Space Exploration (DSE) in this phase is crucial because decisions made at the early, conceptual stages of an aircraft design can restrict the range of alternative designs available at later stages of the design process, despite only limited quantitative knowledge of the interaction between requirements being available at this stage. A histogram interpolation algorithm is presented that allows the designer to interactively explore the design space with a model-free formulation, while a meta-model based on Knowledge-Based Neural Networks (KBaNNs) is proposed in which the outputs of a high-level, inexpensive computer code are informed by the outputs of a neural network, thereby addressing the criticism that neural networks are purely data-driven and operate as black boxes. The final challenge addressed by this thesis is how to iteratively improve a meta-model by expanding the dataset used to train it; given the reliance of UQ methods on meta-models, this is an important challenge. This thesis proposes an adaptive learning algorithm for Support Vector Machine (SVM) metamodels, which are used to approximate an unknown function. In particular, we apply the adaptive learning algorithm to test cases in reliability analysis.
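
    As a small illustration of the NIPC ingredient, the sketch below fits a probabilists' Hermite expansion to samples of a placeholder one-dimensional model with a standard-normal input and reads the output mean and variance directly off the coefficients, checked against Monte Carlo. The model, truncation order, and sample sizes are arbitrary; the thesis applies NIPC within a multiscale, multi-input aerospace setting.

```python
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite polynomials
from math import factorial

rng = np.random.default_rng(4)

# Placeholder computational model with one uncertain standard-normal input xi.
def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

ORDER = 6                                      # truncation order of the expansion
xi_train = rng.standard_normal(200)            # non-intrusive: only samples of the model are needed
y_train = model(xi_train)

# Least-squares fit of the coefficients c_n in y(xi) ~ sum_n c_n He_n(xi).
Psi = np.column_stack([He.hermeval(xi_train, np.eye(ORDER + 1)[n]) for n in range(ORDER + 1)])
coef, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)

# For probabilists' Hermite polynomials and xi ~ N(0,1): E[He_n^2] = n!.
mean_pc = coef[0]
var_pc = sum(coef[n]**2 * factorial(n) for n in range(1, ORDER + 1))

# Monte Carlo reference.
xi_mc = rng.standard_normal(1_000_000)
y_mc = model(xi_mc)
print(f"PC  mean = {mean_pc:.4f}, variance = {var_pc:.4f}")
print(f"MC  mean = {y_mc.mean():.4f}, variance = {y_mc.var():.4f}")
```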