
    Managing Epistemic Uncertainty in Design Models through Type-2 Fuzzy Logic Multidisciplinary Optimization

    Humans have a natural ability to operate in dynamic environments and perform complex tasks with little perceived effort. An experienced ship designer can intuitively understand the general consequences of design choices and the general attributes of a good vessel. A person's knowledge is often ill-structured, subjective, and imprecise, but still incredibly effective at capturing general patterns of the real world or of a design space. Computers, on the other hand, can rapidly perform a large number of precise computations using well-structured, objective mathematical models, providing detailed analyses and formal evaluations of a specific set of design candidates. In ship design, which involves generating knowledge for decision-making through time, engineers interactively use their own mental models and information gathered from computer-based optimization tools to make decisions which steer a vessel's design. In recent decades, the belief that large synthesis codes can help achieve cutting-edge ship performance has led to an increased popularity of optimization methods, potentially with rewarding results. While optimization has proven fruitful in structural engineering and the aerospace industry, its applicability to early-stage design is more limited for three main reasons. First, mathematical models are by definition a reduction which cannot properly describe all aspects of the ship design problem. Second, in multidisciplinary optimization, a low-fidelity model may incorrectly drive a design, biasing the system-level solution. Finally, early-stage design is plagued with limited information, limiting the designer's ability to develop models to inform decisions. This research extends previous work by incorporating type-2 fuzzy logic into a human-centric multidisciplinary optimization framework. The original framework used type-1 fuzzy logic to incorporate human expertise into optimization models through linguistic variables. However, a type-1 system does not properly account for the uncertainty associated with linguistic terms, and thus does not properly represent the uncertainty associated with a human mental model. This limitation is corrected with the type-2 fuzzy logic multidisciplinary optimization presented in this work, which more accurately models a designer's ability to "communicate, reason and make rational decisions in an environment of imprecision, uncertainty, incompleteness of information and partiality of truth" (Mendel et al., 2010). It uses fuzzy definitions of linguistic variables and rule banks to incorporate "human intelligence" into design models, and better handles the linguistic uncertainty inherent to human knowledge and communication. A general mathematical optimization proof of concept and a planing craft case study are presented in this dissertation to show how mathematical models can be enhanced by incorporating expert opinion. Additionally, the planing craft case study shows how human mental models can be leveraged to quickly estimate plausible values of ship parameters when no model exists, increasing the designer's ability to run optimization methods when information is limited. Ph.D., Naval Architecture & Marine Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145891/1/doriancb_1.pd
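
The rule-bank idea above can be made concrete with a small illustration. The sketch below shows one plausible way to represent an interval type-2 membership grade for a linguistic term, assuming triangular lower and upper membership functions; the term "moderate length-to-beam ratio", its parameters, and the rule are hypothetical examples rather than definitions from the dissertation.

```python
# Illustrative sketch: evaluating an interval type-2 fuzzy membership grade.
# The linguistic term, parameters, and rule are hypothetical examples, not
# the definitions used in the dissertation.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

class IntervalType2Set:
    """Interval type-2 fuzzy set bounded by lower and upper membership functions."""
    def __init__(self, lower_params, upper_params):
        self.lower_params = lower_params
        self.upper_params = upper_params

    def grade(self, x):
        """Return the membership interval [lower, upper] for a crisp input x."""
        lo = tri(x, *self.lower_params)
        hi = tri(x, *self.upper_params)
        return min(lo, hi), max(lo, hi)

# Hypothetical linguistic term "moderate length-to-beam ratio": the upper MF has
# a wider support than the lower MF, capturing the uncertainty about where
# experts place the term's boundaries.
moderate_lb = IntervalType2Set(lower_params=(4.5, 5.5, 6.5),
                               upper_params=(4.0, 5.5, 7.0))

# Firing interval of a one-antecedent rule "IF L/B is moderate THEN ..."
# for a candidate design with L/B = 5.0.
print(moderate_lb.grade(5.0))   # (0.5, 0.666...)
```

A type-1 set would collapse this interval to a single membership grade, which is exactly the loss of linguistic uncertainty the dissertation seeks to avoid.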

    APPROXIMATION ASSISTED MULTIOBJECTIVE AND COLLABORATIVE ROBUST OPTIMIZATION UNDER INTERVAL UNCERTAINTY

    Optimization of engineering systems under uncertainty often involves problems that have multiple objectives, constraints and subsystems. The main goal in these problems is to obtain solutions that are optimum and relatively insensitive to uncertainty. Such solutions are called robust optimum solutions. Two classes of such problems are considered in this dissertation. The first class involves Multi-Objective Robust Optimization (MORO) problems under interval uncertainty. In this class, an entire system optimization problem, which has multiple nonlinear objectives and constraints, is solved by a multiobjective optimizer at one level while robustness of trial alternatives generated by the optimizer is evaluated at the other level. This bi-level (or nested) MORO approach can become computationally prohibitive as the size of the problem grows. To address this difficulty, a new and improved MORO approach under interval uncertainty is developed. Unlike the previously reported bi-level MORO methods, the improved MORO performs robustness evaluation only for optimum solutions and uses this information to iteratively shrink the feasible domain and find the location of robust optimum solutions. Compared to the previous bi-level approach, the improved MORO significantly reduces the number of function calls needed to arrive at the solutions. To further reduce the computational cost, the improved MORO is combined with an online approximation approach. This new approach is called Approximation-Assisted MORO or AA-MORO. The second class involves Multiobjective collaborative Robust Optimization (McRO) problems. In this class, an entire system optimization problem is decomposed hierarchically along user-defined, domain-specific boundaries into a system optimization problem and several subsystem optimization subproblems. The dissertation presents a new Approximation-Assisted McRO (AA-McRO) approach under interval uncertainty. AA-McRO uses a single-objective optimization problem to coordinate all system and subsystem optimization problems in a Collaborative Optimization (CO) framework. The approach converts the consistency constraints of CO into penalty terms which are integrated into the subsystem objective functions. In this way, AA-McRO is able to explore the design space and obtain optimum design solutions more efficiently than a previously reported McRO. Both AA-MORO and AA-McRO are demonstrated with a variety of numerical and engineering optimization examples. It is found that the solutions from both approaches compare well with those of previously reported approaches while requiring significantly less computational cost. Finally, AA-MORO has been used in the development of a decision support system for a refinery case study in order to facilitate the integration of engineering and business decisions using an agent-based approach.
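
As a rough illustration of the inner robustness check that makes nested MORO expensive, the sketch below evaluates the worst-case deviation of an objective over an interval uncertainty box and compares it against an acceptable variation; the objective function, intervals, and threshold are hypothetical.

```python
# Illustrative sketch of the inner robustness evaluation in bi-level MORO under
# interval uncertainty: a candidate design x is "robust" if the worst-case
# deviation of the objective over the uncertainty box stays within an
# acceptable variation. The objective and thresholds are hypothetical.
import itertools
import numpy as np

def objective(x, p):
    """Hypothetical single objective depending on design x and uncertain p."""
    return (x[0] - 2.0) ** 2 + x[1] ** 2 + p[0] * x[0] + p[1]

def worst_case_deviation(x, p_nominal, p_intervals, n_grid=11):
    """Maximum |f(x, p) - f(x, p_nominal)| over a gridded interval box."""
    f_nom = objective(x, p_nominal)
    axes = [np.linspace(lo, hi, n_grid) for lo, hi in p_intervals]
    return max(abs(objective(x, p) - f_nom) for p in itertools.product(*axes))

x_trial = np.array([1.5, 0.5])
p_nominal = np.array([0.0, 0.0])
p_intervals = [(-0.2, 0.2), (-0.1, 0.1)]          # interval uncertainty
acceptable_variation = 0.5                         # designer-specified

dev = worst_case_deviation(x_trial, p_nominal, p_intervals)
print(f"worst-case deviation = {dev:.3f}, robust = {dev <= acceptable_variation}")
```

Running this check for every trial alternative generated by the outer optimizer is what drives the cost that AA-MORO's approximation-assisted strategy is designed to cut.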

    Target Allocation under Uncertainty during the Vehicle Development Process

    Under the increasing pressure of evolving customer expectations and the speed and competitiveness of rival manufacturers, automakers have become customer-oriented. They continuously survey customers' needs in order to identify early the desired or utopian vehicle performances, and they strive to fulfill these expectations by quickly designing and marketing new, innovative products. The development of a new vehicle requires translating the vehicle performances into targets for its components' characteristics. Such an approach requires making critical design decisions that can noticeably impact the competitiveness and profitability of the company. In the early stages of the vehicle development process, engineers lack precise and complete information about the possibility of meeting the initial utopian vehicle performances due to many factors (technological, regulatory, resources, etc.).
For that reason, identifying, quantifying, and handling the inherent uncertainty throughout the vehicle development process (VDP) has become a serious issue that affects the effectiveness of the design process. This study proposes a methodology for target allocation and decision-making under uncertainty during the VDP. The method starts with the decomposition of the vehicle into a hierarchical multilevel structure, which represents the basic framework required for the definition of the vehicle multilevel model (VMM). We consider that each component in the VMM may have several characteristics, and that a target is defined for every component and characteristic in accordance with the utopian vehicle performances. Experts' opinions are expressed with uncertainty regarding the feasibility of achieving each target. They are given in the form of probability distributions or intervals associated with subjective beliefs for the possible values of the characteristics, and are then aggregated and propagated from the leaf nodes of the multilevel model up to the vehicle level. Evidence theory is used to express uncertainty in the form of belief and plausibility measures. Using this information, two measures of the desirability and the achievability of the characteristics are defined. An approach for target allocation under uncertainty based on the maximization of the achievability and desirability measures of the characteristics is proposed and discussed. A methodology for handling large-scale problems, based on merging intervals by controlling the information granularity without affecting the precision of the belief and plausibility measures, is also presented.
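
To make the belief and plausibility measures concrete, the sketch below computes them for a single characteristic from expert opinions given as focal intervals with basic probability masses; the intervals, masses, and target range are hypothetical and only illustrate the mechanics, not the aggregation and propagation scheme of the study.

```python
# Illustrative sketch of Dempster-Shafer belief and plausibility for the
# feasibility of a target, given expert opinions expressed as focal intervals
# with basic probability masses. The numbers are hypothetical.

def belief_plausibility(focal_elements, target):
    """focal_elements: list of ((lo, hi), mass); target: (lo, hi)."""
    t_lo, t_hi = target
    # Belief: total mass of focal intervals fully contained in the target range.
    bel = sum(m for (lo, hi), m in focal_elements if t_lo <= lo and hi <= t_hi)
    # Plausibility: total mass of focal intervals intersecting the target range.
    pl = sum(m for (lo, hi), m in focal_elements if hi >= t_lo and lo <= t_hi)
    return bel, pl

# Expert opinion on a component characteristic (e.g. mass in kg), as intervals
# with subjective beliefs that sum to 1.
opinion = [((10.0, 12.0), 0.5), ((11.0, 14.0), 0.3), ((13.0, 16.0), 0.2)]
target = (10.0, 13.0)   # target range cascaded down from the vehicle level

bel, pl = belief_plausibility(opinion, target)
print(f"Bel = {bel:.2f}, Pl = {pl:.2f}")   # Bel = 0.50, Pl = 1.00
```

The gap between the two measures reflects the imprecision in the expert opinions about whether the target can be achieved.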

    Sequential Linear Programming Coordination Strategy for Deterministic and Probabilistic Analytical Target Cascading.

    Decision-making under uncertainty is particularly challenging in the case of multidisciplinary, multilevel system optimization problems. Subsystem interactions cause strong couplings, which may be amplified by uncertainty. Thus, effective coordination strategies can be particularly beneficial. Analytical target cascading (ATC) is a deterministic optimization method for multilevel hierarchical systems, which was recently extended to probabilistic design. Solving the optimization problem requires propagation of uncertainty, namely, evaluating or estimating output distributions given random input variables. This uncertainty propagation can be a challenging and computationally expensive task for nonlinear functions, but is relatively easy for linear ones. In order to overcome the difficulty in uncertainty propagation, this dissertation introduces the use of Sequential Linear Programming (SLP) for solving ATC problems, and specifically extends this use to Probabilistic Analytical Target Cascading (PATC) problems. A new coordination strategy is proposed for ATC and PATC, which coordinates linking variables among subproblems using sequential linearizations. By linearizing and solving a hierarchy of problems successively, the algorithm takes advantage of the simplicity and ease of uncertainty propagation for a linear system. Linearity of subproblems is maintained by using an infinity norm to measure deviations between targets and responses. A subproblem suspension strategy is used to temporarily suspend inclusion of subproblems that do not need significant redesign, based on trust region and target value step size. A global convergence proof of the SLP-based coordination strategy is derived. Experiments with test problems show that, relative to standard ATC and PATC coordination, the number of subproblem evaluations is reduced considerably while maintaining accuracy. To demonstrate the applicability of the proposed strategies to problems of practical complexity, a hybrid electric fuel cell vehicle design model, including enterprise, powertrain, fuel cell and battery models, is developed and solved using the new ATC strategy. In addition to engineering uncertainties, the model takes into account unknown behavior by consumers. As a result, expected maximum profit is calculated using probabilistic consumer preferences with engineering constraints satisfied. Ph.D., Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/58506/1/shipge_1.pd
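
The appeal of linearized subproblems can be seen in a small sketch: under a first-order Taylor expansion, output moments follow directly from input moments. The response function, expansion point, and input variances below are hypothetical, and the sketch shows only the propagation step that linearization simplifies, not the SLP coordination itself.

```python
# Illustrative sketch of the moment propagation that makes linearized
# subproblems attractive in SLP-based PATC: for y ~ f(mu) + g^T (x - mu),
# the output mean is f(mu) and, for independent inputs, the output variance
# is sum_i g_i^2 * var_i. The response function here is hypothetical.
import numpy as np

def response(x):
    """Hypothetical nonlinear subsystem response."""
    return x[0] ** 2 + 3.0 * x[1] + np.sin(x[2])

def linearized_moments(f, mu, var, h=1e-6):
    """Mean and variance of f(X) under a first-order Taylor expansion at mu."""
    mu = np.asarray(mu, dtype=float)
    grad = np.array([
        (f(mu + h * e) - f(mu - h * e)) / (2 * h)   # central finite differences
        for e in np.eye(len(mu))
    ])
    return f(mu), float(np.sum(grad ** 2 * np.asarray(var)))

mean_y, var_y = linearized_moments(response,
                                   mu=[1.0, 2.0, 0.5],
                                   var=[0.01, 0.04, 0.0025])
print(f"E[y] ~ {mean_y:.3f}, Var[y] ~ {var_y:.4f}")
```

For a genuinely linear subproblem these expressions are exact, which is why the coordination strategy works to keep the subproblems linear.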

    Sensitivity Analysis Based Approaches for Mitigating the Effects of Reducible Interval Input Uncertainty on Single- and Multi-Disciplinary Systems using Multi-Objective Optimization

    Uncertainty is an unavoidable aspect of engineering systems and will often degrade system performance or perhaps even lead to system failure. As a result, uncertainty must be considered as a part of the design process for all real-world engineering systems. The presence of reducible uncertainty further complicates matters, as designers must not only account for the degrading effects of uncertainty but must also determine what levels of uncertainty can be considered acceptable. For these reasons, methods for determining and effectively mitigating the effects of uncertainty are necessary for solving engineering design problems. This dissertation presents several new methods for use in the design of engineering systems under interval input uncertainty. These new approaches were developed over the course of four interrelated research thrusts, focused on the overall goal of extending the current research in the area of sensitivity-analysis-based design under reducible interval uncertainty. The first research thrust focused on developing an approach for determining optimal uncertainty reductions for multi-disciplinary engineering systems with multiple output functions at both the system and sub-system levels. The second research thrust extended the approach developed during the first thrust to use uncertainty reduction as a means for both reducing output variations and simultaneously ensuring engineering feasibility. The third research thrust looked at systems where uncertainty reduction alone is insufficient for ensuring feasibility and developed a sensitivity analysis approach that combines uncertainty reductions with small design adjustments in an effort to again reduce output variations and ensure feasibility. The fourth and final research thrust sought to relax many of the assumptions required by the first three thrusts and developed a general sensitivity-analysis-inspired approach for determining optimal upper and lower bounds for reducible sources of input uncertainty. Multi-objective optimization techniques were used throughout this research to evaluate the tradeoffs between the benefits gained by mitigating uncertainty and the costs of making the design changes and/or uncertainty reductions required to most effectively reduce or eliminate the degrading effects of system uncertainty. The validity of the developed approaches was demonstrated using numerical and engineering example problems of varying complexity.
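
The cost-versus-variation trade-off at the core of these formulations can be illustrated with a toy example: shrinking one input interval narrows the output range but incurs a cost. The output model, intervals, and cost weight in the sketch below are hypothetical, and the sampled range estimate is a crude stand-in for a proper interval analysis.

```python
# Illustrative sketch of trading uncertainty-reduction cost against output
# variation for interval inputs. Output ranges are estimated by sampling the
# interval box; the model and cost weights are hypothetical.
import itertools
import numpy as np

def output(p):
    """Hypothetical system output with interval-uncertain inputs p."""
    return 4.0 * p[0] + p[0] * p[1] - 2.0 * p[1] ** 2

def output_width(intervals, n_grid=21):
    """Approximate width of the output range over the interval box."""
    axes = [np.linspace(lo, hi, n_grid) for lo, hi in intervals]
    vals = [output(p) for p in itertools.product(*axes)]
    return max(vals) - min(vals)

nominal = [(0.8, 1.2), (1.5, 2.5)]                 # original interval inputs
reduced = [(0.9, 1.1), (1.5, 2.5)]                 # reduce only p0 (cheaper)

cost_per_reduction = 10.0                          # hypothetical cost unit
cost = cost_per_reduction * sum(
    (o[1] - o[0]) - (r[1] - r[0]) for o, r in zip(nominal, reduced))

print(f"output width: {output_width(nominal):.2f} -> {output_width(reduced):.2f}"
      f" at cost {cost:.1f}")
```

A multi-objective formulation would treat the remaining output variation and the reduction cost as competing objectives and search over which intervals to shrink and by how much.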

    Enhancing optimization capabilities using the AGILE collaborative MDO framework with application to wing and nacelle design

    This paper presents methodological investigations performed in research activities in the field of Multidisciplinary Design and Optimization (MDO) for overall aircraft design in the EU-funded research project AGILE (2015–2018). In the AGILE project, a team of 19 industrial, research and academic partners from Europe, Canada and Russia is working together to develop the next generation of MDO environments, targeting significant reductions in aircraft development costs and time to market and leading to cheaper and greener aircraft. The paper introduces the AGILE project structure and describes the achievements of the first year, which led to a reference distributed MDO system. A focus is then placed on different novel optimization techniques studied during the second year, all aiming at easing the optimization of complex workflows characterized by a high number of discipline interdependencies and a large number of design variables in the context of multi-level processes and multi-partner collaborative engineering projects. Three optimization strategies are introduced and validated for a conventional aircraft. First, a multi-objective technique based on Nash games and a genetic algorithm is used on a wing design problem. Attention then turns to the nacelle design, where a surrogate-based optimizer is used to solve a single-objective problem. Finally, a robust approach is adopted to study the effects of parameter uncertainty on the nacelle design process. These new capabilities have been integrated into the AGILE collaborative framework, which will be used in the future to study and optimize novel unconventional aircraft configurations.
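
A minimal sketch of the surrogate-based strategy mentioned for the nacelle problem is given below: an "expensive" analysis is sampled, a cheap radial-basis-function surrogate is fitted, and the surrogate is optimized. The analysis function, sample size, and bounds are hypothetical and do not correspond to the AGILE tool chain.

```python
# Illustrative sketch of surrogate-based optimization: sample a costly analysis,
# fit a cheap surrogate, optimize the surrogate. Everything here is a stand-in,
# not the AGILE nacelle workflow.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_analysis(x):
    """Stand-in for a costly disciplinary analysis (e.g. a nacelle drag model)."""
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] - 0.7) ** 2 + 0.1 * np.sin(8.0 * x[0])

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(40, 2))          # design of experiments
values = np.array([expensive_analysis(x) for x in samples])

surrogate = RBFInterpolator(samples, values)            # cheap approximation

result = minimize(lambda x: surrogate(x.reshape(1, -1))[0],
                  x0=np.array([0.5, 0.5]),
                  bounds=[(0.0, 1.0), (0.0, 1.0)])
print("surrogate optimum:", result.x, "predicted value:", result.fun)
```

In practice the surrogate optimum would be verified with the true analysis and the surrogate refined, but the sketch captures why the approach suits workflows with many discipline interdependencies: the optimizer never touches the expensive tools directly.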

    A Magic Cube Approach for Crashworthiness and Blast Protection Designs of Structural and Material Systems.

    Crashworthiness design is one of the most challenging tasks in automotive product development, and blast protection design is crucial for military operations. The goal is to design an optimal crashworthy or blast-protective structure in terms of topology, shape, and size, for both structural and material layouts. Due to the difficulties in crash analyses and the complexity of the design problems, previous studies were limited to component-level examinations, or considered only a simple design aspect. In this research, an advanced approach entitled the Magic Cube (MQ) approach is proposed, which, for the first time, provides a systematic way to examine general crashworthiness and blast protection designs in terms of both structural and material aspects. The MQ developed in this research consists of three major dimensions: decomposition, design methodology, and general consideration. The decomposition dimension includes the major decomposition approaches developed for crashworthiness design problems, and it can be applied to blast protection design. It has three layers: time (process) decomposition, space decomposition, and scale decomposition. The design methodology dimension is related to the methodologies employed in the design process; its three layers are target cascading, failure mode management, and optimization technique. The general consideration dimension has three layers: multidisciplinary objectives, loadings, and uncertainties. All these layers are coupled with each other to form a 27-element magic cube. A complicated crashworthiness or blast protection design problem can be solved by employing the appropriate approaches in the MQ, represented by the corresponding elements of the cube. Examples are given to demonstrate the feasibility and effectiveness of the proposed approach and its successful application in real vehicle crashworthiness, blast protection, and other related design problems. The MQ approach developed in this research can be readily applied to other similar design problems, such as those related to active safety and vehicle rollover. Ph.D., Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/58392/1/cqi_1.pd

    IMECE2005-81653 SYSTEM HIERARCHY OPTIMIZATION METHOD FOR MEMS APPLICATIONS

    A new design optimization methodology for Micro-Electro-Mechanical Systems (MEMS) applications is presented. The optimization approach considers minimizing the effects of several uncertainty factors on the overall system performance while satisfying target requirements specified in the form of constraints on micro-fabrication processes and the material system. The design process is modeled as a multi-level hierarchical optimal design problem. The design problem is decomposed into two analysis systems: uncertainty effects analysis and performance sensitivity analysis. Each analysis system can be partitioned into several subsystems according to the different functions they perform. The entire problem is treated as a multidisciplinary design optimization (MDO) for maximum robustness and performance. In this study, the analysis results are provided as optimized device geometry parameters for the example of a selected micro-accelerometer device.

    Concurrent optimization using probabilistic analysis of distributed multidisciplinary architectures for design under uncertainty

    Uncertainty-based Multidisciplinary Optimization (UMDO) relies on the propagation of uncertainties across several disciplines. A typical aircraft design process involves the collaboration of multiple and diverse teams with high-fidelity disciplinary tools and experts. Therefore, traditional methods such as All-In-One (AIO), which integrates all the disciplines and treats the entire multidisciplinary analysis process as a black box, become infeasible for uncertainty propagation and analysis. If all the disciplines cannot be tightly integrated, then it is helpful to use a method that conducts uncertainty propagation in each discipline and combines the results into a system-level uncertainty. Distributed UMDO methods based on Collaborative Optimization (CO), Concurrent SubSpace Optimization (CSSO), Analytical Target Cascading (ATC), Bi-Level Integrated System Synthesis (BLISS), etc. use the strategy of decomposition and coordination to carry out distributed uncertainty analysis and optimization while preserving disciplinary autonomy. However, these methods have shortcomings which lead to inaccurate quantification of uncertainty at the system level. One such shortcoming is the inability to handle statistical dependencies among coupling variables. In most cases, the statistical dependencies arise from the underlying functional relationship between the variables. Most of the existing distributed UMDO methods in the literature assume that the coupling variables are independent of each other. Although this assumption is valid under certain conditions, it may lead to inaccurate uncertainty quantification at the system level if the dependencies of the coupling variables are significant and the system-level metric is sensitive to them. Another limitation in the existing distributed UMDO literature is related to interdisciplinary compatibility. One of the common strategies to achieve interdisciplinary compatibility is the moment matching method. Since only the marginal distributions of the coupling variables are considered in moment matching, it works well when the coupling variables are statistically independent. However, when the coupling variables are dependent, this strategy does not guarantee that interdisciplinary compatibility is satisfied for every instantiation of the uncertain variables. Also, most of these methods assume that the uncertain coupling variables have a fixed functional form of probability density function, most commonly a Gaussian density function. This assumption breaks down when the local uncertainties in disciplines are non-Gaussian and the disciplines are nonlinear functions of the input variables, which leads to non-Gaussian coupling variables. To overcome these limitations, Probabilistic Analysis of Distributed Multidisciplinary Architectures (PADMA) is developed. PADMA is a bi-level distributed uncertainty-based multidisciplinary analysis (UMDA) method which allows each discipline to carry out uncertainty propagation independently and concurrently. It is a non-iterative method in which dependence and interdisciplinary compatibility are handled by evaluating the probability of the Event of Interdisciplinary Compatibility (EIC). The probability of EIC is evaluated using conditional probability density functions of the disciplinary metrics. A quantile copula regression method is developed and used to model the conditional probability density functions.
In quantile copula regression, the probability density functions are modeled by regressing multiple levels of quantiles of the disciplinary metric, allowing a comprehensive representation of the overall distribution without any assumption about the functional form of the probability density function. Quantile copula regression also models the dependency between disciplinary metrics using copula functions when disciplines have multiple outputs. Finally, a distributed UMDO method, Concurrent Optimization using Probabilistic Analysis of Distributed Multidisciplinary Architectures (CO-PADMA), has been developed using PADMA and quantile copula regression. CO-PADMA is a bi-level distributed UMDO method which allows distributed analysis and optimization, while handling the dependencies and interdisciplinary compatibility, to find the optimum design and accurately quantify the uncertainty of the system metric. The advantages of the methods developed in this thesis have been demonstrated through their application to analytical and physics-based problems. Ph.D.
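
The dependence issue motivating PADMA can be visualized with a small sketch: a Gaussian copula couples two non-Gaussian marginals so that the resulting coupling variables are statistically dependent. The marginals, correlation, and sample size are hypothetical, and this generic copula illustration is not the quantile copula regression developed in the thesis.

```python
# Illustrative sketch of dependent, non-Gaussian coupling variables built from
# a Gaussian copula. The marginals and correlation are hypothetical; this is a
# generic copula demonstration, not the thesis's quantile copula regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10_000
rho = 0.8                                   # dependence between coupling vars

# Sample correlated standard normals, then map them to uniforms (the copula).
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
u = stats.norm.cdf(z)

# Non-Gaussian marginals for the two coupling variables.
y1 = stats.lognorm(s=0.4, scale=1.0).ppf(u[:, 0])
y2 = stats.weibull_min(c=2.0, scale=3.0).ppf(u[:, 1])

print("rank correlation of coupling variables:",
      round(stats.spearmanr(y1, y2)[0], 3))
```

Treating y1 and y2 as independent, or forcing them to be Gaussian, is precisely the kind of simplification that the abstract argues can distort the system-level uncertainty estimate.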

    Multilevel Design Optimization Under Uncertainty with Application to Product-Material Systems

    The main objective of this research is to develop a computational design tool for multilevel optimization of product-material systems under uncertainty. To accomplish this goal, an exponential penalty function (EPF) formulation based on the method of multipliers is developed for solving multilevel optimization problems within the framework of Analytical Target Cascading (ATC). The original all-at-once constrained optimization problem is decomposed into a hierarchical system with consistency constraints enforcing the target-response coupling in the connected elements. The objective function is combined with the consistency constraints in each element to formulate an augmented Lagrangian with the EPF. The EPF formulation is implemented using double-loop (EPF I) and single-loop (EPF II) coordination strategies and two penalty-parameter-updating schemes. The computational characteristics of the proposed approaches are investigated using different nonlinear convex and non-convex optimization problems. An efficient reliability-based design optimization method, Single Loop Single Vector (SLSV), is integrated with the Augmented Lagrangian (AL) formulation of ATC for the solution of hierarchical multilevel optimization problems under uncertainty. In the proposed SLSV+AL approach, the uncertainties are propagated by matching the required moments of the connecting responses/targets and linking variables present in the decomposed system. The accuracy and computational efficiency of SLSV+AL are demonstrated through the solution of different benchmark problems and comparison of the results with those from other optimization methods. Finally, the developed computational design optimization tool is used for design optimization of hybrid multiscale composite sandwich plates with and without uncertainty. Both carbon nanofiber (CNF) waviness and CNF-matrix interphase properties are included in the model. By decomposing the sandwich plate, structural and material designs are combined and treated as a multilevel optimization problem. The application problem considers the minimum-weight design of an in-plane loaded sandwich plate with a honeycomb core and laminated composite face sheets that are reinforced by both conventional continuous fibers and a CNF-enhanced polymer matrix. In addition to global buckling, shear crimping, intracell buckling, and face sheet wrinkling are treated as design constraints.
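
As a rough illustration of how a consistency constraint can be folded into an element objective with an exponential penalty, the sketch below relaxes a single target-response deviation in a toy lower-level element using a generic exponential multiplier method. The penalty form, multiplier update, and subproblem are illustrative assumptions and not the EPF I/EPF II formulations of the dissertation.

```python
# Illustrative sketch: relaxing one ATC consistency constraint t - r(x) = 0 by
# treating it as the inequality pair c <= 0 and -c <= 0, with c = t - r(x), and
# applying a generic exponential multiplier method. The toy element, penalty
# parameter, and update rule are assumptions, not the dissertation's EPF.
import numpy as np
from scipy.optimize import minimize

def exp_penalty(g, lam, r):
    """Exponential penalty term for an inequality constraint value g."""
    return (lam / r) * (np.exp(r * g) - 1.0)

def element_objective(x, target, lam1, lam2, r):
    """Toy lower-level element: local objective plus consistency penalties."""
    local = (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
    response = x[0] + 2.0 * x[1]           # quantity reported up to the parent
    c = target - response                  # consistency deviation
    return local + exp_penalty(c, lam1, r) + exp_penalty(-c, lam2, r)

target, r = 1.5, 2.0
lam1 = lam2 = 1.0
x = np.zeros(2)
for _ in range(5):                         # crude outer multiplier updates
    res = minimize(element_objective, x0=x, args=(target, lam1, lam2, r))
    x = res.x
    c = target - (x[0] + 2.0 * x[1])
    lam1 *= np.exp(r * c)                  # exponential multiplier update
    lam2 *= np.exp(-r * c)
print("element design:", x, "response:", x[0] + 2.0 * x[1], "target:", target)
```

As the multipliers adapt, the element's response is driven toward the cascaded target without ever imposing the consistency constraint exactly, which is the basic mechanism that lets ATC coordinate decomposed elements.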