    Numerical product design: Springback prediction, compensation and optimization

    Numerical simulations are widely deployed in product design. However, the numerical tools are not yet always sufficiently accurate and reliable. This article focuses on the current state and recent developments in three stages of product design: springback prediction, springback compensation and optimization by finite element (FE) analysis. To improve springback prediction by FE analysis, guidelines on mesh discretization are provided and a new through-thickness integration scheme for shell elements is introduced. In the next stage of virtual product design, the product is compensated for springback. In industry, deformations due to springback are currently compensated manually. Here, a procedure to automatically compensate the tool geometry, including the CAD description, is presented and successfully applied to an industrial automotive part. The last stage in virtual product design comprises optimization. This article presents an optimization scheme capable of designing optimal and robust metal forming processes efficiently.
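
    The compensation step lends itself to a simple iterative loop. The following is a minimal sketch of the classic displacement-adjustment idea, not the authors' specific procedure; `simulate_springback` is a hypothetical stand-in for a full FE forming-plus-springback run, and `alpha` is a relaxation factor typically tuned per part.

```python
import numpy as np

def compensate_tool(target, simulate_springback, alpha=1.0,
                    tol=1e-3, max_iter=20):
    """Morph the tool mesh until the sprung-back part matches the
    target geometry (node coordinates, arrays of shape (n, 3))."""
    tool = target.copy()                      # start from the nominal CAD shape
    for _ in range(max_iter):
        part = simulate_springback(tool)      # FE forming + springback run
        deviation = part - target             # per-node shape error
        if np.linalg.norm(deviation, axis=1).max() < tol:
            break
        tool -= alpha * deviation             # push the tool against springback
    return tool
```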

    Novel anisotropic continuum-discrete damage model capable of representing localized failure of massive structures. Part II: identification from tests under heterogeneous stress field

    In Part I of this paper we have presented a simple model capable of describing the localized failure of a massive structure. In this part, we discuss the identification of the model parameters from two kinds of experiments: a uniaxial tensile test and a three-point bending test. The former is used only to illustrate how the material response depends on the parameters; we focus mostly on the latter, discussing the inverse optimization problem in which the specimen is subjected to a heterogeneous stress field.
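
    The inverse optimization problem mentioned above is, in essence, a least-squares fit of model parameters to a measured response curve. A minimal sketch under that reading follows; `run_bending_fe(params, u)` is a hypothetical wrapper that runs the three-point bending FE model and returns forces at the prescribed displacements `u`.

```python
import numpy as np
from scipy.optimize import least_squares

def identify_parameters(u_meas, f_meas, run_bending_fe, p0):
    """Fit damage-model parameters to a measured force-displacement curve."""
    def residual(p):
        return run_bending_fe(p, u_meas) - f_meas    # model minus experiment
    sol = least_squares(residual, p0, method="trf",
                        bounds=(0.0, np.inf))        # parameters kept positive
    return sol.x
```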

    Quantile-based optimization under uncertainties using adaptive Kriging surrogate models

    Uncertainties are inherent to real-world systems. Taking them into account is crucial in industrial design problems, and this can be achieved through reliability-based design optimization (RBDO) techniques. In this paper, we propose a quantile-based approach to solve RBDO problems. We first transform the safety constraints, usually formulated as admissible probabilities of failure, into constraints on quantiles of the performance criteria. In this formulation, the quantile level controls the degree of conservatism of the design. Starting with the premise that industrial applications often involve high-fidelity and time-consuming computational models, the proposed approach makes use of Kriging surrogate models (a.k.a. Gaussian process modeling). Thanks to the Kriging variance (a measure of the local accuracy of the surrogate), we derive a procedure with two stages of enrichment of the design of computer experiments (DoE) used to construct the surrogate model. The first stage globally reduces the Kriging epistemic uncertainty and adds points in the vicinity of the limit-state surfaces describing the system performance to be attained. The second stage locally checks, and if necessary improves, the accuracy of the quantiles estimated along the optimization iterations. Applications to three analytical examples and to the optimal design of a car body subsystem (minimal mass under mechanical safety constraints) show the accuracy and the remarkable efficiency brought by the proposed procedure.
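
    To make the quantile reformulation concrete: for a continuous limit state g, the probabilistic constraint P[g(X; d) <= 0] <= p_f is equivalent to requiring the p_f-quantile of g(X; d) to be non-negative, and that quantile is cheap to estimate by Monte Carlo on the surrogate. Below is a minimal sketch with a toy limit state, using scikit-learn's GaussianProcessRegressor as a stand-in for the Kriging model; the paper's two enrichment stages are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X_doe = rng.normal(size=(30, 2))               # small initial DoE
y_doe = X_doe[:, 0]**2 - X_doe[:, 1] + 1.0     # toy limit state g(x)
gp = GaussianProcessRegressor(normalize_y=True).fit(X_doe, y_doe)

def quantile_margin(p_f=0.05, n_mc=10_000):
    """Feasible design iff the returned p_f-quantile of g is >= 0."""
    X_mc = rng.normal(size=(n_mc, 2))          # sample the uncertain inputs X
    g_hat = gp.predict(X_mc)                   # cheap surrogate evaluations
    return np.quantile(g_hat, p_f)
```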

    Product Design Optimization Under Epistemic Uncertainty

    This dissertation addresses product design optimization, including reliability-based design optimization (RBDO) and robust design, under epistemic uncertainty. It is divided into four major components, as outlined below.

    Firstly, a comprehensive study of uncertainties is performed, in which sources of uncertainty are listed and categorized and their impacts are discussed. Epistemic uncertainty is of particular interest: it is due to lack of knowledge and can be reduced by taking more observations. In particular, strategies to address epistemic uncertainty due to an implicit constraint function are discussed.

    Secondly, a sequential sampling strategy to improve RBDO under an implicit constraint function is developed. In modern engineering design, an RBDO task is often performed by a computer simulation program, which can be treated as a black box, as its analytical function is implicit. An efficient sampling strategy for learning the probabilistic constraint function within the design optimization framework is presented. The method is a sequential experimentation around the approximate most probable point (MPP) at each step of the optimization process. It is compared with MPP-based sampling, the lifted surrogate function, and non-sequential random sampling.

    Thirdly, a particle-splitting-based reliability analysis approach is developed for design optimization. In reliability analysis, traditional simulation methods such as Monte Carlo simulation may provide accurate results, but often at high computational cost. To increase efficiency, particle splitting is integrated into RBDO. It is an improvement of subset simulation with multiple particles that enhances the diversity and stability of simulation samples. The method is further extended to problems with multiple probabilistic constraints and compared with MPP-based methods.

    Finally, a reliability-based robust design optimization (RBRDO) framework is provided to consider design reliability and design robustness simultaneously. The quality-loss objective of robust design, considered together with the production cost of RBDO, is used to formulate a multi-objective optimization problem. With epistemic uncertainty from the implicit performance function, the sequential sampling strategy is extended to RBRDO, and a combined metamodel is proposed to handle both controllable and uncontrollable variables. The solution is a Pareto frontier, in contrast to the single optimal solution of RBDO.
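
    For the reliability-analysis component, the baseline that the particle-splitting approach improves on is plain subset simulation, which estimates a small failure probability as a product of larger conditional probabilities. A minimal sketch for P[g(X) <= 0] with standard-normal inputs follows; the one-step Metropolis move is deliberately simplistic, and `g` is whatever limit state the caller supplies.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, seed=0):
    """Estimate P[g(X) <= 0] for X ~ N(0, I_dim)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, dim))
    y = np.apply_along_axis(g, 1, X)
    p_f = 1.0
    for _ in range(20):                            # cap on the number of levels
        level = np.quantile(y, p0)                 # next intermediate threshold
        if level <= 0.0:                           # reached the failure domain
            return p_f * np.mean(y <= 0.0)
        p_f *= p0
        keep = y <= level                          # survivors become seeds
        idx = rng.integers(np.count_nonzero(keep), size=n)
        seeds, y_seeds = X[keep][idx], y[keep][idx]
        # One Metropolis step per particle: Gaussian proposal, accepted if it
        # stays below the threshold and passes the N(0, I) density ratio.
        prop = seeds + 0.5 * rng.normal(size=(n, dim))
        y_prop = np.apply_along_axis(g, 1, prop)
        ratio = np.exp(0.5 * (np.sum(seeds**2, 1) - np.sum(prop**2, 1)))
        accept = (y_prop <= level) & (rng.random(n) < ratio)
        X = np.where(accept[:, None], prop, seeds)
        y = np.where(accept, y_prop, y_seeds)
    return p_f * np.mean(y <= 0.0)
```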

    Variable-speed rotor helicopters: Performance comparison between continuously variable and fixed-ratio transmissions

    Variable-speed rotor studies represent a promising research field for rotorcraft performance improvement and fuel consumption reduction. The problems related to employing a variable main rotor speed are numerous and require an interdisciplinary approach. There are two main variable-speed concepts, depending on the type of transmission employed: Fixed Ratio Transmission (FRT) and Continuously Variable Transmission (CVT) rotors. The impact of the two types of transmission on overall helicopter performance is estimated when both are operating at their optimal speeds. This is done by using an optimization strategy able to find the optimal rotational speeds of the main rotor and turboshaft engine for each flight condition. The process makes use of two different simulation tools: a turboshaft engine performance code and a helicopter trim simulation code for steady-state level flight. The first is a gas turbine performance simulator (TSHAFT) developed and validated at the University of Padova. The second is a simple tool used to evaluate the single-blade forces and integrate them over the 360-degree revolution of the main rotor, and thus predict an average value of the power load required from the engine. The results show that the FRT does not present significant performance differences from the CVT over a wide range of advancing speeds. However, close to the two conditions of greatest interest, i.e. hover and cruise forward flight, the discrepancies between the two transmission types become relevant: engine performance is found to be penalized by the FRT, indicating that significant fuel reductions can be obtained only by employing the CVT concept. In conclusion, the FRT is a good way to reduce fuel consumption at intermediate advancing speeds; CVT advantages become relevant only near hover and high-speed cruise conditions.
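
    The per-flight-condition search described above can be sketched as a small nested optimization. This is not the authors' coupling of TSHAFT with the trim code; `trim_power(v, omega_rotor)` and `engine_fuel_flow(power, omega_engine)` are hypothetical stand-ins for those two tools.

```python
from scipy.optimize import minimize

def optimal_speeds(v, trim_power, engine_fuel_flow,
                   x0=(30.0, 2000.0)):             # initial guesses (rad/s, rpm)
    """Find rotor and engine speeds minimizing fuel flow at airspeed v."""
    def fuel(x):
        omega_rotor, omega_engine = x
        power = trim_power(v, omega_rotor)         # power required at trim
        return engine_fuel_flow(power, omega_engine)
    res = minimize(fuel, x0, method="Nelder-Mead") # derivative-free search
    return res.x, res.fun
```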

    Performance optimization of a leagility inspired supply chain model: a CFGTSA algorithm based approach

    Lean and agile principles have attracted considerable interest in the past few decades. Industrial sectors throughout the world are adopting these principles to enhance their performance, since they have proven efficient in handling supply chains. However, the present market trend demands a more robust strategy incorporating the salient features of both lean and agile principles. Inspired by this, the leagility principle has emerged, encapsulating both lean and agile features. The present work proposes a leagile supply-chain model for manufacturing industries. The paper emphasizes the various aspects of leagile supply-chain modeling and implementation and proposes a new hybrid Chaos-based Fast Genetic Tabu Simulated Annealing (CFGTSA) algorithm to solve the complex scheduling problem prevailing in the leagile environment. The proposed CFGTSA algorithm is compared with GA, SA, TS and hybrid Tabu-SA algorithms to demonstrate its efficacy in handling complex scheduling problems.
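
    The CFGTSA hybrid itself is not reproduced here, but its tabu-plus-simulated-annealing core can be sketched on a permutation scheduling problem as follows; `makespan` is a hypothetical objective on a job permutation, and the chaos-based and genetic components are omitted.

```python
import math
import random
from collections import deque

def sa_with_tabu(makespan, n_jobs, T=100.0, cooling=0.995,
                 iters=5000, tabu_len=50, seed=0):
    """Swap-neighbourhood simulated annealing with a tabu list of recent swaps."""
    random.seed(seed)
    cur = list(range(n_jobs))
    cost = makespan(cur)
    best, best_cost = cur[:], cost
    tabu = deque(maxlen=tabu_len)                  # recently applied swap moves
    for _ in range(iters):
        i, j = sorted(random.sample(range(n_jobs), 2))
        if (i, j) in tabu:
            continue                               # forbid cycling back
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = makespan(cand) - cost
        if delta < 0 or random.random() < math.exp(-delta / T):
            cur, cost = cand, cost + delta         # accept (possibly uphill) move
            tabu.append((i, j))
            if cost < best_cost:
                best, best_cost = cur[:], cost
        T *= cooling                               # geometric cooling schedule
    return best, best_cost
```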

    Reliability-based design with system reliability and design improvement

    This thesis focuses on developing a methodology for accurately estimating the probability of failure of a series system. Existing methods for series-system-based design optimization are inaccurate because they assign a reliability target to each failure mode individually; as a result, the overall system reliability falls short of the target. In the method proposed in this work, the user assigns the required system reliability at the start, and the optimizer then apportions reliability among the failure modes so as to meet the required system reliability level. Ditlevsen second-order upper bounds are used to estimate the system probability of failure. Several examples are presented to verify the results obtained.
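
    For reference, the Ditlevsen second-order bounds on the failure probability of a series system with failure events F_1, ..., F_n (conventionally ordered by decreasing P(F_i)) are:

```latex
P(F) \;\ge\; P(F_1) + \sum_{i=2}^{n} \max\!\Big(0,\; P(F_i) - \sum_{j=1}^{i-1} P(F_i \cap F_j)\Big),
\qquad
P(F) \;\le\; \sum_{i=1}^{n} P(F_i) - \sum_{i=2}^{n} \max_{j<i} P(F_i \cap F_j).
```

    The thesis uses the upper bound to estimate the system probability of failure.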

    Robust Optimization for Sequential Field Development Planning

    To achieve high profitability from an oil field, optimizing the field development strategy (e.g., well type, well placement, drilling schedule) before committing to a decision is critically important. The profitability at a given control setting is predicted by running a reservoir simulation model, while determining a robust optimal strategy generally requires many expensive simulations. In this work, we focus on developing practical and efficient methodologies for solving reservoir optimization problems in which the actions that can be controlled are discrete and sequential (e.g., the drilling sequence of wells). The optimization problems we address must take into account both geological uncertainty and the reduction in uncertainty resulting from observations. As the actions are discrete and sequential, the process can be characterized as sequential decision-making under uncertainty, where past decisions may affect both the possibility of the future choices of actions and the possibility of future uncertainty reduction. This thesis tackles the challenges in sequential optimization by considering three main issues: 1) optimizing discrete control variables, 2) dealing with geological uncertainty in robust optimization, and 3) accounting for future learning when making optimal decisions.

    As the first contribution of this work, we develop a practical online-learning methodology derived from A* search for solving reservoir optimization problems with discrete sets of actions. Sequential decision making can be formulated as finding the path with the maximum reward in a decision tree. To efficiently compute an optimal or near-optimal path, heuristics from relaxed problems are first used to estimate the maximum value constrained to past decisions, and online-learning techniques are then applied to improve the estimation accuracy by learning the errors of the initial approximations obtained from previous decision steps. In this way, an accurate estimate of the maximized value can be inexpensively obtained, thereby guiding the search toward the optimal solution efficiently. This approach allows for optimization of either a complete strategy with all available actions taken sequentially, or only the first few actions at a reduced cost by limiting the search depth.

    The second contribution is related to robust optimization when an ensemble of reservoir models is used to characterize geological uncertainty. Instead of computing the expectation of an objective function as an ensemble average, we develop various bias-correction methods applied to the reservoir mean model to estimate the expected value efficiently without sacrificing accuracy. The key point of this approach is that the bias between the objective-function value obtained from the mean model and the average objective-function value over an ensemble can be corrected using only information from distinct controls and model realizations. During the optimization process, we only require simulations of the mean model to estimate the expected value using the bias-corrected mean model. This methodology can significantly improve the efficiency of robust optimization and allows for fairly general optimization methods.

    In the last contribution of this thesis, we address the problem of making optimal decisions while considering the possibility of learning through future actions, i.e., opportunities to improve the optimal strategy resulting from future uncertainty reduction. To efficiently account for the impact of future information on optimal decisions, we simplify the value-of-information analysis through the key information that would help make better future decisions and the key actions that would result in obtaining that information. In other words, we focus on the use of key observations to reduce the uncertainty in key reservoir features for optimization problems, rather than using all observations to reduce all uncertainties. Moreover, by using supervised-learning algorithms, we can identify the optimal observation subset for key uncertainty reduction automatically and evaluate the information’s reliability simultaneously. This allows direct computation of the posterior probability distribution of the key uncertainty based on Bayes’ rule, avoiding the need for expensive data-assimilation algorithms to update the entire reservoir model.
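
    The bias-correction idea in the second contribution can be sketched compactly: calibrate the gap between the mean-model objective and the ensemble-average objective at a few control settings, then optimize using mean-model simulations only. This is a minimal sketch, not the thesis' actual estimators; `f_mean` and `f_realization` are hypothetical simulator wrappers, and the linear bias model is an assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def bias_corrected_objective(controls, f_mean, f_realization, realizations):
    """controls: (n, d) array of calibration control settings."""
    y_mean = np.array([f_mean(u) for u in controls])
    y_ens = np.array([np.mean([f_realization(u, m) for m in realizations])
                      for u in controls])
    bias = LinearRegression().fit(controls, y_ens - y_mean)
    # During optimization, only the mean model needs to be simulated:
    return lambda u: f_mean(u) + bias.predict(u.reshape(1, -1))[0]
```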