
    Contact Stress Prediction Model for Variable Hyperbolic Circular Arc Gear Based on the Optimized Kriging-Response Surface Model

    In order to study the influence of design parameters (pressure angle, tooth width, tooth line radius, modulus, and moment) on the contact stress of the variable hyperbolic circular arc gear (VHCAG) and to obtain the best manufacturing parameters, a Kriging-response surface hybrid surrogate model combined with an adaptive quantum particle swarm optimization (QPSO) algorithm is proposed to establish a prediction model relating the design parameters to the contact stress. An intelligent QPSO algorithm based on adaptive weighting and natural selection is proposed to optimize the parameters of the Gaussian variation (variogram) function of the Kriging surrogate model and thereby improve its fitting accuracy. The global search ability of the quantum particles is improved, and the accuracy and stability of the algorithm are enhanced, by adaptively adjusting the particle weights and optimizing the elimination step of the iteration; on this basis, the response relationship between the design parameters and the contact stress is established. A binomial (second-order) response surface model of the gear design parameters and contact stress is then fitted to the output of the improved Kriging model, which simplifies the complex expression of the Kriging model. The effects of the parameters and their cross terms on the contact stress are analysed using the contact stress prediction model built with the optimized Kriging-response surface hybrid surrogate model. This hybrid surrogate model lays a foundation for research on the reliability and robust optimization of cylindrical gears with a variable hyperbolic arc tooth profile.
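    A minimal sketch of the Kriging-plus-response-surface idea described above, written in Python with scikit-learn: a Gaussian-process (Kriging) surrogate is fitted to a handful of sampled designs, and a second-order response surface is then fitted to the surrogate's predictions. The design-variable ranges and the synthetic stress function are illustrative assumptions, and scikit-learn's built-in likelihood optimizer stands in for the paper's adaptive QPSO tuning of the Gaussian correlation parameters.

```python
# Illustrative sketch: Kriging (Gaussian-process) surrogate for contact stress,
# followed by a quadratic response-surface fit to the surrogate's predictions.
# Variable names, ranges, and the synthetic stress function are assumptions;
# the paper tunes the Gaussian correlation parameters with an adaptive QPSO,
# which is replaced here by scikit-learn's built-in likelihood optimizer.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Assumed design variables: pressure angle [deg], tooth width [mm], tooth-line radius [mm], module [mm]
X = rng.uniform([17.5, 60.0, 200.0, 4.0], [25.0, 100.0, 500.0, 10.0], size=(40, 4))

def synthetic_contact_stress(x):          # placeholder for an FE-computed contact stress [MPa]
    a, b, r, m = x.T
    return 900 + 8.0 * a - 2.5 * b - 0.4 * r - 30.0 * m + 0.02 * b * r / m

y = synthetic_contact_stress(X)

# Kriging surrogate with a Gaussian (RBF) correlation function
kriging = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=np.ones(4)),
    normalize_y=True, n_restarts_optimizer=5, random_state=0)
kriging.fit(X, y)

# Second-order (binomial) response surface fitted to the Kriging predictions
X_dense = rng.uniform([17.5, 60.0, 200.0, 4.0], [25.0, 100.0, 500.0, 10.0], size=(400, 4))
quad = PolynomialFeatures(degree=2, include_bias=True)
rsm = LinearRegression().fit(quad.fit_transform(X_dense), kriging.predict(X_dense))
print("Quadratic RSM coefficients:", rsm.coef_.round(3))
```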

    A Data Mining Methodology for Vehicle Crashworthiness Design

    This study develops a systematic design methodology based on data mining theory for decision-making in the development of crashworthy vehicles. The new data mining methodology allows the exploration of a large crash simulation dataset to discover the underlying relationships among vehicle crash responses and design variables at multiple levels, and to derive design rules based on whole-vehicle safety requirements to make decisions about component-level and subcomponent-level design. The method resolves a major issue with existing design approaches for vehicle crashworthiness, namely their limited ability to extract information from large datasets, which can hamper decision-making in the design process. At the component level, two structural design approaches were implemented for detailed component design with the data mining method: a dimension-based approach and a node-based approach, which handle structures with regular and irregular shapes, respectively. These two approaches were used to design a thin-walled vehicular structure, the S-shaped beam, against crash loading. A large number of design alternatives were created, and their responses under loading were evaluated by finite element simulations. The design variables and computed responses formed a large design dataset. This dataset was then mined to build a decision tree. Based on the decision tree, the interrelationships among the design parameters were revealed, and design rules were generated to produce a set of good designs. After the data mining, the critical design parameters were identified and the design space was reduced, which simplifies the design process. To partially replace the expensive finite element simulations, a surrogate model was used to model the relationships between design variables and response. Four machine learning algorithms suitable for surrogate model development were compared; based on the results, Gaussian process regression was determined to be the most suitable technique in the present scenario, and an optimization process was developed to tune the algorithm’s hyperparameters, which govern the model structure and training process. To account for engineering uncertainty in the data mining method, a new decision tree for uncertain data was proposed based on the joint probability in uncertain spaces, and it was applied again to the design of the S-beam structure. The findings show that the new decision tree can produce effective decision-making rules for engineering design under uncertainty. To evaluate the new approaches developed in this work, a comprehensive case study was conducted by designing a vehicle system against a frontal crash. A publicly available vehicle model was simplified and validated. Using the newly developed approaches, new component designs for this vehicle were generated and integrated back into the vehicle model so that their crash behavior could be simulated. Based on the simulation results, one can conclude that the designs produced with the new method outperform the original design in terms of mass, intrusion, and peak acceleration. Therefore, the performance of the new design methodology has been confirmed. The current study demonstrates that the new data mining method can be used in vehicle crashworthiness design and has the potential to be applied to other complex engineering systems with large amounts of design data.
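    The core data-mining step, a decision tree mined from a design dataset plus a Gaussian-process surrogate, can be sketched as follows. This is not the study's model: the dataset is synthetic and the S-beam design variables shown (thickness, bend radius, taper) are assumed stand-ins.

```python
# Illustrative sketch of the data-mining step: fit a decision tree to a table of
# design variables vs. a crash response, then read off design rules from the tree.
# The dataset here is synthetic and the variable names are assumptions standing in
# for the S-beam design variables of the study.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
names = ["thickness_mm", "bend_radius_mm", "taper_ratio"]
X = rng.uniform([1.0, 20.0, 0.5], [3.0, 80.0, 1.0], size=(300, 3))

def peak_force_kN(x):                     # placeholder for an FE-computed crash response
    t, r, k = x.T
    return 40 * t - 0.1 * r + 15 * k + rng.normal(0, 1.0, len(x))

y = peak_force_kN(X)

# Decision tree: shallow depth keeps the extracted design rules interpretable
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=names))

# Gaussian-process surrogate to partially replace further FE runs
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 10.0, 0.1]),
                              normalize_y=True, random_state=0).fit(X, y)
print("Predicted peak force at a candidate design:", gp.predict([[2.0, 50.0, 0.8]])[0])
```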

    Development of advanced criteria for blade root design and optimization

    In gas and steam turbine engines, blade root attachments are considered critical components that require special attention during design. The traditional method of root design relied on highly experienced engineers, yet in most cases the strength of the material was not fully exploited. In the current thesis, different methodologies for automatic design and optimization of the blade root have been evaluated, and several methods for reducing the computational time have been proposed. First, a simplified analytical model of the fir-tree was developed in order to evaluate the mean stress in different sections of the blade root and disc groove. Then, a more detailed two-dimensional model of the attachment, suitable for finite element (FE) analysis, was developed for the dovetail and the fir-tree. The model was made general enough to include all possible shapes of the attachment. The analytical model was then projected onto the 2D model to compare the results obtained from the analytical and FE methods. This comparison is essential for the later use of the analytical evaluation of the fir-tree as a technique for reducing the search domain of the optimization. Moreover, the possibility of predicting the contact normal stress of the blade and disc attachment by means of a punch test was evaluated. A puncher composed of a flat surface and a rounded edge was simulated as the equivalent of a sample dovetail case, and the contact stress profiles obtained analytically and from 2D and 3D models of the puncher and the dovetail were compared. The genetic algorithm (GA) adopted as the optimizer is described, and the different rules affecting this algorithm are introduced. In order to reduce the number of calls to the high-fidelity finite element (FE) method, surrogate functions were evaluated, and among them the Kriging function was selected for use in the current study. Its efficiency was evaluated within a numerical optimization of a single lobe. In this study, the surrogate model is not used on its own to find the optimum attachment shape, as it may provide low accuracy; instead, to benefit from its fast evaluation while mitigating its low accuracy, the Kriging function (KRG) was used within the GA as a pre-evaluation of each candidate before performing the FE analysis. Moreover, the feasible and non-feasible regions of the multi-dimensional, complex search domain of the attachment geometry are explained, and the challenge of a multi-district (disjoint) domain is tackled with a new mutation operation. In order to handle the non-continuous domain, an adaptive penalty method based on Latin hypercube sampling (LHS) was proposed, which successfully improved the optimization convergence. Furthermore, different topologies of the contact in a dovetail were assessed: four different types of contact were modeled and optimized under the same loading and boundary conditions. The punch test was also assessed with different contact shapes. In addition, the state of stress of the dovetail at different rotational speeds and with different types of contact was assessed. In the results and discussion, an optimization of a dovetail with the analytical approach was performed and the optimum was compared with the one obtained by FE analysis. It was found that the analytical approach has the advantage of fast evaluation, and if the constraints are well defined the results are comparable to the FE solution. Then, a Kriging function was embedded within the GA optimization and the approach was evaluated in an optimization of a dovetail. The results revealed that the low computational cost of the surrogate model is an advantage and that its limited accuracy can be compensated by combining FE and surrogate evaluations. Later, the capability of employing the analytical approach in a fir-tree optimization was assessed; since the fir-tree geometry has a more complex working domain than the dovetail, the conclusions also hold for the dovetail. Different methods were assessed and compared. In the first attempt, the analytical approach was adopted as a filter to screen out the least promising candidates; this method provided a 7% improvement in convergence. In another attempt, the proposed adaptive penalty method was added to the optimization, which successfully found a reasonable optimum with a 47% reduction in computational cost. Later, a combination of analytical and FE models was used in a multi-objective, multi-level optimization, which provided a 32% improvement with less error compared to the previous method. In the last evaluation of this type, the analytical approach was used on its own in a multi-objective optimization in which the results were selected according to an FE evaluation of the fittest candidates. Although this approach provided an 86% reduction in computational time, it depends strongly on the case under investigation and provides low accuracy in the final solution. Furthermore, a robust optimum was found for both the dovetail and the fir-tree in a multi-objective optimization; in this trial, the proposed adaptive penalty method was combined with the surrogate model.
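    The Kriging-within-GA pre-evaluation strategy described above can be illustrated with the toy Python loop below. The geometry parameterization, the objective (a stand-in for an FE stress analysis), and all algorithm settings are assumptions chosen only to show the screening mechanism.

```python
# Minimal sketch of the GA + Kriging pre-screening idea: offspring are first ranked
# by a cheap Kriging (GP) surrogate, and only the most promising fraction is sent to
# the expensive evaluation (standing in for an FE stress analysis). The parameterization
# and the "fe_stress" function are placeholders, not the thesis' actual fir-tree model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def fe_stress(x):                      # expensive black box (placeholder objective, minimized)
    return np.sum((x - 0.3) ** 2, axis=-1)

dim, pop_size, n_gen, screen_keep = 4, 20, 15, 5
pop = rng.random((pop_size, dim))
fit = fe_stress(pop)
X_hist, y_hist = pop.copy(), fit.copy()

for _ in range(n_gen):
    # Refit the Kriging surrogate on all designs evaluated so far
    gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X_hist, y_hist)
    # Variation: crossover between random parents plus Gaussian mutation
    parents = pop[rng.integers(0, pop_size, (3 * pop_size, 2))]
    children = np.clip(parents.mean(axis=1) + rng.normal(0, 0.05, (3 * pop_size, dim)), 0, 1)
    # Surrogate pre-screening: only the best-predicted children get an "FE" evaluation
    best = children[np.argsort(gp.predict(children))[:screen_keep]]
    best_fit = fe_stress(best)
    X_hist, y_hist = np.vstack([X_hist, best]), np.hstack([y_hist, best_fit])
    # Survivor selection over parents + screened children
    merged, merged_fit = np.vstack([pop, best]), np.hstack([fit, best_fit])
    keep = np.argsort(merged_fit)[:pop_size]
    pop, fit = merged[keep], merged_fit[keep]

print("Best design found:", pop[0], "objective:", fit[0])
```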

    A framework for combining model calibration with model-based optimization in virtual engineering design workflows

    In recent years, the development of complex engineering products has seen a movement towards increasing levels of virtualisation using expensive black-box simulations. One of the main factors driving this trend is the rapid increase in available computational resources: as computational capabilities are further developed, projects which used to be infeasible are now possible. When using a virtual engineering design process, once the structure of the simulation model has been built, there is a need to perform both calibration and optimisation in order to ensure that the resulting outputs presented to a decision maker correctly represent the optimal solutions. Both of these stages require model evaluations to determine the efficacy of new parameterisations and designs. This becomes a problem when there is only a limited budget of evaluations available within the design process for both stages. The problem is reinforced further by the current practice of considering the two stages as separate problems with only a limited transfer of knowledge between them, rather than as a linked process. The question posed within this research is whether there would be any benefit to adopting a linked approach to the calibration and optimisation of expensive multi-objective design problems. In order to answer this question, it is first essential to set out a mathematical formulation for the joint problem of calibration and optimisation. To assess any newly developed methods, it is necessary to devise a set of benchmark problems that contain both model parameters and control inputs to be determined. This is achieved through the adaptation of pre-existing problems from the optimisation literature as well as the development of a new component that fits within the Walking Fish Group (WFG) framework. A new alternating combined methodology is developed with the aim of gaining information within the more relevant areas of the search space to improve the efficiency of the evaluations used. This alternating method is further expanded to incorporate surrogates, with the aim of improving knowledge sharing between the stages of model calibration and optimisation. It is found that the use of the new alternating method can improve the final parameter sets obtained by the calibration when compared to the classical approach. The extended alternating approach also offers superior calibration, in addition to achieving faster improvement in the convergence of the optimiser to the true Pareto front of optimal designs.
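    A highly simplified sketch of the alternating calibration/optimisation idea follows, with a cheap analytic function standing in for the expensive black-box simulator; the thesis' surrogate-assisted, multi-objective formulation is reduced here to a single objective purely for illustration.

```python
# Conceptual sketch of an alternating calibration/optimisation loop under a shared
# evaluation budget. The simulator, observed data, and parameter/design bounds are
# all synthetic placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def simulator(design, theta):
    # Cheap stand-in for an expensive black-box simulation
    return (design - theta[0]) ** 2 + theta[1] * np.sin(3 * design)

theta_true = np.array([0.6, 0.4])
obs_designs = np.linspace(0, 1, 5)
obs = simulator(obs_designs, theta_true) + rng.normal(0, 0.01, 5)

theta = np.array([0.0, 1.0])          # initial guess for the model parameters
design = 0.5                          # initial design

for it in range(5):
    # Calibration stage: fit theta to the observations (sum-of-squares misfit)
    cal = minimize(lambda t: np.sum((simulator(obs_designs, t) - obs) ** 2),
                   theta, method="Nelder-Mead")
    theta = cal.x
    # Optimisation stage: minimise the calibrated model's output over the design
    opt = minimize(lambda d: simulator(d[0], theta), [design],
                   bounds=[(0.0, 1.0)], method="L-BFGS-B")
    design = float(opt.x[0])
    print(f"iter {it}: theta={theta.round(3)}, design={design:.3f}")
```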

    Multi-Fidelity Gaussian Process Emulation And Its Application In The Study Of Tsunami Risk Modelling

    Investigating uncertainties in computer simulations can be prohibitive in terms of computational cost, since the simulator needs to be run over a large number of input values. Building a statistical surrogate model of the simulator, using a small design of experiments, greatly alleviates the computational burden of carrying out such investigations. Nevertheless, this can still exceed the computational budget of many studies. We present a novel method that combines both approaches: the multilevel adaptive sequential design of computer experiments (MLASCE) in the framework of Gaussian process (GP) emulators. MLASCE is based on two major ingredients: efficient design of experiments, such as sequential designs, and the combination of training data of different degrees of sophistication in a so-called multi-fidelity method, or multilevel method when these fidelities are ordered, typically by increasing resolution. This dual strategy allows us to allocate limited computational resources efficiently across simulations of different levels of fidelity and to build the GP emulator. The allocation of computational resources is shown to be the solution of a simple optimization problem in a special case for which we theoretically prove the validity of our approach. MLASCE is compared with other existing models of multi-fidelity Gaussian process emulation, and gains of orders of magnitude in accuracy for medium-size computing budgets are demonstrated in numerical examples. MLASCE should be useful in computer experiments for natural disaster risk and more than a mere tool for calculating the scale of natural disasters. To show that MLASCE meets this expectation, we propose the first end-to-end example of a risk model for household asset loss due to a possible future tsunami. Within this proposed framework, MLASCE provides a reliable statistical surrogate for a realistic tsunami risk assessment under a restricted computational budget and delivers accurate and nearly instant predictions of future tsunami risks.
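    A minimal two-fidelity GP emulation sketch in the spirit of the multilevel approach described above: a GP is fitted to many cheap runs, and a second GP models the discrepancy of a few expensive runs. The two analytic "simulators" and the least-squares estimate of the scaling factor are assumptions for illustration, not MLASCE itself.

```python
# Two-fidelity GP emulation toy example (autoregressive, Kennedy-O'Hagan style):
# emulate a cheap low-fidelity model with a GP, then model the high-fidelity
# discrepancy with a second GP and combine the two.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def f_low(x):   return 0.5 * np.sin(8 * x) + 0.3 * x          # cheap, biased model
def f_high(x):  return np.sin(8 * x) + 0.3 * x                # expensive reference

X_lo = np.linspace(0, 1, 25).reshape(-1, 1)                    # many cheap runs
X_hi = np.linspace(0, 1, 6).reshape(-1, 1)                     # few expensive runs

gp_lo = GaussianProcessRegressor(ConstantKernel() * RBF(0.1), normalize_y=True)
gp_lo.fit(X_lo, f_low(X_lo).ravel())

# Crude least-squares estimate of the scaling between fidelities
lo_at_hi = gp_lo.predict(X_hi)
rho = np.linalg.lstsq(lo_at_hi.reshape(-1, 1), f_high(X_hi).ravel(), rcond=None)[0][0]

gp_delta = GaussianProcessRegressor(ConstantKernel() * RBF(0.2), normalize_y=True)
gp_delta.fit(X_hi, f_high(X_hi).ravel() - rho * lo_at_hi)

X_test = np.linspace(0, 1, 200).reshape(-1, 1)
pred = rho * gp_lo.predict(X_test) + gp_delta.predict(X_test)  # multi-fidelity emulator
print("max abs error vs. high-fidelity truth:", np.abs(pred - f_high(X_test).ravel()).max())
```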

    Integrated Chemical Processes in Liquid Multiphase Systems

    The essential principles of green chemistry are the use of renewable raw materials, highly efficient catalysts, and green solvents, linked with energy efficiency and real-time process optimization. Experts from different fields show how to examine all levels, from the molecular elementary steps up to the design and operation of an entire plant, in order to develop novel and efficient production processes.

    Pharmaceutical development and manufacturing in a Quality by Design perspective: methodologies for design space description

    In the last decade, the pharmaceutical industry has been experiencing a period of drastic change in the way new products and processes are conceived, due to the introduction of the Quality by Design (QbD) initiative put forth by the pharmaceutical regulatory agencies (such as the Food and Drug Administration (FDA) and the European Medicines Agency (EMA)). One of the most important concepts introduced in the QbD framework is that of the design space (DS) of a pharmaceutical product, defined as “the multidimensional combination and interaction of input variables (e.g. material attributes) and process parameters that have been demonstrated to provide assurance of quality”. The identification of the DS represents a key advantage for pharmaceutical companies, since once the DS has been approved by the regulatory agency, movements within the DS do not constitute a manufacturing change and therefore do not require any further regulatory post-approval. This translates into enhanced flexibility during process operation, with significant advantages in terms of productivity and process economics. Mathematical modeling, both first-principles and data-driven, has proven to be a valuable tool to assist a DS identification exercise. The development of advanced mathematical techniques for the determination and maintenance of a design space, as well as for the quantification of the uncertainty associated with its identification, is a research area that has gained increasing attention during recent years. The objective of this Dissertation is to develop novel methodologies to (i) determine the design space of a new pharmaceutical product, (ii) quantify the assurance of quality for a new pharmaceutical product as advocated by the regulatory agencies, (iii) adapt and maintain a design space during plant operation, and (iv) design optimal experiments for the calibration of first-principles mathematical models to be used for design space identification. With respect to the issue of design space determination, a methodology is proposed that combines surrogate-based feasibility analysis and latent-variable modeling for the identification of the design space of a new pharmaceutical product. Projection onto latent structures (PLS) is exploited to obtain a latent representation of the space identified by the model inputs (i.e. raw material properties and process parameters), and surrogate-based feasibility analysis is then used to reconstruct the boundary of the DS in this latent representation, with a significant reduction of the overall computational burden. The final result is a compact representation of the DS that can be easily expressed in terms of the original, physically relevant input variables (process parameters and raw material properties) and can then be easily interpreted by industrial practitioners. As regards the quantification of the “assurance” of quality, two novel methodologies are proposed to account for the two most common sources of model uncertainty (structural and parametric) in the model-based identification of the DS of a new pharmaceutical product. The first methodology is specifically suited to the quantification of assurance of quality when a PLS model is to be used for DS identification. Two frequentist analytical models are proposed to back-propagate the uncertainty from the quality attributes of the final product to the space identified by the set of raw material properties and process parameters of the manufacturing process. It is shown how these models can be used to identify a subset of input combinations (i.e., raw material properties and process parameters) within which the DS is expected to lie with a given degree of confidence. It is also shown how this reduced space of input combinations (called the experiment space) can be used to tailor an experimental campaign for the final assessment of the DS, with a significant reduction of the experimental effort required with respect to a non-tailored experimental campaign. The validity of the proposed methodology is tested on granulation and roll compaction processes, involving both simulated and experimental data. The second methodology proposes a joint Bayesian/latent-variable approach, in which the assurance of quality is quantified in terms of the probability that the final product will meet its specifications. In this context, the DS is defined in a probabilistic framework as the set of input combinations that guarantee that the probability that the product will meet its quality specifications is greater than a predefined threshold value. Bayesian multivariate linear regression is coupled with latent-variable modeling in order to obtain a computationally friendly implementation of this probabilistic DS. Specifically, PLS is exploited to reduce the computational burden of the discretization of the input domain and to give a compact representation of the DS, while Bayesian multivariate linear regression is used to compute the probability that the product will meet the desired quality at each of the discretization points of the input domain. The ability of the methodology to give a scientifically driven representation of the probabilistic DS is demonstrated with three case studies involving literature experimental data of pharmaceutical unit operations. With respect to the issue of the maintenance of a design space, a methodology is proposed to adapt in real time a model-based representation of a design space during plant operation in the presence of process-model mismatch. Based on the availability of a first-principles model (FPM) or semi-empirical model of the manufacturing process, together with measurements from plant sensors, the methodology jointly exploits (i) a dynamic state estimator and (ii) feasibility analysis to perform a risk-based online maintenance of the DS. The state estimator is deployed to obtain an up-to-date FPM by adjusting a small subset of the model parameters in real time. Feasibility analysis and surrogate-based feasibility analysis are used to update the DS in real time by exploiting the up-to-date FPM returned by the state estimator. The effectiveness of the methodology is shown with two simulated case studies, namely the roll compaction of microcrystalline cellulose and penicillin fermentation in a pilot-scale bioreactor. As regards the design of optimal experiments for the calibration of mathematical models for DS identification, a model-based design of experiments (MBDoE) approach is presented for an industrial freeze-drying process. A preliminary analysis is performed to choose the most suitable process model among different model alternatives and to test the structural consistency of the chosen model. A new experiment is then designed based on this model using MBDoE techniques, in order to increase the precision of the estimates of the most influential model parameters. The results of the MBDoE activity are then tested both in silico and on the real equipment.
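    The flavour of a probabilistic, latent-variable description of a design space can be sketched as follows: inputs are compressed with PLS, a Bayesian regression supplies a predictive distribution for the quality attribute, and the DS is taken as the latent-space region where the probability of meeting an assumed specification exceeds a threshold. The data, specification limits, and threshold are illustrative assumptions, and scikit-learn's BayesianRidge stands in for the dissertation's Bayesian multivariate linear regression.

```python
# Hedged sketch of a probabilistic design-space (DS) description: PLS compresses the
# inputs to two latent variables, a Bayesian linear model gives a predictive
# distribution for the quality attribute, and the DS is the set of latent points where
# P(in spec) exceeds a threshold. All data and limits below are synthetic assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(4)

# Synthetic process data: 5 inputs (raw-material properties + process parameters)
X = rng.normal(size=(120, 5))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.2, 120)

# Latent-variable compression of the input space
pls = PLSRegression(n_components=2).fit(X, y)
T = pls.transform(X)                                  # scores of the training data

# Bayesian regression on the latent scores gives a predictive mean and std
br = BayesianRidge().fit(T, y)

# Discretize the latent plane and compute P(lower_spec <= y <= upper_spec)
lower_spec, upper_spec, threshold = -1.0, 1.0, 0.9
t1, t2 = np.meshgrid(np.linspace(T[:, 0].min(), T[:, 0].max(), 50),
                     np.linspace(T[:, 1].min(), T[:, 1].max(), 50))
grid = np.column_stack([t1.ravel(), t2.ravel()])
mean, std = br.predict(grid, return_std=True)
p_in_spec = norm.cdf(upper_spec, mean, std) - norm.cdf(lower_spec, mean, std)

inside = p_in_spec >= threshold                       # probabilistic design space
print(f"{inside.mean():.1%} of the latent grid lies inside the probabilistic DS")
```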

    Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems

    Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, called Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations). It constitutes a community consensus document, as it is the result of input from over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS); 2) a community-wide survey; and 3) the establishment of nine expert panels (one per KE), each consisting on average of 10 non-team members from academia, government, and industry, who reviewed and updated content and prioritized gaps and actions. The study envisions the development of a cyber-physical-social ecosystem comprised of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost-effective, rapid, and revolutionary design of fit-for-purpose materials, components, and systems. Although the vision focuses on aeronautics and space applications, it is believed that other engineering communities (e.g., automotive, biomedical) can benefit from the proposed framework as well, with only minor modifications. Finally, it is TTT's hope and desire that this vision provides strategic guidance to both public and private research and development decision makers, so that the proposed 2040 vision state becomes a reality and thereby provides a significant advancement in the United States' global competitiveness.

    Circuit Design

    Circuit Design = Science + Art! Designers need a skilled "gut feeling" about circuits and related analytical techniques, plus creativity, to solve all problems and to adhere to the specifications, both the written and the unwritten ones. You must anticipate a large number of influences, like temperature effects, supply voltage changes, offset voltages, layout parasitics, and numerous kinds of technology variations, to end up with a circuit that works. This is challenging for analog, custom-digital, mixed-signal or RF circuits, and often researching new design methods in relevant journals, conference proceedings and design tools unfortunately gives the impression that just a "wild bunch" of "advanced techniques" exists. On the other hand, state-of-the-art tools nowadays indeed offer a good cockpit to steer the design flow, including clever statistical methods and optimization techniques. Actually, this almost presents a second breakthrough, like the introduction of circuit simulators 40 years ago! Users can now conveniently analyse all the problems (discover, quantify, verify), and even exploit them, for example for optimization purposes. Most designers are caught up in everyday problems, so we fit that "wild bunch" into a systematic approach for variation-aware design, a designer's field guide and more. That is where this book can help! Circuit Design: Anticipate, Analyze, Exploit Variations starts with best-practice manual methods and links them tightly to up-to-date automation algorithms. We provide many tractable examples and explain key techniques you have to know. We then enable you to select and set up suitable methods for each design task, knowing their prerequisites, advantages and, as too often overlooked, their limitations as well. The good thing with computers is that you yourself can often verify amazing things with little effort, and you can use software not only to your direct advantage in solving a specific problem, but also to become a better skilled, more experienced engineer. Unfortunately, EDA design environments are not well suited to learning about advanced numerics, so with this book we also provide two apps for learning about statistics and optimization directly with circuit-related examples, in real time and without long simulation times. This helps to develop a healthy statistical gut feeling for circuit design. The book is written for engineers, students in engineering, and CAD/methodology experts. Readers should have some background in standard design techniques, like entering a design in a schematic capture tool and simulating it, and should also know about major technology aspects.
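    As a flavour of the variation-aware analysis the book advocates (not an example taken from it), the short Monte Carlo sweep below estimates the spread of an RC low-pass corner frequency under assumed component tolerances and the resulting yield against an assumed specification window.

```python
# Generic illustration of variation-aware statistical analysis: Monte Carlo sweep of
# an RC low-pass corner frequency under assumed component tolerances, with a yield
# estimate against an assumed spec window. Tolerances and spec limits are assumptions.
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

# Nominal values with assumed 3-sigma tolerances: R = 10 kOhm +/-5%, C = 1 nF +/-10%
R = rng.normal(10e3, 10e3 * 0.05 / 3, N)
C = rng.normal(1e-9, 1e-9 * 0.10 / 3, N)

f_c = 1.0 / (2 * np.pi * R * C)               # corner frequency of the RC low-pass

spec_lo, spec_hi = 14e3, 18e3                 # assumed spec window [Hz]
yield_est = np.mean((f_c >= spec_lo) & (f_c <= spec_hi))

print(f"mean f_c = {f_c.mean()/1e3:.2f} kHz, sigma = {f_c.std()/1e3:.2f} kHz")
print(f"estimated yield inside [{spec_lo/1e3:.0f}, {spec_hi/1e3:.0f}] kHz: {yield_est:.1%}")
```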