
    Single and Multiresponse Adaptive Design of Experiments with Application to Design Optimization of Novel Heat Exchangers

    Engineering design optimization often involves complex computer simulations. Optimization with such simulation models can be time consuming and sometimes computationally intractable. To reduce this computational burden, the literature proposes approximation-assisted optimization. Approximation involves two phases: first, a Design of Experiments (DOE) phase, in which sample points in the input space are chosen; these sample points are then used in a second phase to develop a simplified model, termed a metamodel, which is computationally efficient and can reasonably represent the behavior of the simulation response. The DOE phase is crucial to the success of approximation-assisted optimization. This dissertation proposes a new adaptive method for single and multiresponse DOE for approximation, along with an approximation-based framework for multilevel performance evaluation and design optimization of air-cooled heat exchangers. The dissertation is divided into three research thrusts. The first thrust presents a new adaptive DOE method for single-response deterministic computer simulations, called SFCVT. In SFCVT, the problem of adaptive DOE is posed as a bi-objective optimization problem. Its two objectives, a cross-validation error criterion and a space-filling criterion, are chosen based on the notion that the DOE method must trade off allocating new sample points in regions that are multi-modal and have a sensitive response against allocating sample points in regions that are sparsely sampled. In the second research thrust, a new approach for multiresponse adaptive DOE (MSFCVT) is developed, extending the first thrust with the notion that the tradeoff should also consider all responses. SFCVT is compared with three other methods from the literature (maximum entropy design, maximin scaled distance, and accumulative error). SFCVT was found to produce better-performing metamodels for the majority of the test problems. The MSFCVT method is also compared with two adaptive DOE methods from the literature and is shown to yield better metamodels with fewer function calls. In the third research thrust, an approximation-based framework is developed for the performance evaluation and design optimization of novel heat exchangers. This thrust has two parts. The first is a new multi-level performance evaluation method for air-cooled heat exchangers in which a conventional 3D Computational Fluid Dynamics (CFD) simulation is replaced with a 2D CFD simulation coupled with an ε-NTU based heat exchanger model. In the second part, the methods developed in research thrusts 1 and 2 are used for design optimization of heat exchangers. The optimal solutions from this thrust have 44% less volume and use 61% less material than current state-of-the-art microchannel heat exchangers. Compared to 3D CFD, the overall computational savings exceed 95%.
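    The cross-validation-versus-space-filling tradeoff described in this abstract can be sketched in a few lines. The following is a toy illustration only, not the actual SFCVT algorithm: the inverse-distance metamodel, the equal weighting of the two criteria, and all function names are assumptions made for the sketch.

```python
import numpy as np

def loo_cv_errors(X, y):
    """Leave-one-out errors of a simple inverse-distance-weighting
    metamodel (a stand-in for the dissertation's actual surrogate)."""
    errs = np.empty(len(X))
    for i in range(len(X)):
        Xi = np.delete(X, i, axis=0)
        yi = np.delete(y, i)
        d = np.linalg.norm(Xi - X[i], axis=1)
        w = 1.0 / (d + 1e-12) ** 2
        errs[i] = abs(y[i] - np.dot(w, yi) / w.sum())
    return errs

def next_sample(X, y, candidates):
    """Score each candidate by a sum of (a) cross-validation error
    interpolated from the LOO errors (favors multi-modal, sensitive
    regions) and (b) distance to the nearest existing sample (favors
    sparsely sampled regions); return the best candidate."""
    errs = loo_cv_errors(X, y)
    scores = []
    for c in candidates:
        d = np.linalg.norm(X - c, axis=1)
        w = 1.0 / (d + 1e-12) ** 2
        cv_term = np.dot(w, errs) / w.sum()   # estimated error near candidate
        sf_term = d.min()                     # sparsity near candidate
        scores.append(cv_term + sf_term)
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
X = rng.random((8, 2))
y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2        # toy "simulation" response
cands = rng.random((200, 2))
x_new = next_sample(X, y, cands)
```

    In the dissertation these two criteria are kept as separate objectives of a bi-objective problem rather than summed; the scalarization above is only to keep the sketch short.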

    An Integrated Probability-Based Approach for Multiple Response Surface Optimization

    Nearly all real-life systems have multiple quality characteristics, for which individual modeling and optimization approaches cannot provide a balanced, compromising solution. Since performance, cost, schedule, and consistency remain the basics of any design process, design configurations are expected to meet several conflicting requirements at the same time. Correlation between responses and model-parameter uncertainty demand extra scrutiny and prevent practitioners from studying responses in isolation. Like any other multi-objective problem, the multi-response optimization problem requires trade-offs and compromises, which in turn make the available algorithms difficult to generalize to all design problems. Although multiple modeling and optimization approaches have been widely used across industries, and several software applications are available, there is no perfect solution to date, and this is likely to remain so in the future. Therefore, the problem-specific structure, diversity, and complexity of the available approaches require careful consideration by quality engineers in their applications.

    Bi-Objective Optimization Problems—A Game Theory Perspective to Improve Process and Product

    Publisher Copyright: © 2022 by the authors. This research received no external funding. Cost-effective manufacturing processes or products are no longer the only requirements for business sustainability. An approach based on Game Theory is suggested to find solutions for bi-objective problems. In particular, Stackelberg's technique is employed and complemented with the Factors Scaling tool to help users define their strategy for optimizing process and product quality characteristics. No subjective information (shape factors, weights, or any other preference information) is required from the users, and a basic computational background is enough for implementing it. Two case studies provide evidence that the suggested easy-to-use approach can yield nondominated solutions from a small number of Leader-Follower cycles, which reinforces its usefulness for bi-objective optimization problems.
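    A Leader-Follower cycle of the kind this abstract describes can be sketched with two toy quadratic objectives. This is only an illustration of the Stackelberg iteration pattern under assumed functions; the paper's actual responses would come from fitted response-surface models, and its Factors Scaling tool is not reproduced here.

```python
import numpy as np

# Toy quadratic quality characteristics (assumed for illustration).
f_leader   = lambda x1, x2: (x1 - 1) ** 2 + 0.5 * (x2 - 0.5) ** 2
f_follower = lambda x1, x2: (x2 - x1) ** 2 + 0.1 * x2 ** 2

grid = np.linspace(0, 2, 201)   # simple grid search, no gradients needed
x1, x2 = 0.0, 0.0
for _ in range(5):              # a few Leader-Follower cycles
    # Follower: best response in x2 given the leader's current x1
    x2 = grid[np.argmin([f_follower(x1, v) for v in grid])]
    # Leader: re-optimize x1 given that follower response
    x1 = grid[np.argmin([f_leader(u, x2) for u in grid])]
```

    As the abstract notes, no weights or preference information enter the loop; each player simply optimizes its own objective, and the cycle settles on a compromise point after a handful of iterations.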

    Development of the D-Optimality-Based Coordinate-Exchange Algorithm for an Irregular Design Space and the Mixed-Integer Nonlinear Robust Parameter Design Optimization

    Robust parameter design (RPD), originally conceptualized by Taguchi, is an effective statistical design method for continuous quality improvement that incorporates product quality into the design of processes. The primary goal of RPD is to identify optimal input-variable level settings with minimum process bias and variation. Because of its practicality in reducing the inherent uncertainties associated with system performance across key product and process dimensions, the widespread application of RPD techniques to many engineering and science fields has resulted in significant improvements in product quality and process enhancement. There is little disagreement among researchers about Taguchi's basic philosophy. In response to apparent mathematical flaws surrounding his original version of RPD, researchers have closely examined alternative approaches that incorporate well-established statistical methods, particularly the response surface methodology (RSM), while accepting the main philosophy of his RPD concepts. This RSM-based RPD method predominantly employs the central composite design technique under the assumption that input variables are quantitative on a continuous scale. There are many practical situations, however, in which the input variables are a combination of real-valued quantitative variables on a continuous scale and qualitative variables, such as integer- and binary-valued variables. Despite the practicality of such cases in real-world engineering problems, there has been little research on them, perhaps due to mathematical hurdles arising from inconsistencies between the design space in the experimental phase and the solution space in the optimization phase.
    For instance, the design space associated with the central composite design, perhaps the most effective response surface design for a second-order prediction model, is typically a bounded convex feasible set involving real numbers, due to its inherent real-valued axial design points; its solution space, however, may consist of both integer and real values. Along these lines, this dissertation proposes RPD optimization models under three different scenarios. Given integer-valued constraints, this dissertation discusses why the Box-Behnken design is preferred over the central composite design and other three-level designs while maintaining constant or nearly constant prediction variance (a property called design rotatability) associated with a second-order model. Mixed-integer nonlinear programming models embedding the Box-Behnken design are then proposed. As solution methods, the Karush-Kuhn-Tucker conditions are developed and the sequential quadratic integer programming technique is used. Further, given binary-valued constraints, this dissertation investigates why neither the central composite design nor the Box-Behnken design is effective. To remedy this problem, several 0-1 mixed-integer nonlinear programming models are proposed by laying out the foundation of a three-level factorial design with pseudo center points. For these models, standard optimization methods are used, such as the branch-and-bound technique, the outer approximation method, and the hybrid nonlinear branch-and-cut algorithm. Finally, there are special situations during the experimental phase that may call for reducing the number of experimental runs or using a reduced regression model to fit the data. There are also special situations where the experimental design space is constrained and optimal design points should therefore be generated. In these situations, traditional experimental designs may not be appropriate. D-optimal experimental designs are investigated and incorporated into nonlinear programming models, as the design region is typically irregular, although the resulting problem may still be convex. It is believed that the research contained in this dissertation is the first examination of these topics in the related literature and makes a considerable contribution to the existing body of knowledge by filling research gaps.
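    The coordinate-exchange idea named in this title can be sketched as a greedy sweep that maximizes det(X'X) over a candidate set of factor levels. This is a generic textbook-style sketch under assumed settings (two factors, three levels, a full second-order model), not the dissertation's algorithm, which handles irregular design regions.

```python
import numpy as np

def model_matrix(D):
    """Second-order model in two factors: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = D[:, 0], D[:, 1]
    return np.column_stack([np.ones(len(D)), x1, x2, x1 * x2, x1**2, x2**2])

def coordinate_exchange(D, levels, n_pass=10):
    """Greedy coordinate exchange: sweep every (run, factor) cell and
    keep any level swap that increases the D-criterion det(X'X)."""
    best = np.linalg.det(model_matrix(D).T @ model_matrix(D))
    for _ in range(n_pass):
        improved = False
        for i in range(D.shape[0]):
            for j in range(D.shape[1]):
                for lev in levels:
                    old = D[i, j]
                    D[i, j] = lev
                    X = model_matrix(D)
                    d = np.linalg.det(X.T @ X)
                    if d > best + 1e-12:
                        best, improved = d, True
                    else:
                        D[i, j] = old          # revert non-improving swap
        if not improved:
            break
    return D, best

rng = np.random.default_rng(1)
levels = [-1.0, 0.0, 1.0]
D0 = rng.choice(levels, size=(9, 2))           # random 9-run starting design
D_opt, det_opt = coordinate_exchange(D0.copy(), levels)
```

    For an irregular design space, as in the dissertation, the candidate levels for each cell would additionally be filtered through the region's constraints before a swap is accepted.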

    Non-Linear Metamodeling Extensions to the Robust Parameter Design of Computer Simulations

    Robust parameter design (RPD) is used to identify a system's control settings that offer a compromise between obtaining desired mean responses and minimizing the variability about those responses. Two popular combined-array strategies, the response surface model (RSM) approach and the emulator approach, are limited when applied to simulations. In the former case, the mean and variance models can be inadequate due to the high level of non-linearity within many simulations. In the latter case, precise mean and variance approximations are developed at the expense of extensive Monte Carlo sampling. This research combines the RSM approach's efficiency with the emulator approach's accuracy. Non-linear metamodeling extensions, namely Kriging and radial basis function neural networks, are made to the RSM approach. The mean and variance of second-order Taylor series approximations of these metamodels are generated via the Multivariate Delta Method, and the subsequent optimization problems employing these approximations are solved. Results show that improved prediction models can be attained with the proposed approach at a reduced computational cost. Additionally, a multi-response RPD problem-solving technique based on desirability functions is presented to produce a solution that is mutually robust across all responses. Lastly, quality measures are developed to provide a holistic assessment of several competing RPD strategies.
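    The Delta Method propagation this abstract relies on can be sketched for a generic metamodel g(x, z) with noise variables z. The dissertation uses a second-order Taylor expansion; the first-order version below keeps the sketch short, and the toy metamodel and all names are assumptions.

```python
import numpy as np

def delta_mean_var(g, x, mu_z, cov_z, h=1e-5):
    """First-order Delta Method: approximate the mean and variance of
    g(x, Z) when Z has mean mu_z and covariance cov_z, using a central
    finite-difference gradient of g with respect to the noise variables."""
    mu_z = np.asarray(mu_z, dtype=float)
    grad = np.empty_like(mu_z)
    for k in range(len(mu_z)):
        zp, zm = mu_z.copy(), mu_z.copy()
        zp[k] += h
        zm[k] -= h
        grad[k] = (g(x, zp) - g(x, zm)) / (2 * h)
    mean = g(x, mu_z)            # mean approx: evaluate at the noise mean
    var = grad @ cov_z @ grad    # variance approx: grad' * Sigma * grad
    return mean, var

# Toy "metamodel", linear in the noise, so the first-order result is exact:
g = lambda x, z: x[0] ** 2 + 2 * z[0] + 3 * z[1]
mean, var = delta_mean_var(g, x=[1.5], mu_z=[0.0, 0.0],
                           cov_z=np.diag([0.04, 0.01]))
# Exact values here: mean = 2.25, var = 4*0.04 + 9*0.01 = 0.25
```

    In the RPD setting, these mean and variance expressions become the two pieces of the dual-response optimization problem solved over the control settings x.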

    A Robust Multi Response Surface Approach for Optimization of Multistage Processes

    Purpose: In a multistage process, the final quality in the last stage depends not only on the quality of the task performed in that stage but also on the quality of the products and services in intermediate stages, as well as on the design parameters in each stage. One of the most efficient statistical approaches used to model multistage problems is the response surface method (RSM). However, it is necessary to optimize each response in all stages so as to achieve the best solution for the whole problem. Robust optimization can produce very accurate solutions in this case. Design/methodology/approach: To model a multistage problem, the RSM is often used by researchers. A classical approach to estimating response surfaces is the ordinary least squares (OLS) method; however, this method is very sensitive to outliers. To overcome this drawback, several robust estimation methods have been presented in the literature. In the optimization phase, the global criterion (GC) method is used to optimize the response surfaces estimated by the robust approach in a multistage problem. Findings: The results of a numerical study show that the proposed robust optimization approach, which considers both the sum of squared errors (SSE) index in model estimation and the global criterion (GC) index in the optimization phase, performs better than the classical full-information maximum likelihood (FIML) estimation method. Originality/value: To the best of the authors' knowledge, few papers focus on quality-oriented designs for multistage problems by means of RSM. The development of robust approaches for response surface estimation and the optimization of the estimated response surfaces are the main novelties of this study. The proposed approach produces more robust and accurate solutions for multistage problems than classical approaches.
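    The OLS-sensitivity point in this abstract is easy to demonstrate with one standard robust alternative, Huber-weighted iteratively reweighted least squares. This sketch does not reproduce the paper's particular robust estimator; the data, tuning constant, and function names are assumptions.

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Iteratively reweighted least squares with Huber weights, a common
    robust alternative to OLS for fitting response surfaces."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # start from OLS
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / u)               # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + rng.normal(0, 0.05, 30)            # true slope = 2
y[5] += 10.0                                           # single gross outlier
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = huber_irls(X, y)
```

    The single contaminated observation pulls the OLS slope well away from 2, while the Huber fit downweights it and stays close to the true coefficients; the same effect motivates robust estimation of each stage's response surface before the GC optimization step.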

    RESEARCH AND DEVELOPMENT EFFORT IN DEVELOPING THE OPTIMAL FORMULATIONS FOR NEW TABLET DRUGS

    Seeking the optimal pharmaceutical formulation is considered one of the most critical research components of the drug development stage. It is an R&D effort incorporating design of experiments and optimization techniques, prior to scaling up a manufacturing process, to determine the optimal settings of ingredients so that the desirable performance of the related pharmaceutical quality characteristics (QCs) specified by the Food and Drug Administration (FDA) can be achieved. It is widely believed that process scale-up potentially results in changes in ingredients and other pharmaceutical manufacturing aspects, including site, equipment, batch size, and process, with the purpose of satisfying clinical and market demand. Nevertheless, there has not been any comprehensive research on how to model and optimize the pharmaceutical formulation when scale-up changes occur. Based upon FDA guidance, the documentation tests for scale-up changes generally include dissolution comparisons and bioequivalence studies. Hence, this research proposes optimization models that ensure equivalent performance in terms of dissolution and bioequivalence for the pre-change and post-change formulations by extending the existing knowledge of formulation optimization. First, drug professionals traditionally consider only the mean of a QC; however, the variability of the QC of interest is essential, because large variability may result in unpredictable safety and efficacy issues. To account simultaneously for the mean and variability of the QC, the Taguchi quality loss concept is applied to the optimization procedure. Second, the standard 2×2 crossover design, which is extensively used to evaluate bioequivalence, is incorporated into the ordinary experimental scheme so as to investigate the functional relationships between the characteristics relevant to bioequivalence and the ingredient amounts. Third, as many associated FDA and United States Pharmacopeia regulations as possible regarding formulation characteristics, such as disintegration, uniformity, friability, hardness, and stability, are included as constraints in the proposed optimization models so that the QCs satisfy all related requirements efficiently. Fourth, when dealing with multiple characteristics to be optimized, the desirability function (DF) approach is frequently incorporated into the optimization. Although the weight-based overall DF is usually treated as an objective function to be maximized, this approach has a potential shortcoming: the optimal solutions are extremely sensitive to the assigned weights, and these weights are subjective in nature. Moreover, since the existing DF methods consider mean responses only, variability is not captured, despite the fact that individuals may differ widely in their responses to a drug. Therefore, to overcome these limitations when applying the DF method to a formulation optimization problem, a priority-based goal programming scheme is proposed that incorporates modified DF approaches to account for variability. The successful completion of this research establishes a theoretically sound and statistically rigorous foundation for optimal pharmaceutical formulation without loss of generality. It is believed that the results of this research have the potential to impact a wide range of tasks in the pharmaceutical manufacturing industry.
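    The desirability function approach the abstract critiques can be sketched with the standard two-sided (target-is-best) form and a geometric-mean overall desirability. The QC names, targets, and limits below are hypothetical, chosen only to make the arithmetic concrete.

```python
def desirability_target(y, low, target, high, s=1.0, t=1.0):
    """Two-sided desirability for a target-is-best quality characteristic:
    0 outside [low, high], 1 at the target, with shape exponents s and t
    on the rising and falling sides."""
    if y < low or y > high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** s
    return ((high - y) / (high - target)) ** t

# Hypothetical tablet QCs (illustrative numbers, not from the dissertation):
d1 = desirability_target(82.0, low=75, target=85, high=95)   # dissolution, %
d2 = desirability_target(6.5, low=4, target=6, high=8)       # hardness, kp
overall = (d1 * d2) ** 0.5     # unweighted geometric-mean overall desirability
```

    Because only mean responses enter each d, variability is invisible to this overall score, and any weights added to the geometric mean shift the optimum subjectively, which is exactly the limitation the proposed priority-based goal programming scheme is designed to avoid.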

    Decision-maker Trade-offs In Multiple Response Surface Optimization

    The focus of this dissertation is on improving decision-maker trade-offs and developing a new constrained methodology for multiple response surface optimization. The research has three key components: development of the necessary conditions and assumptions associated with constrained multiple response surface optimization methodologies; development of a new constrained multiple response surface methodology; and demonstration of the new method. The necessary conditions for, and assumptions associated with, constrained multiple response surface optimization methods were identified and found to be less restrictive than the requirements previously described in the literature. The conditions and assumptions required for a constrained method to find the most preferred non-dominated solution are that it generate non-dominated solutions and that it generate solutions consistent with decision-maker preferences among the response objectives. Additionally, if a Lagrangian constrained method is used, the preservation of convexity is required in order to generate all non-dominated solutions. The conditions required for constrained methods are significantly fewer than those required for combined methods. Most existing constrained methodologies do not include any provision for a decision-maker to explicitly determine the relative importance of the multiple objectives. Research into the larger area of multi-criteria decision-making identified the interactive surrogate worth trade-off (ISWT) algorithm as a methodology that could provide that capability in multiple response surface optimization problems. The ISWT algorithm uses an ε-constraint formulation to guarantee a non-dominated solution, then interacts with the decision-maker after each iteration to determine the decision-maker's preference for trading off the value of the primary response for an increase in the value of a secondary response. The current research modified the ISWT algorithm into a new constrained multiple response surface methodology that explicitly accounts for decision-maker preferences. The new Modified ISWT (MISWT) method maintains the essence of the original method while taking advantage of the specific properties of multiple response surface problems to simplify its application. The MISWT is an accessible computer-based implementation of the ISWT. Five test problems from the multiple response surface optimization literature were used to demonstrate the new methodology. It was shown that the methodology can handle a variety of types and numbers of responses and independent variables. Furthermore, it was demonstrated that the methodology can succeed using a priori information from the decision-maker about bounds or targets, or using the extreme values obtained from the region of operability. In all cases, the methodology explicitly considered decision-maker preferences and provided non-dominated solutions. The contribution of this method is the removal of implicit assumptions and the explicit inclusion of the decision-maker in trade-offs among multiple objectives or responses.
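    The ε-constraint formulation at the core of the ISWT can be sketched with two toy quadratic response surfaces: minimize the primary response subject to the secondary response staying below a bound ε, then vary ε to walk along the non-dominated frontier. The response functions and starting point below are assumptions for illustration, not models from the dissertation's test problems.

```python
import numpy as np
from scipy.optimize import minimize

# Toy quadratic response surfaces standing in for fitted models.
f1 = lambda x: (x[0] - 1) ** 2 + (x[1] - 1) ** 2      # primary response
f2 = lambda x: (x[0] + 1) ** 2 + (x[1] + 1) ** 2      # secondary response

def eps_constraint(eps):
    """Minimize the primary response subject to f2(x) <= eps; when the
    constraint is active, the solution is non-dominated."""
    cons = {"type": "ineq", "fun": lambda x: eps - f2(x)}
    res = minimize(f1, x0=[0.0, 0.0], constraints=cons)
    return res.x, res.fun

# Tightening eps trades primary performance for the secondary response:
x_a, f_a = eps_constraint(eps=4.0)
x_b, f_b = eps_constraint(eps=2.0)
```

    In the full ISWT/MISWT loop, the decision-maker's stated worth of that trade (how much primary-response value they would give up per unit of the secondary response) is what drives the next choice of ε.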