34 research outputs found

    An Integrated Probability-Based Approach for Multiple Response Surface Optimization

    Nearly all real-life systems have multiple quality characteristics, for which individual modeling and optimization approaches cannot provide a balanced, compromising solution. Since performance, cost, schedule, and consistency remain the basics of any design process, design configurations are expected to meet several conflicting requirements at the same time. Correlation between responses and model parameter uncertainty demand extra scrutiny and prevent practitioners from studying responses in isolation. Like any other multi-objective problem, the multi-response optimization problem requires trade-offs and compromises, which in turn makes the available algorithms difficult to generalize to all design problems. Although multiple modeling and optimization approaches have been widely utilized in different industries, and several software applications are available, there is no perfect solution to date, and this is likely to remain so in the future. Therefore, the problem-specific structure, diversity, and complexity of the available approaches require careful consideration by quality engineers in their applications.
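    As a rough illustration of the probability-based idea behind this line of work (and not the paper's specific integrated approach), the sketch below estimates the joint probability that all responses fall within their specification limits at a candidate setting, using a multivariate-normal predictive model so that the correlation between responses is respected. All numbers are invented placeholders.

    ```python
    # A minimal sketch of a probability-of-conformance criterion for
    # correlated responses; the means, covariance, and spec limits below
    # are illustrative assumptions, not values from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def joint_conformance(mean, cov, lower, upper, n_draws=100_000):
        """Monte Carlo estimate of P(lower <= Y <= upper) for all responses jointly."""
        draws = rng.multivariate_normal(mean, cov, size=n_draws)
        inside = np.all((draws >= lower) & (draws <= upper), axis=1)
        return inside.mean()

    # Two correlated responses predicted at a candidate design point.
    mean = np.array([50.0, 3.2])          # predicted response means
    cov = np.array([[4.0, 0.6],           # predictive covariance; the off-diagonal
                    [0.6, 0.25]])         # term carries the response correlation
    lower = np.array([46.0, 2.8])         # lower spec limits
    upper = np.array([54.0, 3.6])         # upper spec limits

    print(joint_conformance(mean, cov, lower, upper))
    ```

    Maximizing such a joint conformance probability over the design space, rather than optimizing each response in isolation, is one way to obtain the balanced compromise the abstract calls for.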

    Optimal Design of Experiments for Dual-Response Systems

    The majority of research in experimental design has, to date, focused on designs where only one type of response variable is under consideration. In a decision-making process, however, relying on only one objective or criterion can lead to oversimplified, sub-optimal decisions that ignore important considerations. Incorporating multiple, and likely competing, objectives is critical during the decision-making process in order to balance the trade-offs of all potential solutions. Consequently, the problem of constructing a design for an experiment when multiple types of responses are of interest does not have a clear answer, particularly when the response variables have different distributions, since responses with different distributions place different requirements on the design. Computer-generated optimal designs are popular choices for less standard scenarios where classical designs are not ideal. This work presents a new approach to experimental designs for dual-response systems. The normal, binomial, and Poisson distributions are considered for the potential responses. Using the D-criterion for the linear model and the Bayesian D-criterion for the nonlinear models, a weighted criterion is implemented in a coordinate-exchange algorithm. The designs are evaluated and compared across different weights, and the sensitivity of the designs to the priors supplied in the Bayesian D-criterion is explored in the third chapter of this work. The final section presents a method for a decision-making process involving multiple objectives. There are situations where a decision-maker is interested in several optimal solutions, not just one. These decision processes fall into one of two scenarios: 1) identifying the best N solutions to accomplish a goal or specific task, or 2) evaluating a decision based on several primary quantitative objectives along with secondary qualitative priorities. Design-of-experiment selection often involves the second scenario, where the goal is to identify several contending solutions using the primary quantitative objectives and then use the secondary qualitative objectives to guide the final decision. Layered Pareto fronts can help identify a richer class of contenders to examine more closely; the method is illustrated with a supersaturated screening design example.
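    The layered Pareto front idea lends itself to a compact illustration. The sketch below is a toy, not the dissertation's implementation: it repeatedly peels off the non-dominated set of candidates so that strong contenders in later layers can also be examined against secondary qualitative objectives. All criteria are assumed to be maximized, and the scores are invented placeholders.

    ```python
    # An illustrative toy of layered Pareto fronts: peel off successive
    # non-dominated sets from a matrix of candidate-design scores.
    import numpy as np

    def pareto_layers(scores):
        """Split rows of `scores` (higher is better) into successive fronts."""
        remaining = list(range(len(scores)))
        layers = []
        while remaining:
            front = [i for i in remaining
                     if not any(np.all(scores[j] >= scores[i])
                                and np.any(scores[j] > scores[i])
                                for j in remaining if j != i)]
            layers.append(front)
            remaining = [i for i in remaining if i not in front]
        return layers

    # Five candidate designs scored on two primary quantitative criteria.
    scores = np.array([[0.9, 0.2], [0.7, 0.7], [0.2, 0.9],
                       [0.6, 0.6], [0.3, 0.3]])
    print(pareto_layers(scores))   # [[0, 1, 2], [3], [4]]
    ```

    The first layer is the usual Pareto front; later layers surface near-optimal contenders that a purely quantitative screen would discard.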

    RESEARCH AND DEVELOPMENT EFFORT IN DEVELOPING THE OPTIMAL FORMULATIONS FOR NEW TABLET DRUGS

    Seeking the optimal pharmaceutical formulation is considered one of the most critical research components during the drug development stage. It is an R&D effort incorporating design of experiments and optimization techniques, prior to scaling up a manufacturing process, to determine the optimal settings of ingredients so that the desirable performance of the related pharmaceutical quality characteristics (QCs) specified by the Food and Drug Administration (FDA) can be achieved. Process scale-up typically entails changes in ingredients and other pharmaceutical manufacturing aspects, including site, equipment, batch size, and process, with the purpose of satisfying clinical and market demand. Nevertheless, there has been no single comprehensive research effort on how to model and optimize a pharmaceutical formulation when scale-up changes occur. Based upon FDA guidance, the documentation tests for scale-up changes generally include dissolution comparisons and bioequivalence studies. Hence, this research proposes optimization models that ensure equivalent performance, in terms of dissolution and bioequivalence, for the pre-change and post-change formulations by extending the existing knowledge of formulation optimization. First, drug professionals traditionally consider only the mean of a QC; however, the variability of the QC of interest is also essential, because large variability may result in unpredictable safety and efficacy issues. In order to take the mean and variability of the QC into account simultaneously, the Taguchi quality loss concept is applied to the optimization procedure. Second, the standard 2×2 crossover design, which is widely used to evaluate bioequivalence, is incorporated into the ordinary experimental scheme so as to investigate the functional relationships between the characteristics relevant to bioequivalence and the ingredient amounts. Third, as many associated FDA and United States Pharmacopeia regulations as possible, regarding formulation characteristics such as disintegration, uniformity, friability, hardness, and stability, are included as constraints in the proposed optimization models, enabling the QCs to satisfy all the related requirements in an efficient manner. Fourth, when dealing with multiple characteristics to be optimized, the desirability function (DF) approach is frequently incorporated into the optimization. Although the weight-based overall DF is usually treated as an objective function to be maximized, this approach has a potential shortcoming: the optimal solutions are extremely sensitive to the assigned weights, and these weights are subjective in nature. Moreover, since the existing DF methods consider mean responses only, variability is not captured, despite the fact that individuals may differ widely in their responses to a drug. Therefore, in order to overcome these limitations when applying the DF method to a formulation optimization problem, a priority-based goal programming scheme is proposed that incorporates modified DF approaches to account for variability. The successful completion of this research will establish a theoretically sound and statistically rigorous foundation for optimal pharmaceutical formulation without loss of generality. It is believed that the results from this research have the potential to impact a wide range of tasks in the pharmaceutical manufacturing industry.
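    The Taguchi quality loss mentioned above has a simple closed form for a nominal-the-best QC: the expected quadratic loss E[L] = k((mu - T)^2 + sigma^2) penalizes deviation of the mean from target and variability in a single term. A minimal sketch, with invented constants rather than the research's actual values:

    ```python
    # A minimal sketch of the nominal-the-best Taguchi expected quality loss,
    # E[L] = k * ((mu - T)^2 + sigma^2); k, the target, and the candidate
    # formulations are illustrative assumptions, not values from the research.
    def expected_taguchi_loss(mu, sigma, target, k=1.0):
        """Expected quadratic loss for a QC with mean mu and std dev sigma."""
        return k * ((mu - target) ** 2 + sigma ** 2)

    # Two candidate formulations with the same mean dissolution but different
    # variability: the loss term penalizes the noisier formulation.
    print(expected_taguchi_loss(mu=85.0, sigma=2.0, target=85.0))   # 4.0
    print(expected_taguchi_loss(mu=85.0, sigma=5.0, target=85.0))   # 25.0
    ```

    Minimizing this loss, rather than the mean deviation alone, is what lets the proposed models account for both the mean and the variability of a QC at once.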

    Deriving Optimal Composite Scores: Relating Observational/Longitudinal Data with a Primary Endpoint

    In numerous clinical and experimental studies, multiple endpoints are measured on each subject, and it is often not clear which of these endpoints should be designated as of primary importance. The desirability function approach is a way of combining multiple responses into a single unitless composite score. The response variables may include multiple types of data: binary, ordinal, count, and interval data. Each response variable is transformed to a unitless 0-to-1 scale, with zero representing a completely undesirable response and one representing the ideal value. In desirability function methodology, weights on the individual components can be incorporated to assign different levels of importance to different outcomes. The assignment of the weight values is subjective, based on individual or group expert opinion. In this dissertation, our goal is to find the weights or response variable transformations that optimize an external empirical objective criterion. For example, we find the optimal weights/transformations that minimize the generalized variance of a prediction regression model relating the composite score to an external response variable in pre-clinical and clinical data. To apply the weighting/transformation scheme, initial weighting or transformation values are obtained, and the corresponding value of the composite score is then calculated. Based on the empirical model selected for the analyses, parameter estimates are found using the usual iterative algorithms (e.g., Gauss-Newton). A direct search algorithm (e.g., the Nelder-Mead simplex algorithm) is then used to minimize the given objective criterion (e.g., the generalized variance). Finding optimal weights/transformations can also be viewed as a model-building process: relative importance levels are assigned to each variable in the score, and less important variables are down-weighted and essentially eliminated.
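    A minimal sketch of the two moving parts described here, on invented data: a weighted geometric-mean composite of per-endpoint desirabilities, and a Nelder-Mead search for the weights that minimize an external criterion. The residual variance of a regression of an external response on the score is used below as a simple stand-in for the generalized variance; it is an assumption, not the dissertation's actual objective.

    ```python
    # An illustrative sketch: weighted geometric-mean composite desirability
    # plus a Nelder-Mead search over the weights. The data, and the use of
    # residual variance as the external criterion, are stand-in assumptions.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    Y = rng.uniform(size=(40, 3))    # three endpoints already mapped to [0, 1]
    z = Y @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.05, size=40)  # external variable

    def composite_score(Y, w):
        """Weighted geometric mean of per-endpoint desirabilities."""
        w = np.abs(w) + 1e-9             # keep weights positive...
        w = w / w.sum()                  # ...and normalized to sum to one
        return np.exp(np.log(np.clip(Y, 1e-9, 1.0)) @ w)

    def criterion(w):
        """Residual variance after regressing z on the composite score."""
        s = composite_score(Y, w)
        X = np.column_stack([np.ones_like(s), s])
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
        return np.var(z - X @ beta)

    res = minimize(criterion, x0=np.ones(3), method="Nelder-Mead")
    w_opt = np.abs(res.x) + 1e-9
    print(w_opt / w_opt.sum())           # optimized importance weights
    ```

    Because the criterion is evaluated as a black box, the direct-search step works unchanged if the desirability transforms, rather than the weights, are the quantities being optimized.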

    Modeling and Optimization of Stochastic Process Parameters in Complex Engineering Systems

    For quality engineering researchers and practitioners, a wide range of statistical tools and techniques is available for use in the manufacturing industry. The goal in applying these tools has always been to improve or optimize a product or process in terms of efficiency, production cost, or product quality. While tremendous progress has been made in the design of quality optimization models, there remains a significant gap between existing research and the needs of the industrial community. Contemporary manufacturing processes are inherently more complex: they may involve multiple stages of production or require the assessment of multiple quality characteristics. New and emerging fields, such as nanoelectronics and molecular biometrics, demand degrees of precision and estimation accuracy that are not attainable with current tools and measures. And since most researchers focus on a specific type of characteristic or a given set of conditions, there are many critical industrial processes to which existing models do not apply. Thus, the objective of this research is to improve existing techniques by expanding not only their range of applicability but also their ability to model a given process more realistically. Several quality models are proposed that seek greater precision in the estimation of the process parameters and the removal of assumptions that limit their breadth and scope. An extension is made to examine the effectiveness of these models both in non-standard conditions and in areas that have not been previously investigated. Following an in-depth literature review, various quality models are proposed, and numerical examples are used to validate the use of these methodologies.

    Supply Chain Risk Management of Liquefied Natural Gas (LNG) in Australia

    This research examines the supply chain risk management of Australia’s Liquefied Natural Gas (LNG) supply chain. The study develops a risk management methodology based on quality function deployment (QFD) and a 0-1 multi-objective optimization model. The research identifies 33 LNG supply chain risks and 30 risk management strategies (RMSs) for the Australian LNG supply chain. Optimal sets of RMSs are found using the methodology, which should benefit LNG risk managers operating under limited resources.
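    The 0-1 selection step admits a toy illustration: choose RMSs to maximize total risk mitigation subject to a resource budget. The sketch below brute-forces all 0-1 assignments, which is feasible only for tiny instances; the scores, costs, and budget are invented, not the study's QFD-derived data.

    ```python
    # A toy sketch of the 0-1 strategy-selection idea: pick RMSs to maximize
    # total mitigation under a resource budget, by brute force over all 0-1
    # assignments (fine for tiny instances). All numbers are invented.
    from itertools import product

    mitigation = [7.0, 5.0, 4.0, 3.0]   # benefit score of each candidate RMS
    cost       = [4.0, 3.0, 2.0, 1.0]   # resource requirement of each RMS
    budget     = 6.0

    best_value, best_pick = -1.0, None
    for pick in product([0, 1], repeat=len(mitigation)):
        total_cost = sum(c for c, x in zip(cost, pick) if x)
        if total_cost <= budget:
            value = sum(m for m, x in zip(mitigation, pick) if x)
            if value > best_value:
                best_value, best_pick = value, pick

    print(best_pick, best_value)        # (0, 1, 1, 1) 12.0
    ```

    A realistic instance with 30 candidate strategies and several objectives would use an integer-programming solver rather than enumeration, but the decision variables and budget constraint take the same 0-1 form.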

    Extending principal covariates regression for high-dimensional multi-block data

    This dissertation addresses the challenge of deciphering extensive datasets collected from multiple sources, such as health habits and genetic information, in the context of studying complex issues like depression. A data analysis method known as Principal Covariates Regression (PCovR) provides a strong basis for addressing this challenge. Yet analyzing these intricate datasets is far from straightforward. The data often contain redundant and irrelevant variables, making it difficult to extract meaningful insights. Furthermore, these data may involve different types of outcome variables (for instance, the variable pertaining to depression could manifest as a score from a depression scale or as a binary diagnosis (yes/no) from a medical professional), adding another layer of complexity. To overcome these obstacles, novel adaptations of PCovR are proposed in this dissertation. The methods automatically select important variables, categorize insights into those originating from a single source or from multiple sources, and accommodate various outcome variable types. The effectiveness of these methods is demonstrated in predicting outcomes and revealing the subtle relationships within data from multiple sources. Moreover, the dissertation offers a glimpse of future directions for enhancing PCovR. Implications of extending the method so that it selects important variables are critically examined, and an algorithm that has the potential to yield optimal results is suggested. In conclusion, this dissertation proposes methods to tackle the complexity of large data from multiple sources and points toward where opportunities may lie in the next line of research.
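    For readers unfamiliar with the base method being extended, the sketch below implements one common formulation of standard PCovR: for a weighting parameter alpha, the component scores are the leading eigenvectors of a weighted sum of the Gram matrix of X and the Gram matrix of the fitted values of Y on X. The data, alpha, and the number of components are illustrative, and the dissertation's sparse, multi-block, and mixed-outcome extensions are not reproduced here.

    ```python
    # A compact sketch of standard principal covariates regression (PCovR),
    # under one common eigendecomposition formulation; an illustration of
    # the baseline method, not the dissertation's extensions.
    import numpy as np

    def pcovr(X, Y, n_components=2, alpha=0.5):
        """Return orthonormal component scores T and loadings (Px, Py)."""
        H = X @ np.linalg.pinv(X)          # projector onto the column space of X
        G = (alpha * (X @ X.T) / np.linalg.norm(X) ** 2
             + (1 - alpha) * (H @ Y) @ (H @ Y).T / np.linalg.norm(Y) ** 2)
        _, eigvecs = np.linalg.eigh(G)     # eigenvalues in ascending order
        T = eigvecs[:, -n_components:]     # top components as orthonormal scores
        return T, T.T @ X, T.T @ Y         # X-loadings and Y regression weights

    rng = np.random.default_rng(2)
    X = rng.normal(size=(50, 10))                       # predictor block
    Y = X[:, :2] @ rng.normal(size=(2, 1)) + 0.1 * rng.normal(size=(50, 1))
    T, Px, Py = pcovr(X, Y)
    print(T.shape, Px.shape, Py.shape)                  # (50, 2) (2, 10) (2, 1)
    ```

    Setting alpha = 1 recovers principal component analysis on X, while alpha approaching 0 pushes the components toward reduced-rank regression; this reconstruction-versus-prediction trade-off is the core of what the dissertation's adaptations build on.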
