
    Optimal modelling and experimentation for the improved sustainability of microfluidic chemical technology design

    Optimization of the dynamics and control of chemical processes holds the promise of improved sustainability for chemical technology by minimizing resource wastage. Anecdotally, a chemical plant may be substantially overdesigned, say by 35-50%, because designers accommodate uncertainties by providing greater flexibility. Once the plant is commissioned, techniques of nonlinear dynamics analysis can be used by process systems engineers to recoup some of this overdesign by optimizing plant operation through tighter control. At the design stage, coupling experimentation with data assimilation into the model, whilst using the partially informed, semi-empirical model to predict from parametric sensitivity studies which experiments to run, should optimally improve the model. This approach has been demonstrated for optimal experimentation, but only for a differential-algebraic model of the process. Typically, such models for online monitoring have been limited to low dimensions. Recently it has been demonstrated that inverse methods such as data assimilation can be applied to PDE systems with algebraic constraints, a substantially more complicated parameter estimation problem using finite element multiphysics modelling. Parametric sensitivity from such semi-empirical models can be used to predict the optimum placement of the sensors that collect the data that optimally inform the model for a microfluidic sensor system. This coupled optimal modelling and experiment procedure is ambitious in the scale of the modelling problem, as well as in the scale of the application - a microfluidic device. In general, microfluidic devices are sufficiently easy to fabricate, control, and monitor that they form an ideal platform for developing high-dimensional spatio-temporal models for simultaneous coupling with experimentation. As chemical microreactors already promise low raw-materials wastage through tight control of reagent contacting, improved design techniques should be able to augment optimal control systems to achieve very low resource wastage. In this paper, we discuss how the paradigm for optimal modelling and experimentation should be developed and foreshadow the exploitation of this methodology for the development of chemical microreactors and microfluidic sensors for online monitoring of chemical processes. Improvement in both of these areas promises to improve the sustainability of chemical processes through innovative technology. (C) 2008 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.

    Regional-scale hydrological modelling using multiple-parameter landscape zones and a quasi-distributed water balance model

    Regional-scale catchments are typically characterised by natural variability in climatic and land-surface features. This paper addresses the important question of the level of spatial disaggregation necessary to guarantee a hydrologically sound treatment of this variability. Using a simple hydrologic model along with physical catchment data, the problem is recast as a model parameter identification problem. With this framing, the subjective choice of what to include in the disaggregation scheme is removed and the problem is reconsidered in terms of what can be supported by the available data. With such an approach, the relative merit of different catchment disaggregation schemes is viewed in terms of their ability to provide constrained parameterisations that can be explained by the physical processes deemed active within a catchment. The outlined methodology was tested for a regional-scale catchment located in eastern Australia, and involved using the quasi-distributed VIC catchment model to recover the characteristic responses resulting from the disaggregation of the catchment into combinations of climate, soil and vegetation characteristics. A land-surface classification based on a combination of soil depth and land cover type was found to provide the most accurate streamflow predictions during a 10-year validation period. Investigation of the uncertainty in the predictions due to weakly identified parameters, however, revealed that a simpler classification based solely on land cover actually provided a more robust parameterisation of streamflow response. The result points to the hydrological importance of distinguishing between forested and non-forested land cover types at the regional scale, and suggests that, given additional information, soil-depth/storage considerations may also have proved significant. Improvements to the outlined method are discussed in terms of increasing the informative content available to differentiate between competing catchment responses. Keywords: regional-scale, spatial variability, disaggregation, hydrotype, quasi-distributed, parameterisation, uncertainty.

    Advanced Methods for Dose and Regimen Finding During Drug Development: Summary of the EMA/EFPIA Workshop on Dose Finding (London 4-5 December 2014)

    Inadequate dose selection for confirmatory trials is still one of the most challenging issues in drug development, as illustrated by high rates of late-stage attrition in clinical development and by postmarketing commitments required by regulatory institutions. In an effort to shift the current paradigm in dose and regimen selection and to highlight the availability and usefulness of well-established and regulatory-acceptable methods, the European Medicines Agency (EMA) in collaboration with the European Federation of Pharmaceutical Industries and Associations (EFPIA) hosted a multistakeholder workshop on dose finding (London 4-5 December 2014). Several methodologies that could constitute a toolkit for drug developers and regulators were presented. These methods are described in the present report: they include five advanced methods for data analysis (empirical regression models, pharmacometrics models, quantitative systems pharmacology models, MCP-Mod, and model averaging) and three methods for study design optimization (Fisher information matrix (FIM)-based methods, clinical trial simulations, and adaptive studies). Pairwise comparisons were also discussed at the workshop, though mostly for historical reasons. This paper discusses the added value and limitations of these methods as well as challenges for their implementation. Some applications in different therapeutic areas are also summarized, in line with the discussions at the workshop. There was agreement at the workshop that selection of the dose for phase III is an estimation problem and should not be addressed via hypothesis testing. Dose selection for phase III trials should be informed by well-designed dose-finding studies; however, the specific choice of method(s) will depend on several aspects and it is not possible to recommend a generalized decision tree. There are many valuable methods available, the methods are not mutually exclusive, and they should be used in conjunction to ensure a scientifically rigorous understanding of the dosing rationale.
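
    As a concrete, deliberately simplified illustration of treating dose selection as an estimation problem, the sketch below fits a standard Emax dose-response model to made-up data and reads a target dose off the fitted curve; the dose levels, responses, and 80%-of-Emax target are illustrative assumptions, and this is plain nonlinear regression rather than MCP-Mod or any other specific method discussed at the workshop.

```python
# Minimal sketch: fitting an Emax dose-response model to hypothetical trial data,
# treating dose selection as an estimation problem rather than a hypothesis test.
import numpy as np
from scipy.optimize import curve_fit

def emax(dose, e0, emax_, ed50):
    """Standard Emax model: E(dose) = E0 + Emax * dose / (ED50 + dose)."""
    return e0 + emax_ * dose / (ed50 + dose)

# Hypothetical mean responses at five dose levels (illustrative numbers only).
doses = np.array([0.0, 10.0, 25.0, 50.0, 100.0])
responses = np.array([1.1, 3.0, 4.4, 5.6, 6.3])

popt, pcov = curve_fit(emax, doses, responses, p0=[1.0, 6.0, 20.0])
e0_hat, emax_hat, ed50_hat = popt

# The fitted curve can then be interrogated for, e.g., the dose giving 80% of the
# maximal effect, with uncertainty taken from the parameter covariance matrix pcov.
target_dose = 0.8 * ed50_hat / (1.0 - 0.8)
print(e0_hat, emax_hat, ed50_hat, target_dose)
```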

    An artificial neural network approach to recognise kinetic models from experimental data

    The quantitative description of the dynamic behaviour of reacting systems requires the identification of an appropriate set of kinetic model equations. Selecting the correct model may pose substantial challenges because there may be a large number of candidate kinetic model structures. In this work, a model selection approach is presented in which an Artificial Neural Network classifier is trained to recognise appropriate kinetic model structures given the available experimental evidence. The method does not require the fitting of kinetic parameters and is well suited to cases with a large number of candidate kinetic mechanisms. The approach is demonstrated on a simulated case study on the selection of a kinetic model describing the dynamics of a three-component reacting system in a batch reactor. The sensitivity of the approach to changes in the experimental design and in the system noise is assessed.
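
    The paper's case study involves a three-component batch system; the sketch below illustrates only the underlying idea under much simpler assumptions: two single-species candidate rate laws, simulated noisy concentration profiles as training data, and a small scikit-learn neural network that classifies which candidate structure generated a profile without fitting kinetic parameters for each candidate.

```python
# Minimal sketch (assumed setup): classify which candidate kinetic model generated a
# concentration-vs-time profile, without per-candidate parameter fitting.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

t_eval = np.linspace(0.0, 10.0, 20)

def simulate(model_id, k, noise=0.02):
    """Simulate A -> B under first-order (model 0) or second-order (model 1) kinetics."""
    if model_id == 0:
        rhs = lambda t, c: [-k * c[0]]          # dA/dt = -k*A
    else:
        rhs = lambda t, c: [-k * c[0] ** 2]     # dA/dt = -k*A^2
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0], t_eval=t_eval)
    return sol.y[0] + np.random.normal(0.0, noise, t_eval.size)

# Build a labelled training set by sampling rate constants for each candidate structure.
rng = np.random.default_rng(0)
X, y = [], []
for model_id in (0, 1):
    for k in rng.uniform(0.1, 1.0, 200):
        X.append(simulate(model_id, k))
        y.append(model_id)
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```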

    Experimental Design for Sensitivity Analysis, Optimization and Validation of Simulation Models

    This chapter gives a survey of the use of statistical designs for what-if analysis in simulation, including sensitivity analysis, optimization, and validation/verification. Sensitivity analysis is divided into two phases. The first phase is a pilot stage, which consists of screening or searching for the important factors among (say) hundreds of potentially important factors. A novel screening technique is presented, namely sequential bifurcation. The second phase uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as a metamodel or a response surface. Regression analysis gives better results when the simulation experiment is well designed, using either classical statistical designs (such as fractional factorials) or optimal designs (such as those pioneered by Fedorov, Kiefer, and Wolfowitz). To optimize the simulated system, the analysts may apply Response Surface Methodology (RSM); RSM combines regression analysis, statistical designs, and steepest-ascent hill-climbing. To validate a simulation model, again regression analysis and statistical designs may be applied. Several numerical examples and case studies illustrate how statistical techniques can reduce the ad hoc character of simulation; that is, these statistical techniques can make simulation studies give more general results, in less time. Appendix 1 summarizes confidence intervals for expected values, proportions, and quantiles, in terminating and steady-state simulations. Appendix 2 gives details on four variance reduction techniques, namely common pseudorandom numbers, antithetic numbers, control variates or regression sampling, and importance sampling. Appendix 3 describes jackknifing, which may give robust confidence intervals. Keywords: least squares; distribution-free; non-parametric; stopping rule; run-length; Von Neumann; median; seed; likelihood ratio.
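
    A minimal sketch of the second phase described above, under assumed inputs: a placeholder function stands in for the simulation model, it is run at the points of a 2^3 full factorial design, and a first-order regression metamodel is fitted by least squares so that the estimated coefficients indicate which factors matter.

```python
# Minimal sketch: a first-order regression metamodel fitted to outputs of a
# (placeholder) simulation model run at the points of a 2^3 full factorial design.
import itertools
import numpy as np

def simulation(x1, x2, x3):
    """Stand-in for an expensive simulation run (illustrative response surface)."""
    return 5.0 + 2.0 * x1 - 1.5 * x2 + 0.5 * x1 * x3 + np.random.normal(0.0, 0.1)

# 2^3 full factorial design in coded units (-1, +1).
design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
y = np.array([simulation(*row) for row in design])

# Regression metamodel: y ~ beta0 + beta1*x1 + beta2*x2 + beta3*x3
X = np.column_stack([np.ones(len(design)), design])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated main effects:", beta[1:])   # screening: large |beta| flags important factors
```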

    Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks

    Motivation: Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g. biologic variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). Results: In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms the linearization approach in the case of widely distributed parameter sets and/or existing multiple steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. Availability: An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
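
    The sketch below illustrates the sigma-point idea on a toy problem that is not one of the paper's reaction networks: a Gaussian PDF over a single rate constant is propagated through a first-order decay ODE via the unscented transform, giving the mean and variance of the model response without linearization.

```python
# Minimal sketch: propagate a Gaussian parameter PDF through an ODE model response
# using the unscented (sigma-point) transform, instead of linearization.
import numpy as np
from scipy.integrate import solve_ivp

def response(k):
    """Model response at t = 2 for first-order decay dA/dt = -k*A, A(0) = 1."""
    sol = solve_ivp(lambda t, a: [-k * a[0]], (0.0, 2.0), [1.0], t_eval=[2.0])
    return sol.y[0, -1]

# Parameter PDF: k ~ N(mu, sigma^2). For a scalar parameter the standard sigma points
# are mu and mu +/- sqrt(n + kappa) * sigma, with n = 1.
mu, sigma, kappa = 0.5, 0.1, 2.0
n = 1
spread = np.sqrt(n + kappa) * sigma
sigma_points = np.array([mu, mu + spread, mu - spread])
weights = np.array([kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)])

# Push each sigma point through the nonlinear model, then recombine.
y_points = np.array([response(k) for k in sigma_points])
y_mean = np.dot(weights, y_points)
y_var = np.dot(weights, (y_points - y_mean) ** 2)
print("predicted response mean/variance:", y_mean, y_var)
```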

    Research and Development Effort in Developing the Optimal Formulations for New Tablet Drugs

    Seeking the optimal pharmaceutical formulation is considered one of the most critical research components during the drug development stage. It is also an R&D effort incorporating design of experiments and optimization techniques, prior to scaling up a manufacturing process, to determine the optimal settings of ingredients so that the desirable performance of related pharmaceutical quality characteristics (QCs) specified by the Food and Drug Administration (FDA) can be achieved. It is widely believed that process scale-up potentially results in changes in ingredients and other pharmaceutical manufacturing aspects, including site, equipment, batch size and process, with the purpose of satisfying the clinical and market demand. Nevertheless, there has not been any single comprehensive research work on how to model and optimize the pharmaceutical formulation when scale-up changes occur. Based upon the FDA guidance, the documentation tests for scale-up changes generally include dissolution comparisons and bioequivalence studies. Hence, this research proposes optimization models to ensure equivalent performance in terms of dissolution and bioequivalence for the pre-change and post-change formulations by extending the existing knowledge of formulation optimization. First, drug professionals traditionally consider the mean of a QC only; however, the variability of the QC of interest is essential because large variability may result in unpredictable safety and efficacy issues. In order to simultaneously take into account the mean and variability of the QC, the Taguchi quality loss concept is applied to the optimization procedure. Second, the standard 2×2 crossover design, which is widely used to evaluate bioequivalence, is incorporated into the ordinary experimental scheme so as to investigate the functional relationships between the characteristics relevant to bioequivalence and ingredient amounts. Third, as many associated FDA and United States Pharmacopeia regulations as possible, regarding formulation characteristics such as disintegration, uniformity, friability, hardness, and stability, are included as constraints in the proposed optimization models to enable the QCs to satisfy all the related requirements in an efficient manner. Fourth, when dealing with multiple characteristics to be optimized, the desirability function (DF) approach is frequently incorporated into the optimization. Although the weight-based overall DF is usually treated as an objective function to be maximized, this approach has a potential shortcoming: the optimal solutions are extremely sensitive to the weights assigned, and these weights are subjective in nature. Moreover, since the existing DF methods consider mean responses only, variability is not captured despite the fact that individuals may differ widely in their responses to a drug. Therefore, in order to overcome these limitations when applying the DF method to a formulation optimization problem, a priority-based goal programming scheme is proposed that incorporates modified DF approaches to account for variability. The successful completion of this research will establish a theoretically sound and statistically rigorous foundation for optimal pharmaceutical formulation without loss of generality. It is believed that the results from this research will have the potential to impact a wide range of tasks in the pharmaceutical manufacturing industry.
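
    To illustrate the weight sensitivity of the overall desirability function criticized above, the sketch below combines two hypothetical quality characteristics (a dissolution response and a tablet hardness response) into a weighted geometric-mean desirability and maximizes it over a single ingredient amount; the response models, desirability bounds, and weights are invented for illustration and do not come from the study.

```python
# Minimal sketch: weighted overall desirability for two hypothetical tablet quality
# characteristics as a function of a single ingredient amount (illustrative models only).
import numpy as np
from scipy.optimize import minimize_scalar

def dissolution(x):      # hypothetical mean % dissolved at 30 min vs. ingredient amount x
    return 60.0 + 30.0 * x - 10.0 * x ** 2

def hardness(x):         # hypothetical tablet hardness (kp) vs. ingredient amount x
    return 4.0 + 3.0 * x

def d_larger_is_better(y, low, high):
    """Derringer-type one-sided desirability: 0 below `low`, 1 above `high`."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def overall_desirability(x, w1=0.5, w2=0.5):
    d1 = d_larger_is_better(dissolution(x), 60.0, 85.0)
    d2 = d_larger_is_better(hardness(x), 4.0, 8.0)
    return d1 ** w1 * d2 ** w2          # weighted geometric mean

# Maximize D(x) over the allowed ingredient range; the optimum shifts with w1 and w2,
# which is the weight sensitivity identified as a shortcoming in the abstract.
res = minimize_scalar(lambda x: -overall_desirability(x), bounds=(0.0, 2.0), method="bounded")
print("optimal ingredient amount:", res.x)
```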

    Model-based design of experiments in the presence of structural model uncertainty: an extended information matrix approach

    The identification of a parametric model, once a suitable model structure is proposed, requires the estimation of its non-measurable parameters. Model-based design of experiments (MBDoE) methods have been proposed in the literature for maximising the collection of information whenever there is a limited amount of resources available for conducting the experiments. Conventional MBDoE methods do not take into account the structural uncertainty in the model equations, and this may lead to a substantial miscalculation of the information at the experimental design stage. In this work, an extended formulation of the Fisher information matrix is proposed as a metric of information that accounts for model misspecification. The properties of the extended Fisher information matrix are presented and discussed with the support of two simulated case studies.
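
    For orientation, the sketch below computes the conventional Fisher information matrix (not the paper's extended formulation) for an assumed exponential-decay model from finite-difference parameter sensitivities, and compares two candidate sampling-time designs by the D-optimality criterion.

```python
# Minimal sketch: conventional Fisher information matrix for y(t, theta) = theta1*exp(-theta2*t),
# built from finite-difference sensitivities, with a D-optimality score for comparing designs.
import numpy as np

def model(t, theta):
    return theta[0] * np.exp(-theta[1] * t)

def fim(times, theta, sigma=0.05, eps=1e-6):
    """FIM = (1/sigma^2) * S^T S, with S the sensitivity matrix dy/dtheta at the design points."""
    S = np.zeros((len(times), len(theta)))
    for j in range(len(theta)):
        dtheta = np.zeros_like(theta)
        dtheta[j] = eps
        S[:, j] = (model(times, theta + dtheta) - model(times, theta - dtheta)) / (2 * eps)
    return S.T @ S / sigma ** 2

theta_nominal = np.array([1.0, 0.3])
design_a = np.array([0.5, 1.0, 1.5, 2.0])        # early, clustered samples
design_b = np.array([0.5, 2.0, 5.0, 10.0])       # samples spread over the decay

# D-optimality: prefer the design with the larger determinant of the FIM.
for name, design in [("A", design_a), ("B", design_b)]:
    print(name, np.linalg.det(fim(design, theta_nominal)))
```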

    Exploiting Tournament Selection-Based Genetic Algorithm in Integrated AHP-Taguchi Analyses-GA Method for Wire Electrical Discharge Machining of AZ91 Magnesium Alloy

    Concurrent optimization and prioritization of wire electrical discharge machining (EDM) parameters can improve resource allocation in material processing. This study advances the integrated analytic hierarchy process (AHP)-Taguchi (T)-tournament-based genetic algorithm (tGA) method to moderate the influence of erroneous resource allocation on parametric analysis decisions in wire EDM. The structure builds on the AHP-T method’s platform obtained from the literature and develops it by including the tGA while processing the AZ91 magnesium alloy. The article evaluates the delta values for the average signal-to-noise ratios in the response table and deploys them to determine the winners of each tournament and, consequently, to mutate the chromosomes for performance improvement. The scale of relative importance, consistency index, optimal parametric setting, delta values, and ranks are all established and coupled with the total-value and maximum-value evaluations at the selection, crossover, and mutation stages of the genetic algorithm. The results at the mutation, crossover, and selection stages of the tournament selection process showed total values of 124410, 96650, and 70564, respectively. At the selection stage, the maximum value, which wins the tournament, is 28704. The crossover operation was accomplished after the 5th, 5th, and 6th bit for the first three pairs, respectively. For the selection and crossover operations, the maximum values are 28604 and 27944, respectively. The research clarifies which parameters are the best and worst during optimization using the AHP-T-tGA method.
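
    The sketch below shows a generic binary genetic algorithm with tournament selection, single-point crossover, and bit-flip mutation; the fitness function is a placeholder, and none of the AHP weights, signal-to-noise deltas, or numerical values reported in the study are reproduced.

```python
# Minimal sketch: a generic binary GA with tournament selection, single-point crossover
# and bit-flip mutation; the fitness below is a placeholder, not the paper's AHP-Taguchi score.
import random

CHROM_LEN, POP_SIZE, GENERATIONS = 16, 20, 30

def fitness(chrom):
    """Placeholder objective: decode the bit string and maximize its integer value."""
    return int("".join(map(str, chrom)), 2)

def tournament(pop, k=3):
    """Pick k random individuals and return the fittest one (the tournament winner)."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    point = random.randint(1, CHROM_LEN - 1)          # single-point crossover
    return a[:point] + b[point:]

def mutate(chrom, rate=0.05):
    return [1 - g if random.random() < rate else g for g in chrom]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(CHROM_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]
print("best fitness found:", max(map(fitness, population)))
```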