12 research outputs found

    Sensitivity Analysis of a Physiochemical Interaction Model Undergoing Changes in the Initial Condition and Duration of Experiment Time

    The mathematical modelling of physiochemical interactions in the framework of industrial and environmental physics usually relies on an initial value problem described by a single first-order ordinary differential equation. In this analysis, we study the sensitivity of the model to variations in the initial condition and in the duration of the experiment. These results, which we have not seen elsewhere, are analysed and discussed quantitatively.
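The kind of initial-condition sensitivity described in this abstract can be illustrated on the simplest such model, a first-order decay law dc/dt = -k·c (a hypothetical stand-in for the interaction model; the rate constant and values below are illustrative, not taken from the paper):

```python
import math

def solve(c0, k, t):
    """Exact solution of the first-order IVP dc/dt = -k*c, c(0) = c0."""
    return c0 * math.exp(-k * t)

def sensitivity_to_c0(c0, k, t, eps=1e-6):
    """Central finite-difference sensitivity of c(t) to the initial condition c0."""
    return (solve(c0 + eps, k, t) - solve(c0 - eps, k, t)) / (2 * eps)

k, c0 = 0.5, 1.0
for t in (0.0, 1.0, 2.0):
    s = sensitivity_to_c0(c0, k, t)
    # For this linear model the sensitivity is exp(-k*t), independent of c0,
    # so the influence of the initial condition decays with experiment time.
    print(f"t={t:.1f}  dc/dc0 = {s:.4f}  (exact {math.exp(-k * t):.4f})")
```

For this linear model the sensitivity happens to be available in closed form; the finite-difference recipe is what carries over to models without one.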

    The Role of Numerical Methods in the Sensitivity Analysis of a Density Parameter in a Passivation Rate Interaction

    The mathematical modelling of physiochemical interaction in the framework of industrial and environmental physics, which relies on an initial value problem, is defined by a first-order ordinary differential equation. Two numerical methods for studying the sensitivity analysis of physiochemical interaction data are developed. This mathematical technique is used to investigate the extent of the sensitivity of the density parameter over a time interval.
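One standard numerical route to such a parameter sensitivity, sketched here for a hypothetical first-order model dc/dt = -k·c, is to integrate the sensitivity equation for s = dc/dk alongside the state (forward Euler for brevity; the model and values are illustrative, not those of the paper):

```python
import math

def param_sensitivity(c0, k, t_end, n=20000):
    """Forward-Euler integration of dc/dt = -k*c together with its
    sensitivity equation ds/dt = -c - k*s, where s = dc/dk, s(0) = 0."""
    dt = t_end / n
    c, s = c0, 0.0
    for _ in range(n):
        dc = -k * c          # model right-hand side
        ds = -c - k * s      # obtained by differentiating the RHS w.r.t. k
        c += dt * dc
        s += dt * ds
    return c, s

c, s = param_sensitivity(c0=1.0, k=0.5, t_end=2.0)
exact = -2.0 * 1.0 * math.exp(-0.5 * 2.0)   # analytic dc/dk = -t*c0*exp(-k*t)
```

The second method alluded to in the abstract could be a finite-difference perturbation of k, against which the sensitivity-equation result can be cross-checked.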

    Emulsion copolymerization of styrene and butyl acrylate in the presence of a chain transfer agent. Part 2: parameters estimability and confidence regions

    Accurate estimation of the model parameters is required to obtain reliable predictions of the product's end-use properties. However, due to the mathematical model structure and/or a possible lack of measurements, the estimation of some parameters may be impossible. This paper focuses on the case where the main limitations to parameter estimability are the parameters' weak effect on the measured outputs or the correlation between the effects of two or more parameters. The objective of the method developed in this paper is to determine the subset of the most influential parameters that can be estimated from the available experimental data when the complete set of model parameters cannot be estimated. This approach has been applied to the mathematical model of the emulsion copolymerization of styrene and butyl acrylate in the presence of n-dodecyl mercaptan as a chain transfer agent. In addition, a new approach is used to better assess the true confidence regions and to evaluate the accuracy of the parameter estimates in a more reliable way.
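A common way to pick such a subset (one plausible reading of the approach; the test matrix below is made up) is to rank parameters by sequential orthogonalization of the sensitivity matrix: repeatedly select the column with the largest residual norm, then project the selected columns out of the rest, so that parameters whose effects are correlated with already-chosen ones are penalised:

```python
import numpy as np

def estimability_ranking(Z, n_select):
    """Rank parameters by sequential orthogonalization of the sensitivity
    matrix Z (rows: measurements, columns: parameters): pick the column
    with the largest residual norm, project it out, repeat."""
    Z = np.asarray(Z, dtype=float)
    ranked = []
    R = Z.copy()
    for _ in range(n_select):
        norms = np.linalg.norm(R, axis=0)
        if ranked:
            norms[ranked] = -1.0             # skip already-selected columns
        ranked.append(int(np.argmax(norms)))
        X = Z[:, ranked]
        coeffs, *_ = np.linalg.lstsq(X, Z, rcond=None)
        R = Z - X @ coeffs                   # what the chosen set cannot explain
    return ranked
```

In the made-up example below, parameter 2 has a large raw effect but is almost collinear with parameter 0, so the weakly acting but independent parameter 1 is ranked ahead of it.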

    A framework for model reliability and estimability analysis of crystallization processes with multi-impurity multi-dimensional population balance models

    The development of reliable mathematical models for crystallization processes may be very challenging due to the complexity of the underlying phenomena, the inherent Population Balance Models (PBMs) and the large number of parameters that need to be identified from experimental data. Due to the poor information content of the experiments, the structure of the model itself and correlation between model parameters, the mathematical model may contain more parameters than can be accurately and reliably identified from the available experimental data. A novel framework for parameter estimability for guaranteed optimal model reliability is proposed and then validated on a complex crystallization process. The latter is described by a differential algebraic system which involves a multi-dimensional population balance model that accounts for the combined effects of different crystal growth modifiers/impurities on the crystal size and shape distribution of needle-like crystals. Two estimability methods were combined: the first is based on a sequential orthogonalization of the local sensitivity matrix and the second is the Sobol method, a variance-based global sensitivity technique. The framework provides a systematic way to assess the quality of two nominal sets of parameters: one obtained from prior knowledge and the second obtained by simultaneous identification using global optimization. A cut-off value was identified from an incremental least-squares optimization procedure for both estimability methods, providing the required optimal subset of model parameters. The implemented methodology showed that, although noisy aspect ratio data were used, the eight most influential and least correlated parameters could be reliably identified out of twenty-three, leading to a crystallization model with enhanced prediction capability.
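The variance-based half of such a framework can be sketched with a minimal pick-freeze Monte Carlo estimator of first-order Sobol indices (a generic illustration, not the crystallization model; the additive test function is made up so the exact indices are known analytically):

```python
import numpy as np

def sobol_first_order(model, d, n=40000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices for a
    vectorized model of d independent U(0,1) inputs (Saltelli-style)."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = model(A), model(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # "freeze" column i from sample B
        # Estimator of V_i = Var(E[Y | X_i]), normalized by the total variance
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Additive test model Y = 4*X1 + X2: exact indices are 16/17 and 1/17.
S = sobol_first_order(lambda X: 4 * X[:, 0] + X[:, 1], d=2)
```

For expensive crystallization models the same estimator is typically applied to a cheap surrogate, since it needs n·(d + 2) model evaluations.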

    Computational and mathematical modelling of plant species interactions in a harsh climate

    This thesis considers the following assumptions, which are based on a few insights about the arctic climate: (1) the arctic climate can be characterised by a growing season called summer and a dormant season called winter; (2) in the summer season growing conditions are reasonably favourable and species are more likely to compete for plentiful resources; (3) in the winter season there would be no further growth and the plant populations would instead be subjected to fierce weather events such as storms, which are more likely to lead to the destruction of some or all of the biomass. Under these assumptions, is it possible to find those changes in the environment that might cause the interaction to change from competition (see section 1.9.1) to mutualism (see section 1.9.2)? The primary aim of this thesis is to provide a prototype simulation of the growth of two plant species in the arctic that: (1) takes account of different models for the summer and winter seasons; (2) permits the effects of a changing climate to be seen on each type of plant species interaction.
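The two-season idea can be prototyped with a standard Lotka-Volterra competition model run through alternating summer growth and winter die-off phases (all parameter values below are hypothetical, not taken from the thesis):

```python
def simulate_seasons(years=10, dt=0.01, r=(1.0, 0.8), a=(0.5, 0.6),
                     winter_survival=(0.6, 0.7)):
    """Two-season prototype: Lotka-Volterra competition during summer,
    then a multiplicative biomass loss representing winter storms."""
    n1, n2 = 0.1, 0.1
    for _ in range(years):
        # Summer: logistic growth with interspecific competition.
        for _ in range(int(1.0 / dt)):
            d1 = r[0] * n1 * (1 - n1 - a[0] * n2)
            d2 = r[1] * n2 * (1 - n2 - a[1] * n1)
            n1, n2 = n1 + dt * d1, n2 + dt * d2
        # Winter: no growth; storms destroy a fraction of the biomass.
        n1 *= winter_survival[0]
        n2 *= winter_survival[1]
    return n1, n2
```

Sweeping `winter_survival` then shows how a harsher climate suppresses both populations, which is the kind of climate-change effect the thesis aims to make visible.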

    Parameter Estimation of Complex Systems from Sparse and Noisy Data

    Mathematical modeling is a key component of various disciplines in science and engineering. A mathematical model which represents important behavior of a real system can be used as a substitute for the real process for many analysis and synthesis tasks. The performance of model-based techniques, e.g. system analysis, computer simulation, controller design, sensor development, state filtering, product monitoring, and process optimization, is highly dependent on the quality of the model used. Therefore, it is very important to be able to develop an accurate model from available experimental data. Parameter estimation is usually formulated as an optimization problem where the parameter estimate is computed by minimizing the discrepancy between the model prediction and the experimental data. If a simple model and a large amount of data are available, then the estimation problem is frequently well-posed and a small error in data fitting automatically results in an accurate model. However, this is not always the case. If the model is complex and only sparse and noisy data are available, then the estimation problem is often ill-conditioned and good data fitting does not ensure accurate model predictions. Many challenges that can often be neglected for estimation involving simple models need to be carefully considered for estimation problems involving complex models.
To obtain a reliable and accurate estimate from sparse and noisy data, a set of techniques is developed by addressing the challenges encountered in the estimation of complex models, including (1) model analysis and simplification, which identifies the important sources of uncertainty and reduces the model complexity; (2) experimental design for collecting information-rich data by setting optimal experimental conditions; (3) regularization of the estimation problem, which solves the ill-conditioned large-scale optimization problem by reducing the number of parameters; (4) nonlinear estimation and filtering, which fits the data by various estimation and filtering algorithms; (5) model verification by applying statistical hypothesis tests to the prediction error. The developed methods are applied to different types of models ranging from models found in the process industries to biochemical networks, some of which are described by ordinary differential equations with dozens of state variables and more than a hundred parameters.
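Technique (3), regularizing an ill-conditioned estimation problem, can be sketched for the linear(ized) case (a generic Tikhonov illustration; the data and prior below are made up):

```python
import numpy as np

def tikhonov_fit(J, y, theta0, lam):
    """Solve min ||J @ theta - y||^2 + lam * ||theta - theta0||^2.
    lam > 0 pulls poorly informed parameters toward the prior theta0,
    making an ill-conditioned problem solvable at the cost of some bias."""
    J, y, theta0 = (np.asarray(a, dtype=float) for a in (J, y, theta0))
    # Normal equations of the regularized least-squares problem.
    A = J.T @ J + lam * np.eye(J.shape[1])
    b = J.T @ y + lam * theta0
    return np.linalg.solve(A, b)
```

With lam = 0 and a well-conditioned Jacobian this reduces to ordinary least squares; as lam grows, the estimate shrinks toward the prior, trading variance for bias.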

    Model-based optimization of batch- and continuous crystallization processes

    Crystallization is an important separation process, extensively used in most chemical industries and especially in pharmaceutical manufacturing, either as a method of production or as a method of purification or recovery of solids. Typically, crystallization can have a considerable impact on tuning the critical quality attributes (CQAs), such as crystal size and shape distribution (CSSD), purity and polymorphic form, that impact the final product quality performance indicators and inherent end-use properties, along with the downstream processability. Therefore, one of the critical targets in controlled crystallization processes is to engineer specific properties of the final product. The purpose of this research is to develop systematic computer-aided methodologies for the design of batch and continuous mixed suspension mixed product removal (MSMPR) crystallization processes through the implementation of simulation models and optimization frameworks. By manipulating the critical process parameters (CPPs), the achievable range of CQAs and the feasible design space (FDS) can be identified. Paracetamol in water and potassium dihydrogen phosphate (KDP) in water are considered as the model chemical systems. The studied systems are modeled utilizing single and multi-dimensional population balance models (PBMs). For the batch crystallization systems, single and multi-objective optimization was carried out for the determination of optimal operating trajectories by considering the mean crystal size, the distribution's standard deviation and the aspect ratio of the population of crystals as the CQAs represented in the objective functions. For the continuous crystallization systems, the attainable region theory is employed to identify the performance of multi-stage MSMPRs for various operating conditions and configurations. Multi-objective optimization is also applied to determine a Pareto optimal attainable region with respect to multiple CQAs.
By identifying the FDS of a crystallization system, the manufacturing capabilities of the process can be explored, in terms of mode of operation, CPPs, and equipment configurations, that would lead to the selection of optimum operation strategies for the manufacturing of products with desired CQAs under certain manufacturing and supply chain constraints. Nevertheless, developing reliable first-principles mathematical models for crystallization processes can be very challenging due to the complexity of the underlying phenomena, inherent to population balance models (PBMs). Therefore, a novel framework for parameter estimability for guaranteed optimal model reliability is also proposed and implemented. Two estimability methods are combined and compared: the first is based on a sequential orthogonalization of the local sensitivity matrix and the second is the Sobol method, a variance-based global sensitivity technique. The framework provides a systematic way to assess the quality of two nominal sets of parameters: one obtained from prior knowledge and the second obtained by simultaneous identification using global optimization. A multi-dimensional population balance model that accounts for the combined effects of different crystal growth modifiers/impurities on the crystal size and shape distribution of needle-like crystals was used to validate the methodology. A cut-off value is identified from an incremental least-squares optimization procedure for both estimability methods, providing the required optimal subset of model parameters. In addition, a model-based design of experiments (MBDoE) methodology is also reported to determine the optimal experimental conditions yielding the most informative process data. The implemented methodology showed that, although noisy aspect ratio data were used, the eight most influential and least correlated parameters could be reliably identified out of twenty-three, leading to a crystallization model with enhanced prediction capability.
A systematic model-based optimization methodology for the design of crystallization processes in the presence of multiple impurities is also investigated. Supersaturation control and impurity inclusion are combined to evaluate the effect on the product's CQAs. To this end, a morphological PBM is developed for the modelling of the cooling crystallization of pure KDP in aqueous solution, as a model system, in the presence of two competitive crystal growth modifiers/additives: aluminum sulfate and sodium hexametaphosphate. The effect of the optimal temperature control with and without the additives on the CQAs is presented utilizing multi-objective optimization. The results indicate that the attainable size and shape attributes can be considerably enhanced due to advanced operation flexibility. In particular, it is shown that the shape of the KDP crystals can be affected even by the presence of a small quantity of additives, and their morphology can be modified from needle-like to spherical, which is more favourable for processing. In addition, the multi-impurity PBM is extended by the utilization of a high-resolution finite volume (HR-FV) scheme, instead of the standard method of moments (SMOM), to enable the full reconstruction and dynamic modelling of the crystal size and shape distribution. The implemented methodology illustrated the capabilities of utilizing high-fidelity computational models for the investigation of crystallization processes in impure media for process and product design and optimization purposes.
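The standard method of moments (SMOM) mentioned in this abstract reduces a 1-D population balance to a small ODE system for the leading moments of the distribution. A minimal sketch with constant growth and nucleation rates (real crystallization models make both supersaturation-dependent; all values are illustrative):

```python
import numpy as np

def moments_growth_nucleation(mu0, G, B, t_end, n=1000):
    """Standard method of moments for a 1-D population balance with
    size-independent growth rate G and nucleation rate B:
        dmu_0/dt = B,    dmu_j/dt = j * G * mu_{j-1}  (j >= 1).
    Integrated by forward Euler for brevity."""
    mu = np.array(mu0, dtype=float)   # mu[0..3]: number, length, area, volume proxies
    dt = t_end / n
    for _ in range(n):
        dmu = np.empty_like(mu)
        dmu[0] = B
        for j in range(1, len(mu)):
            dmu[j] = j * G * mu[j - 1]
        mu = mu + dt * dmu
    return mu
```

The price of this closure is that the full size (and shape) distribution is lost, which is exactly why the abstract swaps SMOM for a high-resolution finite volume scheme when the full CSSD is needed.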

    Run-to-Run Optimization of Biochemical Batch Processes in the Presence of Model-Plant Mismatch

    An increased demand for novel pharmaceuticals such as recombinant proteins with therapeutic potential has led to significant advances in the operation of biotechnological processes. In general, biochemical processes are characterized by nonlinear behavior and a sensitivity to environmental conditions. Furthermore, due to their complex operation, exposure to contamination and the low volume of the obtained product, these processes are still frequently operated in batch or fed-batch reactors. The repetitive nature of batch processes motivates the use of previous experimental effort to improve the performance of future batch operations. In this way, a so-called run-to-run optimization can be performed where the measurements of the current batch-run are utilized to determine the input for the next experiment. To conduct this step in a systematic and reliable manner, fundamental process models can be used for prediction and optimization purposes. This way, it is possible to determine the input for the next iteration from the predicted optimum obtained by calibrating a model based on measurements from the current batch-run. Fundamental models are typically derived from the underlying physical phenomena of the process. However, to make these models useful and tractable, it is common to make assumptions and simplifications during the model development. As a result, there often exists mismatch between the model and the process under study. In the presence of model-plant mismatch, the set of model parameter estimates which satisfy an identification objective may not result in an accurate prediction of the gradients of the cost function and constraints, which are essential for optimization. To still ensure convergence to the optimum, the method of simultaneous identification and optimization aims at forcing the predicted gradients to match the measured gradients by adapting the model parameters.
At the same time, a correction factor is introduced into the model output so that the previously achieved fitting accuracy can be maintained. This results in a set of model parameters that reconcile the objectives of identification and optimization in the presence of model-plant mismatch. Although the method provides the potential for dealing with structural error in iterative optimization schemes, there exist several challenges that have to be addressed before it is applicable to more complex systems. For example, when dealing with models containing a large number of parameters, it is unclear which parameters should be selected for calibration and adaptation. Since updating all available parameters is impractical due to estimability problems and over-fitting, there is a motivation for adapting only a subset of parameters. Furthermore, for this method to be more efficient under uncertainty, it is necessary to introduce additional robustness to uncertainty in initial conditions and gradient measurements. Finally, it is essential to develop experimental design criteria that will provide the user with more informative experiments to speed up convergence to the optimum and to calibrate the model with better accuracy. Following the above, this work presents the following new contributions: (i) An algorithmic approach to select a subset of parameters based on the sensitivities of the model outputs as well as of the cost function and constraint gradients. (ii) A run-to-run optimization formulation that is robust to uncertainties in initial batch conditions, based on polynomial chaos expansions that are used to quantify the uncertainty and to propagate it onto the optimization cost. (iii) A modified parameter identification objective based on the minimization of the ratio of the sum of squared prediction errors to a parametric sensitivity measure to speed up convergence of the run-to-run procedure.
(iv) The use of uncertainty bounds on the predicted trajectories to ensure model accuracy while solving the parameter identification problem described in item (iii) and to determine whether a model update is necessary at any given run. (v) The use of a design of experiments approach within the run-to-run optimization procedure to optimally complement the cost gradient information that is already available from previous batch experiments. The presented methods are shown to be efficient and facilitate the use of complex models for run-to-run optimization of batch processes. Several case studies of cell culture processes are presented to illustrate the improvements in robustness and performance. These case studies involve batch, fed-batch and perfusion operations. A part of this work has been developed in collaboration with an industrial partner whose main line of business is the development of perfusion growth media for mammalian cell culture operations.
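The core gradient-matching idea behind this thesis can be illustrated with a deliberately simple scalar example (both the "plant" and the mismatched "model" below are made up): a run-to-run correction term forces the model's predicted cost gradient to agree with the measured plant gradient, so the model-based input update converges to the plant optimum despite the structural mismatch:

```python
def run_to_run(u0, runs=30, step=0.2):
    """Toy run-to-run scheme under model-plant mismatch. The plant cost is
    (u - 2)^2 (optimum u* = 2) while the model believes it is (u - 1)^2."""
    plant_grad = lambda u: 2 * (u - 2.0)    # gradient "measured" on the plant
    model_grad = lambda u: 2 * (u - 1.0)    # gradient predicted by the model
    u = u0
    for _ in range(runs):
        # After each batch, estimate the gradient mismatch...
        lam = plant_grad(u) - model_grad(u)
        # ...and choose the next input by descending the corrected model
        # gradient, which now matches the plant gradient at u.
        u = u - step * (model_grad(u) + lam)
    return u
```

For this linear-gradient toy case the correction is exact and the iteration converges geometrically to u* = 2; for nonlinear plants the correction only matches the gradient locally, which is why robustness and experimental design (contributions (ii)-(v)) matter.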
