
    Iterative learning control of crystallisation systems

    Under the increasing pressure of issues such as reducing time to market, managing lower production costs, and improving operational flexibility, the batch process industries strive towards the production of high-value-added commodities, e.g. specialty chemicals, pharmaceuticals, agricultural products, and biotechnology-enabled products. For better design, consistent operation, and improved control of batch chemical processes, the sensing and computational capabilities provided by modern sensors, computers, algorithms, and software cannot be ignored. In addition, there is a growing demand for modelling and control tools based on process operating data. This study focuses on developing process-operating-data-based iterative learning control (ILC) strategies for batch processes, more specifically for batch crystallisation systems. The research first reviewed the existing control strategies, fundamentals, mechanisms, and the various process analytical technology (PAT) tools used in batch crystallisation control. Building on this background, an operating-data-driven ILC approach was developed to improve product quality from batch to batch. The idea of ILC is to exploit the repetitive nature of batch processes to automate recipe updating using process knowledge obtained from previous runs. The methodology was based on a linear time-varying (LTV) perturbation model in an ILC framework, providing convergent batch-to-batch improvement of the process performance indicator. A novel hierarchical ILC (HILC) scheme was then proposed for the systematic design of the supersaturation control (SSC) of a seeded batch cooling crystalliser. This model-free control approach is implemented in a hierarchical structure by assigning a data-driven supersaturation controller to the upper level and a simple temperature controller to the lower level.
To explore other data-based control approaches for crystallisation processes, the study revisited the existing direct nucleation control (DNC) approach. This part, however, was devoted to a detailed strategic investigation of the different possible DNC structures and, for the first time, a comparison of the results with those of a first-principles model-based optimisation. The DNC results in fact outperformed the model-based optimisation approach and established a guideline for selecting the preferred DNC structure. Batch chemical processes are distributed as well as nonlinear in nature, and need to be operated over a wide range of operating conditions, often near the boundary of the admissible region. Because linear lumped model predictive controllers (MPCs) are often subject to severe performance limitations, there is a growing demand for simple data-driven nonlinear control strategies for batch crystallisers that account for these spatio-temporal aspects. In this study, an operating-data-driven polynomial chaos expansion (PCE) based nonlinear surrogate modelling and optimisation strategy was presented for batch crystallisation processes. Model validation and optimisation results confirmed this approach as a promising route to nonlinear control. The proposed data-based methodologies were evaluated through simulation case studies, laboratory experiments, and industrial pilot-plant experiments. For the simulation case studies, detailed mathematical models covering reaction kinetics and heat and mass balances were developed for a batch cooling crystallisation system of paracetamol in water. Based on these models, rigorous simulation programs were developed in MATLAB®, which were then treated as the real batch cooling crystallisation system. The laboratory experimental work was carried out using a lab-scale system of paracetamol in isopropyl alcohol (IPA).
All the experimental work, including the qualitative and quantitative monitoring of the crystallisation experiments and products, demonstrated the combined application of various in situ process analytical technology (PAT) tools, such as focused beam reflectance measurement (FBRM), UV/Vis spectroscopy, and particle vision measurement (PVM). The industrial pilot-scale study was carried out at GlaxoSmithKline Bangladesh Limited, Bangladesh, where the experimental system comprised paracetamol and the other powdered excipients used to make paracetamol tablets. The methodologies presented in this thesis provide a comprehensive framework for data-based dynamic optimisation and control of crystallisation processes. All the simulation and experimental evaluations of the proposed approaches emphasised the potential of data-driven techniques to provide considerable advances over the current state of the art in crystallisation control.
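The hierarchical SSC structure described above, with a supersaturation controller on the upper level and a temperature controller on the lower level, can be sketched roughly as follows. This is a minimal illustration, not the thesis's controller: the solubility curve, concentrations, gains, and operating range are all invented for the example.

```python
# Minimal sketch of a hierarchical supersaturation control (SSC) cascade.
# Upper level: pick a temperature setpoint that holds the measured
# concentration at a target relative supersaturation.
# Lower level: a simple proportional temperature controller.
# All numbers and functions below are illustrative assumptions.

def solubility(T):
    """Hypothetical solubility (g solute / g solvent) vs temperature (deg C)."""
    return 1.5e-4 * T**2 + 2.0e-3 * T + 0.05

def ssc_setpoint(c, s_target):
    """Upper level: find the temperature at which concentration c gives
    relative supersaturation s = c/solubility(T) - 1 closest to s_target,
    by a coarse scan over a plausible operating range (0-60 deg C)."""
    best_T, best_err = 0.0, float("inf")
    for i in range(601):
        T = i * 0.1
        err = abs(c / solubility(T) - 1.0 - s_target)
        if err < best_err:
            best_T, best_err = T, err
    return best_T

def temperature_controller(T, T_set, gain=0.5):
    """Lower level: proportional step of the crystalliser temperature."""
    return T + gain * (T_set - T)

# One cascade step: concentration 0.2 g/g, target supersaturation 5 %,
# current temperature 40 deg C.
T_set = ssc_setpoint(0.2, 0.05)
T_new = temperature_controller(40.0, T_set)
```

In a real SSC implementation the concentration would come from an in situ measurement (e.g. UV/Vis) each sampling instant, and the lower-level loop would track `T_set` through the crystalliser's jacket.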

    Batch-to-Batch Iterative Learning Control of a Fed-Batch Fermentation Process

    In this work, iterative learning control (ILC) of a fed-batch fermentation process using linearised models has been studied. The repetitive nature of batch processes enables ILC to use information from a previous batch to improve the performance of the current batch, such that the product quality converges asymptotically to the desired trajectory. The basic batch-to-batch ILC law expresses the control action of the current batch as the sum of the control action from the previous batch and the deviation of the output trajectory from the desired reference trajectory, weighted by a learning rate. To address the process non-linearity, the control policy and the output trajectory were linearised around their respective nominal trajectories. The linearised models were then identified using multiple linear regression (MLR), principal component regression (PCR), and partial least squares (PLS). To curb the effects of plant-model mismatch and process variations, the linearised models were re-identified after each batch operation. This was done by selecting the immediately preceding batch as the nominal batch and adding the newly obtained process data to the historical data set on completion of each batch run. The weighting matrices in the objective function were selected carefully, since they have a major influence on the robust performance of the process. The PLS and PCR models effectively addressed the issue of process collinearity. The proposed batch-to-batch ILC strategy was applied to a simulated fed-batch fermentation process for the production of secreted protein. The results of the optimal control policy were comparable to those obtained using the full mechanistic model. ILC, a simple yet effective optimal control strategy, has thus been demonstrated to be a viable option for complex processes such as batch processes, where mechanistic models are difficult to develop.
Keywords: Iterative Learning Control, batch process, fed-batch fermentation, batch to batch ILC, control policy. DOI: 10.7176/CMR/14-3-02. Publication date: August 31st 202
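The basic batch-to-batch ILC law described above, new control action equal to the previous batch's action plus the tracking error weighted by a learning rate, can be sketched on a toy problem. The plant matrix `G`, learning gain `L`, and trajectories below are illustrative assumptions standing in for the linearised batch model, not the paper's fermentation process.

```python
# Sketch of the basic batch-to-batch ILC update u_{k+1} = u_k + L (y_ref - y_k)
# on a toy static linear "batch plant" y = G u. With L = 0.5 * inv(G) the
# tracking error contracts by a factor 0.5 every batch, so the output
# trajectory converges asymptotically to the reference.

import numpy as np

G = np.array([[1.0, 0.0],
              [0.5, 1.0]])          # toy lower-triangular batch dynamics
y_ref = np.array([1.0, 2.0])        # desired output trajectory
L = 0.5 * np.linalg.inv(G)          # learning gain (model-inverse based)

u = np.zeros(2)                     # control policy of the first batch
errors = []
for batch in range(20):
    y = G @ u                       # run one batch
    e = y_ref - y                   # deviation from the reference trajectory
    errors.append(np.linalg.norm(e))
    u = u + L @ e                   # recipe update for the next batch
```

In the paper's setting `G` would be replaced by a linearised model identified from batch data (MLR, PCR, or PLS) and re-identified after each run; the update law itself has the same shape.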

    Batch-to-batch iterative learning control of a fed-batch fermentation process

    PhD Thesis. Recently, iterative learning control (ILC) has been used in the run-to-run control of batch processes to directly update the control trajectory. The basic idea of ILC is to update the control trajectory for a new batch run using information from previous batch runs, so that the output trajectory converges asymptotically to the desired reference trajectory. The control policy update is calculated using models linearised around the nominal reference process input and output trajectories. The linearised models are typically identified using multiple linear regression (MLR), partial least squares (PLS) regression, or principal component regression (PCR). ILC has been shown to be a promising method for addressing model-plant mismatch and unknown disturbances. This work presents several improvements to the batch-to-batch ILC strategy, with applications to a simulated fed-batch fermentation process. To enhance the reliability of ILC, model prediction confidence is incorporated into the ILC optimisation objective function: wide model prediction confidence bounds are penalised in order to avoid unreliable control policy updates. This method proved very effective for well-chosen confidence-bound penalty factors. To further improve the performance of ILC, averaged reference trajectories and sliding-window techniques were introduced. To reduce the influence of measurement noise, the control policy is updated using the average input and output trajectories of the past few batches instead of only the immediately preceding batch. The linearised models are re-identified using a sliding window of past batches, in which the earliest batch is removed and the newest batch is added to the model identification data set. The effects of the various parameters were investigated for the MLR, PCR, and PLS methods. These techniques significantly improve the control performance.
In model-based ILC the weighting matrices, Q and R, in the objective function have a significant impact on the control performance. Therefore, to exploit the potential of the objective function, adaptive weighting parameters were investigated for batch-to-batch ILC with updated models. Significant improvements in performance stability were observed for all three methods. All three proposed techniques delivered improvements in stability, reliability, and/or convergence speed. To further investigate the versatility of ILC, the above-mentioned techniques were combined, and the results are discussed in this thesis.
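The sliding-window re-identification idea described above can be sketched as follows: keep the most recent W batches in the identification set, drop the oldest when a new batch finishes, and refit the linearised model by least squares (the MLR case). The "process", window length, and noise level are illustrative assumptions, not the thesis's fermentation model.

```python
# Sketch of sliding-window model re-identification for batch-to-batch ILC.
# Each finished batch contributes one (input, output) record; the model is
# refit on only the last W batches, so old operating data is forgotten.

import numpy as np

rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])  # "true" process parameters (unknown to us)

W = 5                               # sliding-window length (illustrative)
window = []                         # identification data set: (u, y) records

def finish_batch(u):
    """Simulate one batch: noisy linear response y = u . theta + noise."""
    return u @ true_theta + 0.01 * rng.standard_normal()

for k in range(12):
    u = rng.standard_normal(2)      # control policy applied in this batch
    y = finish_batch(u)
    window.append((u, y))           # add the newest batch ...
    if len(window) > W:
        window.pop(0)               # ... and remove the earliest one
    U = np.array([u_i for u_i, _ in window])
    Y = np.array([y_i for _, y_i in window])
    theta_hat, *_ = np.linalg.lstsq(U, Y, rcond=None)  # refit MLR model
```

For the PCR and PLS variants, the least-squares fit would be replaced by regression on principal components or latent variables, which is what handles the collinearity mentioned in the abstract.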

    Transfer learning for batch process optimal control using LV-PTM and adaptive control strategy

    In this study, we investigate data-driven optimal control of a new batch process. Existing data-driven optimal control methods often overlook an important problem: because a new batch process has been in operation for only a short time, the modeling data available in the initial stage can be insufficient. To address this issue, we introduce the idea of transfer learning: a latent variable process transfer model (LV-PTM) is adopted to transfer sufficient data and process information from similar processes to the new one, assisting its modeling and quality optimization control. However, owing to fluctuations in raw materials, equipment, etc., differences between similar batch processes are inevitable, which leads to a serious and complicated mismatch between the necessary conditions of optimality (NCO) of the new batch process and those of the LV-PTM-based optimization problem. In this work, we propose an LV-PTM-based batch-to-batch adaptive optimal control strategy, consisting of three stages, to ensure the best optimization performance over the whole operating lifetime of the new batch process. The adaptive control strategy includes model updating, data removal, and a modifier-adaptation methodology driven by final quality measurements. Finally, the feasibility of the proposed method is demonstrated by simulations.
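The modifier-adaptation step mentioned above corrects the model-based optimum batch by batch so that the iterates satisfy the plant's optimality conditions despite model mismatch. A minimal sketch of that mechanism follows; the scalar cost functions, mismatch, and filter gain are invented for illustration and have nothing to do with the paper's LV-PTM.

```python
# Minimal sketch of batch-to-batch modifier adaptation on a scalar input.
# The (transferred) model and the true plant disagree; a first-order
# modifier `lam`, estimated from plant measurements after each batch,
# is added to the model gradient so the modified optimum drifts to the
# plant optimum u* = 1.4.

def model_cost(u):
    return (u - 1.0) ** 2           # nominal model (optimum at u = 1.0)

def plant_cost(u):
    return (u - 1.4) ** 2           # true new process (optimum at u = 1.4)

def model_gradient(u):
    return 2.0 * (u - 1.0)

def plant_gradient(u, h=1e-4):
    # In practice this comes from batch-end quality measurements;
    # here a finite difference on the "plant" stands in for it.
    return (plant_cost(u + h) - plant_cost(u - h)) / (2.0 * h)

lam = 0.0                           # gradient modifier
u = 0.0
for batch in range(50):
    # NCO of the modified problem: model_gradient(u) + lam = 0
    u = 1.0 - lam / 2.0
    # filtered modifier update from plant-model gradient mismatch
    lam = 0.7 * lam + 0.3 * (plant_gradient(u) - model_gradient(u))
```

The filter factor (0.7 here) trades convergence speed against sensitivity to measurement noise, which is why the paper couples this stage with model updating and data removal.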

    A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms

    The benefits of automating design cycles for Bayesian inference-based algorithms are becoming increasingly recognized by the machine learning community. As a result, interest in probabilistic programming frameworks has increased considerably over the past few years. This paper explores a specific probabilistic programming paradigm, namely message passing in Forney-style factor graphs (FFGs), in the context of automated design of efficient Bayesian signal processing algorithms. To this end, we developed "ForneyLab" (https://github.com/biaslab/ForneyLab.jl), a Julia toolbox for message passing-based inference in FFGs. We show by example how ForneyLab enables automatic derivation of Bayesian signal processing algorithms, including algorithms for parameter estimation and model comparison. Crucially, owing to the modular makeup of the FFG framework, both the model specification and the inference methods are readily extensible in ForneyLab. To test this framework, we compared variational message passing as implemented by ForneyLab with automatic differentiation variational inference (ADVI) and Monte Carlo methods as implemented by the state-of-the-art tools "Edward" and "Stan". In terms of performance, extensibility, and stability, ForneyLab appears to hold an edge over its competitors for automated inference in state-space models. Comment: Accepted for publication in the International Journal of Approximate Reasoning.