1,062 research outputs found

    Lazy global feedbacks for quantized nonlinear event systems

    We consider nonlinear event systems with quantized state information and design a globally stabilizing controller that transmits to the plant only the minimally required number of control value changes along the feedback trajectory for a given initial condition. In addition, we present a non-optimal heuristic approach that may reduce the number of control value changes while requiring less computational effort. The constructions are illustrated by two numerical examples.
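
As a rough illustration of the lazy-transmission pattern described above, the following sketch evaluates a feedback law on quantized state measurements at every step but transmits a new control value to the plant only when it differs from the value currently applied. The uniform quantizer, the saturating feedback law, and the toy cubic plant are illustrative assumptions, not the construction or the examples from the paper.

```python
import numpy as np

def quantize(x, delta=0.1):
    """Uniform state quantization with resolution delta (assumed)."""
    return delta * np.round(np.asarray(x) / delta)

def feedback(xq):
    """Placeholder stabilizing feedback on the quantized state (illustrative)."""
    return -np.clip(xq, -1.0, 1.0)

def simulate(x0, steps=200, dt=0.05):
    x = np.asarray(x0, dtype=float)
    u_applied = None
    transmissions = 0
    for _ in range(steps):
        u_new = feedback(quantize(x))
        if u_applied is None or not np.allclose(u_new, u_applied):
            u_applied = u_new                 # transmit only when the value changes
            transmissions += 1
        x = x + dt * (-x**3 + u_applied)      # toy nonlinear plant dynamics
    return x, transmissions

state, sent = simulate([0.9, -0.7])
print(f"final state {state}, control values transmitted: {sent}")
```

The paper's contribution lies in constructing the feedback so that the number of control value changes along the trajectory is minimal; the sketch only shows the transmit-on-change bookkeeping.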

    Iterative learning control of crystallisation systems

    Under increasing pressure to reduce the time to market, lower production costs, and improve operational flexibility, the batch process industries strive towards the production of high value-added commodities, i.e. specialty chemicals, pharmaceuticals, agricultural products, and biotechnology-enabled products. For better design, consistent operation, and improved control of batch chemical processes, one cannot ignore the sensing and computational capabilities provided by modern sensors, computers, algorithms, and software. In addition, there is a growing demand for modelling and control tools based on process operating data. This study focuses on developing process-operating-data-based iterative learning control (ILC) strategies for batch processes, more specifically for batch crystallisation systems. To proceed, the research first took a step back to explore the existing control strategies, fundamentals, mechanisms, and various process analytical technology (PAT) tools used in batch crystallisation control. Building on this background, an operating-data-driven ILC approach was developed to improve product quality from batch to batch. The concept of ILC is to exploit the repetitive nature of batch processes to automate recipe updating using process knowledge obtained from previous runs. The methodology presented here is based on a linear time-varying (LTV) perturbation model in an ILC framework and provides convergent batch-to-batch improvement of the process performance indicator. As a novel contribution, a hierarchical ILC (HILC) scheme was proposed for the systematic design of the supersaturation control (SSC) of a seeded batch cooling crystalliser. This model-free control approach is implemented in a hierarchical structure by assigning a data-driven supersaturation controller to the upper level and a simple temperature controller to the lower level. To connect with other data-based control approaches for crystallisation processes, the study also revisited the existing direct nucleation control (DNC) approach. This part, however, was devoted to a detailed strategic investigation of the different possible DNC structures and, for the first time, to a comparison of the results with those of a first-principles model-based optimisation. The DNC results in fact outperformed the model-based optimisation approach and established a guideline for selecting the preferable DNC structure. Batch chemical processes are distributed as well as nonlinear in nature, need to be operated over a wide range of operating conditions, and often run near the boundary of the admissible region. As linear lumped model predictive controllers (MPCs) are often subject to severe performance limitations, there is a growing demand for simple data-driven nonlinear control strategies for batch crystallisers that account for the spatio-temporal aspects. In this study, an operating-data-driven polynomial chaos expansion (PCE) based nonlinear surrogate modelling and optimisation strategy was therefore presented for batch crystallisation processes. Model validation and optimisation results confirmed this approach as a promising route to nonlinear control. The proposed data-based methodologies were evaluated through simulation case studies, laboratory experiments, and industrial pilot-plant experiments.
For all the simulation case studies, detailed mathematical models covering the reaction kinetics and the heat and mass balances were developed for a batch cooling crystallisation system of paracetamol in water. Based on these models, rigorous simulation programs were developed in MATLAB®, which were then treated as the real batch cooling crystallisation system. The laboratory experimental work was carried out using a lab-scale system of paracetamol and isopropyl alcohol (IPA). All the experimental work, including the qualitative and quantitative monitoring of the crystallisation experiments and products, demonstrated a comprehensive application of various in situ process analytical technology (PAT) tools, such as focused beam reflectance measurement (FBRM), UV/Vis spectroscopy, and particle vision measurement (PVM). The industrial pilot-scale study was carried out at GlaxoSmithKline Bangladesh Limited, Bangladesh, on a system of paracetamol and the other powdered excipients used to make paracetamol tablets. The methodologies presented in this thesis provide a comprehensive framework for data-based dynamic optimisation and control of crystallisation processes. All the simulation and experimental evaluations of the proposed approaches emphasise the potential of the data-driven techniques to deliver considerable advances over the current state of the art in crystallisation control.
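
As a compact illustration of the batch-to-batch ILC idea summarised above, the sketch below updates the input trajectory for the next batch from the previous batch's tracking error using a linear batch-wise sensitivity model. The lower-triangular sensitivity matrix, the learning gain, the reference trajectory, and the toy plant with deliberate model-plant mismatch are illustrative assumptions, not the LTV perturbation model of the paracetamol crystalliser used in the thesis.

```python
import numpy as np

T = 50                                          # samples per batch
G = np.tril(np.full((T, T), 0.02))              # assumed lower-triangular batch sensitivity model
y_ref = np.linspace(0.0, 1.0, T)                # target quality trajectory (illustrative)
rng = np.random.default_rng(0)

def run_batch(u):
    """Toy 'plant': its true response differs from the model G (model-plant mismatch)."""
    G_true = np.tril(np.full((T, T), 0.025))
    return G_true @ u + 0.01 * rng.standard_normal(T)

# Regularised model-inverse learning filter, applied with a scalar learning gain.
L = np.linalg.solve(G.T @ G + 0.1 * np.eye(T), G.T)
u = np.zeros(T)
for k in range(15):
    y = run_batch(u)
    e = y_ref - y
    u = u + 0.5 * (L @ e)                       # recipe update for the next batch
    print(f"batch {k:2d}: RMS tracking error = {np.sqrt(np.mean(e**2)):.4f}")
```

In the thesis's setting the constant sensitivity matrix would be replaced by the identified LTV perturbation model, but the batch-wise update structure is analogous.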

    Sensitivitätsanalyse und robustes Prozessdesign pharmazeutischer Herstellungsprozesse

    The existence of parameter uncertainties (PU) limits model-based process design techniques. It also hinders the modernization of pharmaceutical manufacturing processes, which is necessitated by intensified market competition and Quality by Design (QbD) principles. Thus, in this thesis, suitable approaches are proposed for efficient and effective sensitivity analysis and robust design of pharmaceutical processes. Moreover, the point estimate method (PEM) and polynomial chaos expansion (PCE) are implemented for uncertainty propagation and quantification (UQ) within the proposed approaches. Global sensitivity analysis (GSA) provides quantitative measures of the influence of PU on process outputs over the entire parameter domain. Two GSA techniques are presented in detail and computed with the PCE. The results from the case studies show that GSA is able to quantify the heterogeneity of the information in the PU, and that model structure and parameter dependencies significantly affect the final GSA result as well as the output variation. Frameworks for robust process design are introduced to alleviate the adverse effect of PU on process performance. The first robust design framework is developed based on the PEM. The proposed approach has high computational efficiency and is able to take parameter dependencies into account. Then, a novel approach, in which the Gaussian mixture distribution (GMD) concept is combined with the PEM, is proposed to handle non-Gaussian distributions. The resulting GMD-PEM concept provides a better trade-off between process efficiency and the probability of constraint violations than other approaches. The second robust design framework is based on an iterative back-off strategy and the PCE. It provides designs with the desired robustness, while the associated computational expense is independent of the optimization problem. The decoupling of optimization and UQ makes it possible to apply robust process design to more complex pharmaceutical manufacturing processes with a large number of PU. In this thesis, the case studies include unit operations for (bio)chemical synthesis, separation (crystallization), and formulation (freeze-drying), which cover the complete production chain of pharmaceutical manufacturing. Results from the case studies reveal the significant impact of PU on process design. They also show the efficiency and effectiveness of the proposed frameworks regarding process performance and robustness in the context of QbD.

The pharmaceutical industry must both withstand increased competitive pressure and implement the QbD (Quality by Design) initiative demanded by regulatory authorities. Model-based methods can make a significant contribution, but parameter uncertainties (PU) hamper reliable model-based process design. The aim of this thesis is therefore the development of efficient approaches for sensitivity analysis and robust process design in the pharmaceutical industry. Two methods, the point estimate method (PEM) and polynomial chaos expansion (PCE), were implemented to enable efficient uncertainty quantification (UQ). Global sensitivity analysis (GSA) provides a systematic quantification of the effect of parameter variations on the simulation results. Two GSA techniques are presented in detail and demonstrated on examples. The results show both the added value of GSA in the context of robust process design and the relevance of correctly accounting for parameter correlations within the GSA. To further mitigate the detrimental influence of PU on model-based process design, additional concepts from robust optimization were investigated. A first concept was developed based on the PEM; it markedly reduces the computational effort and can also account for parameter correlations in the robust process design. In a second step, a new approach combining the Gaussian mixture distribution with the PEM was successfully implemented for non-normally distributed PU. Furthermore, an iterative back-off strategy was investigated, which likewise accounts for the PU while keeping the computational effort low. By decoupling UQ and optimization, considerably more complex pharmaceutical manufacturing processes with a large number of PU can be handled. The unit operations investigated in this thesis thus cover a large part of the entire production chain of pharmaceutical manufacturing. The results of the investigated examples clearly demonstrate the influence of PU on model-based process design. With the proposed approaches, the PU can be accounted for effectively and efficiently, with an optimal balance between computational effort and the required reliability, fully in the spirit of QbD.
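
To make the PCE-based uncertainty quantification and global sensitivity analysis described above concrete, the sketch below fits a second-order polynomial chaos surrogate to a toy two-parameter model by least-squares regression and reads first-order Sobol-type sensitivity indices directly from the expansion coefficients. The model f, the standard-normal parameter distributions, and the sample size are illustrative assumptions, not one of the pharmaceutical unit-operation models from the thesis.

```python
import numpy as np
from itertools import product

def f(theta1, theta2):
    """Toy process output depending on two uncertain parameters."""
    return np.exp(0.3 * theta1) + 0.5 * theta2 + 0.2 * theta1 * theta2

# Probabilists' Hermite polynomials He_0..He_2, orthogonal for N(0,1) inputs.
He = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2 - 1.0]
norm = [1.0, 1.0, 2.0]                        # E[He_n(X)^2] = n!

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 2))             # samples of the uncertain parameters
y = f(X[:, 0], X[:, 1])

# Tensor basis up to total degree 2, fitted by least squares (regression PCE).
multi_index = [(i, j) for i, j in product(range(3), range(3)) if i + j <= 2]
Psi = np.column_stack([He[i](X[:, 0]) * He[j](X[:, 1]) for i, j in multi_index])
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Variance decomposition read off the PCE coefficients.
var_terms = {a: c**2 * norm[a[0]] * norm[a[1]]
             for a, c in zip(multi_index, coef) if a != (0, 0)}
total_var = sum(var_terms.values())
S1 = sum(v for a, v in var_terms.items() if a[1] == 0) / total_var   # theta1 alone
S2 = sum(v for a, v in var_terms.items() if a[0] == 0) / total_var   # theta2 alone
print(f"first-order Sobol indices: S1 = {S1:.3f}, S2 = {S2:.3f}")
```

Reading the variance decomposition from the coefficients is what makes the PCE route inexpensive: no model evaluations are needed beyond those used to fit the surrogate.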

    Towards data-driven stochastic predictive control

    Data-driven predictive control based on the fundamental lemma by Willems et al. is frequently considered for deterministic LTI systems subject to measurement noise. However, little has been done on data-driven stochastic control. In this paper, we propose a data-driven stochastic predictive control scheme for LTI systems subject to possibly unbounded additive process disturbances. Based on a stochastic extension of the fundamental lemma and leveraging polynomial chaos expansions, we construct a data-driven surrogate Optimal Control Problem (OCP). Moreover, combined with an online selection strategy of the initial condition of the OCP, we provide sufficient conditions for recursive feasibility and for stability of the proposed data-driven predictive control scheme. Finally, two numerical examples illustrate the efficacy and closed-loop properties of the proposed scheme for process disturbances governed by different distributions.
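
A minimal sketch of the deterministic building block this paper extends: by the fundamental lemma, Hankel matrices built from a single persistently exciting input/output trajectory parameterise all trajectories of an LTI system, so future outputs can be predicted without an explicit model. The scalar example system, the horizons, and the plain least-squares solve are illustrative assumptions; the paper's stochastic extension via polynomial chaos and its recursive-feasibility analysis are not reproduced here.

```python
import numpy as np

def hankel(w, L):
    """Hankel matrix with L rows built from a scalar data sequence w."""
    T = len(w)
    return np.column_stack([w[i:i + L] for i in range(T - L + 1)])

# Collect data from a simple (assumed unknown) scalar LTI system.
rng = np.random.default_rng(1)
a, b = 0.8, 0.5
u_d = rng.standard_normal(120)                 # persistently exciting input
y_d = np.zeros_like(u_d)
for t in range(1, len(u_d)):
    y_d[t] = a * y_d[t - 1] + b * u_d[t - 1]

T_ini, N = 2, 10                               # past window and prediction horizon
Up, Uf = np.vsplit(hankel(u_d, T_ini + N), [T_ini])
Yp, Yf = np.vsplit(hankel(y_d, T_ini + N), [T_ini])

# Given the most recent T_ini samples and a candidate future input sequence,
# find a consistent combination g of data columns and read off Yf @ g.
u_ini, y_ini = u_d[-T_ini:], y_d[-T_ini:]
u_future = np.ones(N)                          # candidate input sequence
A = np.vstack([Up, Yp, Uf])
rhs = np.concatenate([u_ini, y_ini, u_future])
g, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print("predicted outputs:", np.round(Yf @ g, 3))
```

In a predictive-control setting, u_future becomes a decision variable of the surrogate OCP, with cost and constraints imposed on Uf @ g and Yf @ g.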

    Uncertainty Analysis and Robust Optimization of a Single Pore in a Heterogeneous Catalytic Flow Reactor System

    Catalytic systems are crucial to a wide range of chemical production processes, and as a result there is significant demand to develop novel catalyst materials and to optimize existing catalytic reactor systems. These optimization and design studies are most readily implemented using model-based approaches, which require less time and fewer resources than the alternative experiment-based approaches. The behaviour of a catalytic reactor system can be captured using multiscale modeling approaches that combine continuum transport equations with kinetic modeling approaches such as kinetic Monte Carlo (kMC) or the mean-field (MF) approximation, in order to model the relevant reactor phenomena on the length and time scales on which they occur. These multiscale modeling approaches are able to accurately capture the reactor behaviour and can be readily implemented to perform robust optimization and process improvement studies on catalytic reaction systems. Multiscale-based optimization of catalytic reactor systems is still an emerging field, however, and a number of challenges continue to hinder these methods. One such challenge is the computational cost: multiscale modeling approaches can be computationally intensive, which limits their application to model-based optimization. These computational burdens typically stem from the use of fine-scale models that lack closed-form expressions, such as kMC. A second common challenge involves model-plant mismatch, which can hinder the accuracy of the model. This mismatch stems from uncertainty in the reaction pathways and from difficulties in obtaining the values of the system parameters from experimental results. In addition, the uncertainty in catalytic flow reactor systems can vary in space due to kinetic events not taken into consideration by the multiscale model, such as non-uniform catalyst deactivation caused by poisoning and fouling mechanisms. Failure to adequately account for model-plant mismatch can result in substantial deviations from the predicted catalytic reactor performance and significant losses in reactor efficiency. Furthermore, uncertainty propagation techniques can themselves be computationally intensive and can further increase the computational demands of the multiscale models. Based on the above challenges, the objective of this research is to develop and implement efficient strategies for studying the effects of parametric uncertainty in key parameters on the performance of a multiscale single-pore catalytic reactor system, and subsequently to apply them to perform robust and dynamic optimization of the reactor system subject to uncertainty. To this end, low-order series expansions such as Polynomial Chaos Expansion (PCE) and Power Series Expansion (PSE) were implemented in order to efficiently propagate parametric uncertainty through the multiscale reactor model. These uncertainty propagation techniques were used to perform extensive uncertainty analyses on the catalytic reactor system in order to observe the impact of parametric uncertainty in various key system parameters on the catalytic reactor performance. Subsequently, these tools were incorporated into robust optimization formulations that sought to maximize the reactor productivity and minimize the variability in the reactor performance due to uncertainty.
    The results highlight the significant effect of parametric uncertainty on the reactor performance and illustrate how it can be accommodated when performing robust optimization. To assess the impact of spatially varying uncertainty due to catalyst deactivation on the catalytic reactor system, the uncertainty propagation techniques were applied to evaluate and compare the effects of spatially constant and spatially varying uncertainty distributions. To accommodate the spatially varying uncertainty, separate uncertainty descriptions were applied to each uncertain parameter at discretized points along the reactor length. The uncertainty comparison was furthermore extended through application to robust optimization. To reduce the computational cost, statistical data-driven models (DDMs) were identified to approximate the key statistical parameters (mean, variance, and probabilistic bounds) of the reactor output variability for each uncertainty distribution. The DDMs were incorporated into robust optimization formulations that aimed to maximize the reactor productivity subject to uncertainty and to minimize the uncertainty-induced output variability. The results demonstrate the impact of spatially varying parametric uncertainty on the catalytic reactor performance. They also highlight the importance of including it in order to adequately account for phenomena such as catalyst fouling in robust optimization and process improvement studies. The dynamic behaviour of the catalytic reactor system was similarly assessed in this work to evaluate the effects of uncertainty on the reactor performance as it evolves in time and space. For this study, uncertainty analysis was performed on a transient multiscale catalytic reactor model subject to changes in the system temperature. These results were used to formulate robust dynamic optimization studies that maximize the transient catalytic reactor performance; these studies aimed to determine the optimal temperature trajectories that maximize the reactor's performance under uncertainty. Dynamic optimization was also implemented to identify the optimal design and operating policies that allow the reactor, under spatially varying uncertainty, to meet targeted performance specifications with a specified level of confidence. These studies illustrate the benefits of performing dynamic optimization to improve the performance of multiscale process systems under uncertainty.
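
As a stylised version of the robust-optimization pattern described above, the sketch below propagates parametric uncertainty through a toy productivity model by Monte Carlo sampling (standing in for the PCE/PSE propagation used in this work) and selects the operating temperature that maximises a mean-minus-variability objective. The Arrhenius-type productivity expression, the deactivation penalty, the parameter distributions, and the variability weight are illustrative assumptions, not the multiscale single-pore reactor model.

```python
import numpy as np

rng = np.random.default_rng(2)
R = 8.314                                        # gas constant, J/(mol K)

def productivity(T, k0, Ea):
    """Toy productivity: Arrhenius rate damped by a deactivation penalty at high T."""
    rate = k0 * np.exp(-Ea / (R * T))
    deactivation = np.exp(-0.02 * max(T - 550.0, 0.0))
    return rate * deactivation

# Uncertain kinetic parameters (assumed distributions standing in for the PU).
k0_samples = rng.lognormal(mean=np.log(1e6), sigma=0.2, size=2000)
Ea_samples = rng.normal(loc=6.0e4, scale=2.0e3, size=2000)

weight = 1.0                                     # penalty on uncertainty-induced variability
best = None
for T in np.linspace(450.0, 650.0, 81):          # candidate operating temperatures (K)
    samples = productivity(T, k0_samples, Ea_samples)
    score = samples.mean() - weight * samples.std()
    if best is None or score > best[1]:
        best = (T, score, samples.mean(), samples.std())

T_opt, score, mu, sd = best
print(f"robust optimum near T = {T_opt:.1f} K: mean {mu:.3f}, std {sd:.3f}, score {score:.3f}")
```

Swapping the sampling loop for a PCE or power-series expansion of the output changes only how the mean and standard deviation are estimated; the robust selection step is unchanged.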