14 research outputs found

    New approaches for the real-time optimization of process systems under uncertainty

    In the process industry, the economical operation of systems is of utmost importance for stakeholders to remain competitive. Moreover, economic incentives can be used to drive the development of sustainable processes, which must be deployed to ensure continued human and ecological welfare. In the process systems engineering paradigm, model predictive control (MPC) and real-time optimization (RTO) are methods used to achieve operational optimality; however, both methods are subject to uncertainty, which can adversely affect their performance. Along with the challenges of uncertainty, formulations of economic optimization problems are largely problem-specific, as process utilities and products vary significantly by application; thus, many nascent processes have not received a tailored economic optimization treatment. This thesis focuses on avenues of economic optimization under uncertainty, namely the two-step RTO method, which updates process models via parameters, and the modifier adaptation (MA) method, which updates process models via error and gradient correction. In the case of parametric model uncertainty, the two-step RTO method is used. The parameter estimation (PE) step that accompanies RTO requires plant measurements that are often noisy, which can propagate noise to the parameter estimates and result in poor RTO performance. In the present work, a noise-abatement scheme is proposed such that high-fidelity parameter estimates are used to update a process model for economic optimization. This is achieved by bootstrapping the parameter estimates to compute bounds and determine the measurement set that results in the lowest parameter variation; the scheme is thus dubbed low-variance parameter estimation (lv-PE). This method is shown to improve process economics through truer set points and reduced dynamic behaviour. In the case of structural model mismatch (i.e., unmodelled phenomena), the MA approach is used, whereby gradient modifier (i.e., correction) terms must be recursively estimated until convergence. These modifier terms require plant perturbations to be performed, which incite time-consuming plant dynamics that delay operating point updates. In cases with frequent disturbances, MA may perform poorly as there is limited time to refine the modifiers. Herein, a partial modifier adaptation (pMA) method is proposed, which selects a subset of modifications to be made, thus reducing the number of necessary perturbations. Through this reduced experimental burden, the operating point refinement process is accelerated, resulting in quicker convergence to advantageous operating points. Additionally, failing to satisfy constraints during this refinement process can also result in poor performance via wasted below-specification products. Accordingly, the pMA method also includes an adjustment step that can drive the system to constraint-satisfying regions at each iteration. The pMA method is shown to economically outperform both the standard MA method and a related directional MA method in cases with frequent periodic disturbances.

    The economic optimization methods described above are implemented in novel processes to improve their economics, which can encourage further technological uptake. Post-combustion carbon capture (PCC) is the most advanced carbon capture technology, having been investigated extensively. PCC takes industrial flue gases and separates the carbon dioxide for later repurposing or storage. Most PCC operating schemes make decisions using simplified models since a mechanistic PCC model is large and difficult to solve. To this end, this thesis provides the first robust MPC that can address uncertainty in PCC with a mechanistic model. The advantage of the mechanistic model in robust optimal control is that it allows for a precise treatment of uncertainties in phenomenological parameters. Using the multi-scenario approach, discrete realizations of the uncertain parameters inside a given uncertainty region can be incorporated into the controller to produce control actions that result in robust closed-loop operation. In the case of jointly uncertain activity coefficients and flue gas flowrates, the proposed robust MPC is shown to improve performance with respect to a nominal controller (i.e., one that does not hedge against uncertainty) under various operational scenarios. In addition to the PCC robust control problem, the mechanistic model is used for economic optimization and state estimation via RTO and moving horizon estimation (MHE) layers, respectively. While the former computes economical set points, the latter uses few measurements to compute the full system state, which is necessary for a controller that uses a mechanistic model. These layers are integrated to operate the system economically via a new economic function that accounts for the most significant economic aspects of PCC, including the carbon economy, energy, chemical, and utility costs. The newly proposed MPC layer is novel in its ability to enable flexible control of the plant by manipulating fresh material streams to impact CO2 capture, and the MHE layer is the first to provide accurate system estimates to the controller using realistically accessible measurements. A joint MPC-MHE-RTO scheme is deployed for PCC and is shown to lead to more economical steady-state operation compared to constant set point counterfactuals under cofiring, diurnal operation, and price variation scenarios. The lv-PE scheme is also deployed for the PCC system, where it is found to improve set point economics with respect to traditional PE methods. The improvements are observed to occur through reduced emissions and more efficient energy use, thus having environmental co-benefits. Moreover, the lv-PE algorithm is used for uncertainty quantification to develop a robust RTO that leads to more conservative set points (i.e., less economic improvement) but lower set point variation (i.e., less control burden). The methodologies developed in this PhD thesis improve both the efficacy and the applicability of online economic optimization in engineering applications, where uncertainty is often present. They can be deployed by both academic and industrial practitioners who wish to improve the economic performance of their processes.
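
    As a loose illustration of the lv-PE idea described above, the sketch below (Python, with a hypothetical model function and generic least-squares fitting; not the thesis implementation) bootstraps parameter estimates for candidate measurement sets and keeps the set whose estimates vary the least.

    import numpy as np
    from scipy.optimize import least_squares

    def fit_params(u, y, model, theta0):
        """Ordinary least-squares parameter fit to one measurement set."""
        return least_squares(lambda th: model(u, th) - y, theta0).x

    def lv_pe(candidate_sets, model, theta0, n_boot=200, seed=0):
        """Low-variance PE sketch: bootstrap each candidate (u, y) measurement set,
        keep the set with the lowest total parameter variance, return its mean estimate."""
        rng = np.random.default_rng(seed)
        best = (np.inf, None)
        for u, y in candidate_sets:
            thetas = []
            for _ in range(n_boot):
                idx = rng.integers(0, len(y), len(y))          # resample with replacement
                thetas.append(fit_params(u[idx], y[idx], model, theta0))
            spread = float(np.sum(np.var(thetas, axis=0)))     # total parameter variance
            if spread < best[0]:
                best = (spread, np.mean(thetas, axis=0))
        return best[1]                                         # estimate passed to the RTO model

    Here model(u, theta) stands for any steady-state process model returning predicted measurements; the selected low-variance estimate would then parameterize the economic optimization layer.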

    Dynamic modeling of recirculating aquaculture systems with an integrated application of nonlinear model predictive control and moving horizon estimation

    Growing concerns regarding the sustainability of the aquaculture industry have led to the development of recirculating aquaculture systems (RAS), in which the addition of wastewater treatment units is accompanied by a reduction in water consumption and waste release. In this study, a mechanistic dynamic model of RAS was proposed and validated using experimental data available in the literature. Fish health is crucial to the profitability of an aquaculture facility; thus, fish performance and welfare, measured in terms of growth and mortality, were also incorporated within the proposed model. The model was then used to provide insights regarding the operation and management of RAS. According to the results of this analysis, continuous feeding was found to result in smaller fluctuations of waste product concentrations, which is more desirable for the stability of wastewater treatment. Furthermore, under low rates of water exchange, addition of a denitrification unit to the RAS would be necessary to avoid accumulation of nitrate. The environment used for fish growth (i.e., the rearing environment) plays a significant role in feed utilization as well as fish performance. Thus, water quality parameters such as the concentrations of oxygen and waste components should be continuously controlled to meet fish requirements. To achieve this goal, a nonlinear model predictive controller (NMPC) integrated with moving horizon estimation (MHE) was implemented in this study to control the RAS environment. The performance of the proposed control scheme was evaluated under full and partial accessibility of the states in the presence of process uncertainty and measurement noise. Assessment of the proposed controller was conducted by simulating the failure of a unit or the malfunction of a measurement device. In all scenarios, the proposed framework demonstrated the ability to maintain the water quality parameters close to their targets, indicating the promise of this control strategy for the closed-loop operation of RAS.
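
    The sketch below gives a minimal, generic picture of how an MHE estimator can feed an NMPC controller in a receding-horizon loop; it uses a toy scalar surrogate model and scipy, not the RAS model or the tuning from the thesis.

    import numpy as np
    from scipy.optimize import minimize

    def f(x, u):
        """Toy scalar surrogate (hypothetical): one Euler step of a waste-balance-like ODE."""
        return x + 0.1 * (1.0 - u * x)

    def mhe(y_window, u_window, x_guess):
        """Fit the window-initial state to noisy measurements, then propagate
        through the window to obtain the current state estimate."""
        def cost(z):
            x, c = z[0], 0.0
            for y, u in zip(y_window, u_window):
                c += (y - x) ** 2
                x = f(x, u)
            return c
        x = minimize(cost, [x_guess]).x[0]
        for u in u_window:
            x = f(x, u)
        return x

    def nmpc(x_now, x_sp, horizon=10):
        """Choose the input sequence that tracks the setpoint; apply only the first move."""
        def cost(u_seq):
            x, c = x_now, 0.0
            for u in u_seq:
                x = f(x, u)
                c += (x - x_sp) ** 2 + 1e-3 * u ** 2
            return c
        u_opt = minimize(cost, np.full(horizon, 1.0), bounds=[(0.0, 5.0)] * horizon).x
        return u_opt[0]

    At each sampling time the newest measurement is appended to the window, mhe returns the current state estimate, and nmpc computes the next control move, mirroring the integrated estimation/control loop described in the abstract.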

    Novel Methodologies in State Estimation for Constrained Nonlinear Systems under Non-Gaussian Measurement Noise & Process Uncertainty

    Chemical processes often involve scheduled or unscheduled changes in the operating conditions that may lead to non-zero-mean, non-Gaussian (e.g., uniform, multimodal) process uncertainties and measurement noises. Moreover, the distributions of the variables of a system subject to process constraints often may not follow Gaussian distributions. It is essential that state estimation schemes properly capture the non-Gaussianity in the system to successfully monitor and control chemical plants. The Kalman Filter (KF) and its extension, the Extended Kalman Filter (EKF), are well-known model-driven state estimation schemes for unconstrained applications. The present thesis initially performs state estimation with this approach for an unconstrained large-scale gasifier, a study that supports the efficiency and accuracy offered by KF. However, the underlying assumption in KF/EKF is that all state variables, input variables, process uncertainties, and measurement noises follow Gaussian distributions. Existing EKF-based approaches that consider constraints on the states and/or non-Gaussian uncertainties and noises require significantly larger computational costs than those observed in EKF applications. The current research aims to introduce an efficient EKF-based scheme, referred to as the constrained Abridged Gaussian Sum Extended Kalman Filter (constrained AGS-EKF), that generalizes EKF to perform state estimation for constrained nonlinear applications featuring non-zero-mean, non-Gaussian distributions. Constrained AGS-EKF uses Gaussian mixture models to approximate the non-Gaussian distributions of the constrained states, process uncertainties, and measurement noises. In the present abridged Gaussian sum framework, the main characteristics of the overall Gaussian mixture model are used to represent the distribution of the corresponding non-Gaussian variable. Constrained AGS-EKF includes new modifications in both the prior and posterior estimation steps of the standard EKF to capture the non-zero-mean distributions of the process uncertainties and measurement noises, respectively. These modified prior and posterior steps require the same computational cost as in EKF. Moreover, an intermediate step is considered in the constrained AGS-EKF framework that explicitly applies the constraints to the prior estimates of the state distributions. The additional computational cost of this intermediate step is relatively small compared to conventional approaches such as the Gaussian Sum Filter (GSF). Note that constrained AGS-EKF performs the modified EKF (consisting of the modified prior, intermediate, and posterior estimation steps) only once and thus avoids the additional computational costs and biased estimations often observed in GSFs. Moving Horizon Estimation (MHE) is an optimization-based state estimation approach that provides optimal estimates of the states. Although MHE increases the required computational cost compared to EKF, MHE is best known for constrained applications as it can take into account all the process constraints. This PhD thesis initially provides an error analysis showing that EKF can provide accurate estimates if it is constantly initialized by a constrained estimation scheme such as MHE (even though EKF is an unconstrained state estimator).

    Despite the benefits provided by MHE for constrained applications, this framework assumes that the distributions of the process uncertainties and measurement noises are zero-mean Gaussian, known a priori, and unchanged throughout the operation, i.e., known time-independent distributions, which may not be an accurate set of assumptions for real-world applications. Performing a set of MHEs (one MHE per Gaussian component in the mixture model) is likely to become computationally taxing and is therefore discouraged. Instead, the abridged Gaussian sum approach introduced in this thesis for the AGS-EKF framework can be used to improve MHE performance for applications involving non-Gaussian random noises and uncertainties. Thus, a new extended version of MHE, referred to as Extended Moving Horizon Estimation (EMHE), is presented that makes use of Gaussian mixture models, through the abridged Gaussian sum approach, to capture known time-dependent non-Gaussian distributions of the process uncertainties and measurement noises. This framework updates the Gaussian mixture models to represent the new characteristics of the known time-dependent distributions of noises/uncertainties upon scheduled changes in the process operation. These updates require relatively little additional CPU time, making EMHE an attractive estimation scheme for online applications in chemical engineering. As with the standard MHE, and despite the accuracy and efficiency offered by the EMHE scheme, the application of EMHE is limited to scenarios where the changes in the distributions of noises and uncertainties are known a priori. However, knowledge of the distributions of measurement noises or process uncertainties may not be available a priori if unscheduled operating changes occur during plant operation. Motivated by this aspect, a novel robust version of MHE, referred to as Robust Moving Horizon Estimation (RMHE), is introduced that improves the robustness and accuracy of the estimation by modelling online the unknown distributions of the measurement noises or process uncertainties. The RMHE problem involves additional constraints and decision variables compared to the standard MHE and EMHE problems, providing optimal Gaussian mixture models that represent the unknown distributions of the random noises or uncertainties along with the optimal estimated states. The additional constraints in the RMHE problem do not considerably increase the computational cost relative to the standard MHE; consequently, the present RMHE and the standard MHE require similar CPU times on average to provide the point estimates. The methodologies developed in this PhD thesis offer efficient MHE-based and EKF-based frameworks that significantly improve the performance of these state estimation schemes for practical chemical engineering applications.
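
    A small sketch of the moment-matching step behind the abridged Gaussian sum idea (a generic mixture-collapsing formula, not the thesis code): a Gaussian mixture describing a non-zero-mean noise is collapsed to a single overall mean and covariance, which can then be used to de-bias and tune an EKF-style update.

    import numpy as np

    def collapse_mixture(weights, means, covs):
        """Moment-match a Gaussian mixture to one Gaussian (overall mean and covariance)."""
        w = np.asarray(weights, dtype=float)
        mu = sum(wi * np.asarray(m, dtype=float) for wi, m in zip(w, means))
        cov = sum(wi * (np.asarray(C, dtype=float)
                        + np.outer(np.asarray(m, dtype=float) - mu,
                                   np.asarray(m, dtype=float) - mu))
                  for wi, m, C in zip(w, means, covs))
        return mu, cov

    # Example: a bimodal, non-zero-mean measurement noise
    mu_v, R = collapse_mixture([0.6, 0.4], [[-1.0], [2.0]], [[[0.5]], [[0.8]]])
    # In an EKF-type update, the innovation could then be corrected as (y - h(x) - mu_v),
    # with R used as the measurement-noise covariance.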

    Optimization of refinery preheat trains undergoing fouling: control, cleaning scheduling, retrofit and their integration

    Crude refining is one of the most energy-intensive industrial operations. The large amounts of crude processed, various sources of inefficiency, and tight profit margins motivate improving energy recovery. The preheat train, a large heat exchanger network, partially recovers the energy of distillation products to heat the crude, but it suffers from the deposition of material over time (fouling), which deteriorates its performance. This increases the operating cost, fuel consumption, and carbon emissions, and may reduce the production rate of the refinery. Fouling mitigation in the preheat train is essential for profitable long-term operation of the refinery: it aims to increase energy savings and to reduce operating costs and carbon emissions. Current alternatives to mitigate fouling are based on heuristic approaches that oversimplify the representation of the phenomena and ignore many important interactions in the system; hence they fail to fully achieve the potential energy savings. On the other hand, predictive first-principles models and mathematical programming offer a comprehensive way to mitigate fouling and optimize the performance of preheat trains, overcoming previous limitations. In this thesis, a novel modelling and optimization framework for heat exchanger networks under fouling is proposed, based on fundamental principles. The models developed were validated against plant data and other benchmark models, and they can predict with confidence the main effects of operating variables on the hydraulic and thermal performance of the exchangers and of the network. The optimization of the preheat train, an MINLP problem, aims to minimize the operating cost by: 1) dynamic flow distribution control, 2) cleaning scheduling, and 3) network retrofit. The framework developed allows considering these decisions individually or simultaneously, although it is demonstrated that an integrated approach exploits the synergies among decision levels and can further reduce the operating cost. An efficient formulation of the model disjunctions and time representation is developed for this optimization problem, as well as efficient solution strategies. To handle the combinatorial nature of the problem and the many binary decisions, a reformulation using complementarity constraints is proposed. Various realistic case studies are used to demonstrate the general applicability and benefits of the modelling and optimization framework. This is the first time that first-principles predictive models are used to optimize various types of decisions simultaneously in industrial-size heat exchanger networks. The optimization framework developed is taken further to an online application in a feedback loop. A multi-loop NMPC approach is designed to optimize the flow distribution and cleaning scheduling of preheat trains over two different time scales. Within this approach, dynamic parameter estimation problems are solved at frequent intervals to update the model parameters and cope with variability and uncertainty, while predictive first-principles models are used to optimize the performance of the network over a future horizon. Applying this multi-loop optimization approach to a case study of a real refinery demonstrates the importance of considering process variability when deciding on optimal fouling mitigation approaches.
    Uncertainty and variability have been ignored in all previous model-based fouling mitigation strategies, and this novel multi-loop NMPC approach addresses them so that the economic savings are enhanced. In conclusion, the models and optimization algorithms developed in this thesis have the potential to reduce the operating cost and carbon emissions of refining operations by mitigating fouling. They are based on accurate models and deterministic optimization that overcome the limitations of previous applications, such as poor predictability, ignoring variability and dynamics, ignoring interactions in the system, and using inappropriate tools for decision making.
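
    As a rough illustration of why fouling drives up fuel consumption, the sketch below uses an asymptotic fouling-resistance form commonly found in textbooks, with illustrative numbers (not the thesis deposition model), to show how the overall heat transfer coefficient and recovered duty decay with time on stream.

    import numpy as np

    # Asymptotic fouling resistance Rf(t) = Rf_inf * (1 - exp(-t/tau)), added in series
    # with the clean thermal resistance: 1/U(t) = 1/U_clean + Rf(t).
    t_days = np.linspace(0.0, 300.0, 7)
    Rf = 5e-4 * (1.0 - np.exp(-t_days / 100.0))    # m2.K/W (illustrative)
    U_clean = 500.0                                # W/(m2.K) (illustrative)
    U = 1.0 / (1.0 / U_clean + Rf)

    area, lmtd = 2000.0, 60.0                      # m2, K (illustrative)
    duty = U * area * lmtd / 1e6                   # MW recovered by the exchanger
    extra_furnace_duty = duty[0] - duty            # MW the furnace must now supply instead
    print(np.round(extra_furnace_duty, 2))

    The cleaning scheduling and retrofit decisions in the thesis trade this growing energy penalty against cleaning and capital costs.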

    Advanced decision support through real-time optimization in the process industry

    In the process industry, an increase in the efficiency of production plants can be obtained either by replacing old processes or equipment with more modern and efficient ones, or by operating existing facilities more efficiently rather than making large investments with uncertain payback times. Focusing on this second line of action, decision making today is conceptually more complex than in the past, owing to the rapid recent growth of technology and to communication systems that have generated a large number of alternatives to choose from. Moreover, given the structural complexity of current problems, an incorrect or suboptimal decision often results in increased costs along the production chain. Despite this, the use of decision support systems (DSS) remains atypical in the process industries because of the effort required to develop and maintain mathematical models, the challenge of complex mathematical formulations, demanding computational requirements, and/or difficult integration with the existing control or planning infrastructure. This thesis contributes to reducing these barriers by developing efficient formulations for real-time optimization (RTO) in an industrial plant. In particular, it seeks to improve the operation of three interconnected sections of a viscose fibre production plant: an evaporation network, a cooling system, and a heat recovery network.

    Integration of design and control for large-scale applications: a back-off approach

    Design and control are two distinct aspects of a process that are inherently related, though they are often treated independently. Performing design and control sequentially may lead to poor control performance or to overly conservative and thus expensive designs. Unsatisfactory designs stem from neglecting how choices made at the process design stage affect the process dynamics. Integration of design and control introduces the opportunity to establish a transparent link between steady-state economics and dynamic performance at the early stages of process design, enabling the identification of reliable and optimal designs while ensuring feasible operation of the process under internal and external disruptions. The dynamic nature of the current global market drives industries to push their manufacturing strategies to the limit to achieve sustainable and optimal operation. Hence, the integration of design and control plays a crucial role in constructing a sustainable process, since it increases the short- and long-term profits of industrial processes. Simultaneous process design and control often results in challenging, computationally intensive, and complex problems, which can be formulated conceptually as dynamic optimization problems. The size and complexity of the conceptual integrated problem limit the potential solution strategies that could be implemented on large-scale industrial systems. Thus far, implementing integrated design and control methodologies on large-scale applications remains challenging and an open question. The back-off approach is one of the proposed methodologies; it relies on steady-state economics to initiate the search for an optimal and dynamically feasible process design. The idea of the surrogate model is combined with the back-off approach in the current research as the key technique in proposing a practical and systematic method for the integration of design and control for large-scale applications. A back-off approach featuring power series expansions (PSEs) is developed and extended to achieve multiple goals. The proposed back-off method searches for the optimal design and control parameters by solving a set of optimization problems using PSE functions. The idea is to search for the optimal direction in the optimization variables by solving a series of bounded PSE-based optimization problems. The approach is a sequential approximate optimization method in which the system is evaluated around the worst-case variability expected in the process outputs. Hence, using PSE functions instead of the actual nonlinear dynamic process model at each iteration step reduces the computational effort. The method mostly traces the closest feasible and near-optimal solution to the initial steady-state condition considering the worst-case scenario. The term near-optimal refers to potential deviations from the original local optimum due to the approximation techniques considered in this work. A trust-region method has been developed in this research to tackle the simultaneous design and control of large-scale processes under uncertainty. In the initial version of the back-off approach proposed in this research, the search space region in the PSE-based optimization problem was specified a priori.
    Selecting a constant search space for the PSE functions may undermine the convergence of the methodology, since the predictions of the PSEs depend strongly on the nominal conditions used to develop the corresponding PSE functions. Thus, an adaptive search space for the individual PSE-based optimization problems at each iteration is proposed. The concept is designed to certify the adequacy of the PSE functions at each iteration and to adapt the search space of the optimization as the algorithm proceeds. Metrics for estimating the residuals, such as the mean squared error (MSE), are employed to quantify the accuracy of the PSE approximations. The search space regions identified by this method specify the bounds on the decision variables for the PSE-based optimization problems. Finding a proper search region is a challenging task since the nonlinearity of the system may vary significantly across nominal conditions. The procedure moves along a descent direction and, at the convergence point, can be shown to satisfy the first-order KKT conditions. The proposed methodology has been tested on case studies with different features. Initially, an existing wastewater treatment plant is considered as a medium-scale case study in the early stages of the development of the methodology. The wastewater treatment plant is also used to investigate the potential benefits and capabilities of a stochastic version of the back-off methodology. Furthermore, the results of the proposed methodology are compared to the formal integration approach in a dynamic programming framework for the medium-scale case study. The Tennessee Eastman (TE) process is selected as a large-scale case study to explore the potential of the proposed method. The results of the proposed trust-region methodology are compared to previously reported results in the literature for this plant. The results indicate that the proposed methodology leads to more economically attractive and reliable designs while maintaining the dynamic operability of the system in the presence of disturbances and uncertainty. Therefore, the proposed methodology shows significant promise in locating dynamically feasible and near-optimal designs and operating conditions, making it attractive for the simultaneous design and control of large-scale and highly nonlinear plants under uncertainty.
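
    A toy sketch of the bounded PSE-based step (a first-order expansion built by finite differences; illustrative only, since the thesis algorithm also handles worst-case variability and constraints): the surrogate is minimized inside a box around the nominal point, and the box is shrunk whenever the surrogate error grows.

    import numpy as np

    def pse_first_order(f, x_nom, h=1e-4):
        """First-order power-series expansion of f around x_nom via finite differences."""
        f0 = f(x_nom)
        grad = np.array([(f(x_nom + h * e) - f0) / h for e in np.eye(len(x_nom))])
        return (lambda x: f0 + grad @ (x - x_nom)), grad

    def bounded_pse_step(f, x_nom, radius):
        """Minimize the linear surrogate over a box of half-width `radius`:
        the minimizer sits at the corner opposite the gradient sign."""
        surrogate, grad = pse_first_order(f, x_nom)
        x_new = x_nom - radius * np.sign(grad)
        mse = (f(x_new) - surrogate(x_new)) ** 2   # surrogate error used to adapt the box
        return x_new, mse

    # Example with a hypothetical cost; shrink `radius` when the error exceeds a tolerance.
    cost = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
    x, r = np.zeros(2), 0.2
    for _ in range(10):
        x_try, mse = bounded_pse_step(cost, x, r)
        if mse > 1e-3:
            r *= 0.5      # surrogate not trusted: shrink the search space
        else:
            x = x_try     # accept the step and rebuild the PSE at the new nominal point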

    Model-Based Closed-Loop Glucose Control in Critical Illness

    Stress hyperglycemia is a common complication in critically ill patients and is associated with increased mortality and morbidity. Tight glucose control (TGC) has shown promise in reducing mean glucose levels in critically ill patients and may mitigate the harmful repercussions of stress hyperglycemia. Despite the promise of TGC, care must be taken to avoid hypoglycemia, which has been implicated in the failure of some previous clinical attempts at TGC using intensive insulin therapies. In fact, a single hypoglycemic event has been shown to result in worsened patient outcomes. The nature of tight glucose regulation lends itself to automatic monitoring and control, thereby reducing the burden on clinical staff. A blood glucose target range of 110-130 mg/dL has been identified from the High-Density Intensive Care (HIDENIC) database at the University of Pittsburgh Medical Center (UPMC). A control framework comprising a zone model predictive controller (zMPC) with moving horizon estimation (MHE) is proposed to maintain euglycemia in critically ill patients. Using continuous glucose monitoring (CGM), the proposed control scheme calculates optimized insulin and glucose infusions to maintain blood glucose concentrations within the target zone. Results from an observational study employing continuous glucose monitors at UPMC are used to reconstruct blood glucose from noisy CGM data, identify a model of CGM error in critically ill patients, and develop an in silico virtual patient cohort. The virtual patient cohort recapitulates expected physiologic trends with respect to insulin sensitivity and glycemic variability. Furthermore, a mechanism utilizing proportional-integral-derivative (PID) control is introduced to modulate basal pancreatic insulin secretion rates in virtual patients. The result is virtual patients who behave realistically in simulated oral glucose tolerance tests and insulin tolerance tests and match clinically observed responses. Finally, in silico trials are used to simulate clinical conditions and test the developed control system under realistic conditions. Under normal conditions, the control system is able to tightly control glucose concentrations within the target zone while avoiding hypoglycemia. To safely counteract the effect of faulty CGMs, a system to detect sensor error and request CGM recalibration is introduced. Simulated in silico tests of this system result in accurate detection of excessive error, leading to higher quality control and reduced hypoglycemia.
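
    A minimal sketch of a zone-style stage cost of the kind a zMPC might use (the 110-130 mg/dL zone is from the abstract; the quadratic form and asymmetric weights are illustrative assumptions): predictions inside the zone incur no penalty, while excursions below the zone are weighted more heavily than excursions above it.

    import numpy as np

    def zone_stage_cost(glucose_pred, lo=110.0, hi=130.0, w_low=10.0, w_high=1.0):
        """Penalty on predicted glucose (mg/dL): zero inside [lo, hi], quadratic outside,
        with hypoglycemic excursions weighted more heavily (illustrative weights)."""
        g = np.asarray(glucose_pred, dtype=float)
        below = np.maximum(lo - g, 0.0)
        above = np.maximum(g - hi, 0.0)
        return float(w_low * np.sum(below ** 2) + w_high * np.sum(above ** 2))

    # Example: a trajectory that dips below the zone is penalized far more than one above it.
    print(zone_stage_cost([118, 125, 104]), zone_stage_cost([118, 125, 142]))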

    Integration of Process Design, Scheduling, and Control Via Model Based Multiparametric Programming

    The conventional approach of assessing multiscale operational activities sequentially often leads to suboptimal solutions and even interruptions in the manufacturing process, due to the inherent differences in the objectives of the individual constituent problems. In this work, integration of the traditionally isolated process design, scheduling, and control problems is investigated by introducing a multiparametric programming-based framework in which all decision layers are based on a single high-fidelity model. The overall problem is dissected into two constituent parts, namely (i) the design and control problem and (ii) the scheduling and control problem. The proposed framework was first assessed on these constituent subproblems, followed by implementation on the overall problem. The fundamental steps of the framework consist of (i) developing design-dependent offline control and scheduling strategies, and (ii) exact implementation of these offline rolling horizon strategies in a mixed-integer dynamic optimization problem for the optimal design. The design dependence of the offline operational strategies allows the integrated problem to consider the design, scheduling, and control problems simultaneously. The proposed framework is showcased on (i) a binary distillation column for the separation of toluene and benzene, (ii) a system of two continuous stirred tank reactors, (iii) a small residential heat and power network, and (iv) two batch reactor systems. Furthermore, a novel algorithm for large-scale multiparametric programming problems is proposed to solve the classes of problems frequently encountered as a result of the integration of rolling horizon strategies.
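
    The sketch below illustrates the generic online side of multiparametric (explicit) MPC, the building block behind such offline strategies: the optimization is solved offline into critical regions with affine control laws, so the online step reduces to a region lookup (the data below are placeholders, not the thesis solution).

    import numpy as np

    # Offline mp-MPC output: a list of critical regions {x : A x <= b},
    # each with an affine law u = K x + k (placeholder numbers).
    regions = [
        (np.array([[1.0], [-1.0]]), np.array([0.5, 0.5]),
         np.array([[-2.0]]), np.array([0.0])),
        (np.array([[1.0], [-1.0]]), np.array([2.0, -0.5]),
         np.array([[0.0]]), np.array([-1.0])),
    ]

    def explicit_mpc(x, regions, tol=1e-9):
        """Evaluate the piecewise-affine explicit control law at state/parameter x."""
        for A, b, K, k in regions:
            if np.all(A @ x <= b + tol):
                return K @ x + k
        raise ValueError("x lies outside the explored parameter space")

    print(explicit_mpc(np.array([0.2]), regions))   # first region applies: u = -0.4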

    Implementation and performance assessment of a real-time optimization system on a virtual fluidized-bed catalytic-cracking plant

    This thesis develops and evaluates the implementation of real-time optimization (RTO) in a virtual fluidized-bed catalytic-cracking unit (FCCU) plant, taking into account each RTO stage (noise elimination, steady-state detection, data validation, parameter estimation, and optimization). The dynamic data for this analysis were obtained from an FCCU virtual plant based on a dynamic deterministic model developed in Matlab®. The model output data were contaminated with Gaussian noise and gross errors to simulate measurements from a real plant. For denoising, steady-state detection, data reconciliation, parameter estimation, and optimization, different strategies and algorithms were studied and assessed, while a decentralized PID scheme was proposed for the control system. Finally, the most appropriate strategies for the case study were implemented and their combined performance was evaluated.
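
    As an example of the steady-state detection stage, the sketch below uses a simple variance-ratio test on a moving window (one common approach; the thesis compares several strategies, and this is not necessarily the one it selects): for pure noise the two variance estimates agree, while a drifting signal inflates the variance about the mean.

    import numpy as np

    def is_steady(window, r_crit=2.0):
        """Variance-ratio steady-state test: compare the variance about the window mean
        with the variance estimated from successive differences (about sigma^2 for white noise)."""
        w = np.asarray(window, dtype=float)
        s2_mean = np.var(w, ddof=1)
        s2_diff = np.mean(np.diff(w) ** 2) / 2.0
        return (s2_mean / s2_diff) < r_crit

    rng = np.random.default_rng(1)
    flat = 50.0 + rng.normal(0.0, 0.5, 60)                         # noisy but steady signal
    ramp = 50.0 + 0.1 * np.arange(60) + rng.normal(0.0, 0.5, 60)   # drifting signal
    print(is_steady(flat), is_steady(ramp))                        # expected: True False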