103 research outputs found

    Advances in optimal design and retrofit of chemical processes with uncertain parameters - Applications in design of heat exchanger networks

    There is widespread consensus that the omnipresent climate crisis demands that humanity rapidly reduce global greenhouse gas (GHG) emissions. To allow for such a rapid reduction, the industrial sector, as a main contributor to GHG emissions, needs to take immediate action. To mitigate GHG emissions from the industrial sector, increasing energy efficiency as well as fuel and feedstock switching, such as increased use of biomass and (green) electricity, are the options with the greatest impact in the short and medium term. Such mitigation options usually create a need to design new, or redesign existing, processes such as plant energy systems. The design and operation of industrial plants and processes are usually subject to uncertainty, especially in the process industry. This uncertainty can have different origins; e.g., process parameters such as flow rates or transfer coefficients may vary (uncontrolled) or may not be known exactly. This thesis proposes theoretical and methodological developments for designing and/or redesigning chemical processes that are subject to uncertain operating conditions, with a special focus on heat recovery systems such as heat exchanger networks. In this context, the thesis contributes theoretical developments in the field of deterministic flexibility analysis. More specifically, new approaches are presented to enhance the modelling of the expected uncertainty space, i.e., the space in which the uncertain parameters are expected to vary. Additionally, an approach is presented for performing (deterministic) flexibility analysis in situations where uncertain long-term developments, such as a switch in feedstocks, interfere with operational short-term disturbances. In this context, the thesis presents an industrial case study to i) show the need for such a theoretical development, and ii) illustrate its applicability. Aside from advances in deterministic flexibility analysis, this thesis also explores the possibility of combining valuable designer input (e.g., non-quantifiable knowledge) with the efficiency of mathematical programming when addressing a design under uncertainty problem. More specifically, this thesis proposes to divide the design under uncertainty problem into a design synthesis step, which allows direct input from the designer, and several subsequent steps which are summarized in a framework presented in this thesis. The proposed framework combines different approaches from the literature with the theoretical developments presented in this thesis, and aims to identify the optimal design specifications which also guarantee that the final design can operate at all expected operating conditions. The design synthesis step and the framework are decoupled from each other, which allows the approach to be applied to large and complex industrial case studies with acceptable computational effort. Usage of the proposed framework is illustrated by means of an industrial case study which presents a design under uncertainty problem.
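    For readers unfamiliar with deterministic flexibility analysis, the standard formulation this line of work builds on is the flexibility index of Swaney and Grossmann; a common textbook statement (notation assumed here, not taken from the thesis) is:

```latex
% Flexibility index F for a design d: the largest scaled deviation \delta
% such that, for every realization of the uncertain parameters \theta in
% T(\delta), some recourse (control) action z keeps all constraints f_j feasible.
F(d) = \max \delta
\quad \text{s.t.} \quad
\max_{\theta \in T(\delta)} \; \min_{z} \; \max_{j \in J} f_j(d, z, \theta) \le 0,
\qquad
T(\delta) = \left\{ \theta : \theta^N - \delta \, \Delta\theta^{-} \le \theta \le \theta^N + \delta \, \Delta\theta^{+} \right\}
```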

    Flexibility analysis using boundary functions for considering dependencies in uncertain parameters

    In this work, we present a novel approach for considering dependencies (often called correlations) in the uncertain parameters when performing (deterministic) flexibility analysis. Our proposed approach utilizes (linear) boundary functions to approximate the observed or expected distribution of operating points (i.e., the uncertainty space), and can easily be integrated in the flexibility index or flexibility test problem. In contrast to the hyperbox uncertainty sets commonly used in deterministic flexibility analysis, uncertainty sets based on boundary functions allow the exclusion of subsets of the hyperbox which limit the flexibility metric but in which no operation is observed or expected. We derive a generic mixed-integer formulation for the flexibility index based on uncertainty sets defined by boundary functions, and suggest an algorithm to identify boundary functions which approximate the uncertainty set with high accuracy. The approach is tested and compared on several examples, including an industrial case study.
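    As a rough illustration of the idea (not the paper's actual formulation), the following sketch contrasts a hyperbox uncertainty set with one trimmed by linear boundary functions; all coefficients and data are hypothetical:

```python
import numpy as np

# Hypothetical 2-D uncertain parameters: theta = (flow rate, inlet temperature).
rng = np.random.default_rng(0)
theta = rng.uniform([10.0, 60.0], [20.0, 90.0], size=(1000, 2))

# Hyperbox set: independent lower/upper bounds on each parameter.
lb, ub = np.array([10.0, 60.0]), np.array([20.0, 90.0])
in_box = np.all((theta >= lb) & (theta <= ub), axis=1)

# Linear boundary functions A @ theta <= b cut off hyperbox corners where no
# operation is observed (e.g., high flow never coincides with high temperature),
# capturing the dependency between the two parameters.
A = np.array([[1.0, 1.0]])  # hypothetical coefficients
b = np.array([100.0])

in_bounded_set = in_box & np.all(theta @ A.T <= b, axis=1)
print(f"{in_bounded_set.sum()} of {in_box.sum()} hyperbox points remain")
```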

    Optimal Design and Operation of Heat Exchanger Network

    Heat exchanger networks (HENs) are the backbone of heat integration due to their role in energy and environmental management. This thesis deals with two issues concerning HENs. The first concerns the design of an economically optimal heat exchanger network (HEN), whereas the second focuses on the optimal operation of a HEN in the presence of uncertainties and disturbances within the network. For the first issue, a pinch-technology-based optimal HEN design is first implemented on a 3-stream heat recovery case study to design a simple HEN; then, a more complex HEN is designed for a coal-fired power plant retrofitted with a CO2 capture unit, with the objective of minimising the energy penalty imposed on the power plant by its integration with the CO2 capture plant. The benchmark in this case study is the stream data from Khalilpour and Abbas (2011). Improvements over their work include: (1) the use of economic data to evaluate achievable trade-offs between energy, capital and utility cost for the determination of the minimum temperature difference; (2) redesign of the HEN based on the new minimum temperature difference; and (3) comparison with the base case design. The results show that the energy burden imposed on the power plant with CO2 capture is significantly reduced through the HEN, maximising utility cost savings. The cost of adding the HEN is recoverable within a short payback period of about 2.8 years. For the second issue, optimal HEN operation is addressed considering a range of uncertainties and disturbances in flow rates and inlet stream temperatures while minimising utility consumption at constant target temperatures, based on a self-optimizing control (SOC) strategy. The new SOC method developed in this thesis is a data-driven method which uses process data collected over time during plant operation to select controlled variables (CVs). This is in contrast to existing SOC strategies, in which CV selection requires the process model to be linearized for nonlinear processes, leading to unaccounted losses due to linearization errors. The new approach selects CVs for which the necessary conditions of optimality (NCO) are directly approximated by the CV through a single regression step. This work was inspired by the regression-based, globally optimal CV selection without model linearization of Ye et al. (2013), and the two-step regression-based data-driven CV selection of Ye et al. (2012), which suffered from poor optimality due to regression errors in the two-step procedure. The advantage of this work is that it does not require the evaluation of derivatives; hence CVs can be evaluated even with commercial simulators such as HYSYS and UNISIM, among others. The effectiveness of the proposed method is demonstrated on the 3-stream HEN case study and on the HEN for the coal-fired power plant with CO2 capture unit. The case studies show that the proposed methodology provides better optimal operation under uncertainties when compared to existing model-based SOC techniques.
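    As a rough sketch of the single-step regression idea (data shapes and names are hypothetical illustrations, not the thesis's implementation), one can fit a measurement combination that directly approximates the NCO from recorded operating data:

```python
import numpy as np

# Hypothetical operating data: each row is one historical operating point.
# Y holds process measurements (e.g., stream temperatures and flows); G holds
# the corresponding NCO values (cost gradient w.r.t. the inputs), assumed to be
# available from offline optimization of the recorded scenarios.
rng = np.random.default_rng(1)
Y = rng.normal(size=(500, 6))                               # 500 points, 6 measurements
G = Y @ rng.normal(size=(6, 2)) + 0.01 * rng.normal(size=(500, 2))

# Single regression step: choose H so that the candidate CVs c = Y @ H
# reproduce the NCO; controlling c to zero then drives the plant towards
# its optimum without linearizing a process model or evaluating derivatives.
H, *_ = np.linalg.lstsq(Y, G, rcond=None)

c = Y @ H
print("RMS deviation between CVs and NCO:", np.linalg.norm(G - c) / np.sqrt(len(Y)))
```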

    Assessing plant design with regards to MPC performance using a novel multi-model prediction method

    Model Predictive Control (MPC) is nowadays ubiquitous in the chemical industry and offers significant advantages over standard feedback controllers. Nevertheless, projects for new plants are still carried out without assessing how key design decisions, e.g., selection of the production route, plant layout and equipment, will affect future MPC performance. The problem addressed in this Thesis is comparing the economic benefits available to different flowsheets through the use of MPC, and thus determining whether certain design choices favour or hinder expected profitability. The Economic MPC Optimisation (EMOP) index is presented to measure how disturbances and restrictions affect the MPC's ability to deliver better control and optimisation. To the author's knowledge, the EMOP index is the first integrated design and control methodology to address the problem of zone-constrained MPC with economic optimisation capabilities (today's standard in the chemical industry). This approach assumes the availability of a set of linear state-space models valid within the desired control zone, which is defined by the upper and lower bounds of each controlled and manipulated variable. Process economics provides the basis for the analysis. The index needs to be minimised in order to find the most profitable steady state within the zone constraints towards which the MPC is expected to direct the process. An analysis of the effects of disturbances on the index illustrates how they may reduce profitability by restricting the ability of an MPC to reach dynamic equilibrium near process constraints, which in turn increases product quality giveaway and costs. Hence the index monetises the required control effort. Since linear models were used to predict the dynamic behaviour of chemical processes, which often exhibit significant nonlinearity, this Thesis also includes a new multi-model prediction method. This new method, called Simultaneous Multi-Linear Prediction (SMLP), provides more accurate output predictions than single linear models, while retaining much of their numerical advantages and relative ease of obtainment. Compared to existing multi-model approaches, the main novelty of the SMLP is that it is built by defining and updating multiple states simultaneously, thus eliminating the need to partition the state-input space into regions and associate a different state update equation with each region. Each state's contribution to the overall output is obtained according to the relative distance between its identification point, i.e., the set of operating conditions at which an approximation of the nonlinear model is obtained, and the current operating point, in addition to a set of parameters obtained through regression analysis. Additionally, the SMLP is built upon data obtained from step response models that can be generated by commercial, black-box dynamic simulators. These state-of-the-art simulators are the industry's standard for designing large-scale plants, the focus of this Thesis. Building an SMLP system yields an approximation of the nonlinear model whose full set of equations is not known to the user. The resulting system can be used for predictive control schemes or integrated process design and control. Applying the SMLP to optimisation problems with linear restrictions results in convex problems that are easy to solve. The issue of model uncertainty was also addressed for the EMOP index and SMLP systems. Due to the impact of uncertainty, the index may be defined as a numeric interval instead of a single number, within which the true value lies. A case study consisting of four alternative designs for a realistically sized crude oil atmospheric distillation plant is provided in order to demonstrate the joint use and applicability of both the EMOP index and the SMLP. In addition, a comparison between the EMOP index and a competing methodology is presented, based on a case study consisting of the activated sludge process of a wastewater treatment plant.
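    A minimal sketch of the distance-weighted multi-model idea behind the SMLP is given below; the local models, identification points and inverse-distance weighting rule are hypothetical stand-ins for the regressed parameters described in the abstract:

```python
import numpy as np

# Hypothetical local linear models identified at three operating points.
# Each model: x_{k+1} = A x_k + B u_k,  y_k = C x_k.
models = [
    {"A": np.array([[0.90]]), "B": np.array([[0.10]]), "C": np.array([[1.0]]), "op": 1.0},
    {"A": np.array([[0.80]]), "B": np.array([[0.20]]), "C": np.array([[1.0]]), "op": 5.0},
    {"A": np.array([[0.70]]), "B": np.array([[0.30]]), "C": np.array([[1.0]]), "op": 9.0},
]

def smlp_step(states, u, operating_point, eps=1e-6):
    """Update all local states simultaneously (no region switching) and blend
    their outputs by inverse distance between the current operating point and
    each model's identification point (a stand-in for the regressed weights)."""
    w = np.array([1.0 / (abs(operating_point - m["op"]) + eps) for m in models])
    w /= w.sum()
    new_states, y = [], 0.0
    for m, x, wi in zip(models, states, w):
        x_next = m["A"] @ x + m["B"] @ u
        new_states.append(x_next)
        y += wi * float(m["C"] @ x_next)
    return new_states, y

states = [np.zeros(1) for _ in models]
for k in range(5):
    states, y = smlp_step(states, np.array([1.0]), operating_point=4.0)
    print(f"k={k} y={y:.3f}")
```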

    Optimization of refinery preheat trains undergoing fouling: control, cleaning scheduling, retrofit and their integration

    Crude refining is one of the most energy-intensive industrial operations. The large amounts of crude processed, various sources of inefficiency and tight profit margins all motivate improving energy recovery. The preheat train, a large heat exchanger network, partially recovers the energy of the distillation products to heat the crude, but it suffers from the deposition of material over time, known as fouling, which deteriorates its performance. This increases the operating cost, fuel consumption and carbon emissions, and may reduce the production rate of the refinery. Fouling mitigation in the preheat train is essential for profitable long-term operation of the refinery: it aims to increase energy savings and to reduce operating costs and carbon emissions. Current alternatives to mitigate fouling are based on heuristic approaches that oversimplify the representation of the phenomena and ignore many important interactions in the system; hence they fail to fully achieve the potential energy savings. Predictive first-principles models and mathematical programming, on the other hand, offer a comprehensive way to mitigate fouling and optimize the performance of preheat trains, overcoming these limitations. In this thesis, a novel modelling and optimization framework for heat exchanger networks under fouling is proposed, based on fundamental principles. The models developed were validated against plant data and other benchmark models, and they can predict with confidence the main effects of the operating variables on the hydraulic and thermal performance of the exchangers and of the network. The optimization of the preheat train, an MINLP problem, aims to minimize the operating cost by: 1) dynamic flow distribution control, 2) cleaning scheduling and 3) network retrofit. The framework developed allows these decisions to be considered individually or simultaneously, and it is demonstrated that an integrated approach exploits the synergies among decision levels and can further reduce the operating cost. An efficient formulation of the model disjunctions and of the time representation is developed for this optimization problem, as well as efficient solution strategies. To handle the combinatorial nature of the problem and its many binary decisions, a reformulation using complementarity constraints is proposed. Various realistic case studies are used to demonstrate the general applicability and benefits of the modelling and optimization framework. This is the first time that first-principles predictive models have been used to optimize several types of decisions simultaneously in industrial-size heat exchanger networks. The optimization framework is then taken further, to an online application in a feedback loop. A multi-loop NMPC approach is designed to optimize the flow distribution and cleaning scheduling of preheat trains over two different time scales. Within this approach, dynamic parameter estimation problems are solved at frequent intervals to update the model parameters and cope with variability and uncertainty, while predictive first-principles models are used to optimize the performance of the network over a future horizon. Applying this multi-loop optimization approach to a case study of a real refinery demonstrates the importance of considering process variability when deciding on optimal fouling mitigation approaches. Uncertainty and variability have been ignored in all previous model-based fouling mitigation strategies, and this novel multi-loop NMPC approach addresses them so that the economic savings are enhanced. In conclusion, the models and optimization algorithms developed in this thesis have the potential to reduce the operating cost and carbon emissions of refining operations by mitigating fouling. They are based on accurate models and deterministic optimization that overcome the limitations of previous applications, such as poor predictability, ignoring variability and dynamics, ignoring interactions in the system, and using inappropriate tools for decision making.
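    For context, first-principles fouling models of the kind referred to here typically balance a temperature-driven deposition term against a shear-driven removal term. The sketch below integrates an Ebert-Panchal-type threshold model; the parameter values are illustrative placeholders, not those of the thesis:

```python
import numpy as np

def fouling_rate(Re, Pr, T_film, tau_wall,
                 alpha=0.1, beta=-0.66, E=48e3, gamma=1e-11):
    """Ebert-Panchal-type threshold model (illustrative parameters):
    dRf/dt = alpha * Re**beta * Pr**(-0.33) * exp(-E / (R * T_film)) - gamma * tau_wall
    Deposition decreases with Reynolds number and increases with film
    temperature; wall shear stress removes deposit. Rate in m^2 K / (W h)."""
    R_gas = 8.314  # J/(mol K)
    return (alpha * Re**beta * Pr**(-0.33) * np.exp(-E / (R_gas * T_film))
            - gamma * tau_wall)

# Integrate the fouling resistance over one year of operation (hourly steps).
Rf, dt = 0.0, 1.0  # m^2 K / W, hours
for hour in range(24 * 365):
    Rf += fouling_rate(Re=2.0e4, Pr=7.0, T_film=520.0, tau_wall=5.0) * dt
print(f"fouling resistance after 1 year: {Rf:.2e} m^2 K / W")
```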

    Integration of design and NMPC-based control of processes under uncertainty

    The implementation of a Nonlinear Model Predictive Control (NMPC) scheme for the integration of design and control demands the solution of a complex optimization formulation, in which the solution of the design problem depends on the decisions of a lower-tier problem for the NMPC. This formulation with two decision levels is known as a bilevel optimization problem. The solution of a bilevel problem using traditional linear programming (LP), nonlinear programming (NLP) or mixed-integer nonlinear programming (MINLP) solvers is very difficult. Moreover, the bilevel problem becomes particularly complex if uncertainties or discrete decisions are considered. Therefore, alternative methodologies are necessary for the solution of the bilevel problem for the integration of design and NMPC-based control. The lack of studies and practical methodologies on the integration of design and NMPC-based control motivates the development of novel methodologies to address this complex formulation. A systematic methodology is proposed in this research to address the integration of design and control involving NMPC. This method is based on determining the amount of back-off necessary to move the design and control variables from an optimal steady-state design to a new dynamically feasible and economic operating point. The method reduces the complexity of the bilevel formulation by approximating the problem in terms of power series expansion (PSE) functions, which leads to a single-level problem formulation. These functions are obtained around the point that shows the worst-case variability in the process dynamics. The approximated PSE-based optimization model is easily solved with traditional NLP solvers. The method moves the decision variables for design and control in a systematic fashion that allows the worst-case scenario to be accommodated at a dynamically feasible operating point. Since approximation techniques are implemented in this methodology, the feasible solutions may deviate from a local optimum. A transformation methodology has also been implemented to restate the bilevel problem as a single-level mathematical program with complementarity constraints (MPCC). This single-level MPCC is obtained by restating the optimization problem for the NMPC in terms of its conditions for optimality. The single-level problem is still difficult to solve; however, conventional NLP or MINLP solvers can be used to search for a solution to the MPCC problem, and their implementation provides optimality guarantees for the MPCC solution. Nevertheless, an optimal solution of the MPCC-based problem may not be an optimal solution of the original bilevel problem. The introduction of structural decisions, such as the arrangement of equipment or the selection of the number of process units, requires the solution of formulations involving discrete decisions. This PhD thesis proposes the implementation of a discrete-steepest descent algorithm (D-SDA) for the integration of design and NMPC-based control under uncertainty with structural decisions that follow a naturally ordered sequence, i.e., structural decisions that follow the order of the natural numbers. In this approach, the corresponding mixed-integer bilevel problem (MIBLP) is first transformed into a single-level mixed-integer nonlinear program (MINLP). Then, the MINLP is decomposed into an integer master problem and a set of continuous sub-problems. The set of problems is solved systematically, enabling exploration of the neighborhoods defined by subsets of the integer variables. The search direction is determined by the neighbor that produces the largest improvement in the objective function. As this method does not require the relaxation of integer variables, it can determine local solutions that may not be efficiently identified using conventional MINLP solvers. To compare the performance of the proposed discrete-steepest descent approach, an alternative methodology based on the distributed stream-tray optimization (DSTO) method is presented. In that methodology, the integer variables are allowed to be continuous variables in a differentiable distribution function (DDF). The DDFs are derived from the discretization of Gaussian distributions. This allows a continuous formulation (i.e., an NLP) to be solved for the integration of design and NMPC-based control under uncertainty with a naturally ordered set of structural decisions. Most applications for the integration of design and control implement direct transcription approaches for the solution of the optimization formulation, i.e., the optimization problem is fully discretized. In chemical engineering, the most widely used discretization strategy is orthogonal collocation on finite elements (OCFE). OCFE offers adequate accuracy and numerical stability if the number of collocation points and the number of finite elements are properly selected. For the discretization of integrated design and control formulations, the number of finite elements is commonly selected based on a priori simulations or process heuristics. In this PhD study, a novel methodology for the selection and refinement of the number of finite elements in the integrated design and control framework is presented. The methodology implements two criteria for the selection of finite elements: the estimation of the collocation error, and the profile of the Hamiltonian function. For autonomous systems, the Hamiltonian function is continuous and constant over time; it shows a nonconstant profile, however, when the discretization mesh is too coarse. The methodology systematically adds or removes finite elements depending on the magnitude of the estimated collocation error and the fluctuations in the Hamiltonian function profile. The proposed methodologies have been tested on case studies with different features. An existing wastewater treatment plant is considered to illustrate the implementation of the back-off strategy. A reaction system with two continuous stirred-tank reactors (CSTRs) is considered to illustrate the implementation of the MPCC-based formulation for design and control. The D-SDA approach is tested on the integration of design, NMPC-based control, and the superstructure of a binary distillation column. Lastly, a reaction system illustrates the effect of the selection and refinement of the discretization mesh in the integrated design and control framework. The results show that the implementation of NMPC controllers leads to more economically attractive process designs with improved control performance compared to applications with classical decentralized PID or linear MPC controllers. The discrete-steepest descent approach made it possible to skip suboptimal solution regions and led to more economical designs with better control performance than the solutions obtained with the benchmark methodology using DDFs. Meanwhile, the refinement strategy for the discretization of integrated design and control formulations demonstrated that attractive solutions with improved control performance can be obtained with a reduced number of finite elements.
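    A stylized view of the back-off idea on a hypothetical one-dimensional design problem (the thesis's PSE-based machinery is not reproduced): the design is repeatedly re-optimized with its active constraint backed off by the worst-case dynamic deviation, until the two are consistent.

```python
from scipy.optimize import minimize_scalar

# Hypothetical steady-state economics: cost falls as the design variable d
# approaches a constraint d <= d_max that is active at the optimum.
d_max = 10.0
cost = lambda d: (12.0 - d) ** 2

def worst_case_deviation(d):
    """Stand-in for simulating the closed loop under the worst disturbance:
    returns how far the constrained variable can overshoot its steady value."""
    return 0.05 * d  # hypothetical 5% dynamic overshoot

backoff = 0.0
for it in range(20):
    # Re-optimize the design with the constraint backed off from its limit.
    res = minimize_scalar(cost, bounds=(0.0, d_max - backoff), method="bounded")
    d_opt = res.x
    new_backoff = worst_case_deviation(d_opt)
    if abs(new_backoff - backoff) < 1e-6:
        break
    backoff = new_backoff

print(f"design {d_opt:.3f} with back-off {backoff:.3f} stays dynamically feasible")
```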

    Integration of Process Design, Scheduling, and Control Via Model Based Multiparametric Programming

    The conventional approach of assessing multiscale operational activities sequentially often leads to suboptimal solutions, and even to interruptions in the manufacturing process, due to the inherent differences in the objectives of the individual constituent problems. In this work, the integration of the traditionally isolated process design, scheduling, and control problems is investigated by introducing a multiparametric programming-based framework, where all decision layers are based on a single high-fidelity model. The overall problem is dissected into two constituent parts, namely (i) the design and control problem, and (ii) the scheduling and control problem. The proposed framework was first assessed on these constituent subproblems, followed by implementation on the overall problem. The fundamental steps of the framework consist of (i) developing design-dependent offline control and scheduling strategies, and (ii) the exact implementation of these offline rolling horizon strategies in a mixed-integer dynamic optimization problem for the optimal design. The design dependence of the offline operational strategies allows the integrated problem to consider the design, scheduling, and control problems simultaneously. The proposed framework is showcased on (i) a binary distillation column for the separation of toluene and benzene, (ii) a system of two continuous stirred tank reactors, (iii) a small residential heat and power network, and (iv) two batch reactor systems. Furthermore, a novel algorithm for large-scale multiparametric programming problems is proposed to solve the classes of problems frequently encountered as a result of the integration of rolling horizon strategies.
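    The offline/online split at the heart of multiparametric programming can be illustrated with the classical explicit-MPC result that the optimizer is piecewise affine in the parameters; the critical regions and gains below are hypothetical placeholders for what an mp-QP solver would return offline:

```python
import numpy as np

# Hypothetical output of an offline mp-QP solve: critical regions
# {theta : H @ theta <= h}, each with an affine control law u = K @ theta + k.
regions = [
    {"H": np.array([[ 1.0, 0.0], [0.0,  1.0]]), "h": np.array([0.0, 0.0]),
     "K": np.array([[-0.5, -0.2]]), "k": np.array([0.1])},
    {"H": np.array([[-1.0, 0.0], [0.0, -1.0]]), "h": np.array([0.0, 0.0]),
     "K": np.array([[-0.8, -0.1]]), "k": np.array([0.0])},
]

def explicit_mpc(theta):
    """Online step: locate the critical region containing theta and evaluate
    its affine law; no optimization problem is solved online."""
    for r in regions:
        if np.all(r["H"] @ theta <= r["h"] + 1e-9):
            return r["K"] @ theta + r["k"]
    raise ValueError("theta outside the explored parameter space")

print(explicit_mpc(np.array([-0.3, -0.7])))
```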

    Modelling and predictive control techniques for building heating systems

    Model predictive control (MPC) has often been referred to in the literature as a potential method for more efficient control of building heating systems. Though a significant performance improvement can be achieved with an MPC strategy, the complexity introduced into the commissioning of the system is often prohibitive. Models are required which can capture the thermodynamic properties of the building with sufficient accuracy for meaningful predictions to be made. Furthermore, a large number of tuning weights may need to be determined to achieve a desired performance. For MPC to become a practicable alternative, these issues must be addressed. Acknowledging the impact of the external environment as well as of occupant interactions on the thermal behaviour of the building, techniques have been developed in this work for deriving building models from data in which large, unmeasured disturbances are present. A spatio-temporal filtering process was introduced to determine estimates of the disturbances from measured data; these estimates were then incorporated with metaheuristic search techniques to derive high-order simulation models capable of replicating the thermal dynamics of a building. While a high-order simulation model allowed control strategies to be analysed and compared, low-order models were required for use within the MPC strategy itself. The disturbance estimation techniques were adapted for use with system-identification methods to derive such models. MPC formulations were then derived to enable a more straightforward commissioning process, and implemented in a validated simulation platform. A prioritised-objective strategy was developed which allows the tuning parameters typically associated with an MPC cost function to be omitted from the formulation, by separating the conflicting requirements of comfort satisfaction and energy reduction within a lexicographic framework. The improved ability of the formulation to be set up and reconfigured under faulted conditions was demonstrated.
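    The prioritised-objective (lexicographic) strategy can be sketched as two solves in strict order: first minimize comfort violation, then minimize energy while holding the achieved violation level. The single-zone heat balance and its coefficients below are hypothetical:

```python
import cvxpy as cp
import numpy as np

N = 24                       # horizon (hours)
a, b = 0.9, 0.5              # hypothetical zone dynamics: T+ = a*T + b*u + d
T0, T_min = 18.0, 20.0       # initial temperature and comfort lower bound
d = -0.5 * np.ones(N)        # hypothetical heat losses

u = cp.Variable(N, nonneg=True)   # heating power
T = cp.Variable(N + 1)            # zone temperature
s = cp.Variable(N, nonneg=True)   # comfort violation slack

cons = [T[0] == T0]
cons += [T[k + 1] == a * T[k] + b * u[k] + d[k] for k in range(N)]
cons += [T[k + 1] >= T_min - s[k] for k in range(N)]

# Priority 1: minimize comfort violation (no tuning weights needed).
p1 = cp.Problem(cp.Minimize(cp.sum(s)), cons)
p1.solve()

# Priority 2: minimize energy without worsening the achieved comfort level.
p2 = cp.Problem(cp.Minimize(cp.sum(u)), cons + [cp.sum(s) <= p1.value + 1e-6])
p2.solve()
print(f"violation {p1.value:.3f} K h, energy {p2.value:.2f} kWh")
```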

    Simultaneous Design and Control of Chemical Plants: A Robust Modelling Approach

    This research work presents a new methodology for the simultaneous design and control of chemical processes. One of the most computationally demanding tasks in the integration of process control and process design is the search for worst-case scenarios that result in maximal output variability or in process variables reaching their constraint limits. The key idea in the current work is to find these worst scenarios using tools borrowed from robust control theory. To apply these tools, the closed-loop dynamic behaviour of the process to be designed is represented as a robust model. Accordingly, the process is mathematically described by a nominal linear model with uncertain model parameters that vary within identified ranges of values. These robust models, obtained from closed-loop identification, are used in the present method to test the robust stability of the process and to estimate bounds on the worst deviations in process variables in response to external disturbances. The first approach proposed to integrate process design and process control made use of robust tools based on the Quadratic Lyapunov Function (QLF). These tests require the identification of an uncertain state-space model that is used to evaluate the asymptotic stability of the process and to estimate a bound (γ) on the root-mean-square (RMS) gain of the model output variability. This bound is used to assess the worst-case process variability and to evaluate bounds on the deviations in process variables that are to be kept within constraints. These robustness tests are then embedded within an optimization problem that searches for the optimal design and controller tuning parameters minimizing a user-specified cost function. Since the value of γ is a bound on one standard deviation of the model output variability, larger multiples of this value, e.g., 2γ or 3γ, were used to provide more realistic bounds on the worst deviations in process variables. This (γ-based) methodology was applied to the simultaneous design and control of a mixing tank process. Although this approach resulted in conservative designs, it posed a nonlinear constrained optimization problem that required less computational effort than a Dynamic Programming approach, which had been the main method previously reported in the literature. While the γ-based robust performance criterion provides an RMS measure of the variability, it does not provide information on the worst possible deviation. In order to search for the worst deviation, the present work proposed a new robust variability measure based on Structured Singular Value (SSV) analysis, also known as μ-analysis. The calculation of this measure also returns the critical time-dependent disturbance profile that generates the maximum model output error. This robust measure is based on robust finite impulse response (FIR) closed-loop models that are directly identified from simulations of the full nonlinear dynamic model of the process. As in the γ-based approach, the simultaneous design and control of the mixing tank problem was considered using this new μ-based methodology. Comparisons between the γ-based and μ-based strategies were discussed, and the computational time required to assess the worst-case process variability by the proposed μ-based method was compared to that required by a Dynamic Programming approach. Similarly, the expected computational burden required by this new μ-based robust variability measure to estimate the worst-case variability for large-scale processes was assessed. The results show that this new robust variability tool is computationally efficient and can potentially be implemented to achieve the simultaneous design and control of chemical plants. Finally, the Structured Singular Value-based (μ-based) methodology was used to perform the simultaneous design and control of the Tennessee Eastman (TE) process. Although this chemical process has been widely studied in the Process Systems Engineering (PSE) area, the integration of design and control of this process had not previously been studied. The problem is challenging since the process is open-loop unstable and exhibits highly nonlinear dynamic behaviour. To assess the contributions of different sections of the TE plant to the overall costs, two optimization scenarios were considered: the first considered only the reactor section of the TE process, whereas the second analyzed the complete TE plant. To study the interactions between design and control in the reactor section of the plant, the effect of different parameters on the resulting design and control schemes was analyzed. For this scenario, an alternative calculation of the variability was considered, whereby the variability was obtained from numerical simulations of the worst disturbance instead of using the analytical μ-based bound. Comparisons between the analytical-bound-based strategy and the simulation-based strategy were discussed. Additionally, the computational effort required by the present solution strategy was compared with that required by a Dynamic Programming-based approach. Subsequently, the topic of parameter uncertainty was investigated; specifically, uncertainty in the reaction rate coefficient was considered in the analysis of the TE problem. Accordingly, the optimization problem was expanded to account for a set of different values of the reaction rate constant. Due to the complexity associated with the second scenario, the effect of uncertainty in the reaction constant was studied only for the first scenario, corresponding to the optimization of the reactor section. The results obtained from this research project show that Dynamic Programming requires a CPU time almost two orders of magnitude larger than that required by the methodology proposed here. Likewise, the consideration of uncertainty in a physical parameter, such as the reaction rate constant in the Tennessee Eastman problem, was shown to dramatically increase the computational load compared to the case without process parametric uncertainty. In general, the integration of design and control resulted in a plant that is more economically attractive than one specified by solely optimizing the controllers while leaving the design of the different units fixed. This result is particularly relevant for this research work since it justifies the need for simultaneous process design and control of chemical processes. Although the application of the robust tools resulted in conservative designs, the method has been shown to be an efficient computational tool for the simultaneous design and control of chemical plants.
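    The γ bound plays the role of an RMS (L2) gain. One plain way to approximate such a gain for a nominal linear model is to grid the frequency response and take the peak magnitude, as in the sketch below (nominal model only; the uncertain-model and μ-analysis machinery of the thesis is not reproduced):

```python
import numpy as np
from scipy import signal

# Hypothetical nominal closed-loop model from disturbance to output.
sys = signal.StateSpace([[-1.0, 0.5], [0.0, -2.0]],
                        [[1.0], [1.0]],
                        [[1.0, 0.0]],
                        [[0.0]])

# RMS gain (H-infinity norm) approximated as the peak gain over a frequency
# grid; for MIMO systems take the maximum singular value at each frequency.
w = np.logspace(-2, 2, 2000)
_, H = signal.freqresp(sys, w)
gamma = np.max(np.abs(H))
print(f"estimated RMS gain bound gamma ~= {gamma:.3f}")

# Worst output deviations are then bounded using multiples of gamma,
# e.g. 2*gamma or 3*gamma, as in the gamma-based approach described above.
```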