
    Iterative learning control of crystallisation systems

    Under increasing pressure to reduce time to market, lower production costs, and improve operational flexibility, the batch process industries are moving towards the production of high value-added products such as specialty chemicals, pharmaceuticals, agricultural products, and biotechnology-enabled products. Better design, consistent operation and improved control of batch chemical processes depend on the sensing and computational capabilities provided by modern sensors, computers, algorithms, and software. In addition, there is a growing demand for modelling and control tools based on process operating data. This study focuses on developing operating-data-based iterative learning control (ILC) strategies for batch processes, and more specifically for batch crystallisation systems. The research first reviews existing control strategies, fundamentals, mechanisms, and the various process analytical technology (PAT) tools used in batch crystallisation control. Building on this background, an operating-data-driven ILC approach was developed to improve product quality from batch to batch. The concept of ILC is to exploit the repetitive nature of batch processes to automate recipe updating using process knowledge obtained from previous runs. The methodology presented here is based on a linear time-varying (LTV) perturbation model in an ILC framework and provides convergent batch-to-batch improvement of the process performance indicator. As a novel contribution, a hierarchical ILC (HILC) scheme was proposed for the systematic design of supersaturation control (SSC) for a seeded batch cooling crystalliser. This model-free control approach is implemented in a hierarchical structure, with a data-driven supersaturation controller at the upper level and a simple temperature controller at the lower level. To place the work alongside other data-based control approaches for crystallisation, the study also revisited the existing direct nucleation control (DNC) approach, carrying out a detailed investigation of the possible DNC structures and comparing the results with those of a first-principles model-based optimisation for the first time. The DNC results outperformed the model-based optimisation approach and provide a guideline for selecting the preferable DNC structure. Batch chemical processes are distributed as well as nonlinear in nature, need to be operated over a wide range of operating conditions, and often run near the boundary of the admissible region. Because linear lumped model predictive controllers (MPCs) are often subject to severe performance limitations, there is a growing demand for simple data-driven nonlinear control strategies for batch crystallisers that account for these spatio-temporal aspects. In this study, an operating-data-driven polynomial chaos expansion (PCE) based nonlinear surrogate modelling and optimisation strategy was therefore presented for batch crystallisation processes. Model validation and optimisation results confirmed the promise of this approach for nonlinear control. The proposed data-based methodologies were evaluated through simulation case studies, laboratory experiments and industrial pilot plant experiments.
For the simulation case studies, detailed mathematical models covering reaction kinetics and heat and mass balances were developed for a batch cooling crystallisation system of paracetamol in water. Based on these models, rigorous simulation programs were developed in MATLAB® and treated as the real batch cooling crystallisation system. The laboratory experimental work was carried out using a lab-scale system of paracetamol and isopropyl alcohol (IPA). All of the experimental work, including the qualitative and quantitative monitoring of the crystallisation experiments and products, demonstrated an inclusive application of various in situ process analytical technology (PAT) tools, such as focused beam reflectance measurement (FBRM), UV/Vis spectroscopy and particle vision measurement (PVM). The industrial pilot-scale study was carried out at GlaxoSmithKline Bangladesh Limited, Bangladesh, where the experimental system consisted of paracetamol and the other powdered excipients used to make paracetamol tablets. The methodologies presented in this thesis provide a comprehensive framework for data-based dynamic optimisation and control of crystallisation processes. All of the simulation and experimental evaluations of the proposed approaches emphasise the potential of data-driven techniques to deliver considerable advances over the current state of the art in crystallisation control.
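    The batch-to-batch recipe update at the heart of ILC can be illustrated with a short sketch. The plant, model matrices and tuning below are illustrative assumptions (not the thesis's paracetamol model); the update u_{k+1} = u_k + (G'G + lambda*I)^{-1} G' e_k is the standard quadratic-criterion form of the learning law used with linearised batch models such as the LTV perturbation model described above.

```python
# Minimal batch-to-batch ILC sketch (hypothetical plant, not the thesis model).
# The input trajectory u is refined from run to run using a linearised batch
# model G, analogous to the LTV-perturbation-model ILC described above.
import numpy as np

n = 20                                      # samples per batch
G_model = np.tril(np.ones((n, n))) * 0.1    # assumed linear batch model (lower triangular)
G_plant = np.tril(np.ones((n, n))) * 0.12   # "true" plant with model-plant mismatch
y_ref = np.linspace(0.0, 1.0, n)            # desired end-of-batch trajectory

u = np.zeros(n)
lam = 0.1                                   # regularisation of the learning update
for batch in range(15):
    y = G_plant @ u                         # run the batch (here: a simulation)
    e = y_ref - y                           # tracking error for this run
    # Quadratic-criterion ILC update: u_{k+1} = u_k + (G'G + lam*I)^{-1} G' e_k
    L = np.linalg.solve(G_model.T @ G_model + lam * np.eye(n), G_model.T)
    u = u + L @ e
    print(f"batch {batch:2d}  ||e|| = {np.linalg.norm(e):.4f}")
```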

    UQ and AI: data fusion, inverse identification, and multiscale uncertainty propagation in aerospace components

    A key requirement for engineering designs is that they offer good performance across a range of uncertain conditions while exhibiting an admissibly low probability of failure. In order to design components that offer good performance across a range of uncertain conditions, it is necessary to take account of the effect of the uncertainties associated with a candidate design. Uncertainty Quantification (UQ) methods are statistical methods that may be used to quantify the effect of the uncertainties inherent in a system on its performance. This thesis expands the envelope of UQ methods for the design of aerospace components, supporting the integration of UQ methods in product development by addressing four industrial challenges. Firstly, a method for propagating uncertainty through computational models in a hierarchy of scales is described that is based on probabilistic equivalence and Non-Intrusive Polynomial Chaos (NIPC). This problem is relevant to the design of aerospace components because the computational models used to evaluate candidate designs are typically multiscale. This method was then extended to develop a formulation for inverse identification, where the probability distributions for the material properties of a coupon are deduced from measurements of its response. We demonstrate how probabilistic equivalence and the Maximum Entropy Principle (MEP) may be used to leverage simulation data alongside scarce experimental data, with the intention of making this stage of product design less expensive and time-consuming. The third contribution of this thesis is the development of two novel meta-modelling strategies to promote wider exploration of the design space during the conceptual design phase. Design Space Exploration (DSE) in this phase is crucial because decisions made at the early, conceptual stages of an aircraft design can restrict the range of alternative designs available at later stages in the design process, even though only limited quantitative knowledge of the interactions between requirements is available at this stage. A histogram interpolation algorithm is presented that allows the designer to interactively explore the design space with a model-free formulation, while a meta-model based on Knowledge Based Neural Networks (KBaNNs) is proposed in which the outputs of a high-level, inexpensive computer code are informed by the outputs of a neural network, thereby addressing the criticism that neural networks are purely data-driven and operate as black boxes. The final challenge addressed by this thesis is how to iteratively improve a meta-model by expanding the dataset used to train it. Given the reliance of UQ methods on meta-models, this is an important challenge. This thesis proposes an adaptive learning algorithm for Support Vector Machine (SVM) metamodels, which are used to approximate an unknown function. In particular, we apply the adaptive learning algorithm to test cases in reliability analysis.
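    As a minimal illustration of the non-intrusive polynomial chaos idea referred to above, the sketch below projects a one-dimensional model response onto probabilists' Hermite polynomials using Gauss quadrature and reads the mean and variance off the coefficients. The response function, expansion order and Gaussian input are assumptions for illustration, not quantities from the thesis.

```python
# Minimal non-intrusive polynomial chaos (NIPC) sketch for one Gaussian input.
# The model f is a stand-in; the thesis applies NIPC to multiscale aerospace models.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def f(x):                                   # hypothetical response to an input x ~ N(0, 1)
    return np.exp(0.3 * x) + 0.1 * x**2

P = 5                                       # expansion order
nodes, weights = hermegauss(20)             # Gauss quadrature for weight exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)      # normalise to the standard normal density

# Project f onto each HermiteE polynomial: c_k = E[f(X) He_k(X)] / E[He_k(X)^2]
coeffs = []
for k in range(P + 1):
    basis = np.zeros(P + 1)
    basis[k] = 1.0
    He_k = hermeval(nodes, basis)
    norm_k = factorial(k)                   # E[He_k^2] = k! for probabilists' Hermite
    coeffs.append(np.sum(weights * f(nodes) * He_k) / norm_k)

mean = coeffs[0]
variance = sum(factorial(k) * coeffs[k]**2 for k in range(1, P + 1))
print(f"PCE mean ~ {mean:.4f}, variance ~ {variance:.4f}")
```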

    On Novel Approaches to Model-Based Structural Health Monitoring

    Structural health monitoring (SHM) strategies have classically fallen into two main categories of approach: model-driven and data-driven methods. The former utilises physics-based models and inverse techniques as a method for inferring the health state of a structure from changes to updated parameters; these are referred to here as inverse model-driven approaches. The latter frames SHM within a statistical pattern recognition paradigm; these methods require no physical modelling, instead inferring relationships between data and health states directly. Although successes have been achieved with both approaches, each suffers from significant drawbacks, namely parameter estimation and interpretation difficulties within the inverse model-driven framework, and a lack of available full-system damage-state data for data-driven techniques. Consequently, this thesis seeks to outline and develop a framework for an alternative category of approach: forward model-driven SHM. This class of strategies utilises calibrated physics-based models, in a forward manner, to generate health-state data (i.e. the undamaged condition and damage states of interest) for training machine learning or pattern recognition technologies. The framework thereby seeks to address the issues above by removing the need to make health decisions from updated parameters and by providing a mechanism for obtaining health-state data. In light of this objective, a framework for forward model-driven SHM is established, highlighting the key challenges and technologies required for realising this category of approach. The framework is constructed from two main components: generating physics-based models that accurately predict outputs under various damage scenarios, and machine learning methods used to infer decision bounds. This thesis deals with the former, developing technologies and strategies for producing statistically representative predictions from physics-based models. Specifically, this work seeks to define validation within this context and propose a validation strategy, to develop technologies that infer uncertainties from various sources, including model discrepancy, and to offer a solution to the problem of validating full-system predictions when data are not available at this level. The first section defines validation within a forward model-driven context, offering a strategy based on hypothesis testing, statistical distance metrics, visualisation tools such as the witness function, and deterministic metrics. The statistical distances field is shown to provide a wealth of potential validation metrics that consider whole probability distributions, and existing validation metrics can be categorised within this field's terminology, providing greater insight. In the second part of this study, emulator technologies, specifically Gaussian Process (GP) methods, are discussed. Practical implementation considerations are examined, including the establishment of validation and diagnostic techniques. Various GP extensions are outlined, with particular focus on technologies for dealing with large data sets and their applicability as emulators. Utilising these technologies, two techniques for calibrating models whilst accounting for and inferring model discrepancies are demonstrated: Bayesian Calibration and Bias Correction (BCBC) and Bayesian History Matching (BHM). Both methods were applied to representative building structures in order to demonstrate their effectiveness within a forward model-driven SHM strategy.
Sequential design heuristics were developed for BHM, along with an importance-sampling-based technique for inferring the functional model discrepancy uncertainties. The third body of work proposes a multi-level uncertainty integration strategy based on a subfunction discrepancy approach. This technique seeks to construct a methodology for producing valid full-system predictions through a combination of validated sub-system models for which uncertainties and model discrepancy have been quantified. The procedure is demonstrated on a numerical shear structure, where it is shown to be effective. Finally, conclusions about the aforementioned technologies are provided, and future directions for forward model-driven SHM are outlined in the hope that this category of approach receives wider investigation within the SHM community.
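    The history-matching step mentioned above can be summarised by its implausibility test: a candidate parameter is ruled out when the standardised distance between the observation and the emulator prediction exceeds a cut-off (commonly three). The sketch below uses a cheap polynomial stand-in for the GP emulator and assumed variance terms, so it illustrates the test rather than the thesis's building-structure application.

```python
# Sketch of the implausibility test used in Bayesian History Matching (BHM).
# The emulator here is a stand-in (cubic fit with a fixed variance) rather than
# the GP emulators discussed above; z_obs and the variance terms are assumed values.
import numpy as np

def simulator(theta):                     # hypothetical physics-based model output
    return np.sin(3.0 * theta) + 0.5 * theta

theta_train = np.linspace(0.0, 1.0, 8)
poly = np.polynomial.Polynomial.fit(theta_train, simulator(theta_train), deg=3)

var_emulator = 0.01**2                    # emulator variance (assumed constant)
var_discrepancy = 0.05**2                 # model discrepancy variance (assumed)
var_obs = 0.02**2                         # observation noise variance (assumed)
z_obs = simulator(0.4) + 0.03             # "measured" system response

theta_grid = np.linspace(0.0, 1.0, 201)
implausibility = np.abs(z_obs - poly(theta_grid)) / np.sqrt(
    var_emulator + var_discrepancy + var_obs)

not_ruled_out = theta_grid[implausibility < 3.0]   # three-sigma cut-off
print(f"non-implausible theta range: [{not_ruled_out.min():.3f}, {not_ruled_out.max():.3f}]")
```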

    Approximation methodologies for explicit model predictive control of complex systems

    This thesis concerns the development of complexity reduction methodologies for the application of multi-parametric/explicit model predictive control (mp-MPC) to complex high-fidelity models. The main advantage of mp-MPC is that the optimization task, and its associated computational expense, is moved offline through the use of multi-parametric programming. This allows MPC to be applied to fast-sampling systems, or to systems for which it is not possible to perform online optimization due to cycle-time requirements. The application of mp-MPC to complex nonlinear systems is of critical importance and is the subject of this thesis. The first part is concerned with the adaptation and development of model order reduction (MOR) techniques for use in combination with mp-MPC algorithms. This part covers the mp-MPC-oriented use of existing MOR techniques as well as the development of new ones. The use of MOR for multi-parametric moving horizon estimation is also investigated. The second part of the thesis introduces a framework for the ‘equation free’ surrogate-model-based design of explicit controllers as a possible alternative to multi-parametric methods. The methodology relies upon the use of advanced data-classification approaches and surrogate modelling techniques, and is illustrated with different numerical examples.
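    The surrogate-model route to an explicit controller can be sketched as follows: solve the receding-horizon problem offline on sampled states, then fit a cheap explicit map from state to first control move that is evaluated online. The double-integrator model, horizon, bounds and linear surrogate below are illustrative assumptions, not the thesis's methodology (which uses more advanced data-classification and surrogate techniques).

```python
# Sketch of a surrogate-based explicit controller: offline MPC solutions on
# sampled states are fitted with a cheap explicit map used online.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # simple double-integrator dynamics (assumed)
B = np.array([[0.005], [0.1]])
N = 10                                     # prediction horizon
u_max = 1.0

def mpc_cost(u_seq, x0):
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        x = A @ x + B.flatten() * u
        cost += x @ x + 0.1 * u**2
    return cost

def solve_mpc(x0):
    res = minimize(mpc_cost, np.zeros(N), args=(x0,),
                   bounds=[(-u_max, u_max)] * N, method="L-BFGS-B")
    return res.x[0]                        # first move of the optimal sequence

# Offline: sample the state space and record the optimal first move.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
U = np.array([solve_mpc(x) for x in X])

# Fit a simple affine surrogate u(x) ~ c0 + c1*x1 + c2*x2 by least squares.
Phi = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1]])
coeffs, *_ = np.linalg.lstsq(Phi, U, rcond=None)

x_test = np.array([0.5, -0.2])
u_explicit = np.clip(coeffs @ np.array([1.0, *x_test]), -u_max, u_max)
print(f"explicit surrogate control: {u_explicit:.3f}, online MPC: {solve_mpc(x_test):.3f}")
```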

    Bayesian inference with optimal maps

    We present a new approach to Bayesian inference that entirely avoids Markov chain simulation, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map is established by formulating the problem in the context of optimal transport theory. We discuss various means of explicitly parameterizing the map and computing it efficiently through solution of an optimization problem, exploiting gradient information from the forward model when possible. The resulting algorithm overcomes many of the computational bottlenecks associated with Markov chain Monte Carlo. Advantages of a map-based representation of the posterior include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent posterior samples without additional likelihood evaluations or forward solves. The optimization approach also provides clear convergence criteria for posterior approximation and facilitates model selection through automatic evaluation of the marginal likelihood. We demonstrate the accuracy and efficiency of the approach on nonlinear inverse problems of varying dimension, involving the inference of parameters appearing in ordinary and partial differential equations. (United States Dept. of Energy, Office of Advanced Scientific Computing Research, Grants DE-SC0002517 and DE-SC0003908)
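    The core idea, finding a monotone map that pushes a reference measure onto the posterior by minimising a Kullback-Leibler objective, can be checked on a toy problem with a known answer. The sketch below fits a one-dimensional affine map for a conjugate Gaussian problem; the problem setup and the simple map parameterization are assumptions for illustration, whereas the paper treats higher-dimensional maps and ODE/PDE inverse problems.

```python
# Minimal sketch of the map-based idea: find a monotone map T that pushes a
# reference Gaussian onto the posterior by minimising a sample-average KL objective.
# The 1-D Gaussian problem (with an analytic posterior) is an assumption used only
# to make the result checkable.
import numpy as np
from scipy.optimize import minimize

sigma_obs, y_obs = 0.5, 1.0

def neg_log_posterior(theta):             # unnormalised: N(0,1) prior x Gaussian likelihood
    return 0.5 * theta**2 + 0.5 * ((y_obs - theta) / sigma_obs) ** 2

rng = np.random.default_rng(0)
xi = rng.standard_normal(5000)            # fixed reference samples

def kl_objective(params):                 # T(xi) = a + exp(log_b)*xi, monotone by construction
    a, log_b = params
    theta = a + np.exp(log_b) * xi
    return np.mean(neg_log_posterior(theta)) - log_b   # -E[log posterior] - log|T'|

res = minimize(kl_objective, x0=[0.0, 0.0], method="Nelder-Mead")
a_opt, b_opt = res.x[0], np.exp(res.x[1])
print(f"map mean {a_opt:.3f} (exact 0.800), std {b_opt:.3f} (exact {np.sqrt(0.2):.3f})")
```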

    Scalable Emulation of Sign-Problem-Free Hamiltonians with Room Temperature p-bits

    The growing field of quantum computing is based on the concept of a q-bit which is a delicate superposition of 0 and 1, requiring cryogenic temperatures for its physical realization along with challenging coherent coupling techniques for entangling them. By contrast, a probabilistic bit or a p-bit is a robust classical entity that fluctuates between 0 and 1, and can be implemented at room temperature using present-day technology. Here, we show that a probabilistic coprocessor built out of room temperature p-bits can be used to accelerate simulations of a special class of quantum many-body systems that are sign-problem-free or stoquastic, leveraging the well-known Suzuki-Trotter decomposition that maps a d-dimensional quantum many-body Hamiltonian to a (d+1)-dimensional classical Hamiltonian. This mapping allows an efficient emulation of a quantum system by classical computers and is commonly used in software to perform Quantum Monte Carlo (QMC) algorithms. By contrast, we show that a compact, embedded MTJ-based coprocessor can serve as a highly efficient hardware accelerator for such QMC algorithms, providing several orders of magnitude improvement in speed compared to optimized CPU implementations. Using realistic device-level SPICE simulations we demonstrate that the correct quantum correlations can be obtained using a classical p-circuit built with existing technology and operating at room temperature. The proposed coprocessor can serve as a tool to study stoquastic quantum many-body systems, overcoming challenges associated with physical quantum annealers.
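    The p-bit behaviour underlying such a coprocessor is a simple stochastic update, m_i = sgn(tanh(I_i) + r) with r drawn uniformly from (-1, 1), where I_i is the weighted synaptic input. The sketch below runs this update on a small classical Ising ring; the couplings are assumed for illustration, and the Suzuki-Trotter construction of the (d+1)-dimensional classical Hamiltonian from a quantum model is not reproduced here.

```python
# Sketch of the p-bit update rule on a small classical Ising network.
# J and h below are assumed (a ferromagnetic ring); a Suzuki-Trotter-mapped
# Hamiltonian would supply these couplings in the application described above.
import numpy as np

rng = np.random.default_rng(0)
n = 6
J = np.zeros((n, n))
for i in range(n):                        # ferromagnetic ring (stand-in coupling matrix)
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = 1.0
h = np.zeros(n)
beta = 1.0                                # inverse temperature

m = rng.choice([-1, 1], size=n)           # bipolar p-bit states
samples = []
for sweep in range(2000):
    for i in range(n):                    # sequential (Gibbs-like) p-bit updates
        I_i = beta * (J[i] @ m + h[i])    # synapse: weighted input to p-bit i
        # neuron: m_i flips stochastically with Boltzmann statistics
        m[i] = 1 if rng.uniform(-1, 1) < np.tanh(I_i) else -1
    samples.append(m.copy())

samples = np.array(samples[500:])         # discard burn-in sweeps
print("average nearest-neighbour correlation:",
      np.mean(samples * np.roll(samples, -1, axis=1)))
```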

    Robust Algorithms for Optimization of Chemical Processes in the Presence of Model-Plant Mismatch

    Process models are always associated with uncertainty, due either to inaccurate model structure or to inaccurate identification. If left unaccounted for, these uncertainties can significantly affect model-based decision-making. This thesis addresses the problem of model-based optimization in the presence of uncertainties, especially those due to model structure error. The optimal solution from standard optimization techniques is often associated with a certain degree of uncertainty, and if the model-plant mismatch is significant, this solution may have a significant bias with respect to the actual process optimum. Accordingly, in this thesis we developed new strategies to reduce (1) the variability in the optimal solution and (2) the bias between the predicted and the true process optima. Robust optimization is a well-established methodology in which the variability in the optimization objective is considered explicitly in the cost function, leading to a solution that is robust to model uncertainties. However, the reported robust formulations have a few limitations, especially in the context of nonlinear models. The standard technique to quantify the effect of model uncertainties is based on linearization of the underlying model, which may not be valid if the noise in the measurements is high. To address this limitation, uncertainty descriptions based on Bayes' theorem are implemented in this work. Since for nonlinear models the resulting Bayesian uncertainty may have a non-standard form with no analytical solution, the propagation of this uncertainty onto the optimum may become computationally challenging using conventional Monte Carlo techniques. To this end, an approach based on Polynomial Chaos (PC) expansions is developed. It is shown in a simulated case study that this approach results in drastic reductions in computational time when compared to a standard Monte Carlo sampling technique. The key advantage of PC expansions is that they provide analytical expressions for statistical moments even if the uncertainty in the variables is non-standard. These expansions were also used to speed up the calculation of the likelihood function within the Bayesian framework. Here, a methodology based on Multi-Resolution analysis is proposed to formulate the PC-based approximate model with higher accuracy over the region of the parameter space that is most likely given the measurements. For the second objective, i.e. reducing the bias between the predicted and true process optima, an iterative optimization algorithm is developed which progressively corrects the model for structural error as the algorithm proceeds towards the true process optimum. The standard approach is to calibrate the model at some initial operating conditions and then use this model to search for an optimal solution. Since the identification and optimization objectives are solved independently, when there is a mismatch between the process and the model the parameter estimates cannot satisfy these two objectives simultaneously. In the proposed methodology, corrections are therefore added to the model in such a way that the updated parameter estimates reduce the conflict between the identification and optimization objectives. Unlike the standard estimation technique, which minimizes only the prediction error at a given set of operating conditions, the proposed algorithm also includes the differences between the predicted and measured gradients of the optimization objective and/or constraints in the estimation.
In the initial version of the algorithm, the proposed correction is based on a linearization of the model outputs. In the second part, the correction is extended using a quadratic approximation of the model, which, for the given case study, resulted in much faster convergence than the earlier version. Finally, the methodologies mentioned above were combined to formulate a robust iterative optimization strategy that converges to the true process optimum with minimum variability in the search path. One of the major findings of this thesis is that robust optimal solutions based on the Bayesian parametric uncertainty are much less conservative than their counterparts based on normally distributed parameters.
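    The gradient-matching idea in the second objective can be illustrated with a deliberately simplified sketch in the spirit of classical modifier adaptation: at each iterate the model objective is corrected with a first-order term equal to the measured minus predicted gradient, and the corrected model is re-optimised. The scalar plant, model and filter constant below are assumptions for illustration; the thesis folds the gradient information into the parameter estimation itself rather than using this exact scheme.

```python
# Modifier-adaptation-style sketch: correct the model objective with a
# first-order (gradient mismatch) term at each operating point, then re-optimise.
import numpy as np
from scipy.optimize import minimize_scalar

def plant_obj(u):                         # "true" process (unknown to the optimiser)
    return (u - 2.0) ** 2 + 0.5 * u

def model_obj(u):                         # structurally mismatched model
    return (u - 1.2) ** 2

def plant_gradient(u, eps=1e-3):          # finite-difference estimate from "measurements"
    return (plant_obj(u + eps) - plant_obj(u - eps)) / (2 * eps)

def model_gradient(u, eps=1e-6):
    return (model_obj(u + eps) - model_obj(u - eps)) / (2 * eps)

u_k, filt = 0.0, 0.6                      # initial operating point and update filter
for it in range(15):
    lam = plant_gradient(u_k) - model_gradient(u_k)       # first-order modifier
    corrected = lambda u: model_obj(u) + lam * (u - u_k)  # corrected model objective
    u_star = minimize_scalar(corrected, bounds=(0.0, 4.0), method="bounded").x
    u_k = (1 - filt) * u_k + filt * u_star                # filtered move to the new point
    print(f"iter {it:2d}: u = {u_k:.4f}, plant cost = {plant_obj(u_k):.4f}")
```

    At convergence the corrected model's stationarity condition coincides with the plant's, so the iterates settle at the true plant optimum (u = 1.75 for the assumed quadratics) despite the structural mismatch.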

    Design and optimization under uncertainty of Energy Systems

    In many engineering design and optimisation problems, the presence of uncertainty in data and parameters is a central and critical issue. The analysis and design of advanced complex energy systems is generally performed starting from a single operating condition and assuming that a series of design and operating parameters take fixed values. However, many of the variables on which the design is based are subject to uncertainty because they cannot be determined with adequate precision, and they can affect both performance and cost. Uncertainties stem naturally from our limitations in measurements, predictions and manufacturing, and any system used in engineering is subject to some degree of uncertainty. Different fields of engineering describe this uncertainty in different ways and adopt a variety of techniques to approach the problem. The past decade has seen significant growth in research and development on uncertainty quantification methods for analysing the propagation of uncertain inputs through systems. The main challenges in this field are identifying the sources of uncertainty that potentially affect the outcomes and propagating these uncertainties efficiently from the sources to the quantities of interest, especially when there are many sources of uncertainty. Hence, the level of rigour in an uncertainty analysis depends on the quality of the uncertainty quantification method. The main obstacle in such an analysis is often the computational effort, because the representative model is typically highly non-linear and complex. It is therefore necessary to have a robust tool that can perform the uncertainty propagation through a non-intrusive approach with as few model evaluations as possible. The primary goal of this work is to present a robust method for uncertainty quantification applied to energy systems. The first step in this direction was an analysis of the uncertainties affecting a recuperator for micro gas turbines, carried out using the Monte Carlo and Response Sensitivity Analysis methodologies. When more complex energy systems are considered, however, one of the main weaknesses of uncertainty quantification methods emerges: the extremely high computational effort needed. For this reason, the application of a so-called metamodel was found to be necessary and useful. This approach was applied to perform a complete analysis under uncertainty of a solid oxide fuel cell hybrid system, from the evaluation of the impact of several uncertainties on the system up to a robust design including a multi-objective optimization. The response surfaces allowed the uncertainties in the system to be considered while performing an acceptable number of simulations. These response surfaces were then used to perform a Monte Carlo simulation to evaluate the impact of the uncertainties on the monitored outputs, giving insight into the spread of the resulting probability density functions and thus into the outputs that should be considered more carefully during the design phase. Finally, the analysis of a complex combined cycle with a flue gas condensing heat pump subject to market uncertainties was performed. To account for the uncertainty in the electricity price, which directly affects the revenues of the system, a statistical study of the behaviour of this price over several years was performed.
From the data obtained, it was possible to create a probability density function for each hour of the day representing the price behaviour, and these distributions were then used to analyse the variability of the system in terms of revenues and emissions.
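    The metamodel workflow described above, fit a response surface to a handful of expensive model runs and then sample it with Monte Carlo, can be sketched in a few lines. The two-input performance model, the quadratic surface and the input distributions below are assumptions for illustration rather than the solid oxide fuel cell hybrid system studied in the thesis.

```python
# Response-surface-plus-Monte-Carlo sketch: a quadratic surface is fitted to a
# small design of experiments, then sampled cheaply under assumed input uncertainty.
import numpy as np

def expensive_model(t_in, eff):           # stand-in for the detailed system model
    return 100.0 * eff * (1.0 - 300.0 / t_in)

# Design of experiments: a small grid of "expensive" model runs.
t_grid = np.linspace(900.0, 1100.0, 5)
eff_grid = np.linspace(0.80, 0.90, 5)
T, E = np.meshgrid(t_grid, eff_grid)
Y = expensive_model(T, E).ravel()

# Quadratic response surface y ~ [1, t, e, t^2, e^2, t*e] via least squares.
def features(t, e):
    return np.column_stack([np.ones_like(t), t, e, t**2, e**2, t * e])

beta, *_ = np.linalg.lstsq(features(T.ravel(), E.ravel()), Y, rcond=None)

# Monte Carlo on the cheap surrogate with assumed input distributions.
rng = np.random.default_rng(0)
t_mc = rng.normal(1000.0, 30.0, 50_000)
e_mc = rng.normal(0.85, 0.02, 50_000)
y_mc = features(t_mc, e_mc) @ beta
print(f"output mean {y_mc.mean():.2f}, 5th-95th percentile "
      f"[{np.percentile(y_mc, 5):.2f}, {np.percentile(y_mc, 95):.2f}]")
```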

    Bayesian design of experiments for complex chemical systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 317-322). Engineering design work relies on the ability to predict system performance. A great deal of effort is spent producing models that incorporate knowledge of the underlying physics and chemistry in order to understand the relationship between system inputs and responses. Although models can provide great insight into the behavior of a system, actual design decisions cannot be made on the basis of predictions alone. In order to make properly informed decisions, it is critical to understand uncertainty; otherwise, there can be no quantitative assessment of which predictions are reliable and which inputs are most significant. To address this issue, a new design method is required that can quantify the complex sources of uncertainty that influence model predictions and the corresponding engineering decisions. Design of experiments is traditionally defined as a structured procedure for gathering information. This thesis reframes design of experiments as a problem of quantifying and managing uncertainties. The process of designing experimental studies is treated as a statistical decision problem using Bayesian methods. This perspective follows from the realization that the primary role of engineering experiments is not only to gain knowledge but to gather the information needed to make future design decisions. To do this, experiments must be designed to reduce the uncertainties relevant to the future decision. The necessary components are: a model of the system, a model of the observations taken from the system, and an understanding of the sources of uncertainty that impact the system. While the Bayesian approach has previously been attempted in various fields, including chemical engineering, its true benefit has been obscured by the use of linear system models, simplified descriptions of uncertainty, and a lack of emphasis on the decision-theory framework. With the recent development of techniques for Bayesian statistics and uncertainty quantification, including Markov chain Monte Carlo, Polynomial Chaos Expansions, and a prior sampling formulation for computing utility functions, such simplifications are no longer necessary. In this work, these methods have been integrated into the decision-theory framework to allow the application of Bayesian designs to more complex systems. The benefits of the Bayesian approach to design of experiments are demonstrated on three systems: an air mill classifier, a network of chemical reactions, and a process simulation based on unit operations. These case studies quantify the impact of rigorous modeling of uncertainty in terms of the reduced number of experiments required compared with the currently used classical design methods. Fewer experiments translate to less time and fewer resources spent, while still reducing the uncertainties most relevant to decision makers. In an industrial setting, this represents real-world benefits for large research projects by reducing development costs and time to market. Besides identifying the best experiments, the Bayesian approach also allows a prediction of the value of experimental data, which is crucial in the decision-making process. Finally, this work demonstrates the flexibility of the decision-theory framework and the feasibility of Bayesian design of experiments for the complex process models commonly found in the field of chemical engineering. By Kenneth T. Hu, Ph.D.
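    A common way to make the prior sampling formulation for computing utility functions concrete is the nested Monte Carlo estimator of expected information gain, in which prior draws and simulated data are used to score each candidate experiment. The one-parameter linear-Gaussian measurement model and the candidate designs in the sketch below are assumptions for illustration; the thesis applies the decision-theoretic machinery to far richer chemical-engineering models.

```python
# Nested Monte Carlo sketch of expected information gain (EIG) for experiment design.
# Measurement model y = theta * d + noise with theta ~ N(0, 1) is an assumed toy setup.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.3                               # measurement noise standard deviation
N_outer, N_inner = 400, 400

def log_likelihood(y, theta, d):
    return -0.5 * ((y - theta * d) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def expected_information_gain(d):
    theta_outer = rng.standard_normal(N_outer)                # prior samples
    y = theta_outer * d + sigma * rng.standard_normal(N_outer)
    theta_inner = rng.standard_normal(N_inner)                # fresh prior samples for evidence
    # log p(y|d) is approximated by log mean_j p(y | theta_j, d)
    log_evidence = np.array([
        np.log(np.mean(np.exp(log_likelihood(yi, theta_inner, d)))) for yi in y])
    return np.mean(log_likelihood(y, theta_outer, d) - log_evidence)

for d in [0.2, 0.5, 1.0, 2.0]:
    print(f"design d = {d:3.1f}: estimated EIG = {expected_information_gain(d):.3f}")
```

    Designs with larger d make the measurement more informative about theta, so the estimated expected information gain increases with d in this toy setting.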

    Fast numerical methods for robust nonlinear optimal control under uncertainty

    This thesis treats different aspects of nonlinear optimal control problems under uncertainty in which the uncertain parameters are modeled probabilistically. We apply the polynomial chaos expansion, a well-known method for uncertainty quantification, to obtain deterministic surrogate optimal control problems. Their size and complexity pose a computational challenge for traditional optimal control methods. For nonlinear optimal control, this difficulty is increased because a high polynomial expansion order is necessary to derive meaningful statements about the nonlinear and asymmetric uncertainty propagation. To this end, we develop an adaptive optimization strategy which refines the approximation quality separately for each state variable using suitable error estimates. The benefits are twofold: we obtain additional means for solution verification and we reduce the computational effort for finding an approximate solution of increased precision. The algorithmic contribution is complemented by a convergence proof showing that the solutions of the optimal control problem after application of the polynomial chaos method approach the correct solution for increasing expansion orders. To obtain a further speed-up in solution time, we develop a structure-exploiting algorithm for fast derivative generation. The algorithm makes use of the special structure induced by the spectral projection to reuse model derivatives and exploit sparsity information, leading to fast automatic sensitivity generation. This greatly reduces the computational effort of Newton-type methods for the solution of the resulting high-dimensional surrogate problem. Another challenging topic of this thesis is optimal control problems with chance constraints, which form a probabilistic robustification of the solution that is neither too conservative nor prone to underestimating the risk. We develop an efficient method based on the polynomial chaos expansion to compute nonlinear propagations of the reachable sets of all uncertain states and show how it can be used to approximate individual and joint chance constraints. The strength of the obtained estimator in guaranteeing a satisfaction level is supported by an a-priori error estimate with exponential convergence in the case of sufficiently smooth solutions. All methods developed in this thesis are readily implemented in state-of-the-art direct methods for optimal control. Their performance and suitability for optimal control problems are evaluated in a numerical case study on two nonlinear real-world problems, using Monte Carlo simulations to illustrate the effects of the propagated uncertainty on the optimal control solution. As an industrial application, we solve a challenging optimal control problem modeling an adsorption refrigeration system under uncertainty.
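    One common way to make a single chance constraint tractable, in the spirit of the moment information a polynomial chaos surrogate provides, is to replace P(g <= 0) >= 1 - epsilon with a mean-plus-back-off condition on g. The sketch below does this for a scalar toy problem using a small Gauss-Hermite quadrature; the dynamics, constraint and back-off factor are assumptions for illustration, and the thesis's reachable-set-based approximation is considerably more sophisticated.

```python
# Moment-based surrogate for an individual chance constraint: propagate the
# uncertain parameter with Gauss-Hermite quadrature, then enforce mean + k*std <= 0.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from scipy.optimize import minimize

nodes, weights = hermegauss(7)
weights = weights / np.sqrt(2 * np.pi)    # quadrature for xi ~ N(0, 1)

def terminal_state(u, xi):                # toy uncertain dynamics (assumed)
    k_rate = 1.0 + 0.2 * xi               # uncertain rate constant
    return 2.0 * np.exp(-k_rate * u)      # e.g. residual concentration after "time" u

def constraint_margin(u, k_backoff=2.0):
    g = terminal_state(u, nodes) - 0.5    # require terminal state <= 0.5
    mean = np.sum(weights * g)
    var = np.sum(weights * (g - mean) ** 2)
    return -(mean + k_backoff * np.sqrt(var))   # >= 0 when the back-off constraint holds

res = minimize(lambda u: u[0]**2, x0=[1.0],     # minimise control effort
               constraints=[{"type": "ineq", "fun": lambda u: constraint_margin(u[0])}],
               bounds=[(0.0, 5.0)])
u_opt = res.x[0]
xi_mc = np.random.default_rng(0).standard_normal(100_000)
viol = np.mean(terminal_state(u_opt, xi_mc) > 0.5)
print(f"u* = {u_opt:.3f}, empirical violation probability ~ {viol:.3%}")
```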