
    Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, and in particular for lattice kinetic Monte Carlo. Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed dynamics, defined on a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g. coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing the corresponding variance functional. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, in which events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption and diffusion kinetic Monte Carlo, that for the same confidence interval and observable the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC, such as the Common Random Number approach.
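    The core idea, reducing the variance of a finite-difference sensitivity estimator by coupling the perturbed and unperturbed processes, can be illustrated with a minimal sketch. The example below couples two pure-birth (Poisson) processes through a shared random number stream; it is a plain common-random-number coupling for illustration, not the paper's goal-oriented construction, and all function names are hypothetical.

```python
import random

def count_events(rate, T, rng):
    # Number of events of a Poisson process with the given rate up to time T.
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > T:
            return n
        n += 1

def fd_sensitivity(rate, h, T, n_samples, coupled):
    # Finite-difference estimator of d E[X(T)] / d rate (exact value: T).
    # coupled=True reuses the same random number stream for both processes
    # (a common-random-number coupling); coupled=False uses independent streams.
    diffs = []
    for i in range(n_samples):
        rng_pert = random.Random(i)
        rng_nom = random.Random(i) if coupled else random.Random(10_000 + i)
        d = (count_events(rate + h, T, rng_pert) - count_events(rate, T, rng_nom)) / h
        diffs.append(d)
    mean = sum(diffs) / n_samples
    var = sum((d - mean) ** 2 for d in diffs) / (n_samples - 1)
    return mean, var
```

    Both estimators target the same derivative (here T, since E[X(T)] = rate * T), but the coupled version concentrates the difference on the few "extra" events of the perturbed process, so its sample variance is far smaller for the same number of trajectories.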

    Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    In this paper, a two-step strategy for parametric sensitivity analysis of high-dimensional stochastic reaction networks is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated by the first method. The first step of the proposed strategy ranks sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, the finite-difference method is applied only to estimate the sensitivities of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters.
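    The two-step logic can be sketched as follows: a cheap Cramér-Rao-type bound of the form |dE[Q]/dθ_j| ≤ sqrt(Var(Q) · FIM_jj) ranks the parameters, and the expensive finite-difference evaluations are spent only on those that survive the screening. The bound's exact form, the toy model and all names here are illustrative assumptions, not the paper's implementation.

```python
import math

def screen_parameters(fim_diag, var_q, tol):
    # Screening via the bound |d E[Q] / d theta_j| <= sqrt(Var(Q) * FIM_jj):
    # parameters whose bound falls below tol are declared insensitive.
    bounds = [math.sqrt(var_q * fjj) for fjj in fim_diag]
    kept = [j for j, b in enumerate(bounds) if b > tol]
    return kept, bounds

def fd_gradient(f, theta, indices, h=1e-5):
    # Central finite differences, evaluated only for the retained parameters.
    grad = {}
    for j in indices:
        up, down = list(theta), list(theta)
        up[j] += h
        down[j] -= h
        grad[j] = (f(up) - f(down)) / (2.0 * h)
    return grad
```

    The speed-up then scales roughly with the ratio of total to retained parameters, since each screened-out parameter saves the full cost of its finite-difference estimate.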

    Parametric Sensitivity Analysis for Biochemical Reaction Networks based on Pathwise Information Theory

    Stochastic modeling and simulation provide powerful predictive methods for the intrinsic understanding of fundamental mechanisms in complex biochemical networks. Typically, such mathematical models involve networks of coupled jump stochastic processes with a large number of parameters that need to be suitably calibrated against experimental data. In this direction, the parameter sensitivity analysis of reaction networks is an essential mathematical and computational tool, yielding information regarding the robustness and the identifiability of model parameters. However, existing sensitivity analysis approaches such as variants of the finite difference method can have an overwhelming computational cost in models with a high-dimensional parameter space. We develop a sensitivity analysis methodology suitable for complex stochastic reaction networks with a large number of parameters. The proposed approach is based on Information Theory methods and relies on the quantification of information loss due to parameter perturbations between time-series distributions. For this reason, we need to work on path-space, i.e., the set consisting of all stochastic trajectories; hence the proposed approach is referred to as "pathwise". The pathwise sensitivity analysis method is realized by employing the rigorously derived Relative Entropy Rate (RER), which is directly computable from the propensity functions. A key aspect of the method is that an associated pathwise Fisher Information Matrix (FIM) is defined, which in turn constitutes a gradient-free approach to quantifying parameter sensitivities. The structure of the FIM turns out to be block-diagonal, revealing hidden parameter dependencies and sensitivities in reaction networks.
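    For a continuous-time Markov chain with propensities a_i(x; θ), the pathwise FIM can be estimated as the time average of (∂a_i/∂θ_j)(∂a_i/∂θ_l)/a_i along a single SSA trajectory. The sketch below does this for a minimal birth-death network; it is an illustrative toy under these assumptions, not the paper's implementation, and the function name is hypothetical. Note how the block-diagonal structure mentioned in the abstract appears: since each propensity here depends on a single parameter, the off-diagonal entries vanish identically.

```python
import random

def pathwise_fim(k1, k2, x0, T, seed=0):
    # Birth-death reaction network: 0 -> X with propensity a1 = k1,
    # X -> 0 with propensity a2 = k2 * x.  The pathwise FIM is the time
    # average of (da_i/dtheta_j)(da_i/dtheta_l)/a_i along one SSA trajectory.
    rng = random.Random(seed)
    t, x = 0.0, x0
    fim = [[0.0, 0.0], [0.0, 0.0]]
    while t < T:
        a1, a2 = k1, k2 * x
        tau = rng.expovariate(a1 + a2)
        dt = min(tau, T - t)
        # da1/dk1 = 1 and da2/dk2 = x; cross-derivatives vanish, so the
        # off-diagonal FIM entries are exactly zero (block-diagonal structure).
        fim[0][0] += dt / a1
        if x > 0:
            fim[1][1] += dt * x * x / a2
        if tau >= T - t:
            break
        t += dt
        if rng.random() < a1 / (a1 + a2):
            x += 1
        else:
            x -= 1
    return [[entry / T for entry in row] for row in fim]
```

    The approach is gradient-free in the sense of the abstract: no finite-difference perturbation of θ is needed, only derivatives of the known propensity functions evaluated along the nominal trajectory.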

    Uncertainty Analysis and Control of Multiscale Process Systems

    The microelectronics market imposes tight requirements on thin film properties, including a specific growth rate, surface roughness and thickness of the film. In the thin film deposition process, microscopic events determine the configuration of the thin film surface, while manipulated variables at the macroscopic level, such as bulk precursor mole fraction and substrate temperature, are essential to product quality. Despite the extensive body of research on control and optimization of this process, there is still a significant discrepancy between the expected performance and the actual yield that can be accomplished with existing methodologies. This gap is mainly related to the complexities associated with the multiscale nature of the thin film deposition process, the lack of practical online in-situ sensors at the fine-scale level, and uncertainties in the mechanisms and parameters of the system. The main goal of this research is to develop robust control and optimization strategies for this process, with uncertainty analysis performed using power series expansions (PSEs). The deposition process is a batch process in which measurements are available only at the end of the batch; accordingly, optimization and control approaches that do not need access to online fine-scale measurements are required. In this research, offline optimization is performed to obtain the optimal temperature profile that yields specific product quality characteristics in the presence of model-plant mismatch. To make the optimization computationally tractable, the sensitivities in the PSEs are numerically evaluated using reduced-order lattices in the KMC models. A comparison between bounded and distributional parametric uncertainties illustrates that an inaccurate assumption for the uncertainty description can lead to economic losses in the process.
To accelerate the sensitivity analysis of the process, an algorithm is presented to determine the upper and lower bounds on the outputs through the distributions of the microscopic events. In this approach, the sensitivities in the series expansions of events are analytically evaluated. Current multiscale models are not available in closed form and are computationally prohibitive for online applications. Thus, closed-form models have been developed in this research to predict the control objectives efficiently for online control applications in the presence of model-plant mismatch. The robust performance is quantified by estimates of the distributions of the controlled variables using PSEs. Since these models can efficiently predict the controlled outputs, they can either be used as estimators for feedback control purposes in the absence of sensors, or as a basis for designing a nonlinear model predictive control (NMPC) framework. Although recently introduced optical in-situ sensors have motivated the development of feedback control in the thin film deposition process, their application is still limited in practice. Thus, a multivariable robust estimator has been developed to estimate the surface roughness and growth rate based on the substrate temperature and bulk precursor mole fraction. To ensure that the control objective is met in the presence of model-plant mismatch, the robust estimator is designed to predict the upper bound on the process output. The estimator is coupled with traditional feedback controllers to provide robust feedback control in the absence of online measurements. In addition, a robust NMPC application for the thin film deposition process was developed. The NMPC makes use of closed-form models, which have been identified offline to predict the controlled outputs at a predefined probability.
The shrinking-horizon NMPC minimizes the final roughness while satisfying the constraints on the control actions and film thickness at the end of the deposition process. Since the identification is performed for a fixed confidence level, hard constraints are defined for the thin film properties. To improve the robust performance of the NMPC using soft constraints, a closed-form model has been developed to estimate the first- and second-order statistical moments of the thin film properties under uncertainty in the multiscale model parameters. Employing this model, the surface roughness and film thickness can be estimated at a desired probability limit during the deposition. Thus, an NMPC framework is devised that successfully minimizes the surface roughness at the end of the batch, while the film thickness meets a minimum specification at a desired probability. The methods developed in this research therefore enable accurate online control of the key properties of a multiscale system in the presence of model-plant mismatch.
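    The role of the PSE in this kind of work is to propagate parametric uncertainty cheaply: expand the output around the nominal parameters and read the output mean and variance off the expansion coefficients. The following is a generic first-order sketch for independent parameter uncertainties, not the thesis' KMC-based implementation; the function name and finite-difference sensitivities are illustrative assumptions.

```python
def pse_moments(model, theta_nom, sigmas, h=1e-5):
    # First-order power series expansion around the nominal parameters:
    # E[y] ~ model(theta_nom),  Var[y] ~ sum_j (dy/dtheta_j)^2 * sigma_j^2,
    # assuming independent parameter uncertainties with std devs sigmas.
    mean = model(theta_nom)
    var = 0.0
    for j, sigma in enumerate(sigmas):
        up, down = list(theta_nom), list(theta_nom)
        up[j] += h
        down[j] -= h
        sens = (model(up) - model(down)) / (2.0 * h)
        var += (sens * sigma) ** 2
    return mean, var
```

    Once such moment estimates are available in closed form, probabilistic bounds on outputs like roughness or thickness can be imposed as constraints in an optimization or NMPC formulation without re-running the fine-scale model.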

    Uncertainty Analysis and Robust Optimization of a Single Pore in a Heterogeneous Catalytic Flow Reactor System

    Catalytic systems are crucial to a wide range of chemical production processes, and as a result there is significant demand to develop novel catalyst materials and to optimize existing catalytic reactor systems. These optimization and design studies are most readily implemented using model-based approaches, which require less time and fewer resources than the alternative experiment-based approaches. The behaviour of a catalytic reactor system can be captured using multiscale modeling approaches that combine continuum transport equations with kinetic modeling approaches such as kinetic Monte Carlo (kMC) or the mean-field (MF) approximation in order to model the relevant reactor phenomena on the length and time scales on which they occur. These multiscale modeling approaches accurately capture the reactor behaviour and can be readily implemented to perform robust optimization and process improvement studies on catalytic reaction systems. Multiscale-based optimization of catalytic reactor systems, however, is still an emerging field, and a number of challenges remain. One such challenge is the computational cost: multiscale modeling approaches can be computationally intensive, which limits their application to model-based optimization, and this burden typically stems from the use of fine-scale models that lack closed-form expressions, such as kMC. A second common challenge is model-plant mismatch, which can hinder the accuracy of the model. This mismatch stems from uncertainty in the reaction pathways and from difficulties in obtaining the values of the system parameters from experimental results. In addition, the uncertainty in catalytic flow reactor systems can vary in space due to kinetic events not taken into consideration by the multiscale model, such as non-uniform catalyst deactivation due to poisoning and fouling mechanisms.
Failure to adequately account for model-plant mismatch can result in substantial deviations from the predicted catalytic reactor performance and significant losses in reactor efficiency. Furthermore, uncertainty propagation techniques can be computationally intensive and can further increase the computational demands of the multiscale models. Given these challenges, the objective of this research is to develop and implement efficient strategies that study the effects of parametric uncertainty in key parameters on the performance of a multiscale single-pore catalytic reactor system, and subsequently to apply them to perform robust and dynamic optimization of the reactor system subject to uncertainty. To this end, low-order series expansions such as Polynomial Chaos Expansion (PCE) and Power Series Expansion (PSE) were implemented to efficiently propagate parametric uncertainty through the multiscale reactor model. These uncertainty propagation techniques were used to perform extensive uncertainty analyses on the catalytic reactor system in order to observe the impact of parametric uncertainty in various key system parameters on the reactor performance. Subsequently, these tools were incorporated into robust optimization formulations that sought to maximize the reactor productivity and minimize the variability in the reactor performance due to uncertainty. The results highlight the significant effect of parametric uncertainty on the reactor performance and illustrate how it can be accommodated when performing robust optimization. To assess the impact of spatially-varying uncertainty due to catalyst deactivation, the uncertainty propagation techniques were applied to evaluate and compare the effects of spatially-constant and spatially-varying uncertainty distributions.
To account for the spatially-varying uncertainty, unique uncertainty descriptions were applied to each uncertain parameter at discretized points across the reactor length. The uncertainty comparison was furthermore extended through application to robust optimization. To reduce the computational cost, statistical data-driven models (DDMs) were identified to approximate the key statistical parameters (mean, variance, and probabilistic bounds) of the reactor output variability for each uncertainty distribution. The DDMs were incorporated into robust optimization formulations that aimed to maximize the reactor productivity subject to uncertainty and to minimize the uncertainty-induced output variability. The results demonstrate the impact of spatially-varying parametric uncertainty on the catalytic reactor performance. They also highlight the importance of including it to adequately account for phenomena such as catalyst fouling in robust optimization and process improvement studies. The dynamic behaviour of the catalytic reactor system was similarly assessed within this work to evaluate the effects of uncertainty on the reactor performance as it evolves in time and space. For this study, uncertainty analysis was performed on a transient multiscale catalytic reactor model subject to changes in the system temperature. These results were used to formulate robust dynamic optimization studies that determine the optimal temperature trajectories maximizing the reactor's performance under uncertainty. Dynamic optimization was also implemented to identify the optimal design and operating policies that allow the reactor, under spatially-varying uncertainty, to meet targeted performance specifications within a given level of confidence. These studies illustrate the benefits of performing dynamic optimization to improve the performance of multiscale process systems under uncertainty.
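    A non-intrusive PCE treats the model as a black box: evaluate it at a handful of quadrature nodes, project onto orthogonal polynomials, and read the output mean and variance off the coefficients. The sketch below does this for a single Gaussian parameter using degree-2 probabilists' Hermite polynomials and 3-point Gauss-Hermite quadrature; it is a minimal illustration of the technique, not the thesis' reactor model, and the function name is an assumption.

```python
import math

def pce_moments(f, mu, sigma):
    # Non-intrusive PCE (degree 2) for one Gaussian parameter theta ~ N(mu, sigma^2),
    # using 3-point probabilists' Gauss-Hermite quadrature (exact up to degree 5).
    nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
    weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
    # Probabilists' Hermite polynomials He_0, He_1, He_2 with E[He_k^2] = k!
    he = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0]
    coeffs = []
    for k in range(3):
        c = sum(w * f(mu + sigma * x) * he[k](x) for x, w in zip(nodes, weights))
        coeffs.append(c / math.factorial(k))
    mean = coeffs[0]
    var = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs[1:], start=1))
    return mean, var
```

    Three model evaluations replace a full Monte Carlo sweep, which is what makes such expansions attractive when each evaluation is an expensive multiscale simulation; for a quadratic response the low-order expansion is even exact.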