
    Advanced and novel modeling techniques for simulation, optimization and monitoring chemical engineering tasks with refinery and petrochemical unit applications

    Engineers predict, optimize, and monitor processes to improve safety and profitability. Models automate these tasks and determine precise solutions. This research studies and applies advanced and novel modeling techniques to automate and aid engineering decision-making. Advancements in computational power have improved modeling software’s ability to mimic industrial problems. Simulations are increasingly used to explore new operating regimes and design new processes. In this work, we present a methodology for creating structured mathematical models, useful tips to simplify models, and a novel repair method that improves convergence by populating quality initial conditions for the simulation’s solver. A crude oil refinery application is presented, including simulation, simplification tips, and the implementation of the repair strategy. A crude oil scheduling problem that can be integrated with production unit models is also presented. Recently, stochastic global optimization (SGO) has shown success in finding global optima for complex nonlinear processes. When performing SGO on simulations, model convergence can become an issue. The computational load can be decreased by (1) simplifying the model and (2) exploiting the synergy between the solver repair strategy and the optimization routine, using the formulated initial conditions as points from which to perturb the neighborhood being searched. Here, a simplifying technique for merging the crude oil scheduling problem with vertically integrated online refinery production optimization is demonstrated. To optimize refinery production, a stochastic global optimization technique is employed. Process monitoring has been vastly enhanced through a data-driven modeling technique, Principal Component Analysis (PCA). As opposed to first-principles models, which make assumptions about the structure of the model describing the process, data-driven techniques make no assumptions about the underlying relationships.
Data-driven techniques search for a projection that maps the data into a space that is easier to analyze. Feature extraction techniques, commonly dimensionality reduction techniques, have been explored extensively to better capture nonlinear relationships. These techniques can extend data-driven modeling’s process-monitoring use to nonlinear processes. Here, we employ a novel nonlinear process-monitoring scheme that utilizes Self-Organizing Maps. The novel techniques and implementation methodology are applied to the publicly studied Tennessee Eastman Process and to an industrial polymerization unit.
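
The Self-Organizing Map monitoring idea can be illustrated with a minimal numpy sketch (not the thesis's implementation): train a small map on normal-operation data, then flag new samples whose quantization error (distance to the best-matching unit) exceeds a threshold set from normal data. The grid size, learning schedule, and threshold quantile here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0):
    """Train a small Self-Organizing Map on the rows of X (numpy only)."""
    n_nodes = grid[0] * grid[1]
    # Node coordinates on the 2-D grid, used for neighborhood distances.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    W = X[rng.choice(len(X), n_nodes)]  # initialize weights from data samples
    T = epochs * len(X)
    t = 0
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            lr = lr0 * (1 - t / T)                   # decaying learning rate
            sigma = sigma0 * (1 - t / T) + 1e-3      # shrinking neighborhood
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))       # Gaussian neighborhood
            W += lr * h[:, None] * (x - W)
            t += 1
    return W

def quantization_error(W, x):
    """Distance from a sample to its best-matching unit."""
    return np.sqrt(((W - x) ** 2).sum(axis=1).min())

# Toy demo: "normal" operating data in 2-D, plus one abnormal sample.
normal = rng.normal(0.0, 1.0, size=(500, 2))
W = train_som(normal)
threshold = np.quantile([quantization_error(W, x) for x in normal], 0.99)
fault = np.array([8.0, -8.0])  # far outside the normal operating region
print(quantization_error(W, fault) > threshold)
```

A sample whose quantization error exceeds the normal-data threshold is flagged as a potential fault; in practice the map would be trained on historical plant data rather than synthetic Gaussians.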

    SensorSCAN: Self-Supervised Learning and Deep Clustering for Fault Diagnosis in Chemical Processes

    Modern industrial facilities generate large volumes of raw sensor data during the production process. This data is used to monitor and control the processes and can be analyzed to detect and predict process abnormalities. Typically, the data has to be annotated by experts in order to be used in predictive modeling. However, manual annotation of large amounts of data can be difficult in industrial settings. In this paper, we propose SensorSCAN, a novel method for unsupervised fault detection and diagnosis, designed for industrial chemical process monitoring. We demonstrate our model's performance on two publicly available datasets of the Tennessee Eastman Process with various faults. The results show that our method significantly outperforms existing approaches (+0.2-0.3 TPR for a fixed FPR) and effectively detects most of the process faults without expert annotation. Moreover, we show that the model fine-tuned on a small fraction of labeled data nearly reaches the performance of a SOTA model trained on the full dataset. We also demonstrate that our method is suitable for real-world applications where the number of faults is not known in advance. The code is available at https://github.com/AIRI-Institute/sensorscan
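
The reported gain of +0.2-0.3 TPR at a fixed FPR can be made concrete with a sketch of how such a metric is computed from anomaly scores; the `tpr_at_fpr` helper and the Gaussian score distributions below are hypothetical, for illustration only.

```python
import numpy as np

def tpr_at_fpr(scores_normal, scores_fault, fpr=0.05):
    """TPR achieved when the detection threshold is set so that the
    false-positive rate on normal-operation data equals `fpr`."""
    # Threshold = (1 - fpr) quantile of the normal-operation scores.
    thr = np.quantile(scores_normal, 1.0 - fpr)
    return float(np.mean(np.asarray(scores_fault) > thr))

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, 10_000)  # anomaly scores under normal operation
fault = rng.normal(3.0, 1.0, 10_000)   # anomaly scores under a process fault
print(round(tpr_at_fpr(normal, fault, fpr=0.05), 2))
```

Comparing two detectors at the same FPR isolates the detection-rate difference, which is how a "+0.2-0.3 TPR" improvement is read.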

    Integration of process design and control: A review

    There is a large variety of methods in the literature for process design and control, which can be classified into two main categories. The methods in the first category take a sequential approach, in which the control system is designed only after the details of the process design are decided. However, once the process design is fixed, there is little room left for improving the control performance. Recognizing the interactions between process design and control, the methods in the second category integrate some control aspects into process design. With the aim of providing an exploration map and identifying potential areas of further contribution, this paper presents a thematic review of the methods for integration of process design and control. The evolution paths of these methods are described, and the advantages and disadvantages of each method are explained. The paper concludes with suggestions for future research activities.

    Oversizing Analysis in Plant-wide Control Design for Industrial Processes

    In this work, an alternative plant-wide control design approach based on oversizing analysis is presented. The overall strategy can be divided into two main sequential tasks: (1) defining the optimal decentralized control structure, and (2) setting the controller interaction degree and its implementation. Both problems represent combinatorial optimizations based on multi-objective functional costs and were solved efficiently by genetic algorithms. The first task defines, simultaneously and in a sum-of-squared-deviations context, the optimal selection of controlled and manipulated variables, the input-output pairing, and the overall controller dimension. The second task analyzes the potential improvements from defining the controller interaction degree via the net-load evaluation approach. In addition, some insights are given on the feasibility (implementation load) of these control structures in a decentralized or centralized framework. The well-known Tennessee Eastman (TE) process is selected here for the sake of comparison with other multivariable control designs.
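
The first task, selecting an input-output pairing that trades strong diagonal gains against weak off-diagonal interaction, can be illustrated on a toy 3x3 gain matrix. The paper solves this combinatorial problem with genetic algorithms; the brute-force enumeration and the sum-of-squares cost below are simplified stand-ins.

```python
import itertools
import numpy as np

# Toy steady-state gain matrix: outputs (rows) vs. candidate inputs (columns).
# Hypothetical numbers, just to exercise the search.
K = np.array([[2.0, 0.3, 0.1],
              [0.2, 1.5, 0.4],
              [0.1, 0.2, 1.8]])

def pairing_cost(K, pairing):
    """Penalize weak diagonal gains and strong off-diagonal interaction
    (a sum-of-squares stand-in for the paper's deviation-based cost)."""
    P = K[:, list(pairing)]       # reorder columns by the candidate pairing
    diag = np.diag(P)
    off = P - np.diag(diag)
    return float((off ** 2).sum() - (diag ** 2).sum())

# The paper handles this selection with a genetic algorithm; for a 3x3
# example we can simply enumerate all possible pairings.
best = min(itertools.permutations(range(3)), key=lambda p: pairing_cost(K, p))
print(best)
```

For realistic plant-wide problems the pairing space grows factorially and also includes which variables to control at all, which is why a genetic algorithm replaces enumeration.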

    Integration of design and control for large-scale applications: a back-off approach

    Design and control are two distinct aspects of a process that are inherently related, though they are often treated independently. A sequential design-and-control strategy may lead to poor control performance or to overly conservative, and thus expensive, designs. Unsatisfactory designs stem from neglecting how choices made at the process design stage affect the process dynamics. Integration of design and control introduces the opportunity to establish a transparent link between steady-state economics and dynamic performance at the early stages of process design, enabling the identification of reliable and optimal designs while ensuring feasible operation of the process under internal and external disruptions. The dynamic nature of the current global market drives industries to push their manufacturing strategies to the limits to achieve sustainable and optimal operation. Hence, the integration of design and control plays a crucial role in constructing a sustainable process, since it increases the short- and long-term profits of industrial processes. Simultaneous process design and control often results in challenging, computationally intensive, and complex problems, which can be formulated conceptually as dynamic optimization problems. The size and complexity of the conceptual integrated problem limit the solution strategies that can be implemented on large-scale industrial systems. Thus far, implementing integrated design and control methodologies on large-scale applications is still challenging and remains an open question. The back-off approach is one of the proposed methodologies; it relies on steady-state economics to initiate the search for an optimal and dynamically feasible process design.
The idea of the surrogate model is combined with the back-off approach in the current research as the key technique to propose a practical and systematic method for the integration of design and control for large-scale applications. The back-off method, featuring power series expansions (PSEs), is developed and extended to achieve multiple goals. The proposed back-off method searches for the optimal design and control parameters by solving a set of optimization problems using PSE functions. The idea is to search for the optimal direction in the optimization variables by solving a series of bounded PSE-based optimization problems. The approach is a sequential approximate optimization method in which the system is evaluated around the worst-case variability expected in the process outputs. Hence, using PSE functions instead of the actual nonlinear dynamic process model at each iteration step reduces the computational effort. The method traces the closest feasible and near-optimal solution to the initial steady-state condition considering the worst-case scenario. The term near-optimal refers to the potential deviations from the original local optimum due to the approximation techniques considered in this work. A trust-region method has been developed in this research to tackle simultaneous design and control of large-scale processes under uncertainty. In the initial version of the back-off approach proposed in this research, the search space region for the PSE-based optimization problem was specified a priori. Selecting a constant search space for the PSE functions may undermine the convergence of the methodology, since the predictions of the PSEs depend strongly on the nominal conditions used to develop the corresponding PSE functions. Thus, an adaptive search space for the individual PSE-based optimization problems at every iteration step is proposed.
The concept is designed to certify the adequacy of the PSE functions at each iteration and to adapt the search space of the optimization as the algorithm proceeds. Metrics for estimating the residuals, such as the mean squared error (MSE), are employed to quantify the accuracy of the PSE approximations. The search space regions identified by this method specify the bounds on the decision variables for the PSE-based optimization problems. Finding a proper search region is a challenging task, since the nonlinearity of the system may vary significantly between nominal conditions. The procedure moves in a descent direction, and it can be shown that the convergence point satisfies the first-order KKT conditions. The proposed methodology has been tested on several case studies with different features. Initially, an existing wastewater treatment plant is considered as a medium-scale case study in the early stages of the development of the methodology. The wastewater treatment plant is also used to investigate the potential benefits and capabilities of a stochastic version of the back-off methodology. Furthermore, for the medium-scale case study, the results of the proposed methodology are compared to the formal integration approach in a dynamic programming framework. The Tennessee Eastman (TE) process is selected as a large-scale case study to explore the potential of the proposed method. The results of the proposed trust-region methodology are compared to previously reported results in the literature for this plant. The results indicate that the proposed methodology leads to more economically attractive and reliable designs while maintaining the dynamic operability of the system in the presence of disturbances and uncertainty.
Therefore, the proposed methodology shows a significant accomplishment in locating dynamically feasible and near-optimal design and operating conditions, making it attractive for the simultaneous design and control of large-scale and highly nonlinear plants under uncertainty.
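
The flavor of the trust-region iteration described above (optimize a cheap first-order surrogate inside a bounded search space, then expand or shrink that space according to how well the surrogate predicted the true model) can be sketched on a toy unconstrained problem. The cost function, step rule, and acceptance thresholds below are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def f(x):
    # Toy "plant economics": a smooth nonlinear cost standing in for the
    # expensive dynamic model that the PSE surrogates replace.
    return (x[0] - 1.0) ** 2 + 0.5 * np.sin(3.0 * x[0]) + (x[1] + 2.0) ** 2

def grad(f, x, h=1e-6):
    """Finite-difference gradient: a first-order surrogate of the model."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def trust_region_minimize(f, x0, delta=1.0, iters=50):
    x, d = np.asarray(x0, float), delta
    for _ in range(iters):
        g = grad(f, x)
        if np.linalg.norm(g) < 1e-8:
            break
        # Minimizer of the linear surrogate on the trust-region boundary:
        step = -d * g / np.linalg.norm(g)
        pred = -g @ step                  # decrease predicted by surrogate
        actual = f(x) - f(x + step)       # decrease in the true model
        rho = actual / pred
        if rho > 0.25:                    # surrogate was adequate: accept
            x = x + step
            if rho > 0.75:
                d = min(2 * d, 2.0)       # expand the search space
        else:
            d = 0.5 * d                   # shrink: surrogate not trusted here
    return x

x_opt = trust_region_minimize(f, [3.0, 3.0])
print(np.round(x_opt, 2))
```

The thesis replaces the finite-difference surrogate with PSE functions of the dynamic model and adds the back-off constraints; the adapt-the-search-space logic is the part this sketch mirrors.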

    Efficient Ranking-Based Methodologies in the Optimal Design of Large-Scale Chemical Processes under Uncertainty

    Chemical process design remains an active area of research, since it largely determines the optimal and safe operation of a new process under various conditions. The design process involves a series of steps that aim to identify the most economically attractive design, typically using steady-state optimization. However, optimal steady-state designs may fail to comply with the process constraints when the system under analysis is subject to uncertainties in the inputs (e.g. the composition of a reactant in a feed stream) or in the system’s parameters (e.g. the activation energy of a chemical reaction). This has motivated the development of systematic methods that explicitly account for uncertainty in optimal process design. In this work, a new efficient approach for optimal design under uncertainty is presented. The key idea is to approximate the process constraint functions and outputs using Power Series Expansion (PSE)-based functions. A ranking-based approach is adopted, in which the user can assign priorities or probabilities of satisfaction to the different process constraints and process outputs considered in the analysis. The methodology was tested on a reactor-heat exchanger system; the Tennessee Eastman plant, an industrial benchmark process; and a post-combustion CO2 capture plant, a large-scale chemical plant that has recently gained attention due to its potential to mitigate CO2 emissions from fossil-fired power plants. The results show that the present method is computationally attractive, since the optimal process design is accomplished in shorter computational times than with the stochastic programming approach, the standard method used to address this type of problem. Furthermore, it has been shown that process dynamics play an important role in the search for the optimal process design of a system under uncertainty.
Therefore, a stochastic-based simultaneous design and control methodology for the optimal design of chemical processes under uncertainty, incorporating an advanced model-based scheme such as Model Predictive Control (MPC), is also presented in this work. The key idea is to determine the time-dependent variability of the system that will be accounted for in the process design using a stochastic-based worst-case variability index. A case study of an actual industrial wastewater treatment plant has been used to test the proposed methodology. The MPC-based simultaneous design and control approach provided more economical designs than a decentralized multi-loop PI control strategy, showing that this method is a practical approach to the integration of design and control with advanced model-based control strategies.
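
A stochastic worst-case variability index of the kind described above can be illustrated with a Monte Carlo sketch: propagate an uncertain parameter through a toy output model, take a high quantile of the output's deviation from nominal, and back the operating point off its constraint by that amount. The output model, numbers, and quantile are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def output(design, theta):
    """Toy steady-state output: a stand-in for the closed-loop process
    model; `theta` is an uncertain parameter (e.g. feed composition)."""
    return design * (1.0 + 0.1 * theta)

def worst_case_variability(design, n=20_000, q=0.99):
    """Stochastic worst-case variability index: a high quantile of the
    absolute deviation of the output from its nominal value."""
    theta = rng.normal(0.0, 1.0, n)          # sampled uncertainty
    dev = np.abs(output(design, theta) - output(design, 0.0))
    return float(np.quantile(dev, q))

limit = 10.0                                 # hard constraint on the output
design = 9.0
backoff = worst_case_variability(design)
# Back the nominal operating point off the constraint by the index:
feasible = output(design, 0.0) + backoff <= limit
print(round(backoff, 2), feasible)
```

If the nominal output plus the index violates the constraint, the design must be backed off (or the controller improved to shrink the variability), which is the trade-off the simultaneous design-and-control formulation optimizes.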

    Tennessee Engineer Fall 2013
