
    Offset free data driven control: application to a process control trainer

    This work presents a data-driven control strategy able to track a set point without steady-state error. The control sequence is computed as an affine combination of past control signals belonging to a set of trajectories stored in a process historian database. The affine combination is chosen so that the variance of the tracking error is minimised. It is shown that offset-free control, that is, zero-mean tracking error, is achieved under the assumptions that the state is measurable, the underlying dynamics are linear, and the trajectories in the database share the same error dynamics and are themselves offset free. The proposed strategy learns the underlying controller stored in the database while maintaining its offset-free tracking capability in spite of differences in the reference, disturbances and operating conditions. No training phase is required, and newly obtained process data can easily be taken into account. The proposed strategy, related to direct weight optimisation learning techniques, is tested on a process control trainer.

    Funding: MINECO-Spain and FEDER Funds, project DPI2016-76493-C3-1-R; University of Seville (Spain), grant 2014/42.
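    The core computation described above, combining stored control signals with affine weights that best reproduce the current state, can be sketched as an equality-constrained least-squares problem. The following is a minimal illustration, not the authors' actual formulation: the database layout, the regularisation term and the KKT-based solver are assumptions for the sketch.

```python
import numpy as np

def affine_combination_control(x_now, X_db, U_db):
    """Control input as an affine combination of stored inputs (sketch).

    x_now : (n,)   current (measured) state
    X_db  : (n, N) columns are past states stored in the database
    U_db  : (m, N) control inputs applied at those stored states

    Weights lam solve  min ||X_db @ lam - x_now||^2  s.t.  sum(lam) = 1,
    solved here through its KKT system (with a tiny regularisation term
    for well-posedness, an assumption of this sketch).
    """
    n, N = X_db.shape
    A, b = X_db, x_now
    H = A.T @ A + 1e-8 * np.eye(N)          # regularised normal matrix
    ones = np.ones((N, 1))
    KKT = np.block([[2 * H, ones],
                    [ones.T, np.zeros((1, 1))]])
    rhs = np.concatenate([2 * A.T @ b, [1.0]])
    sol = np.linalg.solve(KKT, rhs)
    lam = sol[:N]                            # affine weights, sum to 1
    return U_db @ lam, lam
```

    If the stored inputs come from an affine state-feedback law u = Kx + c, any weights that reproduce the current state also reproduce that law's output, which is the sense in which the controller "stored in the database" is learned.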

    Data based predictive control: Application to water distribution networks

    In this thesis, the main goal is to propose novel data-based predictive controllers to cope with complex industrial infrastructures such as water distribution networks. These systems have several inputs and outputs, complicated nonlinear dynamics and binary actuators; they are usually perturbed by disturbances and noise, and they require real-time control implementation. The proposed controllers have to deal successfully with these issues while exploiting the available information, such as past operation data of the process or system properties such as fading dynamics. To this end, the control strategies presented in this work follow a predictive control approach. The control actions computed by the proposed data-driven strategies are obtained as the solution of an optimization problem similar in essence to those used in model predictive control (MPC), based on a cost function that determines the performance to be optimized. In the proposed approach, however, the prediction model is replaced by a data-based inference strategy used either to identify a model, to learn an unknown control law, or to estimate the future cost of a given decision. As in MPC, the proposed strategies are based on a receding horizon implementation, which implies that the optimization problems considered have to be solved online. In order to obtain problems that can be solved efficiently, most of the strategies proposed in this thesis are based on direct weight optimization, for ease of implementation and for computational complexity reasons. Linear convex combination is a simple yet powerful tool in the continuous domain, and the computational load of the constrained optimization problems it generates is relatively low. This makes the proposed data-based predictive approaches suitable for real-time applications.
    The proposed approaches select the most adequate information (data similar to the current situation in terms of output, state, input, disturbances, etc.), in particular data close to the current state of the system. Using local data can be interpreted as an implicit local linearisation of the system each time the model-free data-driven optimization problem is solved. This implies that, even though the model-free data-driven approaches presented in this thesis are based on linear theory, they can successfully deal with nonlinear systems thanks to the implicit information available in the database. Finally, a learning-based approach for robust predictive control design for multi-input multi-output (MIMO) linear systems is also presented, in which the effect of estimation and measurement errors, or of unknown perturbations in large-scale complex systems, is considered.
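    The idea of selecting data close to the current state and combining it with convex weights can be illustrated with a one-step data-based predictor. This is a simplified sketch, not the thesis formulation: inverse-distance weighting stands in for the direct weight optimization actually used, and the database layout is assumed.

```python
import numpy as np

def local_prediction(x_now, X_db, Xnext_db, k=5):
    """One-step prediction from local data only (illustrative sketch).

    X_db     : (n, N) stored states
    Xnext_db : (n, N) successor of each stored state
    The k stored states nearest to x_now are combined with convex
    weights (here simple inverse-distance weights, an assumption of
    this sketch); applying the same weights to the successors yields
    the prediction. Restricting attention to local data acts as an
    implicit linearisation of a nonlinear system around x_now.
    """
    d = np.linalg.norm(X_db - x_now[:, None], axis=0)
    idx = np.argsort(d)[:k]          # indices of the k nearest samples
    w = 1.0 / (d[idx] + 1e-12)
    w /= w.sum()                     # convex weights: w >= 0, sum(w) = 1
    return Xnext_db[:, idx] @ w
```

    In a receding-horizon scheme, a predictor of this kind replaces the explicit model inside the MPC-style optimization, and the database selection is repeated at every sampling instant as new process data arrive.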

    Deep deterministic policy gradient: applications in process control and integrated process design and control

    In recent years, the urgent need to develop sustainable processes to fight the negative effects of climate change has gained global attention and has led to a transition into renewable energies. As renewable sources present complex dynamic behavior, this has motivated a search for new ways to simulate and optimize processes more efficiently. One emerging area that has recently been explored is reinforcement learning (RL), which has shown promising results for different chemical engineering applications. Although recent studies on RL applied to chemical engineering have addressed areas such as process design, scheduling, and dynamic optimization, these applications need to be explored further to determine their technical feasibility and potential implementation in the chemical and manufacturing sectors. An emerging area of opportunity is biological systems, such as anaerobic digestion (AD) systems. These systems not only reduce waste from wastewater but also produce biogas, an attractive source of renewable energy. The aim of this work is to test the feasibility of an RL algorithm referred to as Deep Deterministic Policy Gradient (DDPG) in two typical areas of process operations in chemical engineering, i.e., process control, and integrated process design and control. Parametric uncertainty and disturbances are considered in both approaches. The motivation for using this algorithm is its ability to handle stochastic features, which can be interpreted as the plant-model mismatch needed to represent realistic process operation. In the first part of this work, the DDPG algorithm is used to search for open-loop control actions that optimize an AD system treating Tequila vinasses under the effects of parametric uncertainty and disturbances.
    To provide further insight, two different AD configurations (a single-stage and a two-stage system) are considered and compared under different scenarios. The results showed that the proposed methodology was able to learn an optimal policy, i.e., the control actions that minimize the organic content of Tequila in the effluents while producing biogas. However, further improvements, e.g., reducing the computational cost, are necessary before this DDPG-based methodology can be implemented in online large-scale applications. The second part of this study focuses on a methodology for the integration of process design and control for AD systems. The objective is to optimize an economic function with the aim of finding an optimal design while taking into account the controllability of the process. Key aspects of this methodology are the consideration of stochastic disturbances and the ability to combine time-dependent and time-independent actions in the DDPG. The same two reactor configurations considered in the optimal control study were explored and compared. To account for constraints, a penalty function was included in the formulation of the economic function. The results showed that each AD system has different advantages and limitations. The two-stage system required a larger investment in capital costs in exchange for the higher amounts of biogas produced by this design. The single-stage AD system, on the other hand, required less capital investment in exchange for producing less biogas and therefore lower profits than the two-stage system. Overall, the DDPG was able to learn new control paths and optimal designs simultaneously, making it an attractive method to address the integrated design and control of chemical systems subject to stochastic disturbances and parametric uncertainty.
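    The penalty-function treatment of constraints mentioned above, folding constraint violations into the economic objective that the RL agent maximizes, can be sketched as follows. This is a generic illustration; the function name, the penalty weight and the exact form of the penalty in the thesis are not given in the abstract, so everything below is an assumption.

```python
def penalized_reward(profit, violations, rho=100.0):
    """Economic reward with a penalty term for constraint violations.

    profit     : economic objective of the current step or design
    violations : values g_i(x) of the constraints, with g_i(x) <= 0
                 meaning "satisfied"; only positive parts are penalized
    rho        : penalty weight (hypothetical value; tuned in practice)

    The agent maximizing this quantity is pushed toward designs and
    control actions that are both profitable and feasible.
    """
    penalty = sum(max(0.0, g) for g in violations)
    return profit - rho * penalty
```

    A common design choice with exact (L1-style) penalties like this one is to pick rho large enough that constraint violation is never worth the economic gain, while keeping it small enough not to swamp the reward signal the learner needs.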