
    Sim-to-real transfer in reinforcement learning-based, non-steady-state control for chemical plants

    We present a novel framework for controlling non-steady-state situations in chemical plants that addresses the behavioural gaps between the simulator used to construct the reinforcement-learning-based controller and the real plant on which the framework is deployed. In the field of reinforcement learning, the performance deterioration caused by such gaps is referred to as the simulation-to-reality (Sim-to-Real) gap. These gaps arise from multiple factors, including modelling errors in the simulators, incorrect state identification, and unpredicted disturbances in the real plant. We focus on these issues and divide the objective of performing optimal control under gapped situations into three tasks: (1) identifying the model parameters and the current state, (2) optimizing the operation procedures, and (3) bringing the real situation close to the simulated and predicted situations by adjusting the control inputs. Each task is assigned to a reinforcement learning agent, and each agent is trained individually. After training, the agents are integrated and collaborate on the original objective. We present an evaluation of our method in an actual chemical distillation plant, which demonstrates that our system successfully narrows the gaps caused by an emulated weather disturbance (heavy rain) as well as by modelling errors, and achieves the desired states.
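
    The abstract's three-task decomposition can be pictured as three cooperating agents in a single control loop. The sketch below is a minimal illustration of that division of labour, not the paper's implementation: all names (SimplePlant, IdentifierAgent, PlannerAgent, TrackerAgent) and the toy scalar dynamics are hypothetical, and each "agent" would in practice be a separately trained reinforcement learning policy.

```python
# Hypothetical sketch of the three-agent decomposition described in the abstract.
# The plant model, agent logic, and all names are placeholders for illustration.
import random


class SimplePlant:
    """Stand-in for the real plant: a scalar state drifting under disturbance."""
    def __init__(self, state=0.0):
        self.state = state

    def step(self, control_input):
        disturbance = random.gauss(0.0, 0.05)  # e.g. a weather-driven upset
        self.state += control_input + disturbance
        return self.state


class IdentifierAgent:
    """Task (1): identify model parameters and the current state."""
    def estimate(self, observation):
        gain = 1.0  # placeholder parameter estimate
        return gain, observation


class PlannerAgent:
    """Task (2): optimise the operation procedure toward a desired state."""
    def plan(self, gain, state, target=1.0, steps=5):
        # Evenly spaced setpoints from the current state to the target.
        return [state + (target - state) * (i + 1) / steps for i in range(steps)]


class TrackerAgent:
    """Task (3): adjust control inputs so the plant tracks the planned setpoints."""
    def act(self, state, setpoint, gain):
        return (setpoint - state) / gain  # proportional correction


def control_loop(horizon=20):
    plant = SimplePlant()
    identifier, planner, tracker = IdentifierAgent(), PlannerAgent(), TrackerAgent()
    obs = plant.state
    for _ in range(horizon):
        gain, state = identifier.estimate(obs)      # close the model/state gap
        setpoints = planner.plan(gain, state)       # optimise the procedure
        u = tracker.act(state, setpoints[0], gain)  # keep reality near the plan
        obs = plant.step(u)
    return obs


if __name__ == "__main__":
    print(f"final state: {control_loop():.3f}")
```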