ARRTOC: Adversarially Robust Real-Time Optimization and Control
Real-Time Optimization (RTO) plays a crucial role in the process operation
hierarchy by determining optimal set-points for the lower-level controllers.
However, these optimal set-points can become inoperable due to implementation
errors, such as disturbances and noise, at the control layers. To address this
challenge, in this paper, we present the Adversarially Robust Real-Time
Optimization and Control (ARRTOC) algorithm. ARRTOC draws inspiration from
adversarial machine learning, offering an online constrained Adversarially
Robust Optimization (ARO) solution applied to the RTO layer. This approach
identifies set-points that are both optimal and inherently robust to control
layer perturbations. By integrating controller design with RTO, ARRTOC enhances
overall system performance and robustness. Importantly, ARRTOC maintains
versatility through a loose coupling between the RTO and control layers,
ensuring compatibility with various controller architectures and RTO
algorithms. To validate our claims, we present three case studies: an
illustrative example, a bioreactor case study, and a multi-loop evaporator
process. Our results demonstrate the effectiveness of ARRTOC in achieving the
delicate balance between optimality and operability in RTO and control.
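The max-min structure behind adversarially robust set-point selection can be illustrated with a toy problem: the nominal optimum sits on a sharp peak that collapses under control-layer perturbations, while the robust optimum sits on a flatter one. A minimal sketch, assuming a hypothetical one-dimensional objective and perturbation ball (not the paper's case studies):

```python
# Hedged sketch of adversarially robust set-point selection in the spirit of
# ARRTOC. The objective f and the perturbation radius are illustrative
# assumptions, not taken from the paper.

def f(x):
    # Toy process objective: a sharp peak at x=2 and a flatter peak at x=5.
    return max(3.0 - 4.0 * abs(x - 2.0), 2.5 - 0.5 * abs(x - 5.0), 0.0)

def worst_case(x, radius, n=41):
    # Inner adversarial problem: worst objective over the perturbation ball.
    return min(f(x + radius * (2 * i / (n - 1) - 1)) for i in range(n))

def robust_setpoint(radius, lo=0.0, hi=7.0, n=701):
    # Outer problem by grid search: nominal vs. adversarially robust optimum.
    grid = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    nominal = max(grid, key=f)
    robust = max(grid, key=lambda x: worst_case(x, radius))
    return nominal, robust

nominal, robust = robust_setpoint(radius=0.5)
```

With a perturbation radius of 0.5, the nominal set-point lands on the sharp peak, while the robust set-point moves to the flatter peak whose worst-case objective is higher.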
Constrained Model-Free Reinforcement Learning for Process Optimization
Reinforcement learning (RL) is a control approach that can handle nonlinear
stochastic optimal control problems. However, despite the promise exhibited, RL
has yet to see marked translation to industrial practice primarily due to its
inability to satisfy state constraints. In this work we aim to address this
challenge. We propose an 'oracle'-assisted constrained Q-learning algorithm
that guarantees the satisfaction of joint chance constraints with a high
probability, which is crucial for safety critical tasks. To achieve this,
constraint tightenings (backoffs) are introduced and adjusted using Broyden's
method, making them self-tuning. This results in a general methodology
that can be imbued into approximate dynamic programming-based algorithms to
ensure constraint satisfaction with high probability. Finally, we present case
studies that analyze the performance of the proposed approach and compare this
algorithm with model predictive control (MPC). The favorable performance of
this algorithm signifies a step toward the incorporation of RL into real world
optimization and control of engineering systems, where constraints are
essential to ensuring safety.
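The self-tuned backoff idea can be sketched as a one-dimensional root-finding problem: choose the tightening b so that the closed-loop violation probability hits the target level. In this minimal sketch an analytic Gaussian tail stands in for the paper's simulation-based violation estimates, and a scalar secant update plays the role of Broyden's method:

```python
import math

# Hedged sketch: self-tuning a constraint backoff so the chance constraint is
# met with the desired probability. The analytic Gaussian tail below is a
# stand-in for the empirical closed-loop estimate used in the paper.

def violation_prob(backoff, noise=0.3):
    # Stand-in for the simulated estimate: P(z > backoff), z ~ N(0, noise^2).
    return 0.5 * math.erfc(backoff / (noise * math.sqrt(2.0)))

def tune_backoff(alpha=0.05, b0=0.0, b1=0.5, tol=1e-3, iters=30):
    # Secant iteration (Broyden's method in one dimension) on
    # g(b) = violation_prob(b) - alpha.
    g0, g1 = violation_prob(b0) - alpha, violation_prob(b1) - alpha
    for _ in range(iters):
        if abs(g1) < tol or abs(g1 - g0) < 1e-12:
            break
        b0, b1, g0 = b1, b1 - g1 * (b1 - b0) / (g1 - g0), g1
        g1 = violation_prob(b1) - alpha
    return b1

backoff = tune_backoff()  # tightening that meets the 5% violation target
```

For Gaussian noise with sigma = 0.3, the tuned backoff approaches the analytic 5% quantile, 0.3 * 1.645 (about 0.493).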
Multi-Fidelity Data-Driven Design and Analysis of Reactor and Tube Simulations
The development of new manufacturing techniques such as 3D printing has
enabled the creation of previously infeasible chemical reactor designs.
Systematically optimizing the highly parameterized geometries involved in these
new classes of reactor is vital to ensure enhanced mixing characteristics and
feasible manufacturability. Here we present a framework to rapidly solve this
nonlinear, computationally expensive, and derivative-free problem, enabling
fast prototyping of novel reactor parameterizations. We take advantage of
Gaussian processes to adaptively learn a multi-fidelity model of reactor
simulations across a number of different continuous mesh fidelities. The search
space of reactor geometries is explored through an amalgam of different,
potentially lower, fidelity simulations which are chosen for evaluation based
on a weighted acquisition function, trading off information gain against the
cost of simulation. Within our framework we derive a novel criterion for
monitoring the
progress and dictating the termination of multi-fidelity Bayesian optimization,
ensuring a high-fidelity solution is returned before the experimental budget
is exhausted. The reactors we investigate are helical-tube reactors under
pulsed-flow conditions, which have demonstrated outstanding mixing
characteristics, have the potential to be highly parameterized, and are easily
manufactured using 3D printing. To validate our results, we 3D print and
experimentally validate the optimal reactor geometry, confirming its mixing
performance. In doing so we demonstrate our design framework to be extensible
to a broad variety of expensive simulation-based optimization problems,
supporting the design of the next generation of highly parameterized chemical
reactors.
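The multi-fidelity surrogate can be sketched as a Gaussian process over joint (design, fidelity) inputs, so cheap coarse-mesh simulations inform predictions at the highest fidelity. A minimal sketch with an illustrative toy response in place of the CFD simulations; the squared-exponential kernel and lengthscales are assumptions, not the paper's model:

```python
import numpy as np

# Hedged sketch: a Gaussian process over (design, fidelity) pairs. A toy
# function stands in for the reactor simulations; fidelity s = 1 is "truth"
# and lower fidelities carry an illustrative bias.

def simulate(x, s):
    return np.sin(3 * x) + (1 - s) * 0.4 * np.cos(10 * x)

def kernel(A, B, ls=(0.3, 0.5)):
    # Squared-exponential kernel over the joint (x, s) space.
    d = (A[:, None, :] - B[None, :, :]) / np.array(ls)
    return np.exp(-0.5 * (d ** 2).sum(-1))

def gp_predict(X, y, Xs, noise=1e-4):
    # Standard GP posterior mean and variance with a small noise jitter.
    K = kernel(X, X) + noise * np.eye(len(X))
    Ks = kernel(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 0.0)

# Mostly cheap low-fidelity data, plus a few expensive high-fidelity points.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 2, 30),
                     np.r_[np.full(25, 0.3), np.full(5, 1.0)]])
y = simulate(X[:, 0], X[:, 1])

# Predict at the highest fidelity (s = 1) across the design space.
xq = np.linspace(0, 2, 50)
mu, var = gp_predict(X, y, np.column_stack([xq, np.ones(50)]))
```

Because the kernel correlates fidelities, the 25 cheap evaluations shape the high-fidelity prediction even though only 5 expensive points were run.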
Machine Learning-Assisted Discovery of Novel Reactor Designs via CFD-Coupled Multi-fidelity Bayesian Optimisation
Additive manufacturing has enabled the production of more advanced reactor
geometries, resulting in the potential for significantly larger and more
complex design spaces. Identifying and optimising promising configurations
within broader design spaces presents a significant challenge for existing
human-centric design approaches. As a result, existing parameterisations of
coiled-tube reactor geometries are low-dimensional, with expensive optimisation
limiting more complex solutions. Given algorithmic improvements and the advent
of additive manufacturing, we propose two novel coiled-tube parameterisations
enabling the variation of cross-section and coil path, resulting in a series of
high-dimensional, complex optimisation problems. To ensure tractable, non-local
optimisation where gradients are not available, we apply multi-fidelity
Bayesian optimisation. Our approach characterises multiple continuous
fidelities and is coupled with parameterised meshing and simulation, enabling
lower quality, but faster simulations to be exploited throughout optimisation.
Through maximising the plug-flow performance, we identify key characteristics
of optimal reactor designs, and extrapolate these to produce two novel
geometries that we 3D print and experimentally validate. By demonstrating the
design, optimisation, and manufacture of highly parameterised reactors, we seek
to establish a framework for the next-generation of reactors, demonstrating
that intelligent design coupled with new manufacturing processes can
significantly improve the performance and sustainability of future chemical
processes.
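The trade-off that lets lower-quality, faster simulations be exploited can be sketched as a cost-weighted acquisition rule: each candidate (design, fidelity) pair is scored by its acquisition value divided by its expected simulation cost. The cost model and acquisition values below are illustrative placeholders, not the paper's:

```python
import math

# Hedged sketch: cost-aware fidelity selection. Cheap coarse-mesh runs are
# preferred unless a fine-mesh run is much more informative. The exponential
# cost model and the acquisition values are illustrative assumptions.

def cost(fidelity):
    # Mesh cost grows rapidly with fidelity (e.g. cell count in CFD).
    return math.exp(4.0 * fidelity)

def select(candidates):
    # candidates: (design_id, fidelity, acquisition_value) triples.
    # Score = information-gain proxy per unit simulation cost.
    return max(candidates, key=lambda c: c[2] / cost(c[1]))

cands = [("A", 0.2, 0.30), ("A", 1.0, 0.90), ("B", 0.5, 0.55)]
best = select(cands)
```

Here the cheap low-fidelity evaluation of design A wins even though the high-fidelity run has three times its raw acquisition value, because it costs roughly 25 times more.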
An Analysis of Multi-Agent Reinforcement Learning for Decentralized Inventory Control Systems
The inventory management problem is a well-known planning problem in operations
research, concerned with finding the optimal re-order policy for nodes in a
supply chain. Most existing solutions assume a centralization of information
that is incompatible with the organisational constraints of real-world supply
chains made up of independent entities. The problem can, however, be naturally decomposed into
sub-problems, each associated with an independent entity, turning it into a
multi-agent system. Therefore, a decentralized data-driven solution to
inventory management problems using multi-agent reinforcement learning is
proposed where each entity is controlled by an agent. Three multi-agent
variations of the proximal policy optimization algorithm are investigated
through simulations of different supply chain networks and levels of
uncertainty. The centralized-training, decentralized-execution framework is
deployed, which relies on offline centralization during simulation-based policy
identification, but enables decentralization when the policies are deployed
online to the real system. Results show that using multi-agent proximal policy
optimization with a centralized critic leads to performance very close to that
of a centralized data-driven solution and outperforms a distributed model-based
solution in most cases, while respecting the information constraints of the
system.
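The centralized-training, decentralized-execution split can be sketched as follows: the critic used offline sees the joint state of the whole chain, while each deployed policy sees only its node's local observation. The policies below are illustrative base-stock-style stubs, not trained proximal policy optimization networks:

```python
# Hedged sketch of centralized training, decentralized execution (CTDE) for a
# serial supply chain. Observations, the base-stock rule, and the state values
# are illustrative assumptions.

def local_obs(state, i):
    # Each node observes only its own (inventory, backlog, incoming orders).
    return state[i]

def actor(obs, base_stock=10):
    # Decentralized policy stub: order up to a target inventory position.
    inventory, backlog, incoming = obs
    return max(base_stock - (inventory - backlog + incoming), 0)

def centralized_critic_input(state):
    # Training only: the critic conditions on every node's observation.
    return [x for obs in state for x in obs]

state = [(6, 1, 2), (12, 0, 0), (3, 4, 1)]  # retailer, wholesaler, factory
orders = [actor(local_obs(state, i)) for i in range(len(state))]
joint = centralized_critic_input(state)
```

At deployment only `actor` and `local_obs` are needed, so each entity can act on its own data; the joint input exists only during simulation-based training.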
Deep Learning-Based Surrogate Modeling and Optimization for Microalgal Biofuel Production and Photobioreactor Design
Identifying optimal photobioreactor configurations and process operating conditions is
critical to industrialize microalgae-derived biorenewables. Traditionally, this was addressed
by testing numerous design scenarios from integrated physical models coupling
computational fluid dynamics and kinetic modelling. However, this approach
becomes computationally intractable and numerically unstable when simulating
large-scale systems, causing time-intensive computing and making mathematical
optimization infeasible.
Therefore, we propose an innovative data-driven surrogate modelling framework which
considerably reduces computing time from months to days by exploiting state-of-the-art deep
learning technology. The framework builds upon a few simulated results from the physical
model to learn the sophisticated hydrodynamic and biochemical kinetic mechanisms; it then
adopts a hybrid stochastic optimization algorithm to explore untested processes and find
optimal solutions. Through verification, this framework was demonstrated to have
comparable accuracy to the physical model. Moreover, multi-objective optimization was
incorporated to generate a Pareto frontier for decision-making, advancing its
applications in complex biosystems modelling and optimization.
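The multi-objective step can be sketched as non-dominated filtering of surrogate predictions: keep every design for which no other candidate is at least as good in both objectives. The candidate designs and their predicted (productivity, energy) values are illustrative, not outputs of the trained surrogate:

```python
# Hedged sketch: extracting a Pareto frontier over two competing objectives,
# e.g. maximize biomass productivity while minimizing energy input. The
# predicted values below are illustrative placeholders.

def pareto_front(points):
    # Keep points not dominated by any other candidate
    # (maximize objective 1, minimize objective 2).
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# (productivity, energy) pairs predicted by a surrogate for candidate designs
preds = [(1.2, 5.0), (1.5, 6.0), (1.1, 4.0), (1.5, 7.0), (0.9, 3.5)]
front = pareto_front(preds)
```

The dominated design (1.5, 7.0) is discarded because (1.5, 6.0) achieves the same productivity at lower energy; the remaining points form the frontier presented to the decision-maker.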