7 research outputs found

    Bilevel optimisation with embedded neural networks: Application to scheduling and control integration

    Scheduling problems require explicitly accounting for control considerations in their optimisation. The literature proposes two traditional ways to solve this integrated problem: monolithic and hierarchical. The monolithic approach ignores the control level's objective and incorporates the control level as a constraint in the upper level, at the cost of suboptimality. The hierarchical approach requires solving a mathematically complex bilevel problem, with scheduling acting as the leader and control as the follower. The linking variables between the two levels belong to a small subset of the scheduling and control decision variables, and for this subset, data-driven surrogate models have been used to learn follower responses to different leader decisions. In this work, we propose using ReLU neural networks to represent the control level. Consequently, the bilevel problem collapses into a single-level MILP that still accounts for the control level's objective. This single-level MILP reformulation is compared with the monolithic approach and benchmarked against embedding a nonlinear expression of the neural networks in the optimisation. Moreover, a neural network is used to predict control-level feasibility. The case studies involve batch reactor and sequential batch process scheduling problems. The proposed methodology finds optimal solutions while largely outperforming both alternatives in computational time. Additionally, thanks to well-developed MILP solvers, adding ReLU neural networks in MILP form only marginally impacts the computational time. The solution error due to prediction accuracy is correlated with the neural network training error. Overall, we show how, by using an existing big-M reformulation and carefully integrating the machine learning and optimisation pipelines, we can more efficiently solve the bilevel scheduling-control problem with high accuracy.
    Comment: 18 pages
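The big-M reformulation the abstract refers to is the standard binary encoding of a ReLU unit y = max(0, z) with pre-activation bounds L <= z <= U. A minimal sketch of that encoding, as an illustrative feasibility check rather than the paper's actual implementation (the bounds and tolerance here are made-up placeholders):

```python
# Illustrative sketch of the standard big-M encoding of one ReLU unit
# y = max(0, z), where z = w.x + b is the pre-activation with known bounds
# L < 0 < U, and sigma is a binary indicating whether the unit is active.
# Not the paper's code; values below are arbitrary for demonstration.

def relu_bigM_feasible(z, y, sigma, L, U, tol=1e-9):
    """Check the four big-M constraints for a single ReLU unit."""
    assert L <= 0 <= U, "encoding assumes L < 0 < U; otherwise the unit is linear"
    return (y >= z - tol and                    # y >= z
            y <= z - L * (1 - sigma) + tol and  # y <= z - L*(1 - sigma)
            y <= U * sigma + tol and            # y <= U*sigma
            y >= -tol)                          # y >= 0

# For any z in [L, U], the only feasible output is y = max(0, z):
L, U = -4.0, 5.0
for z in (-3.0, 0.5, 4.0):
    y_true = max(0.0, z)
    sigma = 1 if z > 0 else 0
    assert relu_bigM_feasible(z, y_true, sigma, L, U)
    # any other output value violates at least one constraint
    assert not relu_bigM_feasible(z, y_true + 1.0, sigma, L, U)
```

Applying this encoding to every unit of a trained network is what turns the follower's response into linear constraints, collapsing the bilevel problem into a single-level MILP.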

    Global Optimization of Gaussian Processes

    Gaussian processes (Kriging) are interpolating data-driven models that are frequently applied in various disciplines. Often, Gaussian processes are trained on datasets and subsequently embedded as surrogate models in optimization problems. These optimization problems are nonconvex, and global optimization is desired. However, previous work observed computational burdens that limit deterministic global optimization to Gaussian processes trained on only a few data points. We propose a reduced-space formulation for deterministic global optimization with trained Gaussian processes embedded. For optimization, the branch-and-bound solver branches only on the degrees of freedom, and McCormick relaxations are propagated through explicit Gaussian process models. The approach also leads to significantly smaller and computationally cheaper subproblems for lower and upper bounding. To further accelerate convergence, we derive envelopes of common covariance functions for GPs and tight relaxations of acquisition functions used in Bayesian optimization, including expected improvement, probability of improvement, and the lower confidence bound. In total, we reduce computational time by orders of magnitude compared to state-of-the-art methods, thus overcoming previous computational burdens. We demonstrate the performance and scaling of the proposed method and apply it to Bayesian optimization with global optimization of the acquisition function and to chance-constrained programming. The Gaussian process models, acquisition functions, and training scripts are available open-source within the "MeLOn - Machine Learning Models for Optimization" toolbox (https://git.rwth-aachen.de/avt.svt/public/MeLOn).
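The expected-improvement acquisition mentioned above has a well-known closed form; the abstract's contribution is deriving tight relaxations of it, but a plain evaluation sketch (for minimization, with generic posterior mean and standard deviation as inputs) illustrates what is being relaxed:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization: EI = s*(u*Phi(u) + phi(u)) with
    u = (f_best - mu)/s, where Phi/phi are the standard normal CDF/PDF.
    Illustrative only; the paper builds relaxations of this expression."""
    if sigma <= 0.0:
        # degenerate posterior: improvement is deterministic
        return max(f_best - mu, 0.0)
    u = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))   # standard normal CDF
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return sigma * (u * Phi + phi)

# At mu = f_best with unit variance, EI equals phi(0) ~ 0.3989
assert abs(expected_improvement(0.0, 1.0, 0.0) - 0.3989422804) < 1e-6
```

Because EI is a nonconvex function of the GP posterior, globally optimizing it over the design space is itself the nonconvex subproblem that the reduced-space branch-and-bound approach targets.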

    Multi-scale membrane process optimization with high-fidelity ion transport models through machine learning

    Innovative membrane technologies optimally integrated into large separation process plants are essential for economical water treatment and disposal. However, the mass transport through membranes is commonly described by nonlinear differential-algebraic mechanistic models at the nano-scale, while the process and its economics range up to the large scale. Thus, the optimal design of membranes in process plants requires decision making across multiple scales, which is not tractable using standard tools. In this work, we embed artificial neural networks (ANNs) as surrogate models in deterministic global optimization to bridge the gap between scales. This methodology allows for deterministic global optimization of membrane processes with accurate transport models, avoiding inaccurate approximations through heuristics or short-cut models. The ANNs are trained on data generated by a one-dimensional extended Nernst-Planck ion transport model and extended to a more accurate two-dimensional distribution of the membrane module, which captures the filtration-related decrease in salt retention. We simultaneously design the membrane and plant layout, yielding optimal membrane module synthesis properties along with the optimal plant design for multiple objectives, feed concentrations, filtration stages, and salt mixtures. The developed process models and the optimization solver are available open-source, enabling computationally resource-efficient multi-scale optimization in membrane science.
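The key property that makes the ANN embedding work is that a trained network is just an explicit algebraic expression, so it can be written directly into an optimization model. A minimal sketch with made-up placeholder weights (not a trained ion-transport surrogate):

```python
import math

# Sketch: a trained feedforward ANN surrogate evaluated as an explicit
# algebraic expression. In a deterministic global optimization framework,
# relaxations (e.g. McCormick) can be propagated through exactly these
# operations. Weights are arbitrary placeholders for illustration.
W1 = [[0.8, -0.5], [0.3, 0.9]]   # 2 inputs -> 2 hidden units
b1 = [0.1, -0.2]
W2 = [1.2, -0.7]                 # 2 hidden units -> 1 output
b2 = 0.05

def ann_surrogate(x):
    """One hidden tanh layer followed by a linear output layer."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2
```

Replacing the nano-scale differential-algebraic transport model with such a closed-form surrogate is what bridges the scales: the plant-level optimizer only ever sees smooth algebraic expressions.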

    Data-Driven Mixed-Integer Optimization for Modular Process Intensification

    High-fidelity computer simulations provide accurate information on complex physical systems. These often involve proprietary codes, if-then operators, or numerical integrators to describe phenomena that cannot be explicitly captured by physics-based algebraic equations. Consequently, the derivatives of the model are either absent or too complicated to compute; thus, the system cannot be directly optimized using derivative-based optimization solvers. Such problems are known as “black-box” systems, since the constraints and the objective of the problem cannot be obtained as closed-form equations. One promising approach to optimizing black-box systems is surrogate-based optimization, which uses simulation data to construct low-fidelity approximation models that are then optimized in place of the original system. We study several strategies for surrogate-based optimization of nonlinear and mixed-integer nonlinear black-box problems. First, we explore several types of surrogate models, ranging from simple subset selection for regression models to highly complex machine learning models. Second, we propose a novel surrogate-based optimization algorithm for black-box mixed-integer nonlinear programming problems. The algorithm systematically employs data-preprocessing techniques, surrogate model fitting, and optimization-based adaptive sampling to efficiently locate the optimal solution. Finally, a case study on modular carbon capture is presented. Simultaneous process optimization and adsorbent selection are performed to determine the optimal module design, and an economic analysis is presented to determine the feasibility of a proposed modular facility.
    Ph.D.
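The fit-then-optimize loop with adaptive sampling that the abstract describes can be sketched generically; the black box, the interpolating-quadratic surrogate, and the grid search below are illustrative stand-ins, not the dissertation's algorithm:

```python
# Generic surrogate-based optimization loop: sample the black box, fit a
# low-fidelity model, optimize the model, and resample at its minimizer.
# All components here are toy stand-ins chosen for a runnable illustration.

def black_box(x):
    return (x - 1.7) ** 2 + 0.3   # pretend this is an expensive simulation

def fit_quadratic(pts):
    """Interpolating quadratic through three (x, f) samples (Lagrange form)."""
    (x0, f0), (x1, f1), (x2, f2) = pts
    def model(x):
        return (f0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + f1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + f2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return model

data = [(x, black_box(x)) for x in (0.0, 2.0, 4.0)]   # initial design
for _ in range(5):                        # adaptive sampling iterations
    data.sort(key=lambda p: p[1])
    model = fit_quadratic(data[:3])       # surrogate on the best samples
    grid = [i * 0.02 for i in range(201)] # cheap optimization of the surrogate
    x_new = min(grid, key=model)
    if any(abs(x_new - x) < 1e-12 for x, _ in data):
        break                             # point already sampled; stop refining
    data.append((x_new, black_box(x_new)))

best = min(data, key=lambda p: p[1])      # incumbent black-box optimum
```

Real algorithms of this kind replace the grid search with a global solver over the surrogate and add preprocessing and exploration terms, but the sample/fit/optimize/resample structure is the same.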

    Design and Optimisation of Oleochemical Processes
