756 research outputs found

    Inverse Optimization with Noisy Data

    Inverse optimization refers to the inference of unknown parameters of an optimization problem based on knowledge of its optimal solutions. This paper considers inverse optimization in the setting where measurements of the optimal solutions of a convex optimization problem are corrupted by noise. We first provide a formulation for inverse optimization and prove it to be NP-hard. In contrast to existing methods, we show that the parameter estimates produced by our formulation are statistically consistent. Our approach involves combining a new duality-based reformulation for bilevel programs with a regularization scheme that smooths discontinuities in the formulation. Using epi-convergence theory, we show that the regularization parameter can be adjusted to approximate the original inverse optimization problem to arbitrary accuracy, which we use to prove our consistency results. Next, we propose two solution algorithms based on our duality-based formulation. The first is an enumeration algorithm that is applicable to settings where the dimensionality of the parameter space is modest, and the second is a semiparametric approach that combines nonparametric statistics with a modified version of our formulation. These numerical algorithms are shown to maintain the statistical consistency of the underlying formulation. Lastly, using both synthetic and real data, we demonstrate that our approach performs competitively when compared with existing heuristics.
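    The enumeration-style estimation described above can be illustrated on a hypothetical toy instance (the instance and all names below are ours, not the paper's): the forward problem is min_x c@x over the unit simplex, whose optimum is the vertex e_j with j = argmin(c), and we recover that index from noisy observations of the optimum.

    ```python
    import numpy as np

    # Toy forward problem (assumption, not the paper's setup): min c @ x over
    # the unit simplex; the optimal solution is the vertex e_j, j = argmin(c).
    rng = np.random.default_rng(0)
    true_c = np.array([0.9, 0.2, 0.7])               # unknown parameter
    x_star = np.eye(3)[np.argmin(true_c)]            # noiseless optimal solution
    obs = x_star + 0.1 * rng.normal(size=(200, 3))   # noisy measurements

    # Enumeration-style estimator in the spirit of the paper's first algorithm:
    # score each candidate optimum (vertex) by its total squared residual and
    # keep the best one; this estimate stabilizes as the sample size grows.
    losses = [np.sum((obs - np.eye(3)[j]) ** 2) for j in range(3)]
    j_hat = int(np.argmin(losses))
    ```

    Enumerating candidates is only viable because the parameter space here is tiny, which mirrors the abstract's caveat that the enumeration algorithm suits modest-dimensional settings.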

    A Framework for Generalized Benders' Decomposition and Its Application to Multilevel Optimization

    We describe a framework for reformulating and solving optimization problems that generalizes the well-known framework originally introduced by Benders. We discuss details of the application of the procedures to several classes of optimization problems that fall under the umbrella of multilevel/multistage mixed integer linear optimization problems. The application of this abstract framework to this broad class of problems provides new insights and a broader interpretation of the core ideas, especially as they relate to duality and the value functions of optimization problems that arise in this context.
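    The classical Benders scheme that this framework generalizes alternates between a master problem over the complicating variable and a subproblem whose dual solution yields a cut. A minimal sketch on a toy instance (entirely our construction, not the paper's): minimize 1.5*y + Q(y) over y in {0,...,5}, where Q(y) = min {x : x >= 4 - y, x >= 0} = max(0, 4 - y), and the subproblem's dual multiplier u gives the optimality cut eta >= u*(4 - y).

    ```python
    def subproblem(y):
        """Solve Q(y) = min {x : x >= 4 - y, x >= 0} in closed form."""
        q = max(0.0, 4.0 - y)             # primal value
        u = 1.0 if 4.0 - y > 0 else 0.0   # dual multiplier of x >= 4 - y
        return q, u

    cuts = []                   # each stored u encodes the cut eta >= u*(4 - y)
    best_ub = float("inf")
    for _ in range(10):
        # Master: enumerate the small y grid (stands in for a MILP master solve);
        # eta defaults to 0 since Q >= 0 for this toy instance.
        def master_obj(y):
            eta = max([u * (4.0 - y) for u in cuts], default=0.0)
            return 1.5 * y + eta
        y_star = min(range(6), key=master_obj)
        lb = master_obj(y_star)           # master value is a valid lower bound
        q, u = subproblem(y_star)
        best_ub = min(best_ub, 1.5 * y_star + q)
        if lb >= best_ub - 1e-9:
            break                         # bounds meet: y_star is optimal
        cuts.append(u)
    ```

    The loop converges here in two master solves; the framework in the paper abstracts exactly this interplay between the master's value estimate and the subproblem's dual (value-function) information.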

    Integer Bilevel Linear Programming Problems: New Results and Applications


    Bilevel optimisation with embedded neural networks: Application to scheduling and control integration

    Scheduling problems require control considerations to be explicitly accounted for in their optimisation. The literature proposes two traditional ways to solve this integrated problem: hierarchical and monolithic. The monolithic approach ignores the control level's objective and incorporates it as a constraint into the upper level at the cost of suboptimality. The hierarchical approach requires solving a mathematically complex bilevel problem with the scheduling acting as the leader and control as the follower. The linking variables between both levels belong to a small subset of scheduling and control decision variables. For this subset of variables, data-driven surrogate models have been used to learn follower responses to different leader decisions. In this work, we propose to use ReLU neural networks for the control level. Consequently, the bilevel problem is collapsed into a single-level MILP that is still able to account for the control level's objective. This single-level MILP reformulation is compared with the monolithic approach and benchmarked against embedding a nonlinear expression of the neural networks into the optimisation. Moreover, a neural network is used to predict control level feasibility. The case studies involve batch reactor and sequential batch process scheduling problems. The proposed methodology finds optimal solutions while largely outperforming both approaches in terms of computational time. Additionally, due to well-developed MILP solvers, adding ReLU neural networks in MILP form only marginally impacts the computational time. The solution's error due to prediction accuracy is correlated with the neural network training error. Overall, we show how, by using an existing big-M reformulation and carefully integrating the machine learning and optimisation pipelines, we can more efficiently solve the bilevel scheduling-control problem with high accuracy.
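    The standard big-M reformulation mentioned above encodes each ReLU activation y = max(0, a), where a is the pre-activation value, with a binary z and a bound M >= |a| via the constraints y >= a, y <= a + M*(1 - z), and 0 <= y <= M*z. A small check (our toy verification, not the paper's code) shows these constraints pin y to exactly max(0, a):

    ```python
    def relu_bigM_interval(a, z, M):
        """Feasible y-interval of the big-M ReLU constraints for fixed binary z.

        Constraints: y >= a, y <= a + M*(1 - z), 0 <= y <= M*z.
        Returns (lo, hi), or None when the interval is empty.
        """
        lo = max(a, 0.0)
        hi = min(a + M * (1 - z), M * z)
        return (lo, hi) if lo <= hi else None

    M = 10.0
    forced = {}
    for a in (-3.0, 0.5, 7.2):
        # For each pre-activation, at most one choice of z is feasible, and
        # its interval collapses to the single point max(0, a).
        forced[a] = [iv for z in (0, 1) if (iv := relu_bigM_interval(a, z, M))]
    ```

    Because the encoding is exact (for valid M), the only error the abstract reports comes from the network's prediction accuracy, not from the MILP reformulation itself.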