    Implementation of warm-start strategies in interior-point methods for linear programming in fixed dimension

    We implement several warm-start strategies in interior-point methods for linear programming (LP). We study the situation in which the original LP instance and the perturbed one have exactly the same dimensions. We consider different types of perturbations of the data components of the original instance and different sizes of each type of perturbation. Our implementation modifies the state-of-the-art interior-point solver PCx. We evaluate the effectiveness of each warm-start strategy on the NETLIB test suite, measured by the number of iterations and the computation time relative to a "cold start". Our experiments reveal that each of the warm-start strategies reduces the number of interior-point iterations compared with cold start, especially for smaller perturbations and for perturbations of fewer data components. On the other hand, only one of the warm-start strategies outperforms cold start in terms of computation time. Based on the insight gained from the computational results, we discuss several potential improvements to these warm-start strategies.
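    The paper's concrete strategies live inside the modified PCx solver, which the abstract does not reproduce. As a rough, solver-agnostic illustration of the underlying idea (a sketch of our own; the function name and the centrality target mu_target are assumptions, not the paper's notation), one can shift the previous optimal primal-dual pair away from the boundary before handing it to the interior-point method for the perturbed instance:

```python
import numpy as np

def shifted_warm_start(x_opt, s_opt, mu_target=1e-2):
    """Illustrative warm start for a perturbed LP (not the paper's exact
    strategies): shift the original optimal pair (x*, s*) away from the
    boundary so the interior-point solve for the perturbed instance starts
    from a strictly positive, roughly centered point instead of a cold start."""
    root = np.sqrt(mu_target)
    # Raising every component to at least sqrt(mu_target) guarantees each
    # complementarity product x0_i * s0_i >= mu_target.
    x0 = np.maximum(np.asarray(x_opt, float), root)
    s0 = np.maximum(np.asarray(s_opt, float), root)
    return x0, s0
```

    Taking the componentwise maximum with sqrt(mu_target) keeps the iterate strictly interior even when components of the original solution sat exactly on the boundary, which is precisely what makes a re-used optimal point unusable as-is.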

    Fast Non-Parametric Learning to Accelerate Mixed-Integer Programming for Online Hybrid Model Predictive Control

    Today's fast linear algebra and numerical optimization tools have pushed the frontier of model predictive control (MPC) forward, to the efficient control of highly nonlinear and hybrid systems. The field of hybrid MPC has demonstrated that the exact optimal control law can be computed, e.g., by mixed-integer programming (MIP) under piecewise-affine (PWA) system models. Despite the elegant theory, solving hybrid MPC online is still out of reach for many applications. We aim to speed up MIP by combining geometric insights from hybrid MPC, a simple yet effective learning algorithm, and MIP warm-start techniques. Following a line of work in approximate explicit MPC, the proposed learning-control algorithm, LNMS, gains a computational advantage over MIP at little cost and is straightforward for practitioners to implement.
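    The abstract does not spell out LNMS, so the following is only a generic sketch of the combination it points at: a non-parametric (here, 1-nearest-neighbour) map from problem parameters to cached integer assignments, used to warm-start the MIP. The class and method names are our own invention, and the solver hand-off is left abstract since start-value APIs differ between solvers.

```python
import numpy as np

class NearestNeighborWarmStart:
    """Generic stand-in for a learned MIP warm start in hybrid MPC
    (not the paper's LNMS algorithm): cache the binary assignment found
    offline for each sampled state, then reuse the cached assignment of
    the closest stored state as the solver's MIP start."""

    def __init__(self):
        self.params = []    # sampled problem parameters (e.g., initial states)
        self.binaries = []  # optimal binary assignments from offline solves

    def add(self, theta, delta):
        self.params.append(np.asarray(theta, float))
        self.binaries.append(np.asarray(delta))

    def query(self, theta):
        # 1-nearest-neighbour lookup in parameter space.
        P = np.stack(self.params)
        i = int(np.argmin(np.linalg.norm(P - np.asarray(theta, float), axis=1)))
        return self.binaries[i]  # pass to the MIP solver as start values
```

    Offline, each sampled state and the optimal binary assignment from a full MIP solve are stored with add; online, query returns the cached assignment of the nearest stored state, which the solver can take as start values for the binary variables before solving for the current state.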

    Learning to Warm-Start Fixed-Point Optimization Algorithms

    We introduce a machine-learning framework to warm-start fixed-point optimization algorithms. Our architecture consists of a neural network mapping problem parameters to warm starts, followed by a predefined number of fixed-point iterations. We propose two loss functions designed to either minimize the fixed-point residual or the distance to a ground-truth solution. In this way, the neural network predicts warm starts with the end-to-end goal of minimizing the downstream loss. An important feature of our architecture is its flexibility: it can predict a warm start for fixed-point algorithms run for any number of steps, without being limited to the number of steps it was trained on. We provide PAC-Bayes generalization bounds on unseen data for common classes of fixed-point operators: contractive, linearly convergent, and averaged. Applying this framework to well-known applications in control, statistics, and signal processing, we observe that learned warm starts significantly reduce the number of iterations and the solution time required to solve these problems.
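    In symbols of our own choosing (the abstract fixes the architecture but not the notation): with a network h_w mapping problem parameters θ to a warm start, and k predefined steps of the fixed-point operator T_θ, the two proposed losses read, schematically,

```latex
z_k = T_\theta^{\,k}\!\big(h_w(\theta)\big), \qquad
\ell^{\mathrm{res}}(w) = \big\| T_\theta(z_k) - z_k \big\|_2, \qquad
\ell^{\mathrm{reg}}(w) = \big\| z_k - z^\star(\theta) \big\|_2,
```

    where z*(θ) denotes a ground-truth solution; minimizing either loss trains the warm-start predictor end-to-end through the k fixed-point iterations.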

    Advances in Interior Point Methods for Large-Scale Linear Programming

    This research studies two computational techniques that improve the practical performance of existing implementations of interior point methods for linear programming. Both are based on the concept of the symmetric neighbourhood as the driving tool for the analysis of the good performance of some practical algorithms. The symmetric neighbourhood adds explicit upper bounds on the complementarity pairs, besides the lower bound already present in the common N_{-∞} neighbourhood. This allows the algorithm to keep the spread among complementarity pairs under control and to reduce it with the barrier parameter μ. We show that a long-step feasible algorithm based on this neighbourhood is globally convergent and converges in O(nL) iterations. The use of the symmetric neighbourhood and the recent theoretical understanding of the behaviour of Mehrotra's corrector direction motivate the introduction of a weighting mechanism that can be applied to any corrector direction, whether originating from Mehrotra's predictor–corrector algorithm or as part of the multiple centrality correctors technique. This modification of the way a correction is applied aims to ensure that any computed search direction contributes positively to a successful iteration by increasing the overall stepsize, thus avoiding the case in which a corrector is rejected. The usefulness of the weighting strategy is documented through complete numerical experiments on various sets of publicly available test problems. The implementation within the hopdm interior point code shows remarkable time savings for large-scale linear programming problems.

    The second technique develops an efficient way of constructing a starting point for structured large-scale stochastic linear programs. We generate a computationally viable warm-start point by solving, to low accuracy, a stochastic problem of much smaller dimension. The reduced problem is the deterministic equivalent program corresponding to an event tree composed of a restricted number of scenarios. The solution to the reduced problem is then expanded to the size of the problem instance and used to initialise the interior point algorithm. We present theoretical conditions that the warm-start iterate has to satisfy in order to be successful. We implemented this technique in both the hopdm and the oops frameworks, and its performance is verified through a series of tests on problem instances coming from various stochastic programming sources.
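    In standard interior-point notation (our reconstruction; the thesis may use slightly different constants), the symmetric neighbourhood adds the missing upper bound to the usual wide neighbourhood over the strictly feasible set F^0:

```latex
\mathcal{N}_{-\infty}(\gamma) = \big\{ (x,y,s) \in \mathcal{F}^0 : x_i s_i \ge \gamma\mu,\ i = 1,\dots,n \big\},
\qquad
\mathcal{N}_s(\gamma) = \big\{ (x,y,s) \in \mathcal{F}^0 : \gamma\mu \le x_i s_i \le \tfrac{1}{\gamma}\,\mu,\ i = 1,\dots,n \big\},
```
```latex
\text{where } \mu = \frac{x^\top s}{n}, \quad \gamma \in (0,1).
```

    Keeping each product x_i s_i inside [γμ, μ/γ] is what bounds the spread among the complementarity pairs and lets it shrink together with μ.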