A learning-based algorithm to quickly compute good primal solutions for Stochastic Integer Programs
We propose a novel approach using supervised learning to obtain near-optimal
primal solutions for two-stage stochastic integer programming (2SIP) problems
with constraints in the first and second stages. The goal of the algorithm is
to predict a "representative scenario" (RS) for the problem such that
deterministically solving the 2SIP with the random realization equal to the RS
gives a near-optimal solution to the original 2SIP. Predicting an RS, instead
of directly predicting a solution, ensures first-stage feasibility of the
solution. If the problem is known to have complete recourse, second-stage
feasibility is also guaranteed. For computational testing, we learn to find an
RS for a two-stage stochastic facility location problem with integer variables
and linear constraints in both stages and consistently provide near-optimal
solutions. Our computing times are very competitive with those of
general-purpose integer programming solvers achieving a similar solution
quality.
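
To make the pipeline concrete, here is a minimal, self-contained sketch of the representative-scenario idea on a toy stochastic facility location instance. The learned predictor from the paper is not reproduced here; as a loudly labeled stand-in, the scenario mean plays the role of the predicted RS, and the tiny deterministic problem is brute-forced rather than handed to a MILP solver.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
F, C, S = 4, 6, 50                        # facilities, customers, scenarios
f = rng.uniform(5, 15, size=F)            # facility opening costs
d = rng.uniform(1, 10, size=(F, C))       # per-unit service costs
demand = rng.uniform(0, 2, size=(S, C))   # demand scenarios (the random realization)

def second_stage(y, xi):
    """Recourse cost: serve each customer's demand from its cheapest open facility."""
    open_cost = np.where(y[:, None] == 1, d, np.inf)
    return float((open_cost.min(axis=0) * xi).sum())

def solve_deterministic(xi):
    """Brute-force the toy deterministic 2SIP for a single scenario xi."""
    best_y, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=F):
        y = np.array(bits)
        if y.sum() == 0:                  # at least one facility must open
            continue
        val = f @ y + second_stage(y, xi)
        if val < best_val:
            best_y, best_val = y, val
    return best_y

# Stand-in for the learned predictor (an assumption): use the scenario mean as RS.
rs = demand.mean(axis=0)
y_hat = solve_deterministic(rs)
expected = f @ y_hat + np.mean([second_stage(y_hat, xi) for xi in demand])
print("first-stage decision:", y_hat, f"expected cost: {expected:.2f}")
```

Because the first-stage decision comes from actually solving a deterministic 2SIP, any first-stage constraints are respected by construction, which is the feasibility argument made in the abstract.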
Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations
It is well known that any sum of squares (SOS) program can be cast as a
semidefinite program (SDP) of a particular structure and that therein lies the
computational bottleneck for SOS programs, as the SDPs generated by this
procedure are large and costly to solve when the polynomials involved in the
SOS programs have many variables and high degree. In this paper, we
review SOS optimization techniques and present two new methods for improving
their computational efficiency. The first method leverages the sparsity of the
underlying SDP to obtain computational speed-ups. Further improvements can be
obtained if the coefficients of the polynomials that describe the problem have
a particular sparsity pattern, called chordal sparsity. The second method
bypasses semidefinite programming altogether and relies instead on solving a
sequence of more tractable convex programs, namely linear and second-order
cone programs. This opens up the question of how well one can approximate the
cone of SOS polynomials by second-order representable cones. In the last part
of the paper, we present some recent negative results related to this question.
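
As a concrete instance of the SOS-to-SDP reduction described above, the following sketch certifies that p(x, y) = 2x^4 + 2x^3y - x^2y^2 + 5y^4 is a sum of squares by searching for a positive semidefinite Gram matrix Q with p = z^T Q z over the monomial basis z = [x^2, xy, y^2]. The use of cvxpy here is our assumption for illustration, not the paper's tooling; the equality constraints simply match the coefficients of p.

```python
import cvxpy as cp

# Certify p(x, y) = 2x^4 + 2x^3 y - x^2 y^2 + 5y^4 as SOS via its Gram matrix.
# Monomial basis z = [x^2, x*y, y^2]; we need p = z^T Q z with Q symmetric PSD.
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                       # Q positive semidefinite (the SDP cone)
    Q[0, 0] == 2,                 # matches the x^4 coefficient
    2 * Q[0, 1] == 2,             # matches the x^3 y coefficient
    2 * Q[0, 2] + Q[1, 1] == -1,  # matches the x^2 y^2 coefficient
    2 * Q[1, 2] == 0,             # matches the x y^3 coefficient
    Q[2, 2] == 5,                 # matches the y^4 coefficient
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # 'optimal' means a feasible Q exists, certifying p is SOS
print(Q.value)
```

The second method in the abstract amounts to replacing the single PSD constraint above with a stronger but cheaper sufficient condition on Q, such as diagonal dominance (which yields a linear program) or scaled diagonal dominance (which yields a second-order cone program); in this sketch that substitution is a one-line change.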
Predicting Accurate Lagrangian Multipliers for Mixed Integer Linear Programs
Lagrangian relaxation stands among the most efficient approaches for solving
a Mixed Integer Linear Program (MILP) with difficult constraints. Given any
duals for these constraints, called Lagrangian Multipliers (LMs), it returns a
bound on the optimal value of the MILP, and Lagrangian methods seek the LMs
giving the best such bound. But these methods generally rely on iterative
algorithms resembling gradient descent to maximize the concave piecewise linear
dual function: the computational burden grows quickly with the number of
relaxed constraints. We introduce a deep learning approach that bypasses the
descent, effectively amortizing the local, per-instance optimization. A
probabilistic encoder based on a graph convolutional network computes
high-dimensional representations of relaxed constraints in MILP instances. A
decoder then turns these representations into LMs. We train the encoder and
decoder jointly by directly optimizing the bound obtained from the predicted
multipliers. Numerical experiments show that our approach closes up to 85% of
the gap between the continuous relaxation and the best Lagrangian bound, and
provides a high-quality warm start for descent-based Lagrangian methods.
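
The quantity being optimized is easy to state for a concrete relaxation. The sketch below is our illustration, not the paper's architecture: it relaxes the constraints Ax >= b of a toy binary MILP, evaluates the resulting dual function in closed form, and runs projected subgradient ascent; a vector of predicted multipliers, standing in for the encoder-decoder output, simply replaces the zero initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5                       # variables, relaxed constraints
c = rng.normal(size=n)
A = rng.uniform(size=(m, n))
b = A.sum(axis=1) * 0.3            # hypothetical covering-style right-hand side

def dual_bound(lam):
    """Lagrangian dual function for min c^T x s.t. Ax >= b, x in {0,1}^n.
    Relaxing Ax >= b gives g(lam) = lam^T b + sum_j min(0, c_j - (A^T lam)_j),
    since each binary x_j is set to 1 exactly when its reduced cost is negative."""
    red = c - lam @ A
    return lam @ b + np.minimum(red, 0.0).sum()

def subgradient(lam):
    red = c - lam @ A
    x = (red < 0).astype(float)    # minimizer of the inner problem
    return b - A @ x               # subgradient of g at lam

def maximize(lam0, iters=200, step=0.05):
    lam, best = lam0.copy(), -np.inf
    for t in range(iters):
        best = max(best, dual_bound(lam))
        # project onto lam >= 0 since the relaxed constraints are inequalities
        lam = np.maximum(lam + step / (1 + t) * subgradient(lam), 0.0)
    return best

cold = maximize(np.zeros(m))
predicted = np.full(m, 0.5)        # placeholder for the predicted multipliers
warm = maximize(predicted, iters=50)
print(f"cold-start bound: {cold:.4f}, warm-start bound: {warm:.4f}")
```

With a good warm start, far fewer ascent iterations are needed to reach a comparable bound, which is the practical payoff the abstract reports; the learned encoder-decoder amortizes this per-instance ascent across a distribution of MILP instances.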