Faster Predict-and-Optimize with Davis-Yin Splitting
In many applications, a combinatorial problem must be repeatedly solved with
similar, but distinct, parameters. Yet these parameters are not directly
observed; only contextual data that correlates with them is available. It is
tempting to use a neural network to predict the parameters from the context,
but training such a
model requires reconciling the discrete nature of combinatorial optimization
with the gradient-based frameworks used to train neural networks. When the
problem in question is an Integer Linear Program (ILP), one approach to
overcoming this issue is to consider a continuous relaxation of the
combinatorial problem. While existing methods utilizing this approach have
been shown to be highly effective on small problems (10-100 variables), they do not
scale well to large problems. In this work, we draw on ideas from modern convex
optimization to design a network and training scheme which scales effortlessly
to problems with thousands of variables.
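The relaxation-based training idea this abstract describes can be illustrated with a toy decision-focused learning loop. The sketch below is an illustration only, not the paper's Davis-Yin-based method: the setup, names, and the softmax surrogate are assumptions. A linear model predicts unobserved item values from context, a temperature-smoothed softmax stands in for the discrete argmax selection, and gradients of the decision loss flow back through the relaxation.

```python
import numpy as np

TAU = 1.0  # softmax temperature (assumed); smaller values approach hard argmax

def soft_select(w_hat, tau=TAU):
    # Differentiable surrogate for "pick the single best item".
    e = np.exp((w_hat - w_hat.max()) / tau)
    return e / e.sum()

rng = np.random.default_rng(0)
n_items, n_features = 5, 3
A = rng.normal(size=(n_features, n_items))   # hypothetical true map: context -> item values
theta = np.zeros((n_features, n_items))      # model parameters to learn

lr = 0.1
for step in range(500):
    d = rng.normal(size=n_features)          # observed context
    w = A.T @ d                              # unobserved true parameters
    w_hat = theta.T @ d                      # predicted parameters
    x = soft_select(w_hat)                   # relaxed combinatorial decision
    # Decision loss: negative value achieved under the TRUE parameters.
    # Gradient flows through the softmax Jacobian (diag(x) - x x^T) / tau.
    J = (np.diag(x) - np.outer(x, x)) / TAU
    grad_w_hat = -J @ w                      # d(loss)/d(w_hat)
    theta -= lr * np.outer(d, grad_w_hat)    # chain rule through w_hat = theta^T d
```

Training on the decision loss, rather than on prediction error of the parameters themselves, is what distinguishes predict-and-optimize from a two-stage predict-then-optimize pipeline.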
DIFUSCO: Graph-based Diffusion Solvers for Combinatorial Optimization
Neural network-based Combinatorial Optimization (CO) methods have shown
promising results in solving various NP-complete (NPC) problems without relying
on hand-crafted domain knowledge. This paper broadens the current scope of
neural solvers for NPC problems by introducing a new graph-based diffusion
framework, namely DIFUSCO. Our framework casts NPC problems as discrete {0,
1}-vector optimization problems and leverages graph-based denoising diffusion
models to generate high-quality solutions. We investigate two types of
diffusion models with Gaussian and Bernoulli noise, respectively, and devise an
effective inference schedule to enhance the solution quality. We evaluate our
methods on two well-studied NPC combinatorial optimization problems: Traveling
Salesman Problem (TSP) and Maximal Independent Set (MIS). Experimental results
show that DIFUSCO strongly outperforms the previous state-of-the-art neural
solvers, improving the performance gap between ground-truth and neural solvers
from 1.76% to 0.46% on TSP-500, from 2.46% to 1.17% on TSP-1000, and from 3.19%
to 2.58% on TSP-10000. For the MIS problem, DIFUSCO outperforms the previous
state-of-the-art neural solver on the challenging SATLIB benchmark. Our code is
available at https://github.com/Edward-Sun/DIFUSCO.
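The Bernoulli-noise forward process this abstract mentions can be sketched for a {0, 1}-vector: with probability alpha_bar each bit survives corruption, otherwise it is resampled uniformly. The sketch below is an assumption-laden illustration of that generic discrete-diffusion corruption, not DIFUSCO's exact parameterization; the schedule and names are made up.

```python
import numpy as np

def q_sample(x0, alpha_bar, rng):
    """Bernoulli forward noising: keep each bit of x0 with probability
    alpha_bar, otherwise resample it uniformly from {0, 1}."""
    keep = rng.random(x0.shape) < alpha_bar
    noise = rng.integers(0, 2, size=x0.shape)
    return np.where(keep, x0, noise)

rng = np.random.default_rng(1)

# Hypothetical linear schedule over T corruption steps.
T = 10
alpha_bars = np.linspace(1.0, 0.05, T)

x0 = rng.integers(0, 2, size=1000)           # a clean {0,1} solution vector
xT = q_sample(x0, alpha_bars[-1], rng)       # heavily corrupted sample
# As alpha_bar -> 0, x_T approaches uniform Bernoulli(0.5), carrying
# almost no information about x0; a denoiser is trained to invert this.
```

A graph neural network would then be trained to predict x0 from (x_t, t) and the problem instance, and solutions are generated by iterating the learned reverse step from pure noise.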