Learning Deterministic Surrogates for Robust Convex QCQPs
Decision-focused learning is a promising development for contextual
optimisation. It enables us to train prediction models that reflect the
contextual sensitivity structure of the problem. However, there have been
limited attempts to extend this paradigm to robust optimisation. We propose a
double implicit layer model for training prediction models with respect to
robust decision loss in uncertain convex quadratically constrained quadratic
programs (QCQPs). The first layer solves a deterministic version of the problem;
the second layer evaluates the worst-case realisation over an uncertainty set
centred on the observation, given the decisions obtained from the first layer.
This enables us to learn model parameterisations that lead to robust decisions
while only solving a simpler deterministic problem at test time. Additionally,
instead of having to solve a robust counterpart, we solve two smaller and
potentially easier problems during training. The second layer (the worst-case
problem) can be seen as a regularisation approach for predict-and-optimise by
fitting to
a neighbourhood of problems instead of just a point observation. We motivate
relaxations of the worst-case problem in cases of uncertainty sets that would
otherwise lead to trust-region problems, and leverage various relaxations to
deal with uncertain constraints. Both layers are typically strictly convex in
this problem setting and thus have meaningful gradients almost everywhere. We
demonstrate an application of this model in simulated experiments. The method
is an effective regularisation tool for decision-focused learning for uncertain
convex QCQPs.

Comment: Under submission at CPAIOR 202