Melding the Data-Decisions Pipeline: Decision-Focused Learning for Combinatorial Optimization
Creating impact in real-world settings requires artificial intelligence
techniques to span the full pipeline from data, to predictive models, to
decisions. These components are typically approached separately: a machine
learning model is first trained via a measure of predictive accuracy, and then
its predictions are used as input to an optimization algorithm which produces
a decision. However, the loss function used to train the model may easily be
misaligned with the end goal, which is to make the best decisions possible.
Hand-tuning the loss function to align with optimization is a difficult and
error-prone process (which is often skipped entirely).
We focus on combinatorial optimization problems and introduce a general
framework for decision-focused learning, where the machine learning model is
directly trained in conjunction with the optimization algorithm to produce
high-quality decisions. Technically, our contribution is a means of integrating
common classes of discrete optimization problems into deep learning or other
predictive models, which are typically trained via gradient descent. The main
idea is to use a continuous relaxation of the discrete problem to propagate
gradients through the optimization procedure. We instantiate this framework for
two broad classes of combinatorial problems: linear programs and submodular
maximization. Experimental results across a variety of domains show that
decision-focused learning often leads to improved optimization performance
compared to traditional methods. We find that standard measures of accuracy are
not a reliable proxy for a predictive model's utility in optimization, and our
method's ability to specify the true goal as the model's training objective
yields substantial dividends across a range of decision problems.
Comment: Full version of paper accepted at AAAI 201
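The core idea above, relaxing a discrete decision so that gradients of decision quality can flow back into the predictive model, can be illustrated with a deliberately minimal sketch. This is not the paper's actual framework (which differentiates through LPs and submodular relaxations); here a top-item selection is relaxed via a softmax, and the model is trained on decision quality directly using finite-difference gradients. All names and the toy data are illustrative assumptions.

```python
import numpy as np

def soft_select(scores, temperature=0.1):
    """Softmax as a continuous relaxation of discrete argmax selection."""
    z = scores / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy setting (illustrative): true item values, identity features, linear model.
rng = np.random.default_rng(0)
true_values = np.array([1.0, 3.0, 2.0])
features = np.eye(3)
w = rng.normal(size=3)  # model parameters

def decision_quality(w):
    preds = features @ w
    probs = soft_select(preds)   # relaxed "decision"
    return probs @ true_values   # objective value under the relaxation

# Decision-focused training: ascend decision quality itself,
# here via finite-difference gradients for simplicity.
eps, lr = 1e-5, 0.5
for _ in range(200):
    grad = np.zeros_like(w)
    for i in range(len(w)):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        grad[i] = (decision_quality(wp) - decision_quality(wm)) / (2 * eps)
    w += lr * grad

best = int(np.argmax(features @ w))  # the model now picks the truly best item
```

After training, the model's top prediction coincides with the highest-value item even though no accuracy-style loss was ever used, which is the point of training against the decision objective rather than a surrogate.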
A Combinatorial Solution to Non-Rigid 3D Shape-to-Image Matching
We propose a combinatorial solution for the problem of non-rigidly matching a
3D shape to 3D image data. To this end, we model the shape as a triangular mesh
and allow each triangle of this mesh to be rigidly transformed to achieve a
suitable matching to the image. By penalising the distance and the relative
rotation between neighbouring triangles our matching compromises between image
and shape information. In this paper, we resolve two major challenges: Firstly,
we address the resulting large and NP-hard combinatorial problem with a
suitable graph-theoretic approach. Secondly, we propose an efficient
discretisation of the unbounded 6-dimensional Lie group SE(3). To our knowledge,
this is the first combinatorial formulation for non-rigid 3D shape-to-image
matching. In contrast to existing local (gradient descent) optimisation
methods, we obtain solutions that do not require a good initialisation and that
are within a bound of the optimal solution. We evaluate the proposed method on
the two problems of non-rigid 3D shape-to-shape and non-rigid 3D shape-to-image
registration and demonstrate that it provides promising results.
Comment: 10 pages, 7 figures
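The combinatorial formulation above requires a finite label set of rigid transforms drawn from the unbounded group SE(3). As a hedged sketch of what such a discretisation can look like (not the paper's scheme; the Euler-angle grid and bounded translation range are illustrative assumptions), one can enumerate rotations from discretised Euler angles and translations from a grid over a bounded domain:

```python
import numpy as np
from itertools import product

def euler_to_matrix(a, b, c):
    """Rotation matrix from Z-Y-X Euler angles."""
    Rz = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
    Ry = np.array([[ np.cos(b), 0, np.sin(b)],
                   [ 0,         1, 0        ],
                   [-np.sin(b), 0, np.cos(b)]])
    Rx = np.array([[1, 0,          0         ],
                   [0, np.cos(c), -np.sin(c)],
                   [0, np.sin(c),  np.cos(c)]])
    return Rz @ Ry @ Rx

def discretise_se3(n_angles=4, n_trans=3, trans_extent=1.0):
    """Enumerate a finite label set of rigid transforms (R, t).

    Rotations: a grid over Euler angles; translations: a grid over a
    bounded cube, since the image domain bounds useful translations.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    shifts = np.linspace(-trans_extent, trans_extent, n_trans)
    labels = []
    for a, b, c in product(angles, repeat=3):
        R = euler_to_matrix(a, b, c)
        for t in product(shifts, repeat=3):
            labels.append((R, np.array(t)))
    return labels

labels = discretise_se3()  # 4^3 rotation samples x 3^3 translations
```

Even this coarse grid yields 1,728 candidate labels per triangle, which is why the graph-theoretic machinery for handling the resulting large labelling problem matters.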
An Algorithmic Theory of Dependent Regularizers, Part 1: Submodular Structure
We present an exploration of the rich theoretical connections between several
classes of regularized models, network flows, and recent results in submodular
function theory. This work unifies key aspects of these problems under a common
theory, leading to novel methods for working with several important models of
interest in statistics, machine learning and computer vision.
In Part 1, we review the concepts of network flows and submodular function
optimization theory foundational to our results. We then examine the
connections between network flows and the minimum-norm algorithm from
submodular optimization, extending and improving several current results. This
leads to a concise representation of the structure of a large class of pairwise
regularized models important in machine learning, statistics and computer
vision.
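A concrete bridge between the submodular theory reviewed in Part 1 and the pairwise regularized models it describes is the Lovász extension, which turns a submodular set function (such as a graph cut, whose extension is the anisotropic total variation) into a convex function on continuous vectors. The following is a minimal illustrative sketch, not the Part 1 algorithms themselves; the path-graph cut function is an assumed toy example.

```python
import numpy as np

def lovasz_extension(f, x):
    """Lovász extension of a set function f (on subsets of {0..n-1}) at x.

    Sort coordinates in decreasing order, then accumulate marginal gains
    of f along the resulting chain of sets.
    """
    order = np.argsort(-x)
    val, prev = 0.0, 0.0
    S = set()
    for i in order:
        S.add(i)
        fS = f(frozenset(S))
        val += x[i] * (fS - prev)
        prev = fS
    return val

# Toy example: cut function of the path graph 0-1-2 (submodular).
edges = [(0, 1), (1, 2)]
def cut(S):
    return sum(1 for u, v in edges if (u in S) != (v in S))

# On indicator vectors the extension agrees with the set function:
val = lovasz_extension(cut, np.array([1.0, 0.0, 1.0]))  # cut({0, 2}) = 2
```

For the cut function, the extension evaluates to the sum of absolute differences across edges, i.e. a discrete total variation, which is exactly the kind of pairwise regularizer whose structure Part 1 characterises via network flows and the minimum-norm algorithm.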
In Part 2, we describe the full regularization path of a class of penalized
regression problems with dependent variables that includes the graph-guided
LASSO and total variation constrained models. This description also motivates a
practical algorithm, which lets us efficiently find the regularization path of
the discretized version of TV penalized models. Ultimately, our new
algorithms scale up to high-dimensional problems with millions of variables
- …
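The regularization-path behaviour described in Part 2 can be seen in closed form in the smallest possible TV-penalized problem. This is an illustrative two-variable sketch (not the paper's path algorithm): the minimiser of 0.5(x1-y1)^2 + 0.5(x2-y2)^2 + lam*|x1-x2| soft-thresholds the gap between the two variables, so as lam grows along the path the variables move together and fuse at their mean.

```python
import numpy as np

def tv_pair(y1, y2, lam):
    """Closed-form minimiser of 0.5(x1-y1)^2 + 0.5(x2-y2)^2 + lam*|x1-x2|.

    Reparametrising by mean and gap decouples the problem: the mean of the
    data is preserved, and the gap y2 - y1 is soft-thresholded by 2*lam.
    """
    d = y2 - y1
    shrunk = np.sign(d) * max(abs(d) - 2.0 * lam, 0.0)  # soft-threshold the gap
    mean = (y1 + y2) / 2.0
    return mean - shrunk / 2.0, mean + shrunk / 2.0

# Along the path, the solution shrinks linearly in lam, then fuses:
tv_pair(0.0, 4.0, 1.0)  # gap 4 shrunk by 2 -> (1.0, 3.0)
tv_pair(0.0, 4.0, 2.0)  # gap fully fused   -> (2.0, 2.0)
```

The piecewise-linear dependence on lam, with variables fusing at breakpoints, is precisely the structure a regularization-path algorithm exploits in the high-dimensional graph-guided case.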