Cutting plane methods for general integer programming
Integer programming (IP) problems are difficult to solve because of the integrality restrictions imposed on their variables. One technique for solving these problems is the cutting-plane method: linear constraints are added to the associated linear programming (LP) relaxation until an integer optimal solution is found. These constraints cut off part of the LP solution space but do not eliminate any feasible integer solution. In this report, algorithms for solving IP due to Gomory and to Dantzig are presented. Two other cutting-plane approaches and two extensions of Gomory's algorithm are also discussed. Although these methods are mathematically elegant, they are known to converge slowly and to have explosive storage requirements. As a result, cutting planes alone are generally not computationally successful.
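A minimal sketch of the generic cutting-plane loop described above, assuming a hypothetical cut oracle `find_cut` that returns a valid inequality, i.e. one satisfied by every feasible integer point but violated by the current fractional LP optimum (for example, a Gomory fractional cut derived from the simplex tableau). The names and the SciPy-based setup are illustrative, not taken from the report.

```python
import numpy as np
from scipy.optimize import linprog

def cutting_plane(c, A, b, find_cut, tol=1e-6, max_iters=100):
    """Maximize c @ x subject to A @ x <= b, x >= 0, x integer."""
    A = np.asarray(A, dtype=float)
    b = list(b)
    for _ in range(max_iters):
        # linprog minimizes, so negate the objective to maximize c @ x.
        res = linprog(-np.asarray(c, dtype=float), A_ub=A, b_ub=np.asarray(b))
        if not res.success:
            return None                  # LP infeasible: no integer point remains
        x = res.x
        if np.all(np.abs(x - np.round(x)) < tol):
            return np.round(x)           # LP optimum is already integral: done
        # Hypothetical oracle: a cut a_cut @ x <= b_cut that removes the
        # fractional optimum without removing any feasible integer solution.
        a_cut, b_cut = find_cut(res)
        A = np.vstack([A, a_cut])
        b.append(b_cut)
    raise RuntimeError("iteration limit reached before an integral optimum")
```

Each added cut shrinks the LP feasible region around the integer hull, which is exactly why storage grows with every iteration, as the report notes.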
The Lovász Hinge: A Novel Convex Surrogate for Submodular Losses
Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates if and only if the underlying loss function is increasing in the number of incorrect predictions. However, gradient or cutting-plane computation for these functions is NP-hard for non-supermodular loss functions. We instead propose a novel surrogate loss function for submodular losses, the Lovász hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a gradient or cutting plane. We prove that the Lovász hinge is convex and yields an extension. As a result, we have developed the first tractable convex surrogates in the literature for submodular losses. We demonstrate the utility of this novel convex surrogate through several set prediction tasks, including on the PASCAL VOC and Microsoft COCO datasets.
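A minimal sketch of the sorting-based computation underlying such a surrogate: the per-label hinge terms are sorted once (O(p log p)), and the submodular set loss is queried on the p nested sets induced by that order (O(p) oracle accesses), in the spirit of a Lovász extension. The function names and the exact thresholding convention here are illustrative assumptions, not the paper's verbatim definition.

```python
import numpy as np

def lovasz_hinge_sketch(scores, labels, set_loss):
    """scores: real-valued predictions; labels: +/-1 ground truth;
    set_loss: submodular loss l(S) on a set S of label indices."""
    margins = 1.0 - labels * scores          # per-label hinge terms
    order = np.argsort(-margins)             # decreasing order: O(p log p)
    sorted_m = margins[order]
    surrogate, prev, chosen = 0.0, 0.0, []
    for j, idx in enumerate(order):
        chosen.append(idx)
        cur = set_loss(chosen)               # l on the first j+1 sorted items
        # Weight the discrete derivative of l by the clipped margin term;
        # p oracle calls in total, one per prefix set.
        surrogate += (cur - prev) * max(sorted_m[j], 0.0)
        prev = cur
    return max(surrogate, 0.0)
```

Because the oracle is only evaluated along one sorted chain of sets, submodularity is what makes the resulting surrogate convex; the same recipe applied to a non-submodular loss would not yield a convex extension.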
Exact Solution Methods for the k-item Quadratic Knapsack Problem
The purpose of this paper is to solve the 0-1 k-item quadratic knapsack problem, a problem of maximizing a quadratic function subject to two linear constraints. We propose an exact method based on semidefinite optimization. The semidefinite relaxation used in our approach includes simple rank-one constraints, which can be handled efficiently by interior point methods. Furthermore, we strengthen the relaxation with polyhedral constraints and obtain approximate solutions to this semidefinite problem by applying a bundle method. We review other exact solution methods and compare all these approaches by experimenting with instances of various sizes and densities.
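A minimal sketch of a basic semidefinite relaxation for a 0-1 quadratic maximization with a cardinality constraint (the "k-item" condition) and a knapsack capacity constraint, in the spirit of the approach above. The paper's rank-one and polyhedral strengthenings are not reproduced here, and the cvxpy-based formulation and all names are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

def kqkp_sdp_bound(Q, a, b, k):
    """Upper bound on max x'Qx s.t. a'x <= b, sum(x) == k, x in {0,1}^n."""
    n = Q.shape[0]
    # Lifted variable Y = [[1, x'], [x, X]], relaxing Y = [1; x][1; x]'
    # by dropping the rank-one requirement and keeping Y PSD.
    Y = cp.Variable((n + 1, n + 1), symmetric=True)
    x = Y[0, 1:]
    X = Y[1:, 1:]
    constraints = [
        Y >> 0,                 # positive semidefiniteness
        Y[0, 0] == 1,
        cp.diag(X) == x,        # x_i^2 = x_i encodes x_i in {0, 1}
        cp.sum(x) == k,         # exactly k items selected
        a @ x <= b,             # knapsack capacity
    ]
    prob = cp.Problem(cp.Maximize(cp.trace(Q @ X)), constraints)
    prob.solve(solver=cp.SCS)   # any SDP-capable solver works here
    return prob.value
```

Recovering an exact solution then requires embedding such a bound in a branch-and-bound scheme; the relaxation alone only certifies an upper bound on the optimum.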
Online and Stochastic Gradient Methods for Non-decomposable Loss Functions
Modern applications in sensitive domains such as biometrics and medicine frequently require the use of non-decomposable loss functions such as precision@k and the F-measure. Compared to point loss functions such as the hinge loss, these offer much finer-grained control over prediction, but at the same time present novel challenges in terms of algorithm design and analysis. In this work we initiate a study of online learning techniques for such non-decomposable loss functions, with the aim of enabling incremental learning as well as designing scalable solvers for batch problems. To this end, we propose an online learning framework for such loss functions. Our model enjoys several nice properties, chief amongst them being the existence of efficient online learning algorithms with sublinear regret and online-to-batch conversion bounds. Our model is a provable extension of existing online learning models for point loss functions. We instantiate two popular losses, prec@k and pAUC, in our model and prove sublinear regret bounds for both of them. Our proofs require a novel structural lemma over ranked lists which may be of independent interest. We then develop scalable stochastic gradient descent solvers for non-decomposable loss functions. We show that for a large family of loss functions satisfying a certain uniform convergence property (which includes prec@k, pAUC, and the F-measure), our methods provably converge to the empirical risk minimizer. Such uniform convergence results were not known for these losses, and we establish them using novel proof techniques. Finally, extensive experiments on real-life and benchmark datasets show that our method can be orders of magnitude faster than a recently proposed cutting-plane method.
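A minimal sketch of the mini-batch idea for non-decomposable losses: because a loss such as prec@k is defined over a set of predictions rather than a single point, each stochastic gradient is taken on a whole mini-batch. The hinge-style top-k surrogate below and all names are illustrative stand-ins, not the paper's exact formulation.

```python
import numpy as np

def preck_surrogate_grad(w, X, y, k):
    """Subgradient of a hinge-style prec@k surrogate on one mini-batch;
    y holds +/-1 labels. Penalizes negatives ranked in the top k."""
    scores = X @ w
    top = np.argsort(-scores)[:k]      # current top-k of this mini-batch
    grad = np.zeros_like(w)
    for i in top:
        if y[i] <= 0:                  # a negative example in the top k
            grad += X[i]               # push its score down
    return grad / k

def minibatch_sgd(X, y, k, batch=64, lr=0.1, epochs=10, seed=0):
    """Plain mini-batch SGD; assumes batch <= len(X)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for _ in range(len(X) // batch):
            idx = rng.choice(len(X), batch, replace=False)
            w -= lr * preck_surrogate_grad(w, X[idx], y[idx], k)
    return w
```

The key design point is that the ranking step happens inside each mini-batch, which is what makes the method scale where a global cutting-plane solver must repeatedly re-rank the full dataset.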