Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions
We investigate three related and important problems connected to machine
learning: approximating a submodular function everywhere, learning a submodular
function (in a PAC-like setting [53]), and constrained minimization of
submodular functions. We show that the complexity of all three problems depends
on the 'curvature' of the submodular function, and provide lower and upper
bounds that refine and improve previous results [3, 16, 18, 52]. Our proof
techniques are fairly generic. We either use a black-box transformation of the
function (for approximation and learning), or a transformation of algorithms to
use an appropriate surrogate function (for minimization). Curiously, curvature
has been known to influence approximations for submodular maximization [7, 55],
but its effect on minimization, approximation and learning has hitherto been
open. We complete this picture, and also support our theoretical claims by
empirical results.
Comment: 21 pages. A shorter version appeared in Advances of NIPS-201
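The curvature the abstract refers to is the standard total curvature of a monotone submodular function, kappa_f = 1 - min_j (f(V) - f(V \ {j})) / f({j}); kappa_f = 0 for modular functions and kappa_f -> 1 as the function becomes more strongly concave-like. A minimal sketch (the example function f(S) = sqrt(|S|) is my own illustration, not from the paper):

```python
import math

def curvature(f, V):
    """Total curvature kappa_f = 1 - min_j f(j | V\\{j}) / f(j | {}),
    for a monotone submodular f normalized so f(set()) == 0."""
    full = f(frozenset(V))
    ratios = []
    for j in V:
        singleton = f(frozenset([j]))
        if singleton > 0:  # skip elements with zero singleton value
            marginal = full - f(frozenset(V) - {j})
            ratios.append(marginal / singleton)
    return 1.0 - min(ratios)

# Illustration: f(S) = sqrt(|S|) is monotone submodular (concave in |S|).
V = {0, 1, 2, 3}
f = lambda S: math.sqrt(len(S))
kappa = curvature(f, V)  # strictly between 0 and 1 for this f
```

As a sanity check, a modular function such as f(S) = |S| has curvature exactly 0, since every last-element marginal gain equals the singleton value.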
Symmetric Submodular Function Minimization Under Hereditary Family Constraints
We present an efficient algorithm to find non-empty minimizers of a symmetric
submodular function over any family of sets closed under inclusion. This for
example includes families defined by a cardinality constraint, a knapsack
constraint, a matroid independence constraint, or any combination of such
constraints. Our algorithm makes oracle calls to the submodular
function, where is the cardinality of the ground set. In contrast, the
problem of minimizing a general submodular function under a cardinality
constraint is known to be inapproximable within (Svitkina
and Fleischer [2008]).
The algorithm is similar to an algorithm of Nagamochi and Ibaraki [1998] to
find all nontrivial inclusionwise minimal minimizers of a symmetric submodular
function over a set of cardinality using oracle calls. Their
procedure in turn is based on Queyranne's algorithm [1998] to minimize a
symmetric submodular function.
Comment: 13 pages, Submitted to SODA 201
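A canonical symmetric submodular function is the cut function of a weighted graph, and a cardinality bound |S| <= k is the simplest hereditary family. The sketch below is only a brute-force baseline for the problem the paper solves with polynomially many oracle calls; the graph and its weights are my own toy example, not from the paper:

```python
from itertools import combinations

def min_nonempty_under_cardinality(f, V, k):
    """Brute-force non-empty minimizer of f over the hereditary family
    {S : 0 < |S| <= k}. Exponential time; for illustration only."""
    best, best_val = None, float("inf")
    for r in range(1, k + 1):
        for S in combinations(sorted(V), r):
            val = f(frozenset(S))
            if val < best_val:
                best, best_val = frozenset(S), val
    return best, best_val

# Symmetric submodular example: the cut function of a small weighted cycle.
edges = {(0, 1): 3.0, (1, 2): 1.0, (2, 3): 3.0, (0, 3): 1.0}
V = {0, 1, 2, 3}
cut = lambda S: sum(w for (u, v), w in edges.items() if (u in S) != (v in S))
S, val = min_nonempty_under_cardinality(cut, V, k=2)
# Every singleton cut costs 4.0; the pair {0, 1} cuts only the two
# light edges, for a value of 2.0.
```

Symmetry here means cut(S) == cut(V - S) for every S, which is exactly the structure Queyranne-style algorithms exploit.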
Algorithms for Approximate Minimization of the Difference Between Submodular Functions, with Applications
We extend the work of Narasimhan and Bilmes [30] for minimizing set functions
representable as a difference between submodular functions. Similar to [30],
our new algorithms are guaranteed to monotonically reduce the objective
function at every step. We empirically and theoretically show that the
per-iteration cost of our algorithms is much less than [30], and our algorithms
can be used to efficiently minimize a difference between submodular functions
under various combinatorial constraints, a problem not previously addressed. We
provide computational bounds and a hardness result on the multiplicative
inapproximability of minimizing the difference between submodular functions. We
show, however, that it is possible to give worst-case additive bounds by
providing a polynomial time computable lower-bound on the minima. Finally we
show how a number of machine learning problems can be modeled as minimizing the
difference between submodular functions. We experimentally show the validity of
our algorithms by testing them on the problem of feature selection with
submodular cost features.
Comment: 17 pages, 8 figures. A shorter version of this appeared in Proc.
Uncertainty in Artificial Intelligence (UAI), Catalina Islands, 201
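The monotone-descent idea behind this line of work is to linearize one of the two submodular functions: a submodular h admits a modular lower bound that is tight at the current set X (take greedy marginal gains along a permutation that lists X's elements first), and minimizing g minus that bound can never increase g - h. The sketch below uses a brute-force inner step on a tiny ground set, so it illustrates only the linearize-and-descend pattern, not the paper's efficient algorithms; g and h are my own toy choices:

```python
from itertools import combinations

def modular_lower_bound(h, V, X):
    """Tight modular lower bound of submodular h at X: order X's elements
    first, then the rest; weights are greedy marginal gains on that chain."""
    order = sorted(X) + sorted(set(V) - set(X))
    weights, prefix = {}, frozenset()
    for j in order:
        nxt = prefix | {j}
        weights[j] = h(nxt) - h(prefix)
        prefix = nxt
    return weights  # sum over Y of weights <= h(Y), with equality at Y = X

def ds_minimize(g, h, V, X0, iters=20):
    """Descent for min g - h in the spirit of Narasimhan-Bilmes:
    linearize h at X, then (on this tiny example) minimize g minus the
    modular bound exactly by enumeration. g(X) - h(X) never increases."""
    X = frozenset(X0)
    for _ in range(iters):
        w = modular_lower_bound(h, V, X)
        best, best_val = X, None
        for r in range(len(V) + 1):
            for S in combinations(sorted(V), r):
                S = frozenset(S)
                val = g(S) - sum(w[j] for j in S)
                if best_val is None or val < best_val:
                    best, best_val = S, val
        if best == X:
            break
        X = best
    return X, g(X) - h(X)

# Toy instance: g modular, h concave-of-cardinality (both submodular).
g = lambda S: float(len(S))
h = lambda S: 1.5 * min(len(S), 2)
V = {0, 1, 2}
X, val = ds_minimize(g, h, V, set())  # g - h is minimized by any 2-set
```

The descent guarantee is one line: since the bound is below h everywhere and equals h at X, the new set's true objective is at most the old one.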
On the complexity of nonlinear mixed-integer optimization
This is a survey on the computational complexity of nonlinear mixed-integer
optimization. It highlights a selection of important topics, ranging from
incomputability results that arise from number theory and logic, to recently
obtained fully polynomial time approximation schemes in fixed dimension, and to
strongly polynomial-time algorithms for special cases.
Comment: 26 pages, 5 figures; to appear in: Mixed-Integer Nonlinear
Optimization, IMA Volumes, Springer-Verlag