Reflection methods for user-friendly submodular optimization
Recently, it has become evident that submodularity naturally captures widely
occurring concepts in machine learning, signal processing and computer vision.
Consequently, there is a need for efficient optimization procedures for
submodular functions, especially for minimization problems. While general
submodular minimization is challenging, we propose a new method that exploits
the decomposability of submodular functions. In contrast to previous
approaches, our method is neither approximate, nor impractical, nor does it
need any cumbersome parameter tuning. Moreover, it is easy to implement and
parallelize. A key component of our method is a formulation of the discrete
submodular minimization problem as a continuous best approximation problem that
is solved through a sequence of reflections, and its solution can be easily
thresholded to obtain an optimal discrete solution. This method solves both the
continuous and discrete formulations of the problem, and therefore has
applications in learning, inference, and reconstruction. In our experiments, we
illustrate the benefits of our method on two image segmentation tasks. Comment: Neural Information Processing Systems (NIPS), United States (2013).
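The reflection principle behind this method can be illustrated on a toy problem. The sketch below is a hypothetical numpy example in which the paper's projections onto base polytopes of the submodular components are replaced by projections onto two simple affine sets; it runs averaged alternating reflections (Douglas-Rachford style) and recovers the point in the intersection of the two sets.

```python
import numpy as np

# Toy sketch of best approximation via reflections on two affine sets in
# R^2.  (The paper reflects over base polytopes of submodular components;
# here those projections are replaced by simple affine ones.)

def proj_A(x):
    # Projection onto A = {x : x0 = x1}
    m = (x[0] + x[1]) / 2.0
    return np.array([m, m])

def proj_B(x):
    # Projection onto B = {x : x0 + x1 = 2}
    t = (x[0] + x[1] - 2.0) / 2.0
    return x - t

def reflect(proj, x):
    # Reflection operator R = 2P - I
    return 2.0 * proj(x) - x

x = np.array([3.0, 0.0])
for _ in range(50):
    # Douglas-Rachford style averaged-reflection iteration
    x = 0.5 * (x + reflect(proj_B, reflect(proj_A, x)))

solution = proj_A(x)   # the "shadow" of the fixed point lies in A ∩ B
print(solution)        # [1. 1.]
```

In the paper's setting, the analogous continuous solution is then thresholded to recover an optimal discrete set.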
Potts model, parametric maxflow and k-submodular functions
The problem of minimizing the Potts energy function frequently occurs in
computer vision applications. One way to tackle this NP-hard problem was
proposed by Kovtun [19,20]. It identifies a part of an optimal solution by
running k maxflow computations, where k is the number of labels. The number
of "labeled" pixels can be significant in some applications, e.g. 50-93% in our
tests for stereo. We show how to reduce the runtime to O(log k) maxflow
computations (or one {\em parametric maxflow} computation). Furthermore, the
output of our algorithm allows us to speed up the subsequent alpha expansion for
the unlabeled part, or can be used as-is for time-critical applications.
To derive our technique, we generalize the algorithm of Felzenszwalb et al.
[7] for {\em Tree Metrics}. We also show a connection to {\em k-submodular
functions} from combinatorial optimization, and discuss {\em k-submodular
relaxations} for general energy functions. Comment: Accepted to ICCV 201
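As background for the maxflow computations discussed above, the sketch below shows the basic primitive: a single s-t min-cut that exactly minimizes a binary (2-label) Potts-style energy on a 1-D signal. The data and the Edmonds-Karp maxflow implementation are illustrative; the paper's multi-label reductions are not reproduced here.

```python
from collections import deque, defaultdict

def maxflow_source_side(edges, s, t):
    """Edmonds-Karp maxflow; returns the source side of a min cut."""
    cap = defaultdict(float)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v); adj[v].add(u)
    flow = defaultdict(float)

    def bfs():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] - flow[(u, v)] > 1e-12:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    while True:
        parent = bfs()
        if parent is None:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        aug = min(cap[e] - flow[e] for e in path)  # bottleneck capacity
        for u, v in path:
            flow[(u, v)] += aug
            flow[(v, u)] -= aug

    # nodes reachable from s in the residual graph = source side of min cut
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in reach and cap[(u, v)] - flow[(u, v)] > 1e-12:
                reach.add(v); q.append(v)
    return reach

# Binary energy: sum_i (y_i - x_i)^2 + lam * sum_i [x_i != x_{i+1}]
y = [0.9, 0.8, 0.2, 0.85, 0.95]   # made-up noisy signal
lam = 0.5
n = len(y)
s, t = n, n + 1
edges = []
for i, yi in enumerate(y):
    edges.append((s, i, (yi - 1.0) ** 2))  # paid iff x_i = 1
    edges.append((i, t, (yi - 0.0) ** 2))  # paid iff x_i = 0
for i in range(n - 1):
    edges.append((i, i + 1, lam))          # paid iff labels differ
    edges.append((i + 1, i, lam))

reach = maxflow_source_side(edges, s, t)
labels = [0 if i in reach else 1 for i in range(n)]
print(labels)  # the outlier y_2 = 0.2 is smoothed over: [1, 1, 1, 1, 1]
```

With lam = 0.5 the pairwise penalty outweighs the outlier's unary preference, so the min cut assigns label 1 everywhere.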
Some recent results in the analysis of greedy algorithms for assignment problems
We survey some recent developments in the analysis of greedy algorithms for assignment and transportation problems. We focus on the linear programming model for matroids and linear assignment problems with the Monge property, on general linear programs, on probabilistic analysis for linear assignment and makespan minimization, and on online algorithms for linear and non-linear assignment problems.
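One of the surveyed facts can be checked directly: when the cost matrix satisfies the Monge condition, the diagonal (identity) assignment is optimal, so a trivial greedy rule solves the problem. The matrix below is a made-up Monge instance, verified against brute force.

```python
import itertools

# Monge condition: c[i][j] + c[i+1][j+1] <= c[i][j+1] + c[i+1][j].
# For such matrices the identity permutation minimizes the linear
# assignment cost, so "greedy along the diagonal" is optimal.

n = 4
c = [[20 - i * j for j in range(n)] for i in range(n)]  # made-up instance

# sanity check: the Monge condition holds for this matrix
assert all(c[i][j] + c[i + 1][j + 1] <= c[i][j + 1] + c[i + 1][j]
           for i in range(n - 1) for j in range(n - 1))

diagonal_cost = sum(c[i][i] for i in range(n))
optimal_cost = min(sum(c[i][p[i]] for i in range(n))
                   for p in itertools.permutations(range(n)))
print(diagonal_cost, optimal_cost)  # 66 66
```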
Efficient Flow-based Approximation Algorithms for Submodular Hypergraph Partitioning via a Generalized Cut-Matching Game
In the past 20 years, increasing complexity in real-world data has led to
the study of higher-order data models based on partitioning hypergraphs.
However, hypergraph partitioning admits multiple formulations as hyperedges can
be cut in multiple ways. Building upon a class of hypergraph partitioning
problems introduced by Li & Milenkovic, we study the problem of minimizing
ratio-cut objectives over hypergraphs given by a new class of cut functions,
monotone submodular cut functions (mscf's), which captures hypergraph expansion
and conductance as special cases.
We first define the ratio-cut improvement problem, a family of local
relaxations of the minimum ratio-cut problem. This problem is a natural
extension of the Andersen & Lang cut improvement problem to the hypergraph
setting. We demonstrate the existence of efficient algorithms for approximately
solving this problem. These algorithms run in almost-linear time in the case
of hypergraph expansion and, more generally, when the hypergraph rank is constant.
Next, we provide an efficient O(log n)-approximation algorithm for finding
the minimum ratio-cut of the hypergraph. We generalize the cut-matching game framework of
Khandekar et al. to allow the cut player to play unbalanced cuts, and the
matching player to route approximate single-commodity flows. Using this
framework, we bootstrap our algorithms for the ratio-cut improvement problem to
obtain approximation algorithms for the minimum ratio-cut problem for all mscf's.
This also yields the first almost-linear time O(log n)-approximation
algorithms for hypergraph expansion on hypergraphs of constant rank.
Finally, we extend a result of Louis & Makarychev to a broader set of
objective functions by giving a polynomial-time O(√(log n))-approximation algorithm for the minimum ratio-cut problem based on
rounding ℓ₂²-metric embeddings. Comment: Comments and feedback welcome.
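For concreteness, hypergraph expansion, one of the special cases captured by mscf's, can be computed by brute force on a tiny made-up hypergraph with the standard all-or-nothing cut function (a hyperedge is cut iff it has vertices on both sides). The paper's algorithms avoid exactly this exponential enumeration.

```python
from itertools import combinations

# Made-up hypergraph on 6 vertices.
vertices = range(6)
hyperedges = [{0, 1, 2}, {3, 4, 5}, {2, 3}, {0, 1}, {4, 5}]

def expansion(S):
    """All-or-nothing cut value divided by the smaller side's size."""
    S = set(S)
    cut = sum(1 for e in hyperedges if e & S and e - S)
    return cut / min(len(S), len(vertices) - len(S))

# Enumerate all proper non-empty vertex subsets (exponential, toy-sized).
best = min((frozenset(S)
            for k in range(1, 6)
            for S in combinations(vertices, k)),
           key=expansion)
print(sorted(best), expansion(best))  # [0, 1, 2] 0.3333333333333333
```

Here the optimal side is {0, 1, 2}: it cuts only the hyperedge {2, 3}, giving expansion 1/3.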
On the complexity of the dual method for maximum balanced flows
Abstract: In an earlier paper we developed a quite general dual method and applied it to balanced submodular flow problems with flow values in modules. Here, we analyze that method in more detail for the particular case of balanced flows with rational or integral flow values. While, for integral flows, the general problem turns out to be NP-hard, the method is strongly polynomial for rational as well as integral flows when applied to the motivating reliability problem given by Minoux. In that case, a maximum balanced flow is determined in O(m · M(m, n)) time, where M(m, n) is the complexity of some maxflow procedure for a network with n vertices and m arcs.
Convex and Network Flow Optimization for Structured Sparsity
We consider a class of learning problems regularized by a structured
sparsity-inducing norm defined as the sum of l_2- or l_infinity-norms over
groups of variables. Whereas much effort has been put in developing fast
optimization techniques when the groups are disjoint or embedded in a
hierarchy, we address here the case of general overlapping groups. To this end,
we present two different strategies: On the one hand, we show that the proximal
operator associated with a sum of l_infinity-norms can be computed exactly in
polynomial time by solving a quadratic min-cost flow problem, allowing the use
of accelerated proximal gradient methods. On the other hand, we use proximal
splitting techniques, and address an equivalent formulation with
non-overlapping groups, but in higher dimension and with additional
constraints. We propose efficient and scalable algorithms exploiting these two
strategies, which are significantly faster than alternative approaches. We
illustrate these methods with several problems such as CUR matrix
factorization, multi-task learning of tree-structured dictionaries, background
subtraction in video sequences, image denoising with wavelets, and topographic
dictionary learning of natural image patches. Comment: to appear in the Journal of Machine Learning Research (JMLR).
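To make "proximal operator" concrete: in the simpler disjoint-group ℓ2 case mentioned above as prior work, the prox has a closed form, block soft-thresholding, sketched below with made-up data. The paper's min-cost-flow computation handles the harder overlapping ℓ∞ case, which has no such per-block formula.

```python
import numpy as np

# Closed-form prox for the disjoint-group-lasso penalty
#   prox_{lam * sum_g ||.||_2}(v)_g = v_g * max(0, 1 - lam / ||v_g||_2),
# i.e. each block is shrunk toward zero, and zeroed if its norm <= lam.

def group_prox(v, groups, lam):
    out = np.zeros_like(v, dtype=float)
    for g in groups:
        block = v[g]
        norm = np.linalg.norm(block)
        if norm > lam:
            out[g] = block * (1.0 - lam / norm)
    return out

v = np.array([3.0, 4.0, 0.3, 0.4])          # made-up input vector
groups = [np.array([0, 1]), np.array([2, 3])]  # disjoint groups
print(group_prox(v, groups, lam=1.0))
# first block (norm 5) shrunk by factor 0.8; second block (norm 0.5) zeroed
```

Inside an accelerated proximal gradient method, this operator is applied once per iteration after the gradient step on the smooth loss.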