7 research outputs found

    On the Convergence Rate of Decomposable Submodular Function Minimization

    Submodular functions describe a variety of discrete problems in machine learning, signal processing, and computer vision. However, minimizing submodular functions poses a number of algorithmic challenges. Recent work introduced an easy-to-use, parallelizable algorithm for minimizing submodular functions that decompose as the sum of "simple" submodular functions. Empirically, this algorithm performs extremely well, but no theoretical analysis was given. In this paper, we show that the algorithm converges linearly, and we provide upper and lower bounds on the rate of convergence. Our proof relies on the geometry of submodular polyhedra and draws on results from spectral graph theory. Comment: 17 pages, 3 figures
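The decomposition that the abstract refers to can be illustrated with the standard example of a graph cut function, which is a sum of "simple" per-edge submodular functions. A minimal sketch (toy graph and brute-force submodularity check, not the paper's algorithm; all names are illustrative):

```python
from itertools import combinations

# Toy example: the cut function of an undirected graph decomposes as a
# sum of per-edge cut functions, each of which is itself submodular.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]

def edge_cut(e, S):
    # "Simple" summand: 1 if edge e crosses the set S, else 0.
    u, v = e
    return int((u in S) != (v in S))

def cut(S):
    # F(S) = sum of simple submodular functions -- the decomposable
    # structure the algorithm exploits.
    return sum(edge_cut(e, S) for e in E)

def is_submodular(F, V):
    # Brute-force check of F(A) + F(B) >= F(A|B) + F(A&B) for all A, B.
    subsets = [frozenset(c) for r in range(len(V) + 1)
               for c in combinations(V, r)]
    return all(F(A) + F(B) >= F(A | B) + F(A & B)
               for A in subsets for B in subsets)

assert is_submodular(cut, V)
# The unconstrained minimum of a cut function is 0 (S = {} or S = V).
best = min(cut(frozenset(c)) for r in range(len(V) + 1)
           for c in combinations(V, r))
assert best == 0
```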

    Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions

    Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general-purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of "simple" functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.
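As a generic stand-in for the coordinate-wise updates described above (the paper applies them to a dual of decomposable submodular minimization; this sketch just shows random coordinate descent itself on a smooth convex quadratic, with illustrative data):

```python
import random

random.seed(0)

# f(x) = 1/2 x^T A x - b^T x with A symmetric positive definite.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
n = len(b)

def grad_i(x, i):
    # Partial derivative df/dx_i = (A x)_i - b_i.
    return sum(A[i][j] * x[j] for j in range(n)) - b[i]

x = [0.0] * n
for _ in range(2000):
    i = random.randrange(n)          # pick a random coordinate
    x[i] -= grad_i(x, i) / A[i][i]   # exact minimization along coordinate i

# At the minimum the full gradient vanishes: A x = b.
residual = max(abs(grad_i(x, i)) for i in range(n))
assert residual < 1e-6
```

Each update touches a single coordinate, which is what makes the iteration cost cheap compared to full-dimensional projection steps.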

    Computing Exact Minimum Cuts Without Knowing the Graph

    We give query-efficient algorithms for the global min-cut and the s-t cut problem in unweighted, undirected graphs. Our oracle model is inspired by the submodular function minimization problem: on query S ⊆ V, the oracle returns the size of the cut between S and V \ S. We provide algorithms computing an exact minimum s-t cut in G with Õ(n^{5/3}) queries, and computing an exact global minimum cut of G with only Õ(n) queries (while learning the graph requires Θ̃(n^2) queries).
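The oracle model can be made concrete with a toy sketch (illustrative graph; the exhaustive baseline below uses exponentially many queries, whereas the paper's point is to use only Õ(n)):

```python
from itertools import combinations

# The algorithm may only ask for the size of the cut between S and V \ S;
# the edge set E itself is hidden from the caller.
V = frozenset(range(5))
E = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)}

def cut_oracle(S):
    return sum(1 for (u, v) in E if (u in S) != (v in S))

# Exhaustive baseline over all proper nonempty sides of the cut.
global_min = min(cut_oracle(frozenset(c))
                 for r in range(1, len(V))
                 for c in combinations(V, r))
assert global_min == 2   # this toy graph is 2-edge-connected
```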

    Efficient Minimization of Higher Order Submodular Functions using Monotonic Boolean Functions

    Submodular function minimization is a key problem in a wide variety of applications in machine learning, economics, game theory, computer vision, and many others. The general solver has a complexity of O(n^3 log^2 n · E + n^4 log^{O(1)} n), where E is the time required to evaluate the function and n is the number of variables [Lee2015]. On the other hand, many computer vision and machine learning problems are defined over special subclasses of submodular functions that can be written as the sum of many submodular cost functions defined over cliques containing few variables. In such functions, the pseudo-Boolean (or polynomial) representation [BorosH02] of these subclasses is of degree (or order, or clique size) k, where k ≪ n. In this work, we develop efficient algorithms for the minimization of this useful subclass of submodular functions. To do this, we define novel mappings that transform submodular functions of order k into quadratic ones. The underlying idea is to use auxiliary variables to model the higher order terms, and the transformation is found using a carefully constructed linear program. In particular, we model the auxiliary variables as monotonic Boolean functions, allowing us to obtain a compact transformation using as few auxiliary variables as possible.
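A classical instance of reducing a higher-order pseudo-Boolean term to a quadratic one with an auxiliary variable (the well-known reduction for negative monomials, not the paper's LP-based construction) can be verified exhaustively:

```python
from itertools import product

def cubic_term(x, y, z):
    # A degree-3 pseudo-Boolean term with negative coefficient.
    return -x * y * z

def quadratic_reduction(x, y, z, w):
    # Classic identity: -xyz = min over auxiliary w in {0,1}
    # of  -w * (x + y + z - 2).
    return -w * (x + y + z - 2)

# Check the identity over all Boolean assignments.
for x, y, z in product((0, 1), repeat=3):
    reduced = min(quadratic_reduction(x, y, z, w) for w in (0, 1))
    assert reduced == cubic_term(x, y, z)
```

Minimizing out the auxiliary variable recovers the original higher-order term exactly; the paper's contribution is finding such transformations compactly for general order-k terms.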

    Active-set Methods for Submodular Minimization Problems

    We consider the submodular function minimization (SFM) problem and quadratic minimization problems regularized by the Lovász extension of a submodular function. These optimization problems are intimately related; for example, min-cut problems and total variation denoising problems, where the cut function is submodular and its Lovász extension is given by the associated total variation. When a quadratic loss is regularized by the total variation of a cut function, it thus becomes a total variation denoising problem, and we use the same terminology in this paper for "general" submodular functions. We propose a new active-set algorithm for total variation denoising under the assumption of an oracle that solves the corresponding SFM problem. This can be seen as a local descent algorithm over ordered partitions with explicit convergence guarantees. It is more flexible than existing algorithms, with the ability to warm-start from the solution of a closely related problem. Further, we also consider the case when a submodular function can be decomposed into the sum of two submodular functions F1 and F2, and assume SFM oracles for these two functions. We propose a new active-set algorithm for total variation denoising (and hence for SFM, by thresholding the solution at zero). This algorithm also performs local descent over ordered partitions, and its ability to warm-start considerably improves its performance. In the experiments, we compare the proposed algorithms with state-of-the-art algorithms, showing that they reduce the number of calls to the SFM oracles.
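The link between a submodular function and the total variation mentioned above runs through the Lovász extension, which can be evaluated by the standard greedy procedure (sort coordinates, telescope the set function over prefix sets). A minimal sketch on a toy path graph, where the extension of the cut function recovers the anisotropic total variation:

```python
E = [(0, 1), (1, 2)]   # a 3-node path graph

def cut(S):
    return sum(1 for (u, v) in E if (u in S) != (v in S))

def lovasz_extension(F, w):
    # Edmonds' greedy evaluation; assumes F(empty set) = 0.
    n = len(w)
    order = sorted(range(n), key=lambda i: -w[i])   # decreasing w
    value, prefix = 0.0, set()
    prev = F(frozenset())
    for i in order:
        prefix.add(i)
        cur = F(frozenset(prefix))
        value += w[i] * (cur - prev)   # marginal gain weighted by w_i
        prev = cur
    return value

# For a cut function, the Lovász extension equals the total variation
# sum over edges of |w_u - w_v|.
w = [0.3, -0.2, 0.5]
tv = sum(abs(w[u] - w[v]) for (u, v) in E)
assert abs(lovasz_extension(cut, w) - tv) < 1e-12
```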