
    Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions

Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general-purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of "simple" functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.
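    As a rough illustration of the setting only (not the paper's coordinate descent method), the sketch below builds a decomposable submodular function from toy graph-cut terms plus a modular part, and recovers its minimizer by brute force; the graph, the unary weights, and the exhaustive solver are all assumptions made purely for the example.

```python
# Minimal sketch of decomposable submodular minimization, for illustration only.
# The edges, unary weights, and brute-force solver are assumptions for this
# example; the paper's random coordinate descent method is not reproduced here.
from itertools import combinations

n = 5  # ground set {0, ..., 4}
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]  # toy graph

def cut_term(edge):
    """Each edge contributes a 'simple' submodular term: 1 if the edge is cut."""
    u, v = edge
    return lambda S: 1.0 if (u in S) != (v in S) else 0.0

terms = [cut_term(e) for e in edges]
modular = {0: -2.0, 2: -1.5}  # unary potentials pulling elements into S

def f(S):
    """f(S) = sum of simple cut terms plus a modular part; f is submodular."""
    return sum(t(S) for t in terms) + sum(modular.get(i, 0.0) for i in S)

# Brute-force baseline over all 2^n subsets (only viable for tiny n).
best = min((frozenset(c) for r in range(n + 1)
            for c in combinations(range(n), r)), key=f)
print(sorted(best), f(best))
```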

    Ranking with Submodular Valuations

We study the problem of ranking with submodular valuations. An instance of this problem consists of a ground set $[m]$ and a collection of $n$ monotone submodular set functions $f^1, \ldots, f^n$, where each $f^i: 2^{[m]} \to \mathbb{R}_+$. An additional ingredient of the input is a weight vector $w \in \mathbb{R}_+^n$. The objective is to find a linear ordering of the ground set elements that minimizes the weighted cover time of the functions. The cover time of a function is the minimal number of elements in the prefix of the linear ordering that form a set whose corresponding function value is greater than a unit threshold value. Our main contribution is an $O(\ln(1/\epsilon))$-approximation algorithm for the problem, where $\epsilon$ is the smallest non-zero marginal value that any function may gain from some element. Our algorithm orders the elements using an adaptive residual updates scheme, which may be of independent interest. We also prove that the problem is $\Omega(\ln(1/\epsilon))$-hard to approximate unless P = NP. This implies that the outcome of our algorithm is optimal up to constant factors. Comment: 16 pages, 3 figures.
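    For intuition, here is a hedged sketch of a greedy ordering rule in the spirit of residual updates: each step picks the element with the largest total weighted gain, with every valuation truncated at the unit threshold so elements are credited only for not-yet-covered value. The coverage functions, weights, and the exact greedy rule are illustrative assumptions, not the paper's analyzed algorithm.

```python
# Hedged sketch of a greedy ordering for ranking with submodular valuations.
# The coverage functions, weights, and this particular greedy rule are
# assumptions for illustration; the paper's adaptive residual updates scheme
# is not reproduced verbatim here.
ground = list(range(4))
# f^i(S) = fraction of the i-th target set covered by S (monotone submodular).
targets = [{0, 1}, {1, 2, 3}, {3}]
weights = [2.0, 1.0, 3.0]

def cover(i, S):
    return len(targets[i] & S) / len(targets[i])

def greedy_order():
    S, order = set(), []
    while len(order) < len(ground):
        def gain(e):
            # Truncate each valuation at the unit threshold, so an element is
            # credited only for residual (not-yet-covered) weighted value.
            return sum(w * (min(1.0, cover(i, S | {e})) - min(1.0, cover(i, S)))
                       for i, w in enumerate(weights))
        e = max((x for x in ground if x not in order), key=gain)
        order.append(e)
        S.add(e)
    return order

print(greedy_order())  # one plausible ordering of the ground set
```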

    New Query Lower Bounds for Submodular Function Minimization

We consider submodular function minimization in the oracle model: given black-box access to a submodular set function $f: 2^{[n]} \rightarrow \mathbb{R}$, find an element of $\arg\min_S \{f(S)\}$ using as few queries to $f(\cdot)$ as possible. State-of-the-art algorithms succeed with $\tilde{O}(n^2)$ queries [LeeSW15], yet the best-known lower bound has never been improved beyond $n$ [Harvey08]. We provide a query lower bound of $2n$ for submodular function minimization, a $3n/2 - 2$ query lower bound for the non-trivial minimizer of a symmetric submodular function, and a $\binom{n}{2}$ query lower bound for the non-trivial minimizer of an asymmetric submodular function. Our $3n/2 - 2$ lower bound results from a connection between SFM lower bounds and a novel concept we term the cut dimension of a graph. Interestingly, this yields a $3n/2 - 2$ cut-query lower bound for finding the global mincut in an undirected, weighted graph, but we also prove it cannot yield a lower bound better than $n + 1$ for $s$-$t$ mincut, even in a directed, weighted graph.
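    The oracle model itself is easy to make concrete: wrap the function in a counter and treat every evaluation as one query. In the hedged sketch below, the black box is the weighted cut function of a toy graph, and a brute-force search over non-trivial subsets finds the global mincut; the graph and the exhaustive search are assumptions for illustration only (real SFM algorithms use far fewer than $2^n$ queries).

```python
# Illustrative sketch of the query/oracle model for SFM: f is a black box
# and every evaluation counts as one query. The toy graph and brute-force
# minimizer below are assumptions for the example.
from itertools import combinations

class CountingOracle:
    """Black-box access to f that counts every query."""
    def __init__(self, f):
        self.f, self.queries = f, 0
    def __call__(self, S):
        self.queries += 1
        return self.f(S)

n = 4
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.0)]

def cut(S):
    # Weighted cut function of an undirected graph: symmetric and submodular.
    return sum(w for u, v, w in edges if (u in S) != (v in S))

oracle = CountingOracle(cut)
# Non-trivial subsets only (exclude the empty set and the full ground set),
# so the minimizer below is the global mincut of the toy graph.
nontrivial = [frozenset(c) for r in range(1, n)
              for c in combinations(range(n), r)]
best = min(nontrivial, key=oracle)
print(sorted(best), cut(best), "queries:", oracle.queries)
```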

    On the Convergence Rate of Decomposable Submodular Function Minimization

Submodular functions describe a variety of discrete problems in machine learning, signal processing, and computer vision. However, minimizing submodular functions poses a number of algorithmic challenges. Recent work introduced an easy-to-use, parallelizable algorithm for minimizing submodular functions that decompose as the sum of "simple" submodular functions. Empirically, this algorithm performs extremely well, but no theoretical analysis was given. In this paper, we show that the algorithm converges linearly, and we provide upper and lower bounds on the rate of convergence. Our proof relies on the geometry of submodular polyhedra and draws on results from spectral graph theory. Comment: 17 pages, 3 figures.
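    The analyzed algorithm is closely related to alternating/block projections onto products of base polytopes. As a toy analogy only (two lines in the plane rather than submodular polyhedra, a setup assumed purely for this example), the sketch below shows the hallmark of linear convergence: the distance to the intersection shrinks by a constant factor per round.

```python
# Toy illustration of linear convergence of alternating projections: project
# back and forth between two lines through the origin in R^2. This is only an
# analogy for the analyzed algorithm, which projects onto products of
# submodular base polytopes; the lines and starting point are assumptions.
import numpy as np

def proj_line(x, d):
    d = d / np.linalg.norm(d)
    return np.dot(x, d) * d  # orthogonal projection onto span{d}

a, b = np.array([1.0, 0.0]), np.array([1.0, 1.0])
x = np.array([3.0, 4.0])
for k in range(6):
    x = proj_line(proj_line(x, a), b)
    print(k, np.linalg.norm(x))  # error shrinks by a constant factor per round
```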

    A simple combinatorial algorithm for submodular function minimization

This paper presents a new simple algorithm for minimizing submodular functions. For integer-valued submodular functions, the algorithm runs in $O(n^6 \cdot \mathrm{EO} \cdot \log nM)$ time, where $n$ is the cardinality of the ground set, $M$ is the maximum absolute value of the function value, and $\mathrm{EO}$ is the time for function evaluation. The algorithm can be improved to run in $O((n^4 \cdot \mathrm{EO} + n^5) \log nM)$ time. The strongly polynomial version of this faster algorithm runs in $O((n^5 \cdot \mathrm{EO} + n^6) \log n)$ time for real-valued general submodular functions. These are comparable to the best known running time bounds for submodular function minimization. The algorithm can also be implemented in strongly polynomial time using only additions, subtractions, comparisons, and the oracle calls for function evaluation. This is the first fully combinatorial submodular function minimization algorithm that does not rely on the scaling method. United States. Office of Naval Research (ONR grant N00014-08-1-0029).

    On the complexity of nonlinear mixed-integer optimization

This is a survey on the computational complexity of nonlinear mixed-integer optimization. It highlights a selection of important topics, ranging from incomputability results that arise from number theory and logic, to recently obtained fully polynomial time approximation schemes in fixed dimension, and to strongly polynomial-time algorithms for special cases. Comment: 26 pages, 5 figures; to appear in: Mixed-Integer Nonlinear Optimization, IMA Volumes, Springer-Verlag.

    Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions

We investigate three related and important problems connected to machine learning: approximating a submodular function everywhere, learning a submodular function (in a PAC-like setting [53]), and constrained minimization of submodular functions. We show that the complexity of all three problems depends on the 'curvature' of the submodular function, and provide lower and upper bounds that refine and improve previous results [3, 16, 18, 52]. Our proof techniques are fairly generic. We either use a black-box transformation of the function (for approximation and learning), or a transformation of algorithms to use an appropriate surrogate function (for minimization). Curiously, curvature has been known to influence approximations for submodular maximization [7, 55], but its effect on minimization, approximation, and learning has hitherto been open. We complete this picture, and also support our theoretical claims with empirical results. Comment: 21 pages. A shorter version appeared in Advances in NIPS 2013.
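    For concreteness, a common definition of the (total) curvature of a monotone submodular function $f$ with $f(\emptyset) = 0$ is $\kappa_f = 1 - \min_j \big(f(V) - f(V \setminus \{j\})\big) / f(\{j\})$; the hedged sketch below computes it for a small coverage function chosen purely as an example (the target sets are assumptions).

```python
# Sketch of the standard (total) curvature of a monotone submodular function:
#   kappa_f = 1 - min_j [ f(V) - f(V \ {j}) ] / f({j}).
# The coverage function below is an assumption for illustration; kappa = 0
# for modular functions, and here every element is redundant given the rest,
# so kappa = 1 (the hardest regime for curvature-dependent bounds).
V = range(4)
sets = [{0, 1}, {1, 2}, {2, 3}, {3, 0}]  # f(S) = size of the union over S

def f(S):
    return len(set().union(*(sets[i] for i in S))) if S else 0

full = f(set(V))
kappa = 1 - min((full - f(set(V) - {j})) / f({j}) for j in V)
print("curvature:", kappa)
```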

    Minimizing a sum of submodular functions

We consider the problem of minimizing a function represented as a sum of submodular terms. We assume each term allows an efficient computation of "exchange capacities". This holds, for example, for terms depending on a small number of variables, or for certain cardinality-dependent terms. A naive application of submodular minimization algorithms would not exploit the existence of specialized exchange capacity subroutines for individual terms. To overcome this, we cast the problem as a "submodular flow" (SF) problem in an auxiliary graph, and show that applying most existing SF algorithms would rely only on these subroutines. We then explore in more detail Iwata's capacity scaling approach for submodular flows (Math. Programming, 76(2):299-308, 1997). In particular, we show how to improve its complexity in the case when the function contains cardinality-dependent terms. Comment: accepted to "Discrete Applied Mathematics".
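    To make "exchange capacities" concrete: for a point $y$ in the base polytope $B(f)$, the exchange capacity $c(y; u, v)$ is the largest $\alpha$ such that $y + \alpha(\chi_u - \chi_v)$ stays in $B(f)$, and it equals $\min \{ f(S) - y(S) : u \in S,\ v \notin S \}$. The hedged sketch below evaluates this by enumeration for a tiny cardinality-dependent term, matching the abstract's remark that such terms admit efficient exchange-capacity subroutines; the specific function and the point $y$ are assumptions for the example.

```python
# Hedged sketch: brute-force exchange capacity for a term on few variables.
# For y in the base polytope B(f), the exchange capacity is
#   c(y; u, v) = min over S with u in S, v not in S of  f(S) - y(S),
# i.e. the largest step alpha with y + alpha*(chi_u - chi_v) still in B(f).
# The tiny concave-cardinality term and the point y are example assumptions.
from itertools import combinations

V = [0, 1, 2]
g = [0.0, 2.0, 3.0, 3.5]  # concave in |S|, so f(S) = g[|S|] is submodular

def f(S):
    return g[len(S)]

y = {0: 1.5, 1: 1.0, 2: 1.0}  # in B(f): y(V) = f(V) and y(S) <= f(S) for all S

def exchange_capacity(u, v):
    return min(f(set(S)) - sum(y[i] for i in S)
               for r in range(1, len(V) + 1)
               for S in combinations(V, r) if u in S and v not in S)

print(exchange_capacity(0, 1))  # 0.5 for this y: the tightest set is {0}
```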