
    A Combinatorial, Strongly Polynomial-Time Algorithm for Minimizing Submodular Functions

    This paper presents the first combinatorial polynomial-time algorithm for minimizing submodular set functions, answering an open question posed in 1981 by Grötschel, Lovász, and Schrijver. The algorithm employs a scaling scheme that uses a flow in the complete directed graph on the underlying set, with each arc capacity equal to the scaled parameter. The resulting algorithm runs in time bounded by a polynomial in the size of the underlying set and the length of the largest function value. The paper also presents a strongly polynomial-time version that runs in time bounded by a polynomial in the size of the underlying set, independent of the function values. Comment: 17 pages
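
    The scaling algorithm itself is too involved for a short excerpt, but the problem it solves is easy to state in code. The sketch below is our own illustration, not the paper's method: a submodular value oracle (here a graph cut function) together with a brute-force minimizer that is only feasible for tiny ground sets; the names cut_function and brute_force_minimize are ours.

```python
from itertools import combinations

def cut_function(S, edges):
    """Example submodular oracle: number of edges leaving the set S."""
    return sum(1 for u, v in edges if (u in S) != (v in S))

def brute_force_minimize(f, ground_set):
    """Exhaustive minimization; only feasible for very small ground sets."""
    ground_set = list(ground_set)
    best_set, best_val = set(), f(set())
    for r in range(1, len(ground_set) + 1):
        for S in combinations(ground_set, r):
            val = f(set(S))
            if val < best_val:
                best_set, best_val = set(S), val
    return best_set, best_val

if __name__ == "__main__":
    V = range(4)
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
    print(brute_force_minimize(lambda S: cut_function(S, edges), V))
    # the empty set already achieves the minimum cut value 0
```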

    Efficient Minimization of Decomposable Submodular Functions

    Many combinatorial problems arising in machine learning can be reduced to the problem of minimizing a submodular function. Submodular functions are a natural discrete analog of convex functions, and can be minimized in strongly polynomial time. Unfortunately, state-of-the-art algorithms for general submodular minimization are intractable for larger problems. In this paper, we introduce a novel subclass of submodular minimization problems that we call decomposable. Decomposable submodular functions are those that can be represented as sums of concave functions applied to modular functions. We develop an algorithm, SLG, that can efficiently minimize decomposable submodular functions with tens of thousands of variables. Our algorithm exploits recent results in smoothed convex minimization. We apply SLG to synthetic benchmarks and a joint classification-and-segmentation task, and show that it outperforms the state-of-the-art general purpose submodular minimization algorithms by several orders of magnitude. Comment: Expanded version of a paper for Neural Information Processing Systems 2010
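
    To make the "decomposable" class concrete, here is a minimal sketch of the representation the abstract describes (each component a concave scalar function composed with a nonnegative modular weight vector). This only illustrates the function class, not the SLG algorithm; the helper decomposable and the example weights are ours.

```python
import math

def decomposable(S, components):
    """f(S) = sum_j g_j(w_j(S)): components is a list of (g, weights) pairs,
    with g a concave scalar function and weights a dict of nonnegative element weights."""
    total = 0.0
    for g, weights in components:
        modular_sum = sum(weights[e] for e in S)   # w_j(S), a modular (additive) function
        total += g(modular_sum)                    # concave applied to modular
    return total

# Example over the ground set {0, 1, 2} with two concave-of-modular terms
components = [
    (math.sqrt, {0: 1.0, 1: 4.0, 2: 0.0}),
    (math.log1p, {0: 0.5, 1: 0.5, 2: 2.0}),
]
print(decomposable({0, 1}, components))
```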

    A simple combinatorial algorithm for submodular function minimization

    This paper presents a new simple algorithm for minimizing submodular functions. For integer-valued submodular functions, the algorithm runs in O(n^6 EO log nM) time, where n is the cardinality of the ground set, M is the maximum absolute value of the function, and EO is the time for function evaluation. The algorithm can be improved to run in O((n^4 EO + n^5) log nM) time. The strongly polynomial version of this faster algorithm runs in O((n^5 EO + n^6) log n) time for real-valued general submodular functions. These are comparable to the best known running time bounds for submodular function minimization. The algorithm can also be implemented in strongly polynomial time using only additions, subtractions, comparisons, and the oracle calls for function evaluation. This is the first fully combinatorial submodular function minimization algorithm that does not rely on the scaling method.
    United States. Office of Naval Research (ONR grant N00014-08-1-0029)
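
    The running times above are stated in the value-oracle model, where EO is the cost of one call to the evaluation oracle. A trivial counting wrapper (our own illustration, not from the paper) makes that accounting concrete.

```python
class CountingOracle:
    """Wraps a set-function oracle and counts evaluation calls (the EO factor)."""
    def __init__(self, f):
        self.f = f
        self.calls = 0

    def __call__(self, S):
        self.calls += 1
        return self.f(S)

# f(S) = |S| * (4 - |S|) depends on S only through a concave function of |S|,
# hence is submodular on a 4-element ground set.
oracle = CountingOracle(lambda S: len(S) * (4 - len(S)))
oracle({0, 1}); oracle({2}); oracle(set())
print(oracle.calls)  # 3 oracle calls so far
```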

    Algorithms for Approximate Minimization of the Difference Between Submodular Functions, with Applications

    We extend the work of Narasimhan and Bilmes [30] for minimizing set functions representable as a difference between submodular functions. Similar to [30], our new algorithms are guaranteed to monotonically reduce the objective function at every step. We empirically and theoretically show that the per-iteration cost of our algorithms is much less than [30], and our algorithms can be used to efficiently minimize a difference between submodular functions under various combinatorial constraints, a problem not previously addressed. We provide computational bounds and a hardness result on the multiplicative inapproximability of minimizing the difference between submodular functions. We show, however, that it is possible to give worst-case additive bounds by providing a polynomial-time computable lower bound on the minima. Finally, we show how a number of machine learning problems can be modeled as minimizing the difference between submodular functions. We experimentally show the validity of our algorithms by testing them on the problem of feature selection with submodular cost features. Comment: 17 pages, 8 figures. A shorter version of this appeared in Proc. Uncertainty in Artificial Intelligence (UAI), Catalina Islands, 2012
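
    One way to read the "monotonically reduce the objective" guarantee: at each step, the subtracted submodular function is replaced by a tight modular lower bound built from a permutation that lists the current set first, so the surrogate upper-bounds the true objective everywhere and agrees with it at the current set, and minimizing the surrogate cannot increase the objective. The sketch below is our reconstruction of that style of step, not the paper's exact algorithm; exhaustive_min stands in for a real submodular minimizer and only works on tiny ground sets.

```python
from itertools import combinations

def exhaustive_min(f, V):
    """Stand-in for a real submodular minimizer; only for tiny ground sets."""
    return min((set(S) for r in range(len(V) + 1) for S in combinations(V, r)), key=f)

def modular_lower_bound(g, V, current):
    """Tight modular lower bound of submodular g from a permutation that
    lists the elements of `current` first (so the bound is exact at `current`)."""
    order = list(current) + [e for e in V if e not in current]
    gains, prefix, prev = {}, set(), g(set())
    offset = g(set())
    for e in order:
        prefix.add(e)
        val = g(prefix)
        gains[e] = val - prev
        prev = val
    return lambda S: offset + sum(gains[e] for e in S)

def ds_step(f, g, V, current):
    """One monotone step for minimizing f - g, with f and g both submodular."""
    h = modular_lower_bound(g, V, current)   # h <= g everywhere, h(current) = g(current)
    surrogate = lambda S: f(S) - h(S)        # submodular surrogate upper-bounding f - g
    return exhaustive_min(surrogate, V)

if __name__ == "__main__":
    V = list(range(4))
    edges = [(0, 1), (1, 2), (2, 3)]
    f = lambda S: sum(1 for a, b in edges if (a in S) != (b in S))  # cut function
    g = lambda S: 2.0 * len(S) ** 0.5                               # concave of |S|
    print(ds_step(f, g, V, current={0}))
```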

    Symmetric Submodular Function Minimization Under Hereditary Family Constraints

    We present an efficient algorithm to find non-empty minimizers of a symmetric submodular function over any family of sets closed under inclusion. This for example includes families defined by a cardinality constraint, a knapsack constraint, a matroid independence constraint, or any combination of such constraints. Our algorithm makes O(n^3) oracle calls to the submodular function, where n is the cardinality of the ground set. In contrast, the problem of minimizing a general submodular function under a cardinality constraint is known to be inapproximable within o(√(n/log n)) (Svitkina and Fleischer [2008]). The algorithm is similar to an algorithm of Nagamochi and Ibaraki [1998] to find all nontrivial inclusionwise minimal minimizers of a symmetric submodular function over a set of cardinality n using O(n^3) oracle calls. Their procedure in turn is based on Queyranne's algorithm [1998] to minimize a symmetric submodular function. Comment: 13 pages, Submitted to SODA 201
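
    For reference, here is a compact sketch of Queyranne's unconstrained algorithm, on which the constrained method builds: repeatedly compute a maximum-adjacency-style ordering, record the last group as a candidate minimizer, and merge the final "pendant pair". This is an illustrative reconstruction assuming a symmetric value oracle (f(S) = f(V \ S)); the hereditary-constrained algorithm in the paper adds further machinery not shown here.

```python
def queyranne(f, V):
    """Minimize a symmetric submodular f over nonempty proper subsets of V."""
    groups = [frozenset([v]) for v in V]           # merged "super-elements"
    best_set, best_val = None, float("inf")
    while len(groups) > 1:
        order = _pendant_order(f, groups)
        u = order[-1]                              # last group: candidate against the rest
        if f(u) < best_val:
            best_set, best_val = set(u), f(u)
        t = order[-2]
        groups = list(order[:-2]) + [t | u]        # merge the pendant pair
    return best_set, best_val

def _pendant_order(f, groups):
    remaining = list(groups)
    order = [remaining.pop(0)]                     # arbitrary start group
    W = set(order[0])
    while remaining:
        # next group is the one "most tightly connected" to W,
        # i.e. minimizing f(W ∪ g) - f(g)
        g = min(remaining, key=lambda g: f(frozenset(W | g)) - f(g))
        remaining.remove(g)
        order.append(g)
        W |= g
    return order

if __name__ == "__main__":
    V = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3)]
    cut = lambda S: sum(1 for a, b in edges if (a in S) != (b in S))
    print(queyranne(cut, V))   # a minimum cut of the path graph, value 1
```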

    Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions

    We investigate three related and important problems connected to machine learning: approximating a submodular function everywhere, learning a submodular function (in a PAC-like setting [53]), and constrained minimization of submodular functions. We show that the complexity of all three problems depends on the 'curvature' of the submodular function, and provide lower and upper bounds that refine and improve previous results [3, 16, 18, 52]. Our proof techniques are fairly generic. We either use a black-box transformation of the function (for approximation and learning), or a transformation of algorithms to use an appropriate surrogate function (for minimization). Curiously, curvature has been known to influence approximations for submodular maximization [7, 55], but its effect on minimization, approximation, and learning has hitherto been open. We complete this picture, and also support our theoretical claims with empirical results. Comment: 21 pages. A shorter version appeared in Advances in Neural Information Processing Systems (NIPS) 2013
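
    The curvature in question is, for a normalized monotone submodular f, usually defined as kappa = 1 - min_j (f(V) - f(V - {j})) / f({j}); it is 0 for modular functions and approaches 1 as the function saturates. A minimal sketch, assuming a value oracle with f({j}) > 0 for every j (the helper name is ours):

```python
import math

def total_curvature(f, V):
    """kappa = 1 - min_j (f(V) - f(V - {j})) / f({j}) for normalized monotone f."""
    V = set(V)
    return 1.0 - min((f(V) - f(V - {j})) / f({j}) for j in V)

# Example: f(S) = sqrt(|S|), a monotone submodular function of |S|
f = lambda S: math.sqrt(len(S))
print(total_curvature(f, range(5)))  # about 0.76, strictly between 0 and 1
```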

    Reflection methods for user-friendly submodular optimization

    Recently, it has become evident that submodularity naturally captures widely occurring concepts in machine learning, signal processing and computer vision. Consequently, there is a need for efficient optimization procedures for submodular functions, especially for minimization problems. While general submodular minimization is challenging, we propose a new method that exploits existing decomposability of submodular functions. In contrast to previous approaches, our method is neither approximate nor impractical, nor does it need any cumbersome parameter tuning. Moreover, it is easy to implement and parallelize. A key component of our method is a formulation of the discrete submodular minimization problem as a continuous best-approximation problem that is solved through a sequence of reflections, and its solution can be easily thresholded to obtain an optimal discrete solution. This method solves both the continuous and discrete formulations of the problem, and therefore has applications in learning, inference, and reconstruction. In our experiments, we illustrate the benefits of our method on two image segmentation tasks. Comment: Neural Information Processing Systems (NIPS), United States (2013)
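
    A rough picture of the best-approximation view: find the nearest pair of points between two convex sets (the polytopes associated with the two parts of a decomposed objective) by averaged alternating reflections, then threshold the continuous solution. The sketch below is a generic Douglas-Rachford-style iteration with black-box projections proj_A and proj_B as hypothetical placeholders; projecting onto a base polytope is itself a subproblem the paper discusses, and the exact rounding rule may differ from the sign threshold shown.

```python
import numpy as np

def reflect(proj, z):
    return 2.0 * proj(z) - z                       # reflection R = 2P - I

def best_approximation(proj_A, proj_B, z0, iters=500):
    """Averaged alternating reflections; returns the 'shadow' point in set B."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z = 0.5 * (z + reflect(proj_A, reflect(proj_B, z)))
    return proj_B(z)

def threshold_to_set(x):
    """Round a continuous solution by sign: negative coordinates form the set."""
    return {j for j, xj in enumerate(x) if xj < 0}

if __name__ == "__main__":
    # Toy stand-ins for the two polytopes: disjoint boxes in the plane.
    proj_A = lambda z: np.clip(z, [0, 0], [1, 1])
    proj_B = lambda z: np.clip(z, [2, 0], [3, 1])
    print(best_approximation(proj_A, proj_B, np.zeros(2)))  # ~ [2., 0.]
```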

    Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions

    Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general-purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of "simple" functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.
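
    As a generic illustration of the flavor of method the abstract refers to (cheap per-coordinate updates, no full-dimensional vector operations), here is plain random coordinate descent on a smooth convex quadratic. This is our own illustration; the paper's algorithm operates on a specific projection problem arising from the decomposed submodular objective, which this sketch does not model.

```python
import numpy as np

def random_coordinate_descent(A, b, iters=5000, seed=0):
    """Minimize 0.5*x'Ax - b'x by exact minimization along one random coordinate per step."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(n)              # pick a coordinate uniformly at random
        grad_i = A[i] @ x - b[i]         # partial derivative with respect to x_i
        x[i] -= grad_i / A[i, i]         # cheap, single-coordinate update
    return x

if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite
    b = np.array([1.0, 1.0])
    print(random_coordinate_descent(A, b), np.linalg.solve(A, b))  # nearly equal
```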