
    Reflection methods for user-friendly submodular optimization

    Recently, it has become evident that submodularity naturally captures widely occurring concepts in machine learning, signal processing and computer vision. Consequently, there is a need for efficient optimization procedures for submodular functions, especially for minimization problems. While general submodular minimization is challenging, we propose a new method that exploits existing decomposability of submodular functions. In contrast to previous approaches, our method is neither approximate nor impractical, nor does it need any cumbersome parameter tuning. Moreover, it is easy to implement and parallelize. A key component of our method is a formulation of the discrete submodular minimization problem as a continuous best approximation problem that is solved through a sequence of reflections, and its solution can be easily thresholded to obtain an optimal discrete solution. This method solves both the continuous and discrete formulations of the problem, and therefore has applications in learning, inference, and reconstruction. In our experiments, we illustrate the benefits of our method on two image segmentation tasks. Comment: Neural Information Processing Systems (NIPS), United States (2013)
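
    The reflection idea described above can be illustrated in a few lines. The sketch below is a Douglas-Rachford-style averaged-reflection loop followed by the thresholding step mentioned in the abstract; it is not the paper's exact algorithm, and the projection oracles project_B1 and project_B2 are hypothetical stand-ins (plain box projections) for the base-polytope projections of the "simple" summands.

        import numpy as np

        def project_B1(y):
            # Stand-in for Euclidean projection onto the base polytope of the
            # first "simple" summand (an assumption; here just a box).
            return np.clip(y, -1.0, 1.0)

        def project_B2(y):
            # Stand-in projection for the second summand (also a box here).
            return np.clip(y, -0.5, 0.5)

        def reflect(project, y):
            # Reflection across a convex set: R(y) = 2 P(y) - y.
            return 2.0 * project(y) - y

        def averaged_reflections(z0, iters=200):
            # Averaged alternating reflections (Douglas-Rachford-style loop).
            z = z0.copy()
            for _ in range(iters):
                z = 0.5 * (z + reflect(project_B1, reflect(project_B2, z)))
            return project_B2(z)

        y = averaged_reflections(np.random.randn(5))
        discrete_solution = y < 0   # threshold the continuous solution
        print(discrete_solution)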

    Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions

    Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general-purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of "simple" functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.
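
    As a rough illustration of the block structure described above, the sketch below minimizes the squared norm of a sum of blocks, each constrained to a simple set, by updating one randomly chosen block per iteration with an exact minimization (a projection). The block sets and their projections are assumptions (boxes for the sake of a runnable example), not the paper's actual base polytopes.

        import numpy as np

        rng = np.random.default_rng(0)
        n, r = 6, 4                          # ground-set size and number of blocks
        bounds = rng.uniform(0.2, 1.0, size=(r, n))

        def project_block(i, v):
            # Hypothetical projection onto the i-th "simple" set (a box here).
            return np.clip(v, -bounds[i], bounds[i])

        # Start from feasible blocks.
        y = np.stack([project_block(i, rng.standard_normal(n)) for i in range(r)])

        for _ in range(500):
            i = rng.integers(r)              # pick one block uniformly at random
            s = y.sum(axis=0) - y[i]         # contribution of the other blocks
            # Exact block update: the point of the i-th set closest to -s
            # minimizes ||y_i + s||^2 over that block.
            y[i] = project_block(i, -s)

        print("residual norm:", np.linalg.norm(y.sum(axis=0)))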

    On the Convergence Rate of Decomposable Submodular Function Minimization

    Submodular functions describe a variety of discrete problems in machine learning, signal processing, and computer vision. However, minimizing submodular functions poses a number of algorithmic challenges. Recent work introduced an easy-to-use, parallelizable algorithm for minimizing submodular functions that decompose as the sum of "simple" submodular functions. Empirically, this algorithm performs extremely well, but no theoretical analysis was given. In this paper, we show that the algorithm converges linearly, and we provide upper and lower bounds on the rate of convergence. Our proof relies on the geometry of submodular polyhedra and draws on results from spectral graph theory. Comment: 17 pages, 3 figures
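
    For reference, linear convergence in the sense used above means the error contracts geometrically; in generic form (the paper's specific constants and polyhedral bounds are not reproduced here):

        \[
            \|x_k - x^{\star}\| \le C\,\rho^{k}\,\|x_0 - x^{\star}\|, \qquad 0 < \rho < 1,
        \]

    where, roughly, the upper and lower bounds mentioned in the abstract constrain the contraction factor \(\rho\).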

    Distributed Submodular Minimization over Networks: a Greedy Column Generation Approach

    Submodular optimization is a special class of combinatorial optimization arising in several machine learning problems, but also in cooperative control of complex systems. In this paper, we consider agents in an asynchronous, unreliable and time-varying directed network that aim at cooperatively solving submodular minimization problems in a fully distributed way. The challenge is that the (submodular) objective set-function is only partially known by the agents, that is, each agent is able to evaluate the function only for subsets including itself. We propose a distributed algorithm based on a proper linear programming reformulation of the combinatorial problem. Our algorithm builds on a column generation approach in which each agent maintains a local candidate basis and locally generates columns with a suitable greedy inner routine. A key feature of the proposed algorithm is that the pricing problem, which involves an exponential number of constraints, is solved by the agents through a polynomial-time greedy algorithm. We prove that the proposed distributed algorithm converges in finite time to an optimal solution of the submodular minimization problem, and we corroborate the theoretical results by performing numerical computations on instances of the s-t minimum graph cut problem. Comment: 12 pages, 4 figures, 57th IEEE Conference on Decision and Control
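
    The polynomial-time greedy pricing routine alluded to above is presumably of the classic Edmonds type: linear optimization over a base polytope by visiting elements in decreasing order of weight and assigning each its marginal gain. The sketch below shows that generic greedy rule on a toy cut function; the small graph and the function names are illustrative assumptions, not the paper's setup.

        import numpy as np

        def greedy_base_polytope(F, w, n):
            # Maximize w^T y over the base polytope of the submodular function F
            # (Edmonds' greedy rule): sort by weight, assign marginal gains.
            order = np.argsort(-w)
            y = np.zeros(n)
            prefix = []
            prev = F(prefix)
            for e in order:
                prefix.append(e)
                cur = F(prefix)
                y[e] = cur - prev
                prev = cur
            return y

        # Toy submodular function: cut value of a subset of a tiny undirected graph.
        edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
        def cut(S):
            S = set(S)
            return sum(1 for u, v in edges if (u in S) != (v in S))

        w = np.array([0.7, -0.2, 0.5, 0.1])
        print(greedy_base_polytope(cut, w, n=4))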