
    Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions

    We investigate three related problems central to machine learning: approximating a submodular function everywhere, learning a submodular function (in a PAC-like setting [53]), and constrained minimization of submodular functions. We show that the complexity of all three problems depends on the 'curvature' of the submodular function, and we provide lower and upper bounds that refine and improve previous results [3, 16, 18, 52]. Our proof techniques are fairly generic: we either use a black-box transformation of the function (for approximation and learning), or a transformation of algorithms to use an appropriate surrogate function (for minimization). Curiously, curvature has been known to influence approximations for submodular maximization [7, 55], but its effect on minimization, approximation, and learning had hitherto been open. We complete this picture, and support our theoretical claims with empirical results. Comment: 21 pages. A shorter version appeared in NIPS 2013.
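
    The abstract's central quantity, the total curvature $\kappa_f = 1 - \min_{j \in V} \frac{f(V) - f(V \setminus \{j\})}{f(\{j\})}$, is straightforward to compute from a value oracle. Below is a minimal Python sketch; the coverage function and ground set are illustrative assumptions, not the paper's experiments:

        def curvature(f, V):
            """Total curvature kappa = 1 - min_j [f(V) - f(V - {j})] / f({j})
            of a monotone submodular f with f({j}) > 0 for all j.
            kappa = 0 for modular f; kappa -> 1 as f saturates."""
            V = set(V)
            fV = f(V)
            return 1.0 - min((fV - f(V - {j})) / f({j}) for j in V)

        # Toy coverage function (monotone submodular): f(S) = size of the union.
        cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
        f = lambda S: len(set().union(*(cover[j] for j in S))) if S else 0
        print(curvature(f, cover))  # 1.0: element 2 is fully redundant given 1 and 3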

    Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints

    We investigate two new optimization problems: minimizing a submodular function subject to a submodular lower-bound constraint (submodular cover), and maximizing a submodular function subject to a submodular upper-bound constraint (submodular knapsack). We are motivated by a number of real-world applications in machine learning, including sensor placement and data subset selection, which require maximizing a certain submodular function (like coverage or diversity) while simultaneously minimizing another (like cooperative cost). These problems are often posed as minimizing the difference between submodular functions [14, 35], which is inapproximable in the worst case. We show, however, that by phrasing them as constrained optimization, which is more natural for many applications, we achieve a number of bounded approximation guarantees. We also show that the two problems are closely related: an approximation algorithm for one can be used to obtain an approximation guarantee for the other. We provide hardness results for both problems, showing that our approximation factors are tight up to log factors. Finally, we empirically demonstrate the performance and good scalability of our algorithms. Comment: 23 pages. A shorter version appeared in NIPS 2013.
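
    The constrained view reduces to a classical primitive. The following is a minimal sketch of the cost-benefit greedy for submodular set cover, under the assumption that the cost is modular (a per-element dict); the paper's algorithms handle a submodular cost by repeatedly substituting modular upper bounds for it. The names `cover_greedy`, `cost`, and `g` are illustrative:

        def cover_greedy(cost, g, V, tol=1e-9):
            """Cost-benefit greedy for submodular set cover: grow X by the
            element with the best marginal coverage per unit cost until
            g(X) reaches g(V). Assumes g is monotone submodular and cost
            is modular (per-element); gives the classical log-factor
            guarantee in that setting."""
            X, target = set(), g(set(V))
            while g(X) < target - tol:
                e = max(set(V) - X, key=lambda e: (g(X | {e}) - g(X)) / cost[e])
                if g(X | {e}) - g(X) <= tol:
                    break  # remaining elements add no coverage
                X.add(e)
            return X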

    The Maximum Traveling Salesman Problem with Submodular Rewards

    In this paper, we study the problem of finding a tour of maximum reward on an undirected graph, where the reward is a submodular function of the edges in the tour with curvature $\kappa$. This problem is known to be NP-hard. We analyze two simple algorithms for finding an approximate solution; both require $O(|V|^3)$ oracle calls to the submodular function. Their approximation factors are shown to be $\frac{1}{2+\kappa}$ and $\max\left\{\frac{2}{3(2+\kappa)},\ \frac{2}{3}(1-\kappa)\right\}$, respectively, so the second method has better bounds for low values of $\kappa$. We also examine how these algorithms perform on a directed graph, and investigate a method that accounts for edge costs in addition to rewards. The problem has direct applications in monitoring an environment using autonomous mobile sensors, where the sensing reward depends on the path taken. We provide simulation results to empirically evaluate the performance of the algorithms. Comment: Extended version of an ACC 2013 submission (including a p-system greedy bound with curvature).
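
    For concreteness, here is a sketch in the spirit of the first (greedy) algorithm, assuming a complete undirected graph with $|V| \ge 3$ and a reward oracle `f` on sets of edges; the degree and subtour checks keep the partial selection extendable to a Hamiltonian cycle, and the $|V|$ rounds over $O(|V|^2)$ candidate edges account for the $O(|V|^3)$ oracle calls. Tie-breaking, the directed case, and edge costs are handled in the paper, not here:

        import itertools

        def greedy_tour(V, f):
            """Greedy for max-TSP with submodular edge rewards: repeatedly
            add the feasible edge of largest marginal gain, where feasible
            means every vertex keeps degree <= 2 and no cycle closes before
            all |V| edges are chosen. f maps a set of edges (frozensets of
            two vertices) to a reward."""
            n = len(V)
            deg = {v: 0 for v in V}
            parent = {v: v for v in V}  # union-find to detect premature subtours

            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v

            tour = set()
            all_edges = {frozenset(e) for e in itertools.combinations(V, 2)}
            while len(tour) < n:
                def ok(e):
                    u, v = tuple(e)
                    if deg[u] >= 2 or deg[v] >= 2:
                        return False
                    # the cycle may only close on the final (n-th) edge
                    return find(u) != find(v) or len(tour) == n - 1
                e = max((e for e in all_edges - tour if ok(e)),
                        key=lambda e: f(tour | {e}) - f(tour))
                u, v = tuple(e)
                deg[u] += 1; deg[v] += 1
                parent[find(u)] = find(v)
                tour.add(e)
            return tour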

    Resilient Monotone Submodular Function Maximization

    In this paper, we focus on applications in machine learning, optimization, and control that call for the resilient selection of a few elements, e.g. features, sensors, or leaders, against a number of adversarial denial-of-service attacks or failures. In general, such resilient optimization problems are hard and cannot be solved exactly in polynomial time, even though they often involve objective functions that are monotone and submodular. Notwithstanding, in this paper we provide the first scalable, curvature-dependent algorithm for their approximate solution, which is valid for any number of attacks or failures and which, for functions with low curvature, guarantees superior approximation performance. Notably, curvature has been known to tighten approximations for several non-resilient maximization problems, yet its effect on resilient maximization had hitherto been unknown. We complement our theoretical analyses with supporting empirical evaluations. Comment: Improved the suboptimality guarantees of the proposed algorithm and corrected a typo in Algorithm 1's statement.
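
    The algorithm's two-stage structure can be sketched as follows. This is a hedged approximation: the bait rule below, which reserves the $\alpha$ individually most valuable elements before running standard greedy, is an illustrative stand-in for the paper's construction and does not carry its curvature-dependent guarantee:

        def resilient_select(f, V, k, alpha):
            """Two-stage sketch for resiliently selecting k elements when up
            to alpha of them may be removed by an adversary: first a 'bait'
            set of the alpha individually most valuable elements (absorbing
            removals), then standard greedy for the remaining k - alpha
            slots. Assumes f is monotone submodular and |V| >= k."""
            V = set(V)
            bait = set(sorted(V, key=lambda e: f({e}), reverse=True)[:alpha])
            S = set(bait)
            while len(S) < k:
                e = max(V - S, key=lambda e: f(S | {e}) - f(S))
                S.add(e)
            return S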