    Probably Approximately Correct Greedy Maximization with Efficient Bounds on Information Gain for Sensor Selection

    Submodular function maximization finds application in a variety of real-world decision-making problems. However, most existing methods, based on greedy maximization, assume it is computationally feasible to evaluate F, the function being maximized. Unfortunately, in many realistic settings F is too expensive to evaluate exactly even once. We present probably approximately correct greedy maximization, which requires access only to cheap anytime confidence bounds on F and uses them to prune elements. We show that, with high probability, our method returns an approximately optimal set. We propose novel, cheap confidence bounds for conditional entropy, which appears in many common choices of F and for which it is difficult to find unbiased or bounded estimates. Finally, results on a real-world dataset from a multi-camera tracking system in a shopping mall demonstrate that our approach performs comparably to existing methods, but at a fraction of the computational cost.
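
    For intuition, here is a minimal Python sketch of the pruning idea described above, assuming a hypothetical gain_bounds(S, e, effort) oracle that returns an anytime confidence interval on the marginal gain of element e given set S (wide at low effort, tighter at high effort); it illustrates bound-based pruning and is not the authors' implementation.

        import random

        def pac_greedy(ground_set, gain_bounds, k, refine_steps=5):
            """Greedy selection that prunes candidates via cheap confidence bounds (sketch)."""
            S = []
            for _ in range(k):
                candidates = [e for e in ground_set if e not in S]
                # Start with the cheapest (loosest) interval for every candidate.
                intervals = {e: gain_bounds(S, e, effort=1) for e in candidates}
                for effort in range(2, refine_steps + 1):
                    best_lower = max(lo for lo, _ in intervals.values())
                    # Discard candidates whose upper bound cannot beat the best lower bound.
                    intervals = {e: iv for e, iv in intervals.items() if iv[1] >= best_lower}
                    if len(intervals) == 1:
                        break
                    # Spend more evaluation effort only on the survivors.
                    intervals = {e: gain_bounds(S, e, effort=effort) for e in intervals}
                S.append(max(intervals, key=lambda e: intervals[e][1]))
            return S

        # Toy usage: noisy estimates of a modular gain; intervals shrink as effort grows.
        if __name__ == "__main__":
            true_gain = {e: e / 10.0 for e in range(10)}

            def gain_bounds(S, e, effort):
                width = 1.0 / effort
                noisy = true_gain[e] + random.uniform(-width, width) / 2
                return noisy - width, noisy + width

            print(pac_greedy(list(range(10)), gain_bounds, k=3))

    The point of the sketch is that most elements are eliminated with the loosest, cheapest bounds, so the expensive objective never has to be evaluated exactly.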

    Distributed Submodular Maximization with Limited Information

    We consider a class of distributed submodular maximization problems in which each agent must choose a single strategy from its strategy set. The global objective is to maximize a submodular function of the strategies chosen by each agent. When choosing a strategy, each agent has access to only a limited number of other agents' choices. For each of its strategies, an agent can evaluate its marginal contribution to the global objective given its information. The main objective is to investigate how this limitation of information about the strategies chosen by other agents affects the performance when agents make choices according to a local greedy algorithm. In particular, we provide lower bounds on the performance of greedy algorithms for submodular maximization, which depend on the clique number of a graph that captures the information structure. We also characterize graph-theoretic upper bounds in terms of the chromatic number of the graph. Finally, we demonstrate how certain graph properties limit the performance of the greedy algorithm. Simulations on several common models for random networks demonstrate our results.
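
    The following toy Python sketch illustrates the setting (not the paper's algorithm or bounds): agents choose in a fixed order, each best-responding to the choices it can actually observe. The visible map encoding the information structure and the coverage objective are assumptions made only for the example.

        def local_greedy(agents, strategies, visible, f):
            """Sequential local greedy with limited information (illustrative sketch)."""
            choice = {}
            for i in agents:
                observed = [choice[j] for j in visible[i] if j in choice]
                base = f(observed)
                # Each agent best-responds to the observed choices only.
                choice[i] = max(strategies[i], key=lambda s: f(observed + [s]) - base)
            return choice

        if __name__ == "__main__":
            cover = lambda chosen: len(set().union(*chosen)) if chosen else 0
            agents = [0, 1, 2]
            strategies = {0: [{1, 2}, {3}], 1: [{1, 2}, {4, 5}], 2: [{4, 5}, {6}]}
            visible = {0: set(), 1: {0}, 2: {0}}   # agent 2 cannot see agent 1's choice
            print(local_greedy(agents, strategies, visible, cover))
            # Agent 2 duplicates {4, 5} because it cannot see that agent 1 already took it.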

    A Memoization Framework for Scaling Submodular Optimization to Large Scale Problems

    We are motivated by large scale submodular optimization problems, where standard algorithms that treat the submodular functions in the \emph{value oracle model} do not scale. In this paper, we present a model called the \emph{precomputational complexity model}, along with a unifying memoization based framework, which looks at the specific form of the given submodular function. A key ingredient in this framework is the notion of a \emph{precomputed statistic}, which is maintained in the course of the algorithms. We show that we can easily integrate this idea into a large class of submodular optimization problems including constrained and unconstrained submodular maximization, minimization, difference of submodular optimization, optimization with submodular constraints and several other related optimization problems. Moreover, memoization can be integrated in both discrete and continuous relaxation flavors of algorithms for these problems. We demonstrate this idea for several commonly occurring submodular functions, and show how the precomputational model provides significant speedups compared to the value oracle model. Finally, we empirically demonstrate this for large scale machine learning problems of data subset selection and summarization.
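
    As a concrete illustration of the precomputed-statistic idea (a toy, not the paper's framework), consider a facility-location function f(S) = sum_i max_{j in S} sim[i, j]: caching, for every data point, its best similarity to the current set makes each marginal-gain query linear in the number of points rather than linear in the number of points times |S|.

        import numpy as np

        class MemoizedFacilityLocation:
            """Facility location with a cached 'best similarity so far' statistic."""

            def __init__(self, sim):
                self.sim = np.asarray(sim, dtype=float)    # sim[i, j]: similarity of point i to candidate j
                self.best = np.zeros(self.sim.shape[0])    # precomputed statistic, one entry per point
                self.value = 0.0

            def gain(self, j):
                # f(S + j) - f(S), computed from the cached statistic alone.
                return float(np.maximum(self.best, self.sim[:, j]).sum() - self.best.sum())

            def add(self, j):
                self.best = np.maximum(self.best, self.sim[:, j])
                self.value = float(self.best.sum())

        def memoized_greedy(sim, k):
            sim = np.asarray(sim, dtype=float)
            f = MemoizedFacilityLocation(sim)
            S = []
            for _ in range(k):
                j = max((j for j in range(sim.shape[1]) if j not in S), key=f.gain)
                f.add(j)
                S.append(j)
            return S, f.value

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            sim = rng.random((100, 20))    # 100 points, 20 candidate representatives
            print(memoized_greedy(sim, k=5))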

    Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap

    In this paper, we study the problem of \textit{constrained} and \textit{stochastic} continuous submodular maximization. Even though the objective function is not concave (nor convex) and is defined in terms of an expectation, we develop a variant of the conditional gradient method, called \alg, which achieves a \textit{tight} approximation guarantee. More precisely, for a monotone and continuous DR-submodular function and subject to a \textit{general} convex body constraint, we prove that \alg achieves a $[(1-1/e)\,\mathrm{OPT} - \epsilon]$ guarantee (in expectation) with $\mathcal{O}(1/\epsilon^3)$ stochastic gradient computations. This guarantee matches the known hardness results and closes the gap between deterministic and stochastic continuous submodular maximization. By using stochastic continuous optimization as an interface, we also provide the first $(1-1/e)$ tight approximation guarantee for maximizing a \textit{monotone but stochastic} submodular \textit{set} function subject to a general matroid constraint.
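
    The algorithm is referred to only as \alg above, but the template it belongs to, a stochastic variant of the conditional gradient (continuous greedy) method, can be sketched as follows. The averaging schedule, the cardinality polytope, and the sampled-coverage gradient oracle below are illustrative assumptions, not the paper's exact construction.

        import numpy as np

        def stochastic_continuous_greedy(grad_estimate, n, k, T=200):
            """Conditional-gradient ascent for a monotone DR-submodular surrogate (sketch).

            grad_estimate(x) returns an unbiased but noisy gradient at x.  The feasible
            set is the cardinality polytope {x in [0,1]^n : sum(x) <= k}."""
            x = np.zeros(n)
            d = np.zeros(n)
            for t in range(1, T + 1):
                rho = 4.0 / (t + 8) ** (2.0 / 3.0)           # averaging weight (illustrative schedule)
                d = (1 - rho) * d + rho * grad_estimate(x)   # running average of noisy gradients
                v = np.zeros(n)                              # linear maximization over the polytope:
                top = np.argsort(-d)[:k]                     # put mass on the k largest positive coords
                v[top[d[top] > 0]] = 1.0
                x = x + v / T                                # after T steps x stays feasible
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n, m, k = 30, 200, 5
            covers = rng.random((m, n)) < 0.1                # covers[u, j]: item j covers element u
            w = rng.random(m)

            def grad_estimate(x, batch=20):
                # Mini-batch gradient of the multilinear extension of weighted coverage.
                u = rng.choice(m, size=batch)
                miss = np.prod(1 - covers[u] * x, axis=1, keepdims=True)    # P[element uncovered]
                safe = np.where(covers[u], np.maximum(1 - x, 1e-12), 1.0)
                return (covers[u] * (miss / safe) * w[u, None]).sum(axis=0) * (m / batch)

            x = stochastic_continuous_greedy(grad_estimate, n, k)
            print("fractional solution rounded to a set:", sorted(np.argsort(-x)[:k]))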

    Learning and Optimization with Submodular Functions

    In many naturally occurring optimization problems one needs to ensure that the definition of the optimization problem lends itself to solutions that are tractable to compute. In cases where exact solutions cannot be computed tractably, it is beneficial to have strong guarantees on the tractable approximate solutions. In order to operate under these criteria, most optimization problems are cast under the umbrella of convexity or submodularity. In this report we will study design and optimization over a common class of functions called submodular functions. Set functions, and specifically submodular set functions, characterize a wide variety of naturally occurring optimization problems, and the property of submodularity of set functions has deep theoretical consequences with wide ranging applications. Informally, the property of submodularity of set functions concerns the intuitive "principle of diminishing returns". This property states that adding an element to a smaller set has at least as much value as adding it to a larger set. Common examples of submodular monotone functions are entropies, concave functions of cardinality, and matroid rank functions; non-monotone examples include graph cuts, network flows, and mutual information. In this paper we will review the formal definition of submodularity; the optimization of submodular functions, both maximization and minimization; and finally discuss some applications in relation to learning and reasoning using submodular functions.
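
    The informal statement of diminishing returns above can be checked numerically. The tiny Python example below (a toy coverage function, chosen only for illustration) verifies that adding the same set to a smaller collection gains at least as much as adding it to a larger one.

        def coverage(chosen):
            """f(S) = number of distinct elements covered by the chosen sets."""
            return len(set().union(*chosen)) if chosen else 0

        s1, s2, s3 = {1, 2, 3}, {3, 4}, {4, 5, 6}
        A = [s1]                                    # smaller collection
        B = [s1, s2]                                # superset of A
        gain_A = coverage(A + [s3]) - coverage(A)   # adds {4, 5, 6}: gain 3
        gain_B = coverage(B + [s3]) - coverage(B)   # adds only {5, 6}: gain 2
        assert gain_A >= gain_B                     # diminishing returns holds
        print(gain_A, gain_B)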

    Differentiable Greedy Submodular Maximization: Guarantees, Gradient Estimators, and Applications

    Motivated by, e.g., sensitivity analysis and end-to-end learning, the demand for differentiable optimization algorithms has been significantly increasing. In this paper, we establish a theoretically guaranteed versatile framework that makes the greedy algorithm for monotone submodular function maximization differentiable. We smooth the greedy algorithm via randomization, and prove that it almost recovers the original approximation guarantees in expectation for the cases of cardinality and $\kappa$-extensible system constraints. We also show how to efficiently compute unbiased gradient estimators of any expected output-dependent quantities. We demonstrate the usefulness of our framework by instantiating it for various applications.
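
    One natural way to realize the randomized smoothing described above (an illustrative construction, not necessarily the paper's) is to sample each greedy step from a softmax over marginal gains shifted by learnable per-item scores; the log-derivative of the sampling probabilities then yields a score-function (REINFORCE-style) estimator of the gradient of any expected output quantity. The theta offsets and the temperature are assumptions made for the sketch.

        import numpy as np

        def smoothed_greedy(f, ground, k, theta, temp=1.0, rng=None):
            """Randomized greedy: sample each item from a softmax over (marginal gain + theta[e]).

            Returns the chosen set and d log p(trajectory) / d theta, so that E[f(S)] can be
            differentiated with the identity  dE[f(S)]/dtheta = E[f(S) * dlogp/dtheta]."""
            rng = rng or np.random.default_rng()
            S, dlogp = [], np.zeros_like(theta, dtype=float)
            for _ in range(k):
                cand = [e for e in ground if e not in S]
                base = f(S)
                scores = np.array([f(S + [e]) - base + theta[e] for e in cand]) / temp
                p = np.exp(scores - scores.max())
                p /= p.sum()
                i = rng.choice(len(cand), p=p)
                for j, e in enumerate(cand):     # softmax log-derivative w.r.t. theta
                    dlogp[e] += ((1.0 if j == i else 0.0) - p[j]) / temp
                S.append(cand[i])
            return S, dlogp

        def estimate_gradient(f, ground, k, theta, n_samples=500, rng=None):
            """Monte-Carlo score-function estimate of d E[f(S)] / d theta."""
            rng = rng or np.random.default_rng(0)
            g = np.zeros_like(theta, dtype=float)
            for _ in range(n_samples):
                S, dlogp = smoothed_greedy(f, ground, k, theta, rng=rng)
                g += f(S) * dlogp
            return g / n_samples

        if __name__ == "__main__":
            sets = [{1, 2}, {2, 3}, {3, 4, 5}, {5}]
            cover = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
            print(estimate_gradient(cover, list(range(len(sets))), k=2, theta=np.zeros(len(sets))))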

    Adaptive Sequence Submodularity

    In many machine learning applications, one needs to interactively select a sequence of items (e.g., recommending movies based on a user's feedback) or make sequential decisions in a certain order (e.g., guiding an agent through a series of states). Not only do sequences already pose a dauntingly large search space, but we must also take into account past observations, as well as the uncertainty of future outcomes. Without further structure, finding an optimal sequence is notoriously challenging, if not completely intractable. In this paper, we view the problem of adaptive and sequential decision making through the lens of submodularity and propose an adaptive greedy policy with strong theoretical guarantees. Additionally, to demonstrate the practical utility of our results, we run experiments on Amazon product recommendation and Wikipedia link prediction tasks.

    Maximizing Influence in Social Networks: A Two-Stage Stochastic Programming Approach That Exploits Submodularity

    We consider stochastic influence maximization problems arising in social networks. In contrast to existing studies that involve greedy approximation algorithms with a 63% (that is, $1-1/e$) performance guarantee, our work focuses on solving the problem optimally. To this end, we introduce a new class of problems that we refer to as two-stage stochastic submodular optimization models. We propose a delayed constraint generation algorithm to find the optimal solution to this class of problems with a finite number of samples. The influence maximization problems of interest are special cases of this general problem class. We show that the submodularity of the influence function can be exploited to develop strong optimality cuts that are more effective than the standard optimality cuts available in the literature. Finally, we report our computational experiments with large-scale real-world datasets for two fundamental influence maximization problems, independent cascade and linear threshold, and show that our proposed algorithm outperforms the greedy algorithm.
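
    The delayed constraint generation idea can be sketched with the standard submodular optimality cut theta <= f(S) + sum_{j not in S} [f(S + j) - f(S)] x_j, which is valid for any monotone submodular f. In the toy Python version below, a brute-force search over all feasible sets stands in for the mixed-integer master problem and a deterministic coverage function stands in for the sampled influence objective; both are assumptions made for illustration only.

        from itertools import combinations

        def submodular_cut(f, ground, S):
            """Cut  theta <= f(S) + sum_{j not in S} rho_j(S) * x_j  (valid by submodularity)."""
            rho = {j: f(S | {j}) - f(S) for j in ground - S}
            return f(S), rho

        def delayed_constraint_generation(f, ground, k, max_iters=50, tol=1e-6):
            cuts = []                                    # accumulated (constant, rho) pairs
            best_set, best_val = None, -float("inf")
            for _ in range(max_iters):
                def bound(S):                            # tightest upper bound implied by the cuts
                    if not cuts:
                        return float("inf")
                    return min(c + sum(r.get(j, 0.0) for j in S) for c, r in cuts)
                # "Master problem": brute force over all size-k sets (stand-in for the MIP).
                S_hat = max((set(c) for c in combinations(sorted(ground), k)), key=bound)
                theta_hat, true_val = bound(S_hat), f(S_hat)
                if true_val > best_val:
                    best_set, best_val = S_hat, true_val
                if theta_hat <= true_val + tol:          # bound met: incumbent is provably optimal
                    return best_set, best_val
                cuts.append(submodular_cut(f, ground, S_hat))
            return best_set, best_val

        if __name__ == "__main__":
            reach = {0: {1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {6}}
            f = lambda S: len(set().union(*(reach[j] for j in S))) if S else 0
            print(delayed_constraint_generation(f, set(reach), k=2))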

    Optimal approximation for submodular and supermodular optimization with bounded curvature

    We design new approximation algorithms for the problems of optimizing submodular and supermodular functions subject to a single matroid constraint. Specifically, we consider the case in which we wish to maximize a nondecreasing submodular function or minimize a nonincreasing supermodular function in the setting of bounded total curvature $c$. In the case of submodular maximization with curvature $c$, we obtain a $(1-c/e)$-approximation, the first improvement over the greedy $(1-e^{-c})/c$-approximation of Conforti and Cornuéjols from 1984, which holds for a cardinality constraint, as well as over recent approaches that hold for an arbitrary matroid constraint. Our approach is based on modifications of the continuous greedy algorithm and non-oblivious local search, and allows us to approximately maximize the sum of a nonnegative, nondecreasing submodular function and a (possibly negative) linear function. We show how to reduce both submodular maximization and supermodular minimization to this general problem when the objective function has bounded total curvature. We prove that the approximation results we obtain are the best possible in the value oracle model, even in the case of a cardinality constraint. We define an extension of the notion of curvature to general monotone set functions and show a $(1-c)$-approximation for maximization and a $1/(1-c)$-approximation for minimization in these cases. Finally, we give two concrete applications of our results in the settings of maximum entropy sampling and the column-subset selection problem.
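
    The curvature parameter can be made concrete with the usual definition for a nondecreasing submodular f on ground set V: c = 1 - min_j [f(V) - f(V \ {j})] / f({j}), the minimum taken over j with f({j}) > 0. The short Python example below (a toy coverage function, assumed only for illustration) computes c and compares the classical greedy factor (1 - e^{-c})/c with the improved 1 - c/e.

        import math

        def total_curvature(f, ground):
            """c = 1 - min_j [f(V) - f(V - {j})] / f({j}), the usual total curvature."""
            V = set(ground)
            ratios = [(f(V) - f(V - {j})) / f({j}) for j in ground if f({j}) > 0]
            return 1.0 - min(ratios)

        if __name__ == "__main__":
            reach = {0: {1, 2}, 1: {2, 3}, 2: {4}}
            f = lambda S: len(set().union(*(reach[j] for j in S))) if S else 0
            c = total_curvature(f, reach)
            print("curvature c               =", c)                               # 0.5 here
            print("greedy factor (1-e^-c)/c  =", (1 - math.exp(-c)) / c if c else 1.0)
            print("improved factor 1 - c/e   =", 1 - c / math.e)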

    Improved Approximation Algorithms for k-Submodular Function Maximization

    This paper presents a polynomial-time $1/2$-approximation algorithm for maximizing nonnegative $k$-submodular functions. This improves upon the previous $\max\{1/3, 1/(1+a)\}$-approximation by Ward and Živný (SODA'14), where $a=\max\{1, \sqrt{(k-1)/4}\}$. We also show that for monotone $k$-submodular functions there is a polynomial-time $k/(2k-1)$-approximation algorithm, while for any $\varepsilon>0$ a $((k+1)/2k+\varepsilon)$-approximation algorithm for maximizing monotone $k$-submodular functions would require exponentially many queries. In particular, our hardness result implies that our algorithms are asymptotically tight. We also extend the approach to provide constant factor approximation algorithms for maximizing skew-bisubmodular functions, which were recently introduced as generalizations of bisubmodular functions.
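
    For readers new to the setting: a k-submodular function assigns a value to a choice of k disjoint subsets (equivalently, each element is either left out or given one of k types). The sketch below is the natural greedy baseline for the monotone case, repeatedly committing the (element, type) pair with the largest marginal gain; it is not the paper's improved randomized algorithm, and the separable "topic coverage" objective is an assumption made for the example.

        def greedy_k_submodular(f, ground, k):
            """Plain greedy for monotone k-submodular maximization (baseline sketch)."""
            assign = {}                                   # element -> type in {0, ..., k-1}
            improved = True
            while improved:
                improved = False
                base = f(assign)
                best = None
                for e in ground:
                    if e in assign:
                        continue
                    for t in range(k):
                        gain = f({**assign, e: t}) - base
                        if gain > 0 and (best is None or gain > best[0]):
                            best = (gain, e, t)
                if best is not None:
                    assign[best[1]] = best[2]
                    improved = True
            return assign

        if __name__ == "__main__":
            # Each "topic" reaches its own audience; the sum of per-topic coverages is
            # monotone k-submodular, which makes this a valid toy objective.
            audience = [
                {0: {1, 2}, 1: {2, 3}, 2: {4}},           # audience reached per element, topic 0
                {0: {5}, 1: {5, 6, 7}, 2: {7}},           # audience reached per element, topic 1
            ]
            def f(assign):
                total = 0
                for t, reach in enumerate(audience):
                    chosen = [reach[e] for e, tt in assign.items() if tt == t]
                    total += len(set().union(*chosen)) if chosen else 0
                return total
            print(greedy_k_submodular(f, ground=[0, 1, 2], k=2))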