
    Subdeterminant Maximization via Nonconvex Relaxations and Anti-concentration

    Several fundamental problems that arise in optimization and computer science can be cast as follows: Given vectors $v_1,\ldots,v_m \in \mathbb{R}^d$ and a constraint family $\mathcal{B} \subseteq 2^{[m]}$, find a set $S \in \mathcal{B}$ that maximizes the squared volume of the simplex spanned by the vectors in $S$. A motivating example is the data-summarization problem in machine learning, where one is given a collection of vectors that represent data such as documents or images. The volume of a set of vectors is used as a measure of their diversity, and partition or matroid constraints over $[m]$ are imposed in order to ensure resource or fairness constraints. Recently, Nikolov and Singh presented a convex program and showed how it can be used to estimate the value of the most diverse set when $\mathcal{B}$ corresponds to a partition matroid. This result was recently extended to regular matroids in works of Straszak and Vishnoi, and Anari and Oveis Gharan. The question of whether these estimation algorithms can be converted into the more useful approximation algorithms -- that also output a set -- remained open. The main contribution of this paper is to give the first approximation algorithms for both partition and regular matroids. We present novel formulations for the subdeterminant maximization problem for these matroids; this reduces them to the problem of finding a point that maximizes the absolute value of a nonconvex function over a Cartesian product of probability simplices. The technical core of our results is a new anti-concentration inequality for dependent random variables that allows us to relate the optimal value of these nonconvex functions to their value at a random point. Unlike prior work on the constrained subdeterminant maximization problem, our proofs do not rely on real-stability or convexity and could be of independent interest both in algorithms and complexity. Comment: in FOCS 2017
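    As a concrete illustration of the objective only: the squared volume of the parallelepiped spanned by a subset $S$ of vectors equals the determinant of its Gram matrix (the simplex volume differs only by a factorial factor). The brute-force sketch below is ours, not the paper's; the partition-matroid encoding, names, and toy data are assumptions, and the paper's contribution is an efficient approximation algorithm rather than this enumeration.

        from itertools import combinations, product
        import numpy as np

        def squared_volume(vectors, subset):
            """Determinant of the Gram matrix of the chosen vectors (0 if dependent)."""
            V = np.array([vectors[i] for i in subset]).T   # d x |S| matrix
            return float(np.linalg.det(V.T @ V))

        def brute_force_partition_matroid(vectors, parts, picks_per_part):
            """Enumerate every feasible set of a partition matroid that takes
            `picks_per_part` indices from each block in `parts` (illustration only)."""
            best_val, best_set = -1.0, None
            for combo in product(*(combinations(p, picks_per_part) for p in parts)):
                S = [i for block in combo for i in block]
                val = squared_volume(vectors, S)
                if val > best_val:
                    best_val, best_set = val, S
            return best_set, best_val

        # Toy usage: 6 random vectors in R^3, two blocks, one pick per block.
        rng = np.random.default_rng(0)
        vecs = [rng.standard_normal(3) for _ in range(6)]
        print(brute_force_partition_matroid(vecs, parts=[[0, 1, 2], [3, 4, 5]], picks_per_part=1))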

    Maximization of Non-Monotone Submodular Functions

    A litany of questions from a wide variety of scientific disciplines can be cast as non-monotone submodular maximization problems. Since this class of problems includes max-cut, it is NP-hard. Thus, general-purpose algorithms for the class tend to be approximation algorithms. For unconstrained problem instances, a notable recent result is the algorithm of Buchbinder et al. (2012), which guarantees a ½-approximation to the maximum. Building on this, for problems subject to cardinality constraints, Buchbinder et al. (2014) offer guarantees in the range [0.356, ½ + o(1)]. For more complex constraints and settings, earlier work still provides the best known approximation factors: for constraints that can be characterized as a solvable polytope, Chekuri et al. (2011) provide guarantees, and for the online secretary setting, Gupta et al. (2010) provide guarantees. In sum, the current body of work on non-monotone submodular maximization lays strong foundations. However, there remains ample room for future algorithm development.
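    The ½-approximation of Buchbinder et al. (2012) for the unconstrained case is the randomized "double greedy". A minimal sketch follows; the cut function and graph are illustrative choices of ours, since the cut function of an undirected graph is a standard non-monotone submodular objective.

        import random

        def double_greedy(f, ground_set, rng=random.random):
            """Randomized double greedy for unconstrained non-monotone submodular
            maximization (1/2-approximation in expectation)."""
            X, Y = set(), set(ground_set)
            for e in ground_set:
                a = f(X | {e}) - f(X)          # gain of adding e to X
                b = f(Y - {e}) - f(Y)          # gain of removing e from Y
                a_pos, b_pos = max(a, 0.0), max(b, 0.0)
                # Add e with probability proportional to a_pos (add if both are zero).
                if a_pos + b_pos == 0.0 or rng() < a_pos / (a_pos + b_pos):
                    X.add(e)
                else:
                    Y.remove(e)
            return X                            # X == Y at this point

        # Example objective: the cut function of a small undirected graph.
        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
        def cut_value(S):
            return sum(1 for u, v in edges if (u in S) != (v in S))

        print(double_greedy(cut_value, range(4)))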

    Differentially Private Decomposable Submodular Maximization

    We study the problem of differentially private constrained maximization of decomposable submodular functions. A submodular function is decomposable if it takes the form of a sum of submodular functions. The special case of maximizing a monotone, decomposable submodular function under cardinality constraints is known as the Combinatorial Public Projects (CPP) problem [Papadimitriou et al., 2008]. Previous work by Gupta et al. [2010] gave a differentially private algorithm for the CPP problem. We extend this work by designing differentially private algorithms for both monotone and non-monotone decomposable submodular maximization under general matroid constraints, with competitive utility guarantees. We complement our theoretical bounds with experiments demonstrating empirical performance, which improves over differentially private algorithms designed for general submodular maximization and is close to the performance of non-private algorithms.
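    For intuition on how privacy typically enters such algorithms, the sketch below shows one standard template for the cardinality-constrained (CPP-style) case: a greedy that picks each element with the exponential mechanism, in the spirit of Gupta et al. [2010]. This is an illustration under assumptions, not the algorithm of this paper; the per-round budget split, the bound of 1 on each user's influence on a marginal gain, and the toy coverage objective are all ours.

        import math, random

        def private_greedy(marginal_gain, ground_set, k, epsilon):
            """Pick k elements; each round runs the exponential mechanism with
            budget epsilon / k (basic composition), assuming each user changes
            any marginal gain by at most 1."""
            eps_round = epsilon / k
            chosen, remaining = [], list(ground_set)
            for _ in range(k):
                # Pr[e] proportional to exp(eps_round * gain(e) / 2).
                scores = [math.exp(eps_round * marginal_gain(chosen, e) / 2.0)
                          for e in remaining]
                e = random.choices(remaining, weights=scores, k=1)[0]
                chosen.append(e)
                remaining.remove(e)
            return chosen

        # Toy decomposable objective: coverage of hypothetical user interest sets.
        item_covers = [{0, 1}, {1, 2}, {2, 3}, {3, 4}]
        def marginal_gain(S, e):
            covered = set().union(*(item_covers[i] for i in S)) if S else set()
            return len(item_covers[e] - covered)

        print(private_greedy(marginal_gain, ground_set=range(4), k=2, epsilon=1.0))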