
    Optimally Repurposing Existing Algorithms to Obtain Exponential-Time Approximations

    The goal of this paper is to understand how exponential-time approximation algorithms can be obtained from existing polynomial-time approximation algorithms, existing parameterized exact algorithms, and existing parameterized approximation algorithms. More formally, we consider a monotone subset minimization problem over a universe of size $n$ (e.g., Vertex Cover or Feedback Vertex Set). We have access to an algorithm that finds an $\alpha$-approximate solution in time $c^k \cdot n^{O(1)}$ if a solution of size $k$ exists (and, more generally, an extension algorithm that can approximate in a similar way if a set can be extended to a solution with $k$ further elements). Our goal is to obtain a $d^n \cdot n^{O(1)}$-time $\beta$-approximation algorithm for the problem with $d$ as small as possible. That is, for every fixed $\alpha, c, \beta \geq 1$, we would like to determine the smallest possible $d$ that can be achieved in a model where our problem-specific knowledge is limited to checking the feasibility of a solution and invoking the $\alpha$-approximate extension algorithm. Our results completely resolve this question: (1) For every fixed $\alpha, c, \beta \geq 1$, a simple algorithm (``approximate monotone local search'') achieves the optimum value of $d$. (2) Given $\alpha, c, \beta \geq 1$, we can efficiently compute the optimum $d$ up to any precision $\varepsilon > 0$. Earlier work presented algorithms (but no lower bounds) for the special case $\alpha = \beta = 1$ [Fomin et al., J. ACM 2019] and for the special case $\alpha = \beta > 1$ [Esmer et al., ESA 2022]. Our work generalizes these results and in particular confirms that the earlier algorithms are optimal in these special cases. Comment: 80 pages, 5 figures
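
The "approximate monotone local search" scheme named in the abstract can be sketched on a toy instance: commit to a random subset of the universe and ask an extension algorithm for the remaining elements. The sketch below uses Vertex Cover with a brute-force $2^k$ branching oracle as a stand-in for the parameterized extension algorithm; the sample size `t` and trial count are illustrative, not the paper's optimized trade-off.

```python
import random

def vc_extends(edges, chosen, budget):
    """Brute-force extension oracle for Vertex Cover: try to extend
    `chosen` to a cover using at most `budget` further vertices,
    via the standard 2^k branching on an uncovered edge."""
    uncovered = [e for e in edges if e[0] not in chosen and e[1] not in chosen]
    if not uncovered:
        return set(chosen)
    if budget == 0:
        return None
    u, v = uncovered[0]
    for w in (u, v):  # one endpoint of an uncovered edge must be in any cover
        result = vc_extends(edges, chosen | {w}, budget - 1)
        if result is not None:
            return result
    return None

def approximate_monotone_local_search(vertices, edges, k, t, trials=200, seed=0):
    """Sample a random t-subset, commit to it, and ask the extension
    oracle for at most k - t further elements. The parameters `t` and
    `trials` are illustrative, not the paper's optimized values."""
    rng = random.Random(seed)
    for _ in range(trials):
        committed = set(rng.sample(sorted(vertices), t))
        solution = vc_extends(edges, committed, k - t)
        if solution is not None:
            return solution
    return None
```

The point of the paper is that, with the sampling distribution and budget split chosen optimally, this simple shape attains the best possible base $d$ for every $(\alpha, c, \beta)$.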

    Vertex Sparsifiers: New Results from Old Techniques

    Given a capacitated graph $G = (V,E)$ and a set of terminals $K \subseteq V$, how should we produce a graph $H$ only on the terminals $K$ so that every (multicommodity) flow between the terminals in $G$ could be supported in $H$ with low congestion, and vice versa? (Such a graph $H$ is called a flow-sparsifier for $G$.) What if we want $H$ to be a "simple" graph? What if we allow $H$ to be a convex combination of simple graphs? Improving on results of Moitra [FOCS 2009] and Leighton and Moitra [STOC 2010], we give efficient algorithms for constructing: (a) a flow-sparsifier $H$ that maintains congestion up to a factor of $O(\log k/\log \log k)$, where $k = |K|$, (b) a convex combination of trees over the terminals $K$ that maintains congestion up to a factor of $O(\log k)$, and (c) for a planar graph $G$, a convex combination of planar graphs that maintains congestion up to a constant factor. This requires us to give a new algorithm for the 0-extension problem, the first one in which the preimages of each terminal are connected in $G$. Moreover, this result extends to minor-closed families of graphs. Our improved bounds immediately imply improved approximation guarantees for several terminal-based cut and ordering problems. Comment: An extended abstract appears in the 13th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2010. Final version to appear in SIAM J. Computing
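
As a concrete companion to the definition of a flow-sparsifier: on small instances one can sanity-check a candidate $H$ by comparing single-commodity max-flows between terminal pairs in $G$ and in $H$ (pairwise flows are only a necessary condition; the paper's guarantee concerns multicommodity congestion). The Edmonds-Karp routine below is a generic textbook subroutine for running such a check, not the paper's construction.

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Edmonds-Karp max flow on an undirected capacitated graph.
    edges: list of (u, v, capacity) triples; vertices are 0..n-1."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
        cap[v][u] += c  # undirected edge: capacity usable in both directions
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return total  # no augmenting path left
        # find the bottleneck capacity along the path, then augment
        bottleneck, v = float('inf'), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        total += bottleneck
```

Running this for each terminal pair in both graphs and comparing the ratios gives a crude lower bound on the congestion factor a candidate sparsifier incurs.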

    Subexponential LPs Approximate Max-Cut

    We show that for every $\varepsilon > 0$, the degree-$n^\varepsilon$ Sherali-Adams linear program (with $\exp(\tilde{O}(n^\varepsilon))$ variables and constraints) approximates the maximum cut problem within a factor of $(\frac{1}{2}+\varepsilon')$, for some $\varepsilon'(\varepsilon) > 0$. Our result provides a surprising converse to known lower bounds against all linear programming relaxations of Max-Cut, and hence resolves the extension complexity of approximate Max-Cut for approximation factors close to $\frac{1}{2}$ (up to the function $\varepsilon'(\varepsilon)$). Previously, only semidefinite programs and spectral methods were known to yield approximation factors better than $\frac{1}{2}$ for Max-Cut in time $2^{o(n)}$. We also show that constant-degree Sherali-Adams linear programs (with $\text{poly}(n)$ variables and constraints) can solve Max-Cut with approximation factor close to $1$ on graphs of small threshold rank: this is the first connection of which we are aware between threshold rank and linear programming-based algorithms. Our results separate the power of the Sherali-Adams and Lov\'asz-Schrijver hierarchies for approximating Max-Cut, since it is known that a $(\frac{1}{2}+\varepsilon)$-approximation of Max-Cut requires $\Omega_\varepsilon(n)$ rounds in the Lov\'asz-Schrijver hierarchy. We also provide a subexponential-time approximation for Khot's Unique Games problem: we show that for every $\varepsilon > 0$ the degree-$(n^\varepsilon \log q)$ Sherali-Adams linear program distinguishes instances of Unique Games of value $\geq 1-\varepsilon'$ from instances of value $\leq \varepsilon'$, for some $\varepsilon'(\varepsilon) > 0$, where $q$ is the alphabet size. Such guarantees are qualitatively similar to those of previous subexponential-time algorithms for Unique Games, but our algorithm does not rely on semidefinite programming or subspace enumeration techniques.
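
For context on the $\frac{1}{2}$ threshold the paper moves past: that baseline is achieved in polynomial time without any LP at all, for instance by local search, since at a local optimum every vertex has at least half of its incident edges cut. A minimal sketch of this standard folklore bound (not the paper's algorithm):

```python
def local_search_cut(n, edges):
    """Deterministic local search for Max-Cut: flip any vertex whose
    flip increases the cut. At a local optimum each vertex has at
    least half its incident edges cut, so the cut value is >= |E|/2,
    i.e. a 1/2-approximation. Terminates because each flip strictly
    increases the cut value."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            same = sum(1 for u in adj[v] if side[u] == side[v])
            if same > len(adj[v]) - same:  # flipping v cuts more edges
                side[v] ^= 1
                improved = True
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return cut, side
```

Beating this factor in time $2^{o(n)}$ is exactly where, before this work, only SDP and spectral methods were known to succeed.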

    Optimal Approximation for Submodular and Supermodular Optimization with Bounded Curvature

    We design new approximation algorithms for the problems of optimizing submodular and supermodular functions subject to a single matroid constraint. Specifically, we consider the case in which we wish to maximize a monotone increasing submodular function or minimize a monotone decreasing supermodular function with a bounded total curvature c. Intuitively, the parameter c represents how nonlinear a function f is: when c = 0, f is linear, while for c = 1, f may be an arbitrary monotone increasing submodular function. For the case of submodular maximization with total curvature c, we obtain a (1 − c/e)-approximation, the first improvement over the greedy algorithm of Conforti and Cornuéjols from 1984, which holds for a cardinality constraint, as well as a recent analogous result for an arbitrary matroid constraint. Our approach is based on modifications of the continuous greedy algorithm and nonoblivious local search, and it allows us to approximately maximize the sum of a nonnegative, monotone increasing submodular function and a (possibly negative) linear function. We show how to reduce both submodular maximization and supermodular minimization to this general problem when the objective function has bounded total curvature. We prove that the approximation results we obtain are the best possible in the value oracle model, even in the case of a cardinality constraint. We define an extension of the notion of curvature to general monotone set functions and show a (1 − c)-approximation for maximization and a 1/(1 − c)-approximation for minimization. Finally, we give two concrete applications of our results in the settings of maximum entropy sampling and the column-subset selection problem.
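
The greedy baseline that the paper improves upon is easy to state; the sketch below runs it on a coverage function under a cardinality constraint. The coverage instance is illustrative; the guarantees in the comments summarize the abstract.

```python
def greedy_submodular(ground, f, k):
    """Plain greedy for max f(S) with |S| <= k, f monotone submodular:
    repeatedly add the element of largest marginal gain. The classical
    guarantee is (1 - 1/e); with total curvature c the greedy bound of
    Conforti and Cornuejols applies under a cardinality constraint,
    and the paper's algorithm improves this to (1 - c/e), even for a
    general matroid constraint."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground - S:
            gain = f(S | {e}) - f(S)  # marginal value of adding e
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break  # no element has positive marginal gain
        S.add(best)
    return S

# Illustrative coverage instance: element -> set of items it covers.
SETS = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5}}

def coverage(S):
    """Monotone submodular: number of items covered by the union."""
    return len(set().union(*(SETS[e] for e in S))) if S else 0
```

Coverage functions have curvature c = 1 in general, so the greedy's (1 − 1/e) bound is tight for them; the paper's gains kick in as c drops below 1.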

    Submodular Maximization Meets Streaming: Matchings, Matroids, and More

    We study the problem of finding a maximum matching in a graph given by an input stream listing its edges in some arbitrary order, where the quantity to be maximized is given by a monotone submodular function on subsets of edges. This problem, which we call maximum submodular-function matching (MSM), is a natural generalization of maximum weight matching (MWM), which is in turn a generalization of maximum cardinality matching (MCM). We give two incomparable algorithms for this problem with space usage falling in the semi-streaming range (they store only $O(n)$ edges, using $O(n\log n)$ working memory) that achieve approximation ratios of $7.75$ in a single pass and $(3+\epsilon)$ in $O(\epsilon^{-3})$ passes, respectively. The operations of these algorithms mimic those of Zelke's and McGregor's respective algorithms for MWM; the novelty lies in the analysis for the MSM setting. In fact, we identify a general framework for MWM algorithms that allows this kind of adaptation to the broader setting of MSM. In the sequel, we give generalizations of these results where the maximization is over "independent sets" in a very general sense. This generalization captures hypermatchings in hypergraphs as well as independence in the intersection of multiple matroids. Comment: 18 pages
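
The single-pass swap-based template that the paper adapts (in the spirit of McGregor's MWM algorithm) can be sketched as follows; the acceptance threshold `gamma` and the weighted objective are illustrative stand-ins for the submodular marginal values analyzed in the paper.

```python
def streaming_matching(edge_stream, gamma=1.0):
    """One pass over a stream of (u, v, weight) edges: accept a new
    edge only if its weight beats (1 + gamma) times the total weight
    of the (at most two) matched edges it conflicts with, evicting
    those edges on acceptance. This is the MWM swap template; `gamma`
    is an illustrative parameter, not a value from the paper."""
    matched = {}  # vertex -> (edge, weight), each edge stored under both endpoints
    for u, v, w in edge_stream:
        conflicts = {matched[x] for x in (u, v) if x in matched}
        if w > (1 + gamma) * sum(cw for _, cw in conflicts):
            for edge, _ in conflicts:      # evict conflicting edges
                for x in edge:
                    matched.pop(x, None)
            matched[u] = matched[v] = ((u, v), w)
    return {edge for edge, _ in matched.values()}
```

At each swap the new edge "pays for" the edges it evicts; this charging argument is what the paper's framework generalizes from edge weights to submodular marginal values.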