
    Fast and Deterministic Approximations for k-Cut

    In an undirected graph, a k-cut is a set of edges whose removal breaks the graph into at least k connected components. The minimum weight k-cut can be computed in n^O(k) time, but when k is treated as part of the input, computing the minimum weight k-cut is NP-hard [Goldschmidt and Hochbaum, 1994]. For poly(m,n,k)-time algorithms, the best possible approximation factor is essentially 2 under the small set expansion hypothesis [Manurangsi, 2017]. Saran and Vazirani [1995] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed via O(k) minimum cuts, which implies an O~(km) randomized running time via the nearly linear time randomized min-cut algorithm of Karger [2000]. Nagamochi and Kamidoi [2007] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed deterministically in O(mn + n^2 log n) time. These results prompt two basic questions. The first concerns the role of randomization. Is there a deterministic algorithm for 2-approximate k-cuts matching the randomized running time of O~(km)? The second question qualitatively compares minimum cut to 2-approximate minimum k-cut. Can 2-approximate k-cuts be computed as fast as the minimum cut, i.e., in O~(m) randomized time? We give a deterministic approximation algorithm that computes (2 + eps)-approximately minimum weight k-cuts in O(m log^3 n / eps^2) time, via a (1 + eps)-approximation for an LP relaxation of k-cut.
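
    For context, the Saran-Vazirani baseline cited above can be phrased as a simple greedy: repeatedly compute a minimum cut inside some current component and apply the cheapest one until the graph has k components. Below is a minimal sketch of that greedy splitting scheme (not the paper's deterministic (2 + eps)-approximation); the function name and the use of networkx's Stoer-Wagner routine are illustrative assumptions.

```python
# Illustrative sketch of greedy splitting (Saran-Vazirani style): repeatedly apply the
# cheapest minimum cut inside any current component until there are k components.
# NOT the deterministic (2 + eps)-approximation of the paper; baseline illustration only.
import networkx as nx

def greedy_k_cut(G: nx.Graph, k: int):
    """Return a list of edges whose removal leaves at least k connected components."""
    H = G.copy()
    removed = []
    while nx.number_connected_components(H) < k:
        best = None  # (cut_weight, crossing_edges)
        for comp in nx.connected_components(H):
            if len(comp) < 2:
                continue  # singleton components cannot be split further
            sub = H.subgraph(comp)
            cut_weight, (side_a, _) = nx.stoer_wagner(sub, weight="weight")
            side_a = set(side_a)
            crossing = [(u, v) for u, v in sub.edges() if (u in side_a) != (v in side_a)]
            if best is None or cut_weight < best[0]:
                best = (cut_weight, crossing)
        if best is None:
            break  # every component is a singleton; no further split is possible
        removed.extend(best[1])
        H.remove_edges_from(best[1])
    return removed
```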

    Simple Local Computation Algorithms for the General Lovasz Local Lemma

    We consider the task of designing Local Computation Algorithms (LCA) for applications of the Lovász Local Lemma (LLL). LCA is a class of sublinear algorithms proposed by Rubinfeld et al. that have received a lot of attention in recent years. The LLL is an existential, sufficient condition for a collection of sets to have non-empty intersection (in applications, often, each set comprises all objects having a certain property). The ground-breaking algorithm of Moser and Tardos made the LLL fully constructive, following earlier results by Beck and Alon giving algorithms under significantly stronger LLL-like conditions. LCAs under those stronger conditions were given by Rubinfeld et al., where it was asked if the Moser-Tardos algorithm can be used to design LCAs under the standard LLL condition. The main contribution of this paper is to answer this question affirmatively. In fact, our techniques yield LCAs for settings beyond the standard LLL condition.
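
    For concreteness, the Moser-Tardos resampling algorithm referenced above is very simple to state. Here is a minimal sketch for the canonical k-SAT setting; the clause encoding and function names are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the Moser-Tardos resampling algorithm for k-SAT.
# A clause is a list of literals: literal v means variable v is True, -v means False.
# Variables are numbered from 1.
import random

def violated(clause, assignment):
    """A clause is violated iff every literal in it evaluates to False."""
    return all(assignment[abs(l)] != (l > 0) for l in clause)

def moser_tardos(num_vars, clauses, rng=random.Random(0)):
    # Start from a uniformly random assignment.
    assignment = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
    while True:
        bad = [c for c in clauses if violated(c, assignment)]
        if not bad:
            return assignment  # all clauses satisfied
        # Resample the variables of an arbitrary violated clause.
        for l in bad[0]:
            assignment[abs(l)] = rng.random() < 0.5
```

    Under the LLL condition (each clause shares variables with few others), Moser and Tardos show the expected number of resampling steps is polynomial; the LCA question discussed in the abstract is, roughly, whether the final value of a single queried variable can be determined with only sublinear local work.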

    Simple Deterministic Approximation for Submodular Multiple Knapsack Problem

    Submodular maximization has been a central topic in theoretical computer science and combinatorial optimization over the last decades. Plenty of well-performing approximation algorithms have been designed for the problem under a variety of constraints. In this paper, we consider the submodular multiple knapsack problem (SMKP). In SMKP, the profit of each subset of elements is specified by a monotone submodular function. The goal is to find a feasible packing of elements over multiple bins (knapsacks) that maximizes the profit. Recently, Fairstein et al. [ESA20] proposed a nearly optimal (1 - e^{-1} - ε)-approximation algorithm for SMKP. Their algorithm is obtained by combining a configuration LP, a grouping technique for bin packing, and the continuous greedy algorithm for submodular maximization. As a result, the algorithm is somewhat sophisticated and inherently randomized. In this paper, we present an arguably simple deterministic combinatorial algorithm for SMKP, which achieves a (1 - e^{-1} - ε)-approximation ratio. Our algorithm is based on very different ideas compared with those of Fairstein et al. [ESA20].
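
    The paper's algorithm is not described in this abstract. Purely as a baseline for the SMKP setting it defines (a monotone submodular profit function over items packed into multiple bins), here is a naive greedy sketch that assigns items by marginal profit per unit size; the interface is an assumption and this baseline does not achieve the (1 - e^{-1} - ε) guarantee.

```python
# Naive greedy baseline for the submodular multiple knapsack setting (illustration only;
# NOT the algorithm of the paper and not matching its approximation ratio).
def greedy_smkp(items, sizes, capacities, f):
    """items: hashable ids; sizes: dict id -> size; capacities: list of bin capacities;
    f: monotone submodular profit function, called as f(frozenset_of_ids) -> float."""
    chosen = set()                  # items packed so far (union over all bins)
    residual = list(capacities)     # remaining capacity of each bin
    assignment = {}                 # item id -> bin index
    while True:
        base = f(frozenset(chosen))
        best = None                 # (profit density, item, bin index)
        for it in items:
            if it in chosen:
                continue
            gain = f(frozenset(chosen | {it})) - base
            if gain <= 0:
                continue
            for b, cap in enumerate(residual):
                if sizes[it] <= cap:
                    density = gain / max(sizes[it], 1e-9)
                    if best is None or density > best[0]:
                        best = (density, it, b)
                    break           # any feasible bin is fine for this naive baseline
        if best is None:
            return assignment       # no item yields positive gain and fits anywhere
        _, it, b = best
        chosen.add(it)
        assignment[it] = b
        residual[b] -= sizes[it]
```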

    Optimal Decremental Connectivity in Non-Sparse Graphs


    A Constant-Factor Approximation for Weighted Bond Cover

    The Weighted ℱ-Vertex Deletion problem for a class ℱ of graphs asks, given a weighted graph G, for a minimum weight vertex set S such that G - S ∈ ℱ. The case when ℱ is minor-closed and excludes some graph as a minor has received particular attention, but a constant-factor approximation has remained elusive for Weighted ℱ-Vertex Deletion. Only three cases of minor-closed ℱ are known to admit constant-factor approximations, namely Vertex Cover, Feedback Vertex Set and Diamond Hitting Set. We study the problem for the class ℱ of θ_c-minor-free graphs, under the equivalent setting of the Weighted c-Bond Cover problem, and present a constant-factor approximation algorithm using the primal-dual method. For this, we leverage a structure theorem implicit in [Joret et al., SIDMA '14] which states the following: any graph G containing a θ_c-minor-model either contains a large two-terminal protrusion, or contains a constant-size θ_c-minor-model, or a collection of pairwise disjoint constant-sized connected sets that can be contracted simultaneously to yield a dense graph. In the first case, we tame the graph by replacing the protrusion with a special-purpose weighted gadget. For the second and third cases, we provide a weighting scheme which guarantees a local approximation ratio. Besides making an important step in the quest of (dis)proving a constant-factor approximation for Weighted ℱ-Vertex Deletion, our result may be useful as a template for algorithms for other minor-closed families.
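
    As background for the primal-dual method invoked above, here is a minimal sketch of the classic primal-dual 2-approximation for weighted Vertex Cover, one of the three minor-closed cases the abstract cites as already solved. This is standard textbook material, not the bond cover algorithm itself.

```python
# Classic primal-dual (local-ratio) 2-approximation for weighted Vertex Cover,
# shown only as background for the primal-dual method mentioned in the abstract.
def primal_dual_vertex_cover(edges, weights):
    """edges: iterable of (u, v) pairs; weights: dict vertex -> nonnegative weight."""
    residual = dict(weights)   # remaining "budget" of each vertex
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue           # edge already covered
        # Raise the dual variable of this edge until one endpoint's budget is exhausted.
        delta = min(residual[u], residual[v])
        residual[u] -= delta
        residual[v] -= delta
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    return cover
```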

    Maximal k-Edge-Connected Subgraphs in Almost-Linear Time for Small k


    Polylogarithmic Approximation Algorithm for k-Connected Directed Steiner Tree on Quasi-Bipartite Graphs

    In the k-Connected Directed Steiner Tree problem (k-DST), we are given a directed graph G = (V,E) with edge (or vertex) costs, a root vertex r, a set of q terminals T, and a connectivity requirement k > 0; the goal is to find a minimum-cost subgraph H of G such that H has k edge-disjoint paths from the root r to each terminal in T. The k-DST problem is a natural generalization of the classical Directed Steiner Tree problem (DST) to the fault-tolerant setting, in which the solution subgraph is required to have an r,t-path, for every terminal t, even after removing k-1 vertices or edges. Despite being a classical problem, there are not many positive results on the problem, especially for the case k ≥ 3. In this paper, we present an O(log k log q)-approximation algorithm for k-DST when the input graph is quasi-bipartite, i.e., when there is no edge joining two non-terminal vertices. To the best of our knowledge, ours is the only known non-trivial approximation algorithm for k-DST, for k ≥ 3, that runs in polynomial time. Our algorithm is tight for every constant k, due to the hardness result inherited from the Set Cover problem.
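
    The connectivity requirement in the problem definition above can at least be verified efficiently: a subgraph H is feasible iff it contains k edge-disjoint r-to-t paths for every terminal t, which is a unit-capacity max-flow computation per terminal. A minimal feasibility-check sketch using networkx follows (it illustrates the problem definition only, not the approximation algorithm).

```python
# Feasibility check for a candidate k-DST solution H: every terminal must be reachable
# from the root by k edge-disjoint paths, i.e. unit-capacity max-flow value >= k.
import networkx as nx

def is_feasible_k_dst(H: nx.DiGraph, root, terminals, k: int) -> bool:
    unit = nx.DiGraph()
    unit.add_node(root)
    unit.add_edges_from(H.edges(), capacity=1)  # one unit of capacity per edge
    for t in terminals:
        if t == root:
            continue
        if t not in unit:
            return False  # terminal has no incident edges in H
        if nx.maximum_flow_value(unit, root, t, capacity="capacity") < k:
            return False
    return True
```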

    Edit Distance: Sketching, Streaming and Document Exchange

    We show that in the document exchange problem, where Alice holds x ∈ {0,1}^n and Bob holds y ∈ {0,1}^n, Alice can send Bob a message of size O(K(log^2 K + log n)) bits such that Bob can recover x using the message and his input y if the edit distance between x and y is no more than K, and output "error" otherwise. Both the encoding and decoding can be done in time O~(n + poly(K)). This result significantly improves the previous communication bounds under polynomial encoding/decoding time. We also show that in the referee model, where Alice and Bob hold x and y respectively, they can compute sketches of x and y of sizes poly(K log n) bits (the encoding), and send to the referee, who can then compute the edit distance between x and y together with all the edit operations if the edit distance is no more than K, and output "error" otherwise (the decoding). To the best of our knowledge, this is the first result for sketching edit distance using poly(K log n) bits. Moreover, the encoding phase of our sketching algorithm can be performed by scanning the input string in one pass. Thus our sketching algorithm also implies the first streaming algorithm for computing edit distance and all the edits exactly using poly(K log n) bits of space. Comment: Full version of an article to be presented at the 57th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2016).
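
    All of the bounds above are parameterized by the distance cap K. As standard background (not the sketching or streaming algorithm of the paper), the exact edit distance under such a cap can already be computed directly with the classic banded dynamic program in roughly O(nK) time:

```python
# Classic banded DP: edit distance between x and y if it is at most K, else None.
# Runs in O((len(x) + len(y)) * K) time -- standard background, not the paper's sketch.
def capped_edit_distance(x: str, y: str, K: int):
    n, m = len(x), len(y)
    if abs(n - m) > K:
        return None
    INF = K + 1
    # prev[j] holds row i-1 of the DP table, restricted to the band |i - j| <= K.
    prev = {j: (j if j <= K else INF) for j in range(m + 1)}
    for i in range(1, n + 1):
        lo, hi = max(1, i - K), min(m, i + K)
        cur = {lo - 1: i if lo == 1 else INF}
        for j in range(lo, hi + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            cur[j] = min(prev.get(j - 1, INF) + cost,   # substitute / match
                         prev.get(j, INF) + 1,          # delete from x
                         cur.get(j - 1, INF) + 1)       # insert into x
        prev = cur
    d = prev.get(m, INF)
    return d if d <= K else None
```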

    Stochastic Control via Entropy Compression

    We consider an agent trying to bring a system to an acceptable state by repeated probabilistic action. Several recent works on algorithmizations of the Lovász Local Lemma (LLL) can be seen as establishing sufficient conditions for the agent to succeed. Here we study whether such stochastic control is also possible in a noisy environment, where both the process of state-observation and the process of state-evolution are subject to adversarial perturbation (noise). The introduction of noise causes the tools developed for LLL algorithmization to break down, since the key LLL ingredient, the sparsity of the causality (dependence) relationship, no longer holds. To overcome this challenge we develop a new analysis where entropy plays a central role, both to measure the rate at which progress towards an acceptable state is made and the rate at which noise undoes this progress. The end result is a sufficient condition that allows a smooth tradeoff between the intensity of the noise and the amenability of the system, recovering an asymmetric LLL condition in the noiseless case. Comment: 18 pages.
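
    To make the setting concrete, here is a purely schematic version of the control loop described above: the agent observes the state through a noisy channel, applies a probabilistic corrective action whenever the observed state looks unacceptable, and the adversarial noise may perturb the state between steps. The callables observe, is_acceptable, resample and perturb are placeholders, not anything defined in the paper, and nothing here reproduces the paper's entropy-based analysis.

```python
# Schematic of the noisy stochastic control loop described in the abstract.
# observe / is_acceptable / resample / perturb are placeholder callables supplied
# by the modeler; this sketch only fixes notation for the setting, nothing more.
def control_loop(state, observe, is_acceptable, resample, perturb, max_steps=10_000):
    for step in range(max_steps):
        noisy_view = observe(state)          # state observation subject to noise
        if is_acceptable(noisy_view):
            return state, step               # agent believes the system is acceptable
        state = resample(state, noisy_view)  # probabilistic corrective action
        state = perturb(state)               # adversarial perturbation of the evolution
    return state, max_steps                  # gave up after max_steps actions
```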

    On Closest Pair in Euclidean Metric: Monochromatic is as Hard as Bichromatic

    Given a set of n points in R^d, the (monochromatic) Closest Pair problem asks to find a pair of distinct points in the set that are closest in the l_p-metric. Closest Pair is a fundamental problem in Computational Geometry, and understanding its fine-grained complexity in the Euclidean metric when d = omega(log n) was raised as an open question in recent works (Abboud-Rubinstein-Williams [FOCS '17], Williams [SODA '18], David-Karthik-Laekhanukit [SoCG '18]). In this paper, we show that for every p in R_{>= 1} cup {0}, under the Strong Exponential Time Hypothesis (SETH), for every epsilon > 0, the following holds:
    - No algorithm running in time O(n^{2-epsilon}) can solve the Closest Pair problem in d = (log n)^{Omega_epsilon(1)} dimensions in the l_p-metric.
    - There exists delta = delta(epsilon) > 0 and c = c(epsilon) >= 1 such that no algorithm running in time O(n^{1.5-epsilon}) can approximate the Closest Pair problem to a factor of (1+delta) in d >= c log n dimensions in the l_p-metric.
    In particular, our first result is shown by establishing the computational equivalence of the bichromatic Closest Pair problem and the (monochromatic) Closest Pair problem (up to an n^{epsilon} factor in the running time) for d = (log n)^{Omega_epsilon(1)} dimensions. Additionally, under SETH, we rule out nearly-polynomial factor approximation algorithms running in subquadratic time for the (monochromatic) Maximum Inner Product problem, where we are given a set of n points in n^{o(1)}-dimensional Euclidean space and are required to find a pair of distinct points in the set that maximize the inner product. At the heart of all our proofs is the construction of a dense bipartite graph with low contact dimension, i.e., we construct a balanced bipartite graph on n vertices with n^{2-epsilon} edges whose vertices can be realized as points in a (log n)^{Omega_epsilon(1)}-dimensional Euclidean space such that every pair of vertices which have an edge in the graph are at distance exactly 1 and every other pair of vertices are at distance greater than 1. This graph construction is inspired by the construction of locally dense codes introduced by Dumer-Micciancio-Sudan [IEEE Trans. Inf. Theory '03].
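
    The lower bounds above say, in effect, that for d = (log n)^{Omega(1)} one cannot hope to beat the trivial algorithm by a polynomial factor. For reference, that trivial algorithm is the O(n^2 * d) brute-force scan over all pairs (standard background, not taken from the paper):

```python
# Brute-force closest pair in the l_p metric: the O(n^2 * d) baseline that the
# SETH-based lower bounds above show is essentially optimal in high dimensions.
def closest_pair(points, p=2.0):
    """points: list of equal-length tuples of floats; returns (best_distance, i, j)."""
    best = (float("inf"), -1, -1)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if p == 0:  # "l_0": number of coordinates in which the points differ
                d = sum(a != b for a, b in zip(points[i], points[j]))
            else:       # standard l_p distance for p >= 1
                d = sum(abs(a - b) ** p for a, b in zip(points[i], points[j])) ** (1.0 / p)
            if d < best[0]:
                best = (d, i, j)
    return best
```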