    More Applications of the d-Neighbor Equivalence: Connectivity and Acyclicity Constraints

    In this paper, we design a framework to obtain efficient algorithms for several problems with a global constraint (acyclicity or connectivity), such as Connected Dominating Set, Node Weighted Steiner Tree, Maximum Induced Tree, Longest Induced Path, and Feedback Vertex Set. For all these problems, we obtain 2^O(k) * n^O(1), 2^O(k log(k)) * n^O(1), 2^O(k^2) * n^O(1), and n^O(k) time algorithms parameterized respectively by clique-width, Q-rank-width, rank-width, and maximum induced matching width. Our approach simplifies and unifies the known algorithms for each of these parameters, and its running times asymptotically match those of the best algorithms for basic NP-hard problems such as Vertex Cover and Dominating Set. Our framework is based on the d-neighbor equivalence defined in [Bui-Xuan, Telle and Vatshelle, TCS 2013]. The results we obtain highlight the importance and the generalizing power of this equivalence relation on width measures. We also prove that this equivalence relation can be useful for Max Cut, a W[1]-hard problem parameterized by clique-width: for this latter problem, we obtain n^O(k), n^O(k), and n^(2^O(k)) time algorithms parameterized respectively by clique-width, Q-rank-width, and rank-width.
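    To make the central definition concrete: two subsets X, Y ⊆ A are d-neighbor equivalent over a cut (A, V \ A) when every vertex outside A has the same number of neighbors in X as in Y, counted up to a cap of d. The Python sketch below is my own illustration of this definition, not the paper's algorithm; the names (neighbor_signature, d_neighbor_classes) are invented, and the brute-force subset enumeration is purely didactic.

```python
from itertools import combinations

def neighbor_signature(adj, A, X, d):
    """Signature of X (a subset of A) w.r.t. the cut (A, V \\ A): for each
    vertex outside A, count its neighbors inside X, capped at d.  Two
    subsets are d-neighbor equivalent iff their signatures are equal."""
    outside = [v for v in adj if v not in A]
    return tuple(min(d, sum(1 for u in adj[v] if u in X)) for v in outside)

def d_neighbor_classes(adj, A, d):
    """Group all subsets of A into d-neighbor equivalence classes.
    Exponential in |A|; meant only to illustrate the definition."""
    A = sorted(A)
    classes = {}
    for r in range(len(A) + 1):
        for X in combinations(A, r):
            classes.setdefault(neighbor_signature(adj, A, set(X), d), []).append(set(X))
    return classes

# Toy example: the path a-b-c-d, cut between {a, b} and {c, d}.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
for sig, members in d_neighbor_classes(adj, {'a', 'b'}, d=1).items():
    print(sig, members)
# With d = 1 only two classes remain: {{}, {a}} and {{b}, {a, b}},
# since only b is visible (up to the cap) from the other side of the cut.
```

    Roughly speaking, dynamic programming over such a decomposition gains its speed by keeping one table entry per equivalence class rather than per subset, which is what makes bounds like 2^O(k) * n^O(1) attainable.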

    The k-fixed-endpoint path partition problem

    The Hamiltonian path problem is to determine whether a graph has a Hamiltonian path. This problem is NP-complete in general. The path partition problem is to determine the minimum number of vertex-disjoint paths required to cover a graph. Since this problem generalizes the Hamiltonian path problem, it is also NP-complete in general. The k-fixed-endpoint path partition problem is to determine the minimum number of vertex-disjoint paths required to cover a graph G such that each vertex in a set T of k vertices is an endpoint of a path. Since this problem generalizes both the Hamiltonian path problem and the path partition problem, it is also NP-complete in general. For certain classes of graphs, there exist efficient algorithms for the k-fixed-endpoint path partition problem. We consider this problem restricted to trees, threshold graphs, block graphs, and unit interval graphs, and show min-max theorems which characterize the k-fixed-endpoint path partition number.
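    For intuition about the unconstrained case (T = ∅): on a tree, the minimum path partition can be computed by a classic linear-time bottom-up greedy in which each vertex absorbs at most two of its children's still-extendable paths. The Python sketch below is my own illustration of that greedy (the function name is invented); it does not model the k-fixed-endpoint constraint, which requires extra bookkeeping at the vertices of T.

```python
def tree_path_partition(n, edges):
    """Minimum number of vertex-disjoint paths covering a tree on
    vertices 0..n-1.  Bottom-up greedy: each vertex absorbs at most two
    children whose paths are still open (extendable); every edge used
    this way merges two paths into one."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = [-1] * n
    order, stack, seen = [], [0], [False] * n
    seen[0] = True
    while stack:                      # iterative DFS for a top-down order
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if not seen[w]:
                seen[w] = True
                parent[w] = u
                stack.append(w)
    open_at = [False] * n
    used_edges = 0
    for u in reversed(order):         # process children before parents
        open_children = sum(1 for w in adj[u] if w != parent[u] and open_at[w])
        take = min(open_children, 2)
        used_edges += take
        open_at[u] = take < 2         # u may still extend toward its parent
    return n - used_edges             # n singleton paths, minus one per merge

# A star on 4 vertices needs 2 paths, e.g. 1-0-2 and the single vertex 3.
print(tree_path_partition(4, [(0, 1), (0, 2), (0, 3)]))  # -> 2
```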

    Approximation Algorithms for Distributionally Robust Stochastic Optimization

    Two-stage stochastic optimization is a widely used framework for modeling uncertainty, where we have a probability distribution over possible realizations of the data, called scenarios, and decisions are taken in two stages: we take first-stage actions knowing only the underlying distribution and before a scenario is realized, and may take additional second-stage recourse actions after a scenario is realized. The goal is typically to minimize the total expected cost. A common criticism levied at this model is that the underlying probability distribution is itself often imprecise. To address this, an approach that is quite versatile and has gained popularity in the stochastic-optimization literature is the two-stage distributionally robust stochastic model: given a collection D of probability distributions, our goal now is to minimize the maximum expected total cost with respect to a distribution in D. There has been almost no prior work, however, on developing approximation algorithms for distributionally robust problems where the underlying scenario collection is discrete, as is the case with discrete-optimization problems. We provide frameworks for designing approximation algorithms in such settings when the collection D is a ball around a central distribution, defined relative to two notions of distance between probability distributions: Wasserstein metrics (which include the L_1 metric) and the L_infinity metric. Our frameworks yield efficient algorithms even in settings with an exponential number of scenarios, where the central distribution may only be accessed via a sampling oracle.

    For distributionally robust optimization under a Wasserstein ball, we first show that one can utilize the sample average approximation (SAA) method (solve the distributionally robust problem with an empirical estimate of the central distribution) to reduce the problem to the case where the central distribution has a polynomial-size support and is represented explicitly. This follows because we argue that a distributionally robust problem can be reduced in a novel way to a standard two-stage stochastic problem with bounded inflation factor, which enables one to use the SAA machinery developed for two-stage stochastic problems. Complementing this, we show how to approximately solve a fractional relaxation of the SAA problem (i.e., the distributionally robust problem obtained by replacing the original central distribution with its empirical estimate). Unlike in two-stage {stochastic, robust} optimization with polynomially many scenarios, this turns out to be quite challenging. We utilize a variant of the ellipsoid method for convex optimization in conjunction with several new ideas to show that the SAA problem can be approximately solved, provided that we have an (approximation) algorithm for a certain max-min problem that is akin to, and generalizes, the k-max-min problem (find the worst-case scenario consisting of at most k elements) encountered in two-stage robust optimization. We obtain such an algorithm for various discrete-optimization problems; by complementing this via rounding algorithms that provide local (i.e., per-scenario) approximation guarantees, we obtain the first approximation algorithms for the distributionally robust versions of a variety of discrete-optimization problems including set cover, vertex cover, edge cover, facility location, and Steiner tree, with guarantees that are, except for set cover, within O(1)-factors of the guarantees known for the deterministic version of the problem.

    For distributionally robust optimization under an L_infinity ball, we consider a fractional relaxation of the problem and replace its objective function with a proxy function that is pointwise close to the true objective function (within a factor of 2). We then show that we can efficiently compute approximate subgradients of the proxy function, provided that we have an algorithm for the problem of computing the t worst scenarios under a given first-stage decision, given an integer t. We can then approximately minimize the proxy function via a variant of the ellipsoid method, and thus obtain an approximate solution for the fractional relaxation of the distributionally robust problem. Complementing this via rounding algorithms with local guarantees, we obtain approximation algorithms for distributionally robust versions of various covering problems, including set cover, vertex cover, edge cover, and facility location, with guarantees that are within O(1)-factors of the guarantees known for their deterministic versions.
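    To make the L_infinity adversary concrete: once a first-stage decision is fixed and each scenario's total cost is known, the worst-case distribution in a radius-r L_infinity ball around the central distribution q can be found by a greedy mass exchange. The Python sketch below is a toy stand-in for this inner maximization (all names and data are invented); the paper itself works through approximate subgradients of a proxy function rather than solving this subproblem directly.

```python
def worst_case_distribution(costs, q, r):
    """Adversary for a fixed first-stage decision: maximize the expected
    cost sum_i p[i] * costs[i] over distributions p with
    |p[i] - q[i]| <= r for all i.  Greedy exchange: move probability
    mass from the cheapest scenarios to the most expensive ones until
    the ball constraint binds."""
    n = len(costs)
    p = list(q)
    order = sorted(range(n), key=lambda i: costs[i])  # cheapest first
    give = [min(q[i], r) for i in range(n)]  # mass scenario i may still lose
    gain = [r] * n                           # mass scenario i may still receive
    a, b = 0, n - 1                          # donor / receiver pointers
    while a < b:
        k, i = order[a], order[b]            # cheap donor, expensive receiver
        if costs[i] <= costs[k]:
            break                            # no strictly improving transfer left
        m = min(gain[i], give[k])
        p[i] += m
        p[k] -= m
        gain[i] -= m
        give[k] -= m
        if gain[i] == 0:
            b -= 1
        if give[k] == 0:
            a += 1
    return p

# Radius-0.25 ball around the central distribution (0.25, 0.5, 0.25):
print(worst_case_distribution([10.0, 1.0, 5.0], [0.25, 0.5, 0.25], 0.25))
# -> [0.5, 0.25, 0.25]: 0.25 of mass moves from the cheapest scenario
#    (cost 1) to the costliest one (cost 10).
```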

    Inapproximability of Combinatorial Optimization Problems

    We survey results on the hardness of approximating combinatorial optimization problems.

    Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

    There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly-available real-world graph (the Hyperlink Web graph with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the existing results for the Hyperlink graph use distributed or external memory. Therefore, it is natural to ask whether we can efficiently solve a broad class of graph problems on this graph in memory. This paper shows that theoretically-efficient parallel graph algorithms can scale to the largest publicly-available graphs using a single machine with a terabyte of RAM, processing them in minutes. We give implementations of theoretically-efficient parallel algorithms for 20 important graph problems. We also present the optimizations and techniques that we used in our implementations, which were crucial in enabling us to process these large graphs quickly. We show that the running times of our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS).

    Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2018.