
    A QPTAS for Maximum Weight Independent Set of Polygons with Polylogarithmically Many Vertices

    The Maximum Weight Independent Set of Polygons (MWISP) problem is a fundamental problem in computational geometry. Given a set of weighted polygons in the 2-dimensional plane, the goal is to find a set of pairwise non-overlapping polygons with maximum total weight. Due to its wide range of applications, the MWISP problem and its special cases have been studied extensively in both the approximation algorithms and the computational geometry communities. Despite this research, the general case is not well understood. Currently the best known polynomial-time algorithm achieves an approximation ratio of n^ε [Fox and Pach, SODA 2011], and it is not even clear whether the problem is APX-hard. We present a (1+ε)-approximation algorithm, assuming that each polygon in the input has at most a polylogarithmic number of vertices. Our algorithm has quasi-polynomial running time. We use a recently introduced framework for approximating maximum weight independent set in geometric intersection graphs. The framework had previously been used to construct a QPTAS in the much simpler case of axis-parallel rectangles. We extend it in two ways to adapt it to our much more general setting. First, we show that its technical core can be reduced to the case in which all input polygons are triangles. Second, we replace its key technical ingredient: a method to partition the plane using only few edges such that the objects stemming from the optimal solution are evenly distributed among the resulting faces and each object is intersected only a few times. Our new procedure for this task is no more complex than the original one, and it can handle the difficulties that arise from the arbitrary angles of the polygons. Note that this obstacle alone makes the known analysis of the above framework fail; moreover, it is in general not well understood how to handle this difficulty with efficient approximation algorithms.
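
    To make the objective concrete, here is a minimal brute-force sketch of the MWISP objective: enumerate subsets, keep those whose polygons are pairwise non-overlapping, and return the heaviest. It runs in exponential time (it is only a reference point, not the QPTAS above) and assumes the shapely geometry library for the overlap test; the polygons and weights are made-up illustration data.

        # Brute-force MWISP (illustration only): O(2^n) time, unlike the QPTAS.
        from itertools import combinations
        from shapely.geometry import Polygon

        def overlaps(a, b):
            # Two polygons conflict if their interiors share positive area.
            return a.intersection(b).area > 0

        def max_weight_independent_set(polygons, weights):
            best = (0.0, ())
            n = len(polygons)
            for k in range(1, n + 1):
                for subset in combinations(range(n), k):
                    if all(not overlaps(polygons[i], polygons[j])
                           for i, j in combinations(subset, 2)):
                        w = sum(weights[i] for i in subset)
                        best = max(best, (w, subset))
            return best

        # Hypothetical input: two overlapping triangles and a disjoint square.
        polys = [Polygon([(0, 0), (2, 0), (1, 2)]),
                 Polygon([(1, 0), (3, 0), (2, 2)]),
                 Polygon([(5, 5), (6, 5), (6, 6), (5, 6)])]
        print(max_weight_independent_set(polys, [3.0, 2.0, 1.0]))  # (4.0, (0, 2))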

    Optimality program in segment and string graphs

    Planar graphs are known to allow subexponential algorithms running in time 2^{O(\sqrt n)} or 2^{O(\sqrt n \log n)} for most of the paradigmatic problems, while the brute-force time 2^{\Theta(n)} is very likely to be asymptotically best on general graphs. Intrigued by an algorithm packing curves in 2^{O(n^{2/3} \log n)} by Fox and Pach [SODA'11], we investigate which problems have subexponential algorithms on the intersection graphs of curves (string graphs) or segments (segment intersection graphs), and which problems have no such algorithms under the ETH (Exponential Time Hypothesis). Among our results, we show that, quite surprisingly, 3-Coloring can also be solved in time 2^{O(n^{2/3} \log^{O(1)} n)} on string graphs, while an algorithm running in time 2^{o(n)} for 4-Coloring, even on axis-parallel segments (of unbounded length), would disprove the ETH. For 4-Coloring of unit segments, we show a weaker ETH lower bound of 2^{o(n^{2/3})}, which exploits the celebrated Erdős-Szekeres theorem. The subexponential running time also carries over to Min Feedback Vertex Set, but not to Min Dominating Set and Min Independent Dominating Set. Comment: 19 pages, 15 figures
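
    For contrast with these bounds, the sketch below builds a segment intersection graph with a standard orientation-based crossing test and k-colors it by exhaustive search; for fixed k this is the 2^{Θ(n)}-type baseline that the subexponential algorithms above improve on. The segments are made-up illustration data.

        # Brute-force k-coloring of a segment intersection graph: k^n candidates.
        from itertools import product

        def orient(p, q, r):
            # Sign of the cross product (q - p) x (r - p).
            v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
            return (v > 0) - (v < 0)

        def on_segment(p, q, r):
            # True if the collinear point r lies within the bounding box of pq.
            return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
                    min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

        def segments_intersect(s, t):
            (p1, q1), (p2, q2) = s, t
            o1, o2 = orient(p1, q1, p2), orient(p1, q1, q2)
            o3, o4 = orient(p2, q2, p1), orient(p2, q2, q1)
            if o1 != o2 and o3 != o4:
                return True  # proper crossing
            return ((o1 == 0 and on_segment(p1, q1, p2)) or
                    (o2 == 0 and on_segment(p1, q1, q2)) or
                    (o3 == 0 and on_segment(p2, q2, p1)) or
                    (o4 == 0 and on_segment(p2, q2, q1)))

        def color_segments(segments, k):
            n = len(segments)
            edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                     if segments_intersect(segments[i], segments[j])]
            for coloring in product(range(k), repeat=n):  # k^n assignments
                if all(coloring[i] != coloring[j] for i, j in edges):
                    return coloring
            return None

        # Hypothetical input: three segments crossing pairwise at (2, 2).
        segs = [((0, 0), (4, 4)), ((0, 4), (4, 0)), ((0, 2), (4, 2))]
        print(color_segments(segs, 3))  # (0, 1, 2)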

    Polynomial time algorithms for multicast network code construction

    The famous max-flow min-cut theorem states that a source node s can send information through a network (V, E) to a sink node t at a rate determined by the min-cut separating s and t. Recently, it has been shown that this rate can also be achieved for multicasting to several sinks, provided that the intermediate nodes are allowed to re-encode the information they receive. We demonstrate examples of networks where the achievable rates obtained by coding at intermediate nodes are arbitrarily larger than if coding is not allowed. We give deterministic polynomial-time algorithms, and even faster randomized algorithms, for designing linear codes for directed acyclic graphs with edges of unit capacity. We extend these algorithms to integer capacities and to codes that are tolerant to edge failures.
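
    The textbook butterfly network makes the coding gain concrete: over unit-capacity edges, an intermediate node that XORs the two source bits lets both sinks receive rate 2, which plain routing cannot achieve because the bottleneck edge can carry only one unencoded bit. The sketch below simulates this; the node names follow the usual textbook labeling rather than anything from the paper.

        # Butterfly network: one XOR at the bottleneck realizes multicast rate 2.
        def butterfly(b1, b2):
            # Source sends b1 on the upper branch and b2 on the lower branch.
            upper, lower = b1, b2
            # Intermediate node w encodes both bits into one symbol over GF(2),
            # which is all the unit-capacity bottleneck edge w -> x can carry.
            coded = upper ^ lower
            # Sink t1 sees (b1, coded) and recovers b2; t2 symmetrically gets b1.
            t1 = (upper, upper ^ coded)  # decodes to (b1, b2)
            t2 = (lower ^ coded, lower)  # decodes to (b1, b2)
            return t1, t2

        for b1 in (0, 1):
            for b2 in (0, 1):
                assert butterfly(b1, b2) == ((b1, b2), (b1, b2))
        print("both sinks decode (b1, b2): multicast rate 2 achieved")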

    Indexability, concentration, and VC theory

    Degrading performance of indexing schemes for exact similarity search in high dimensions has long been linked to the histograms of distributions of distances, and of other 1-Lipschitz functions, becoming concentrated. We discuss this observation in the framework of the phenomenon of concentration of measure on structures of high dimension and the Vapnik-Chervonenkis theory of statistical learning. Comment: 17 pages, final submission to J. Discrete Algorithms (an expanded, improved and corrected version of the SISAP'2010 invited paper, this e-print, v3)
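
    The concentration effect is easy to reproduce empirically: as the dimension grows, pairwise distances between random points cluster ever more tightly around their mean, so a distance histogram carries less and less pruning information. A minimal numpy/scipy sketch (sample sizes are an arbitrary illustrative choice) is shown below.

        # Distance concentration: the relative spread of pairwise Euclidean
        # distances between uniform random points shrinks as the dimension grows.
        import numpy as np
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(0)
        n = 200  # number of sample points (illustrative choice)
        for d in (2, 10, 100, 1000):
            points = rng.uniform(size=(n, d))
            dists = pdist(points)  # all n*(n-1)/2 pairwise distances
            print(f"d={d:4d}  mean={dists.mean():7.3f}  "
                  f"relative spread={dists.std() / dists.mean():.3f}")
        # The relative spread (std/mean) decays with d, collapsing the histogram
        # of distances into a narrow peak.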