
    Exact Algorithms for Maximum Independent Set

    We show that the maximum independent set problem (MIS) on an $n$-vertex graph can be solved in $1.1996^n n^{O(1)}$ time and polynomial space, which is even faster than Robson's $1.2109^n n^{O(1)}$-time exponential-space algorithm published in 1986. We also obtain improved algorithms for MIS in graphs with maximum degree 6 and 7, which run in time $1.1893^n n^{O(1)}$ and $1.1970^n n^{O(1)}$, respectively. Our algorithms are obtained by using fast algorithms for MIS in low-degree graphs in a hierarchical way and by a careful analysis of the structure of bounded-degree graphs.
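
    To illustrate the paradigm these bounds refine, below is a minimal branch-and-reduce sketch for MIS. It is not the paper's $1.1996^n n^{O(1)}$ algorithm, only the basic scheme (reduce vertices of degree at most 1, branch on a maximum-degree vertex) on an adjacency-dict representation of our own choosing.

```python
# Minimal branch-and-reduce sketch for MIS (not the paper's algorithm):
# reduce vertices of degree <= 1, otherwise branch on a maximum-degree vertex.

def remove(adj, vs):
    """Return a copy of the graph with the vertices in vs deleted."""
    return {u: nbrs - vs for u, nbrs in adj.items() if u not in vs}

def mis_size(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    if not adj:
        return 0
    # Reduction: a vertex of degree 0 or 1 is always in some maximum solution.
    for v, nbrs in adj.items():
        if len(nbrs) <= 1:
            return 1 + mis_size(remove(adj, {v} | nbrs))
    # Branch: either v is excluded, or v is included and its neighbours excluded.
    v = max(adj, key=lambda u: len(adj[u]))
    return max(mis_size(remove(adj, {v})),
               1 + mis_size(remove(adj, {v} | adj[v])))

# Example: a path on three vertices has independence number 2.
print(mis_size({1: {2}, 2: {1, 3}, 3: {2}}))  # -> 2
```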

    Solving Vertex Cover in Polynomial Time on Hyperbolic Random Graphs

    The VertexCover problem is proven to be computationally hard in different ways: it is NP-complete to find an optimal solution and even NP-hard to find an approximation with reasonable factors. In contrast, recent experiments suggest that on many real-world networks the run time to solve VertexCover is much smaller than even the best known FPT approaches can explain. Similarly, greedy algorithms deliver very good approximations to the optimal solution in practice. We link these observations to two properties that are observed in many real-world networks, namely a heterogeneous degree distribution and high clustering. To formalize these properties and explain the observed behavior, we analyze how a branch-and-reduce algorithm performs on hyperbolic random graphs, which have become increasingly popular for modeling real-world networks. In fact, we are able to show that the VertexCover problem on hyperbolic random graphs can be solved in polynomial time, with high probability. The proof relies on interesting structural properties of hyperbolic random graphs. Since these predictions of the model are interesting in their own right, we conducted experiments on real-world networks showing that these properties are also observed in practice. When utilizing the same structural properties in an adaptive greedy algorithm, further experiments suggest that, on real instances, this leads to better approximations than the standard greedy approach within reasonable time.
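
    For reference, the sketch below shows the standard degree-greedy heuristic for VertexCover that serves as the baseline above; the paper's adaptive greedy and its branch-and-reduce analysis on hyperbolic random graphs are not reproduced, and the graph representation is our own assumption.

```python
# Standard degree-greedy baseline for Vertex Cover (the "standard greedy
# approach" mentioned above): repeatedly take a vertex of maximum remaining
# degree into the cover until no edges are left.

def greedy_vertex_cover(adj):
    """adj: dict mapping each vertex to the set of its neighbours (undirected)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    cover = set()
    while any(adj.values()):                          # while an edge remains
        v = max(adj, key=lambda u: len(adj[u]))       # highest remaining degree
        cover.add(v)
        for u in adj[v]:                              # delete v's edges
            adj[u].discard(v)
        adj[v] = set()
    return cover

# Example: a star with centre 0 is covered by the single vertex 0.
print(greedy_vertex_cover({0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}))  # -> {0}
```

    On graphs with a heterogeneous degree distribution this heuristic tends to work well in practice, which is the kind of behavior the paper formalizes for hyperbolic random graphs.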

    Scalable Kernelization for Maximum Independent Sets

    The most efficient algorithms for finding maximum independent sets in both theory and practice use reduction rules to obtain a much smaller problem instance called a kernel. The kernel can then be solved quickly using exact or heuristic algorithms---or by repeatedly kernelizing recursively in the branch-and-reduce paradigm. It is of critical importance for these algorithms that kernelization is fast and returns a small kernel. Current algorithms are either slow but produce a small kernel, or fast but give a large kernel. We attempt to accomplish both of these goals simultaneously by giving an efficient parallel kernelization algorithm based on graph partitioning and parallel bipartite maximum matching. We combine our parallelization techniques with two techniques to accelerate kernelization further: dependency checking that prunes reductions that cannot be applied, and reduction tracking that allows us to stop kernelization when reductions become less fruitful. Our algorithm produces kernels that are orders of magnitude smaller than the fastest kernelization methods, while having a similar execution time. Furthermore, our algorithm is able to compute kernels with size comparable to the smallest known kernels, but up to two orders of magnitude faster than previously possible. Finally, we show that our kernelization algorithm can be used to accelerate existing state-of-the-art heuristic algorithms, allowing us to find larger independent sets faster on large real-world networks and synthetic instances.
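
    As a small illustration of how dependency checking fits into a kernelization loop, here is a hedged sequential sketch using only the isolated- and degree-one-vertex reductions for MIS; the paper's parallel, partition-based kernelization and its much richer reduction set are not reproduced, and all names below are ours.

```python
# Sequential kernelization sketch with dependency checking: only vertices
# whose neighbourhood changed are put back on the work queue.
from collections import deque

def kernelize(adj):
    """adj: dict vertex -> set of neighbours.
    Returns (kernel, offset): the reduced graph plus the number of vertices
    already committed to the independent set by the reductions."""
    adj = {v: set(n) for v, n in adj.items()}
    offset = 0
    queue = deque(adj)                    # every vertex starts out "dirty"
    in_queue = set(adj)
    while queue:
        v = queue.popleft()
        in_queue.discard(v)
        if v not in adj or len(adj[v]) > 1:
            continue
        # Degree-0 / degree-1 reduction: v can safely join the solution,
        # so delete v and its (at most one) neighbour.
        offset += 1
        removed = {v} | adj[v]
        affected = set()
        for w in removed:
            affected |= adj.pop(w)
        affected -= removed
        for x in affected:                # dependency checking: re-queue only
            adj[x] -= removed             # the vertices that were touched
            if x not in in_queue:
                queue.append(x)
                in_queue.add(x)
    return adj, offset

# Example: a path on four vertices reduces to an empty kernel with offset 2.
print(kernelize({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))  # -> ({}, 2)
```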

    On the Equivalence among Problems of Bounded Width

    In this paper, we introduce a methodology, called decomposition-based reductions, for showing the equivalence among various problems of bounded width. First, we show that the following are equivalent for any $\alpha > 0$:
    * SAT can be solved in $O^*(2^{\alpha \mathrm{tw}})$ time,
    * 3-SAT can be solved in $O^*(2^{\alpha \mathrm{tw}})$ time,
    * Max 2-SAT can be solved in $O^*(2^{\alpha \mathrm{tw}})$ time,
    * Independent Set can be solved in $O^*(2^{\alpha \mathrm{tw}})$ time, and
    * Independent Set can be solved in $O^*(2^{\alpha \mathrm{cw}})$ time,
    where tw and cw are the tree-width and clique-width of the instance, respectively. Then, we introduce a new parameterized complexity class EPNL, which includes Set Cover and Directed Hamiltonicity, and show that SAT, 3-SAT, Max 2-SAT, and Independent Set parameterized by path-width are EPNL-complete. This implies that if one of these EPNL-complete problems can be solved in $O^*(c^k)$ time, then any problem in EPNL can be solved in $O^*(c^k)$ time.
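
    For concreteness, the sketch below shows the textbook dynamic program for Independent Set along a path decomposition, whose $O^*(2^{\mathrm{pw}})$ running time is the kind of bound the equivalences above speak about; the paper's reductions and its tree-width and clique-width results are not reproduced here.

```python
# Textbook O*(2^pw) dynamic program for Independent Set along a path
# decomposition (bags = list of sets of vertices, in path order).
from itertools import combinations

def independent_set_pw(adj, bags):
    """adj: dict vertex -> set of neighbours; bags: a valid path decomposition."""

    def independent(subset):
        return all(u not in adj[v] for u, v in combinations(subset, 2))

    # dp[S] = size of the best independent set among the vertices seen so far
    # whose intersection with the current bag is exactly S.
    dp = {frozenset(): 0}
    seen = set()
    for bag in bags:
        new_vertices = bag - seen
        seen |= bag
        new_dp = {}
        for S, val in dp.items():
            kept = S & bag                   # forget vertices that left the bag
            for r in range(len(new_vertices) + 1):
                for extra in combinations(new_vertices, r):
                    T = frozenset(kept | set(extra))
                    if independent(T):
                        # r newly introduced vertices join the solution
                        new_dp[T] = max(new_dp.get(T, -1), val + r)
        dp = new_dp
    return max(dp.values())

# Example: the path 1-2-3-4 with bags {1,2}, {2,3}, {3,4} has alpha = 2.
print(independent_set_pw({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}},
                         [{1, 2}, {2, 3}, {3, 4}]))  # -> 2
```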

    A note on the independence number, domination number and related parameters of random binary search trees and random recursive trees

    We identify the mean growth of the independence number of random binary search trees and random recursive trees and show normal fluctuations around their means. Similarly, we show normal limit laws for the domination number and variations of it in these two random tree models. Our results are an application of a recent general theorem of Holmgren and Janson on fringe trees in these models.
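
    The limit laws themselves are derived analytically via the fringe-tree theorem; purely to illustrate the quantity being studied, the following simulation sketch (our own, with assumed helper names) builds random binary search trees and estimates the linear growth rate of their independence number with the standard tree DP.

```python
# Simulation sketch: estimate the linear growth rate of the independence
# number of a random binary search tree (the quantity whose mean and
# fluctuations the paper determines analytically).
import random

def random_bst(n):
    """Insert a uniformly random permutation of 0..n-1 into a BST.
    Returns (root, children) with children[v] = [left, right] (None if absent)."""
    keys = list(range(n))
    random.shuffle(keys)
    root = keys[0]
    children = {root: [None, None]}
    for k in keys[1:]:
        cur = root
        while True:
            side = 0 if k < cur else 1
            if children[cur][side] is None:
                children[cur][side] = k
                children[k] = [None, None]
                break
            cur = children[cur][side]
    return root, children

def independence_number(root, children):
    """Standard tree DP: for each node, best sizes without / with that node."""
    def solve(v):
        if v is None:
            return 0, 0
        left, right = (solve(c) for c in children[v])
        return max(left) + max(right), 1 + left[0] + right[0]
    return max(solve(root))

n, trials = 2000, 20
estimate = sum(independence_number(*random_bst(n)) for _ in range(trials)) / (trials * n)
print(estimate)   # empirical alpha(T_n) / n, averaged over the trials
```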

    Exact Algorithms via Multivariate Subroutines

    We consider the family of Phi-Subset problems, where the input consists of an instance I of size N over a universe U_I of size n and the task is to check whether the universe contains a subset with property Phi (e.g., Phi could be the property of being a feedback vertex set of size at most k for the input graph). Our main tool is a simple randomized algorithm which solves Phi-Subset in time (1+b-(1/c))^n N^{O(1)}, provided that there is an algorithm for the Phi-Extension problem with running time b^{n-|X|} c^k N^{O(1)}. Here, the input for Phi-Extension is an instance I of size N over a universe U_I of size n, a subset X subseteq U_I, and an integer k, and the task is to check whether there is a set Y with X subseteq Y subseteq U_I and |Y \ X| <= k with property Phi. We derandomize this algorithm at the cost of increasing the running time by a subexponential factor in n, and we adapt it to the enumeration setting, where we need to enumerate all subsets of the universe with property Phi. This generalizes the results of Fomin et al. [STOC 2016], who proved the case b=1. As case studies, we use these results to design faster deterministic algorithms for:
    - checking whether a graph has a feedback vertex set of size at most k,
    - enumerating all minimal feedback vertex sets,
    - enumerating all minimal vertex covers of size at most k, and
    - enumerating all minimal 3-hitting sets.
    We obtain these results by deriving new b^{n-|X|} c^k N^{O(1)}-time algorithms for the corresponding Phi-Extension problems (or their enumeration variants). In some cases this is done by adapting the analysis of an existing algorithm, in other cases by designing a new algorithm. Our analyses are based on Measure and Conquer, but the value to minimize, 1+b-(1/c), is unconventional and requires non-convex optimization.
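
    To make the shape of the main tool concrete, here is a hedged sketch of the random-sampling wrapper in the style of the b = 1 monotone local search of Fomin et al. that the paper generalizes; phi_extension is an assumed oracle, and the repetition count is an illustrative placeholder rather than the bound from the paper's analysis.

```python
# Random-sampling wrapper around an assumed Phi-Extension oracle, in the
# b = 1 (monotone local search) style that the paper generalizes.
import random
from math import comb
from itertools import combinations

def phi_subset_sampling(universe, k, phi_extension, instance):
    r"""phi_extension(instance, X, budget) is an assumed oracle that answers
    whether some Y with X subseteq Y subseteq universe and |Y \ X| <= budget
    has property Phi. Returns True if a solution of size at most k is found."""
    universe = list(universe)
    n = len(universe)
    for t in range(min(k, n) + 1):
        # The t = 0 call alone already decides the problem via the oracle;
        # larger t trades oracle budget k - t against the probability that a
        # random t-subset lands inside a solution, which is where the
        # running-time gain in the analysis comes from.
        p = comb(k, t) / comb(n, t)             # optimistic success estimate
        repetitions = int(2 / p) + 1            # placeholder, not the paper's bound
        for _ in range(repetitions):
            X = set(random.sample(universe, t))
            if phi_extension(instance, X, k - t):
                return True
    return False

# Toy usage: Phi = "the set covers every edge of a triangle" (vertex cover).
edges = [(0, 1), (1, 2), (0, 2)]
def vc_extension(es, X, budget):
    verts = {v for e in es for v in e} - X
    return any(all(a in X or a in extra or b in X or b in extra for a, b in es)
               for r in range(budget + 1)
               for extra in map(set, combinations(verts, r)))
print(phi_subset_sampling({0, 1, 2}, 2, vc_extension, edges))  # -> True
```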