
    More Applications of the d-Neighbor Equivalence: Connectivity and Acyclicity Constraints

    In this paper, we design a framework to obtain efficient algorithms for several problems with a global constraint (acyclicity or connectivity), such as Connected Dominating Set, Node Weighted Steiner Tree, Maximum Induced Tree, Longest Induced Path, and Feedback Vertex Set. For all these problems, we obtain 2^O(k) * n^O(1), 2^O(k log(k)) * n^O(1), 2^O(k^2) * n^O(1) and n^O(k) time algorithms parameterized respectively by clique-width, Q-rank-width, rank-width and maximum induced matching width. Our approach simplifies and unifies the known algorithms for each of the parameters and asymptotically matches the running times of the best algorithms for basic NP-hard problems such as Vertex Cover and Dominating Set. Our framework is based on the d-neighbor equivalence defined in [Bui-Xuan, Telle and Vatshelle, TCS 2013]. The results we obtain highlight the importance and the generalizing power of this equivalence relation on width measures. We also show that this equivalence relation can be useful for Max Cut, a problem that is W[1]-hard parameterized by clique-width: for this latter problem, we obtain n^O(k), n^O(k) and n^(2^O(k)) time algorithms parameterized respectively by clique-width, Q-rank-width and rank-width.
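
    The following is a small illustrative sketch (not taken from the paper; the graph, the cut and the helper names are my own assumptions) of the d-neighbor equivalence the framework is built on: two subsets X, Y of one side A of a vertex cut are d-neighbor equivalent when every vertex outside A has the same number of neighbors in X as in Y, counted only up to the threshold d.

```python
from itertools import combinations

def d_neighbor_classes(adj, A, d):
    """Group all subsets of A by their d-neighbor signature.

    Two subsets X, Y of A are d-neighbor equivalent w.r.t. the cut (A, V \\ A)
    if every vertex v outside A satisfies
        min(d, |N(v) & X|) == min(d, |N(v) & Y|).
    adj: dict mapping each vertex to the set of its neighbors.
    """
    outside = [v for v in adj if v not in A]
    classes = {}
    for r in range(len(A) + 1):
        for X in combinations(sorted(A), r):
            Xset = set(X)
            # signature: neighbor count in X for each outside vertex, capped at d
            sig = tuple(min(d, len(adj[v] & Xset)) for v in outside)
            classes.setdefault(sig, []).append(Xset)
    return classes

if __name__ == "__main__":
    # toy graph: a path 1-2-3-4, cut A = {1, 2}
    adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    for sig, members in d_neighbor_classes(adj, A={1, 2}, d=1).items():
        print(sig, members)
```

    The sketch brute-forces all subsets of A only to make the grouping visible; the point of the framework is that the number of distinct signatures is bounded in terms of the width measure, so a dynamic program only needs to keep one representative per class.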

    ETH-Tight Algorithms for Long Path and Cycle on Unit Disk Graphs

    We present an algorithm for the extensively studied Long Path and Long Cycle problems on unit disk graphs that runs in time 2^O(√k)(n+m). Under the Exponential Time Hypothesis, Long Path and Long Cycle on unit disk graphs cannot be solved in time 2^o(√k)(n+m)^O(1) [de Berg et al., STOC 2018], hence our algorithm is optimal. Besides the 2^O(√k)(n+m)^O(1)-time algorithm for the (arguably) much simpler Vertex Cover problem by de Berg et al. [STOC 2018] (which easily follows from the existence of a 2k-vertex kernel for the problem), this is the only known ETH-optimal fixed-parameter tractable algorithm on UDGs. Previously, Long Path and Long Cycle on unit disk graphs were only known to be solvable in time 2^O(√k log k)(n+m). This algorithm involved the introduction of a new type of a tree decomposition, entailing the design of a very tedious dynamic programming procedure. Our algorithm is substantially simpler: we completely avoid the use of this new type of tree decomposition. Instead, we use a marking procedure to reduce the problem to (a weighted version of) itself on a standard tree decomposition of width O(√k).

    Determinant Sums for Undirected Hamiltonicity

    We present a Monte Carlo algorithm for Hamiltonicity detection in an n-vertex undirected graph running in O*(1.657^n) time. To the best of our knowledge, this is the first superpolynomial improvement on the worst case runtime for the problem since the O*(2^n) bound established for TSP almost fifty years ago (Bellman 1962, Held and Karp 1962). It answers in part the first open problem in Woeginger's 2003 survey on exact algorithms for NP-hard problems. For bipartite graphs, we improve the bound to O*(1.414^n) time. Both the bipartite and the general algorithm can be implemented to use space polynomial in n. We combine several recently resurrected ideas to get the results. Our main technical contribution is a new reduction inspired by the algebraic sieving method for k-Path (Koutis ICALP 2008, Williams IPL 2009). We introduce the Labeled Cycle Cover Sum, in which we are set to count weighted arc-labeled cycle covers over a finite field of characteristic two. We reduce Hamiltonicity to Labeled Cycle Cover Sum and apply the determinant summation technique for Exact Set Covers (Björklund STACS 2010) to evaluate it. Comment: To appear at IEEE FOCS 2010.
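
    The reduction in the paper is considerably more involved, but the algebraic fact it rests on can be shown in a few lines: over a field of characteristic two the determinant coincides with the permanent, and the permanent of a digraph's adjacency matrix counts its cycle covers, so a GF(2) determinant yields the parity of the number of cycle covers. The toy code below is my own sketch of that fact only, not the paper's algorithm (which works over larger fields of characteristic two and sums determinants of labeled matrices).

```python
def det_gf2(M):
    """Determinant of a 0/1 matrix over GF(2) via Gaussian elimination.

    Over characteristic two the determinant equals the permanent, so for the
    adjacency matrix of a digraph this is the parity of its cycle covers."""
    M = [row[:] for row in M]
    n = len(M)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return 0                      # singular: even number of cycle covers
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            if M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[col])]
    return 1                              # nonsingular: odd number of cycle covers

if __name__ == "__main__":
    # directed 3-cycle 0 -> 1 -> 2 -> 0: exactly one cycle cover, so parity is 1
    A = [[0, 1, 0],
         [0, 0, 1],
         [1, 0, 0]]
    print(det_gf2(A))  # 1
```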

    ETH-Tight Algorithms for Long Path and Cycle on Unit Disk Graphs

    We present an algorithm for the extensively studied Long Path and Long Cycle problems on unit disk graphs that runs in time 2^O(√k)(n + m). Under the Exponential Time Hypothesis, Long Path and Long Cycle on unit disk graphs cannot be solved in time 2^o(√k)(n + m)^O(1) [de Berg et al., STOC 2018], hence our algorithm is optimal. Besides the 2^O(√k)(n + m)^O(1)-time algorithm for the (arguably) much simpler Vertex Cover problem by de Berg et al. [STOC 2018] (which easily follows from the existence of a 2k-vertex kernel for the problem), this is the only known ETH-optimal fixed-parameter tractable algorithm on UDGs. Previously, Long Path and Long Cycle on unit disk graphs were only known to be solvable in time 2^O(√k log k)(n + m). This algorithm involved the introduction of a new type of a tree decomposition, entailing the design of a very tedious dynamic programming procedure. Our algorithm is substantially simpler: we completely avoid the use of this new type of tree decomposition. Instead, we use a marking procedure to reduce the problem to (a weighted version of) itself on a standard tree decomposition of width O(√k). © Fedor V. Fomin, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Meirav Zehavi; licensed under Creative Commons License CC-BY. 36th International Symposium on Computational Geometry (SoCG 2020).
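
    The 2k-vertex kernel for Vertex Cover mentioned above is obtained with LP/crown techniques; as a lighter illustration of what a kernelization looks like, here is a sketch of the classical Buss reduction rules, which only give an O(k^2)-size kernel. The edge-list representation and helper names are assumptions made for this example, not part of the paper.

```python
def buss_kernel(edges, k):
    """Classical Buss kernelization for Vertex Cover with parameter k.

    Rule: a vertex of degree > k must be in every cover of size <= k, so take
    it and decrease the budget.  (Isolated vertices never appear in an
    edge-list representation, so they are dropped implicitly.)
    After the rule no longer applies, a size-<=k cover can touch at most k*k
    edges, so more than k*k remaining edges means a NO-instance.
    Returns (reduced_edges, remaining_budget, forced_vertices), or None if the
    instance is already recognized as a NO-instance."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        if k < 0:
            return None
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        forced.add(high)
        edges = {e for e in edges if high not in e}
        k -= 1
    if len(edges) > k * k:
        return None
    return edges, k, forced

if __name__ == "__main__":
    # star K_{1,5} plus a pendant edge, budget k = 2
    star = [(0, i) for i in range(1, 6)] + [(5, 6)]
    print(buss_kernel(star, 2))
```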

    Partition and Covering Problems on Graphs: Complexity Analysis and Algorithm Design

    This dissertation studies four combinatorial optimization problems on graphs: (1) the Minimum Block Transfer problem (MBT for short), (2) the Maximum k-Path Vertex Cover problem (MaxPkVC for short), (3) the k-Path Vertex Cover Reconfiguration problem (k-PVCR for short), and (4) the Minimum (Maximum) Weighted Path Cover problem (MinPC (MaxPC) for short). This dissertation provides hardness results, such as NP-hardness and inapproximability results, and polynomial-time algorithms for each problem.
    In Chapter 2, we study MBT. Let G = (V, A) be a simple directed acyclic graph (DAG), i.e., G does not include any cycles, multiple arcs, or self-loops, with a node set V and an arc set A. Given a DAG G and a block size B, the objective of MBT is to find a partition of its node set that satisfies the following two conditions: (i) each element (called a block) of the partition has size at most B, and (ii) the maximum number of external arcs among directed paths from the roots to the leaves is minimized. An external arc is an arc connecting two distinct blocks, so this number counts block transfers. The height of a DAG is defined as the length of the longest directed path from its roots to its leaves. Consider the two-level I/O model for data transfers between an external memory with a large space and an internal memory with a limited space. Assume that the external memory is divided into fixed contiguous blocks of size B, and that one query or modification transfers one block of B objects from the external memory to the internal one. MBT then captures how to store the data in the external memory so that the maximum number of data transfers between the external memory and the internal one is minimized. We first revisit the previous, naive bottom-up packing algorithm for MBT and show that its approximation ratio is 2 if B = 2. Additionally, we show that the approximation ratio of that algorithm is at least B when B gets larger. Next, we explicitly show that MBT is NP-hard even if each block size B is at most two, the height of the DAGs is three, and the maximum indegree and outdegree of a node are two and three, respectively. Our proof of the NP-hardness also shows that, if B = 2 and P ≠ NP, MBT does not admit any polynomial-time (3/2 - ε)-approximation ((4/3 - ε)-approximation, resp.) algorithm for any ε > 0, even if the input is restricted to DAGs of height at most five (at least six, resp.). Fortunately, however, we obtain a linear-time exact algorithm if the height of the DAGs is bounded above by two. Also, for MBT with B = 2, we provide the following linear-time algorithms: a simple 2-approximation algorithm and improved (2 - ε)-approximation algorithms, where ε = 2/h and ε = 2/(h + 1) for the cases where the height h of the input DAGs is even and odd, respectively. If h = 3, the last algorithm achieves a 3/2-approximation ratio, matching the inapproximability bound.
    In Chapter 3, we study MaxPkVC. Let G = (V, E) be a simple undirected graph, where V and E denote the set of vertices and the set of edges, respectively. A path on k vertices, i.e., of length k - 1, is called a k-path. If a k-path Pk contains a vertex v of a vertex set S, then we say that the vertex v, or the set S, covers Pk. Given a graph G and an integer s, the goal of MaxPkVC is to find a vertex subset S of size at most s such that the number of k-paths covered by S is maximized.
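    To make the MaxPkVC objective concrete, the brute-force sketch below (purely illustrative, not the dissertation's algorithm; the graph encoding is an assumption) enumerates every k-path of a small graph, i.e., every simple path on k vertices, and counts how many of them are covered by a candidate set S.

```python
from itertools import permutations

def count_covered_k_paths(adj, S, k):
    """Count the k-paths (simple paths on k vertices) touched by the set S.

    adj: dict vertex -> set of neighbors (undirected graph).
    Each undirected path is counted once regardless of traversal direction."""
    S = set(S)
    covered = 0
    for seq in permutations(adj, k):
        if seq[0] > seq[-1]:
            continue  # keep one canonical orientation of each path
        if all(seq[i + 1] in adj[seq[i]] for i in range(k - 1)):
            if S & set(seq):
                covered += 1
    return covered

if __name__ == "__main__":
    # path graph 1-2-3-4-5; S = {3} covers all three 3-paths
    adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
    print(count_covered_k_paths(adj, S={3}, k=3))  # 3
```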
    Given a graph G, the MinPkVC problem, the minimization version of MaxPkVC, is to find a minimum vertex subset of G that covers all the k-paths of G. MinPkVC has attracted considerable attention since it was introduced in 2011, and it is known to have applications in maintaining the security of a network. MinVC is a classical and well-known problem in this field; it seeks a minimum vertex subset covering all the 2-paths, i.e., the edges of the graph. Its maximization version, MaxVC, is also well studied. MaxPkVC generalizes MaxVC, since MaxVC is the special case of MaxPkVC with k = 2. MaxPkVC has an application, for example, when we would like to cover as many areas as possible with a restricted budget. First, we show that MaxP3VC (MaxP4VC, resp.) is NP-hard on split graphs (chordal graphs, resp.). Then, we show that MaxP3VC is in FPT with respect to the combined parameter s + tw, where s and tw are the prescribed size of the 3-path vertex cover and the treewidth parameter, respectively. Treewidth is a well-known graph parameter that measures the tree-likeness of a graph; see Chapter 3. Our algorithm runs in O((s + 1)^(2tw+4) · 4^tw · n) time, where |V| = n.
    In Chapter 4, we discuss k-PVCR. Let G = (V, E) be a simple graph. In a reconfiguration setting, two feasible solutions of a computational problem are given, along with a reconfiguration rule that describes an adjacency relation between solutions. A reconfiguration problem asks whether one feasible solution can be transformed into the other via a sequence of adjacent feasible solutions, where each intermediate member is obtained from its predecessor by applying the given reconfiguration rule exactly once. Such a sequence is called a reconfiguration sequence, if it exists. For any fixed integer k ≥ 2, given two distinct k-path vertex covers I and J of a graph G and a single reconfiguration rule, the goal of k-PVCR is to determine whether there is a reconfiguration sequence between I and J. For the reconfiguration rule, we consider the following three well-known rules: Token Sliding (TS), Token Jumping (TJ), and Token Addition or Removal (TAR). For precise descriptions of each rule, refer to Chapter 4. The reconfiguration variant of MinVC (called VCR) has been well studied; the goal of our study is to find the differences between VCR and k-PVCR, such as differences in computational complexity on graph subclasses, and to design polynomial-time algorithms. Again, k-PVCR generalizes VCR, since VCR is the special case of k-PVCR with k = 2. First, we confirm that several hardness results for VCR remain true for k-PVCR; we show the PSPACE-completeness of k-PVCR on general graphs under each of the rules TS, TJ, and TAR using a reduction from a variant of VCR. As our reduction preserves some nice graph properties, we claim that the hardness results for VCR on several graph classes (planar graphs, bounded bandwidth graphs, chordal graphs, bipartite graphs) can be converted into those for k-PVCR. Using another reduction, we moreover show that k-PVCR remains PSPACE-complete even on planar graphs of bounded bandwidth and maximum degree 3. On the other hand, we design polynomial-time algorithms for k-PVCR on trees (under each of TJ and TAR) and on paths and cycles (under each reconfiguration rule). Furthermore, on paths, our algorithm constructs a shortest reconfiguration sequence.
    In Chapter 5, we investigate MinPC (MaxPC), especially the (in)tractability of MinPC.
    Given a graph G = (V, E), a collection P of vertex-disjoint paths is called a path cover of G if every vertex v ∈ V is in exactly one path of P. The goal of the path cover problem (PC for short) is to find a path cover of G with the minimum number of paths. As a generalized variant of PC, we introduce MinPC (MaxPC) as follows: let U = {0, 1, ..., n-1} denote a set of path lengths. Given a graph G = (V, E) and a cost (profit) function f : U → R ∪ {+∞, -∞}, which assigns to each path a cost (profit) depending on its length, find a path cover P of G such that the total cost (profit) of the paths in P is minimized (maximized). Let L be a subset of U, and denote the set of paths of length l ∈ L by PL. We especially consider MinPC whose cost function is f(l) = 1 if l ∈ L and f(l) = 0 otherwise. This problem is denoted by MinPLPC and asks for a path cover with the minimum number of paths of length l ∈ L. We can also define the problem MaxPLPC with f(l) = l + 1 if l ∈ L and f(l) = 0 otherwise. Note that several classical problems can be seen as special cases of MinPC or MaxPC. For example, the Hamiltonian Path problem (seeking a single path visiting every vertex exactly once) and the Maximum Matching problem are equivalent to MinP{n-1}PC and MaxP{1}PC, respectively. It is known that MinP{0}PC and MinP{0, 1}PC with the same cost function as ours can be solved in polynomial time. First, we show that MinP{0, 1, 2}PC is NP-hard on planar bipartite graphs with maximum degree three, via a reduction from Planar 3-SAT. Our reduction also shows that there exists no approximation algorithm for MinP{0, 1, 2}PC unless P = NP. As a positive result, we show that MinP{0, ..., k}PC for any fixed integer k can be solved in polynomial time on graphs of bounded treewidth. Specifically, our algorithm runs in O(4^(2W) · W^(2W+2) · (k + 2)^(2W+2) · n) time, assuming we are given an n-vertex graph of treewidth at most W together with its tree decomposition.
    Finally, a conclusion of this dissertation and open problems are given in Chapter 6. (Doctoral dissertation, Kyushu Institute of Technology; degree conferred March 25, 2021.)
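    The stated equivalence between MaxP{1}PC and Maximum Matching can be checked directly on tiny instances: with L = {1}, each length-1 path earns profit f(1) = 2 and each length-0 path earns f(0) = 0, so an optimal path cover takes a maximum matching as its length-1 paths and leaves all other vertices as length-0 paths. The brute-force sketch below is my own illustration, not the dissertation's method.

```python
from itertools import combinations

def max_p1_pc_profit(edges):
    """Brute-force MaxP{1}PC: profit 2 per length-1 path (a matched edge) and
    0 per length-0 path, so the optimum is twice the maximum matching size.
    Unmatched vertices contribute nothing and can be ignored."""
    best = 0
    for r in range(len(edges), -1, -1):
        for M in combinations(edges, r):
            used = [v for e in M for v in e]
            if len(used) == len(set(used)):   # vertex-disjoint edges: a matching
                best = max(best, 2 * len(M))
        if best:                               # larger sizes were already tried
            break
    return best

if __name__ == "__main__":
    # 4-cycle a-b-c-d-a: a maximum matching has 2 edges, so the profit is 4
    E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
    print(max_p1_pc_profit(E))  # 4
```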

    The k-Separator Problem

    Let G be a vertex-weighted undirected graph. We aim to compute a minimum-weight subset of vertices whose removal leads to a graph in which the size of each connected component is at most a given positive number k. If k = 1, we get the classical vertex cover problem. Many formulations are proposed for the problem, and the linear relaxations of these formulations are theoretically compared. A polyhedral study is provided (valid inequalities, facets, separation algorithms). It is shown that the problem can be solved in polynomial time for many special cases, including paths, cycles and trees, and also for graphs not containing certain special induced subgraphs. Some (k + 1)-approximation algorithms are also exhibited. Most of the algorithms are implemented and compared. The k-separator problem has many applications. If vertex weights are equal to 1, the size of a minimum k-separator can be used to evaluate the robustness of a graph or a network. Another application consists in partitioning a graph/network into different subgraphs with respect to different criteria. For example, in the context of social networks, many approaches are proposed to detect communities. By solving a minimum k-separator problem, we get different connected components that may represent communities, while the k-separator vertices represent persons making connections between communities. The k-separator problem can then be seen as a special graph partitioning/clustering problem.
    Résumé (translated from French): Consider an undirected graph G = (V, E, w) with weighted vertices and an integer k. The problem under study consists in designing algorithms to determine the minimum number of nodes that must be removed from G so that every remaining connected component contains at most k vertices. We call this problem the k-Separator problem, and the sought subset is called a k-separator. It is a generalization of Vertex Cover, which corresponds to the case k = 1 (the minimum number of vertices intersecting all edges of the graph).
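
    The polynomial-time claim for the path case is easy to make concrete. The sketch below (my own illustration, assuming the vertices are given in path order with weights w) uses a simple dynamic program: scan the removed vertices from left to right and require that at most k kept vertices lie between consecutive removed positions; sentinel removed positions at both ends close the recurrence. The cycle case reduces to the path case by guessing one removed vertex (when the cycle has more than k vertices).

```python
def min_k_separator_on_path(w, k):
    """Minimum-weight k-separator of a path v_1 - ... - v_n with weights w.

    After removing the separator, every run of consecutive kept vertices must
    have length at most k.  Sentinel 'removed' positions 0 and n+1 with weight
    0 bracket the path; dp[i] = cheapest solution in which position i is
    removed and all earlier runs are already short enough."""
    n = len(w)
    cost = [0.0] + list(w) + [0.0]     # 1-indexed weights plus sentinels
    INF = float("inf")
    dp = [INF] * (n + 2)
    dp[0] = 0.0
    for i in range(1, n + 2):
        # previous removed position j must leave a run of i - j - 1 <= k vertices
        for j in range(max(0, i - k - 1), i):
            dp[i] = min(dp[i], dp[j] + cost[i])
    return dp[n + 1]

if __name__ == "__main__":
    # path on 5 unit-weight vertices, k = 2: removing the middle vertex suffices
    print(min_k_separator_on_path([1, 1, 1, 1, 1], k=2))  # 1.0
```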