68 research outputs found
Efficient Approximation Schemes for Uniform-Cost Clustering Problems in Planar Graphs
We consider the k-Median problem on planar graphs: given an edge-weighted planar graph G, a set of clients C ⊆ V(G), a set of facilities F ⊆ V(G), and an integer parameter k, the task is to open a set of at most k facilities so as to minimize the total connection cost of clients, where each client contributes the distance to its closest open facility. We give two new approximation schemes for this problem:
- FPT Approximation Scheme: for any epsilon > 0, in time 2^{O(k epsilon^{-3} log(k epsilon^{-1}))} * n^{O(1)} we can compute a solution that has connection cost at most (1+epsilon) times the optimum, with high probability.
- Efficient Bicriteria Approximation Scheme: for any epsilon > 0, in time 2^{O(epsilon^{-5} log(epsilon^{-1}))} * n^{O(1)} we can compute a set of at most (1+epsilon)k facilities whose opening yields connection cost at most (1+epsilon) times the optimum connection cost for opening at most k facilities, with high probability.
As a direct corollary of the second result we obtain an EPTAS for Uniform Facility Location on planar graphs, with the same running time.
Our main technical tool is a new construction of a "coreset for facilities" for k-Median in planar graphs: we show that in polynomial time one can compute a subset of facilities F_0 ⊆ F of size k * (log n / epsilon)^{O(epsilon^{-3})} with the guarantee that there is a (1+epsilon)-approximate solution contained in F_0.
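The objective being approximated is simply the sum, over all clients, of the shortest-path distance to the nearest open facility. The minimal Python sketch below evaluates that connection cost for a candidate facility set via multi-source Dijkstra; the function name and the plain edge-list graph representation are illustrative choices, not taken from the paper.

```python
import heapq

def connection_cost(n, edges, clients, open_facilities):
    """Total k-Median connection cost: each client pays its shortest-path
    distance to the nearest open facility (multi-source Dijkstra).

    n: number of vertices (0..n-1)
    edges: list of (u, v, w) undirected weighted edges
    clients, open_facilities: iterables of vertex ids
    """
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    INF = float("inf")
    dist = [INF] * n
    heap = []
    for f in open_facilities:          # all open facilities start at distance 0
        dist[f] = 0
        heapq.heappush(heap, (0, f))

    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))

    return sum(dist[c] for c in clients)

# Example: a path 0-1-2-3 with unit weights, clients {0, 3}, one open facility {1}.
print(connection_cost(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1)], [0, 3], [1]))  # 1 + 2 = 3
```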
Fast Algorithms for Parameterized Problems with Relaxed Disjointness Constraints
In parameterized complexity, it is a natural idea to consider different generalizations of classic problems. Usually, such a generalization is obtained by introducing a "relaxation" variable, where the original problem corresponds to setting this variable to a constant value. For instance, the problem of packing sets of size at most r into a given universe generalizes the Maximum Matching problem, which is recovered by taking r = 2. Most often, the complexity of the problem increases with the relaxation variable, but very recently Abasi et al. have given a surprising example of a problem, r-Simple k-Path, that can be solved by a randomized algorithm with running time 2^{O((k/r) log r)} * n^{O(1)}. That is, the complexity of the problem decreases with r. In this paper we pursue further the direction sketched by Abasi et al. Our main contribution is a derandomization tool that provides a deterministic counterpart of the main technical result of Abasi et al.: the 2^{O((k/r) log r)} * n^{O(1)}-time algorithm for (r,k)-Monomial Detection, which is the problem of finding a monomial of total degree k and individual degrees at most r in a polynomial given as an arithmetic circuit. Our technique works for a large class of circuits, and in particular it can be used to derandomize the result of Abasi et al. for r-Simple k-Path. On our way to this result we introduce the notion of representative sets for multisets, which may be of independent interest. Finally, we give two more examples of problems, already studied in the literature, in which the same relaxation phenomenon occurs. The first is a natural relaxation of the Set Packing problem, where we allow the packed sets to overlap at each element at most r times. The second is Degree Bounded Spanning Tree, where we seek a spanning tree of the graph with small maximum degree.
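For intuition, the relaxation behind r-Simple k-Path replaces the requirement that a path visit every vertex at most once by the requirement that it visit every vertex at most r times. The sketch below checks this relaxed feasibility condition under one common convention (a walk on k vertices, each visited at most r times); it illustrates the definition only, not the algorithms discussed above.

```python
from collections import Counter

def is_r_simple_k_path(adj, walk, r, k):
    """Check the relaxed disjointness condition: `walk` is a walk in the graph
    `adj` (dict: vertex -> set of neighbours) on k vertices, each consecutive
    pair joined by an edge, and no vertex is visited more than r times.
    For r = 1 this is exactly the classic (simple) k-Path condition.
    """
    if len(walk) != k:
        return False
    if any(v not in adj.get(u, ()) for u, v in zip(walk, walk[1:])):
        return False
    return max(Counter(walk).values()) <= r

# Triangle 0-1-2: the walk 0,1,2,0,1 is a 2-simple 5-path but not a simple path.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(is_r_simple_k_path(adj, [0, 1, 2, 0, 1], r=2, k=5))  # True
print(is_r_simple_k_path(adj, [0, 1, 2, 0, 1], r=1, k=5))  # False
```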
Edge Bipartization Faster Than 2^k
In the Edge Bipartization problem one is given an undirected graph G and an integer k, and the question is whether k edges can be deleted from G so that it becomes bipartite. In 2006, Guo et al. [J. Comput. Syst. Sci., 72(8):1386-1396, 2006] proposed an algorithm solving this problem in time O(2^k * m^2); today, this algorithm is a textbook example of an application of the iterative compression technique. Despite extensive progress in the understanding of the parameterized complexity of graph separation problems in recent years, no significant improvement upon this result has yet been reported.
We present an algorithm for Edge Bipartization that works in time O(1.977^k * nm), which is the first algorithm with the running time dependence on the parameter better than 2^k. To this end, we combine the general iterative compression strategy of Guo et al. [J. Comput. Syst. Sci., 72(8):1386-1396, 2006], the technique proposed by Wahlstrom [SODA 2014, 1762-1781] of using a polynomial-time solvable relaxation in the form of a Valued Constraint Satisfaction Problem to guide a bounded-depth branching algorithm, and an involved Measure & Conquer analysis of the recursion tree.
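For reference, a set of k edges is a valid Edge Bipartization solution exactly when its removal leaves a 2-colourable graph. The sketch below is a minimal verifier of that condition (the names and graph representation are illustrative); it is not the branching algorithm from the paper.

```python
from collections import deque

def bipartite_after_deletion(n, edges, deleted):
    """Verify a candidate Edge Bipartization solution: remove the edges in
    `deleted` and 2-colour the remaining graph by BFS. Returns True iff the
    remaining graph is bipartite (i.e. `deleted` is a valid solution)."""
    deleted = {frozenset(e) for e in deleted}
    adj = [[] for _ in range(n)]
    for u, v in edges:
        if frozenset((u, v)) not in deleted:
            adj[u].append(v)
            adj[v].append(u)

    colour = [None] * n
    for s in range(n):
        if colour[s] is not None:
            continue
        colour[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if colour[v] is None:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False
    return True

# K_3 needs one edge deletion to become bipartite.
print(bipartite_after_deletion(3, [(0, 1), (1, 2), (2, 0)], deleted=[(2, 0)]))  # True
print(bipartite_after_deletion(3, [(0, 1), (1, 2), (2, 0)], deleted=[]))        # False
```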
Computing cutwidth and pathwidth of semi-complete digraphs via degree orderings
The notions of cutwidth and pathwidth of digraphs play a central role in the containment theory for tournaments, or more generally semi-complete digraphs, developed in a recent series of papers by Chudnovsky, Fradkin, Kim, Scott, and Seymour (Maria Chudnovsky, Alexandra Fradkin, and Paul Seymour, 2012; Maria Chudnovsky, Alex Scott, and Paul Seymour, 2011; Maria Chudnovsky and Paul D. Seymour, 2011; Alexandra Fradkin and Paul Seymour, 2010; Alexandra Fradkin and Paul Seymour, 2011; Ilhee Kim and Paul Seymour, 2012). In this work we introduce a new approach to computing these width measures on semi-complete digraphs, via degree orderings. Using the new technique we are able to reprove the main results of (Maria Chudnovsky, Alexandra Fradkin, and Paul Seymour, 2012; Alexandra Fradkin and Paul Seymour, 2011) in a unified and significantly simplified way, as well as obtain new results. First, we present polynomial-time approximation algorithms for both cutwidth and pathwidth, faster and simpler than the previously known ones; the most significant improvement is in the case of pathwidth, where instead of the previously known O(OPT)-approximation in fixed-parameter tractable time (Fedor V. Fomin and Michal Pilipczuk, 2013) we obtain a constant-factor approximation in polynomial time. Secondly, by exploiting the new set of obstacles for cutwidth and pathwidth, we show that topological containment and immersion in semi-complete digraphs can be tested in single-exponential fixed-parameter tractable time. Finally, we show how the new approach can be used to obtain exact fixed-parameter tractable algorithms for cutwidth and pathwidth, with single-exponential running time dependency on the optimal width.
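To give a concrete flavour of the degree-ordering idea: the width of a fixed vertex ordering of a digraph can be computed directly, and sorting vertices by outdegree is the kind of ordering the approach starts from. The sketch below assumes the common convention that the width of an ordering counts, at each cut, the arcs going from the suffix back into the prefix, and that the degree ordering sorts by nonincreasing outdegree; the actual algorithms and approximation guarantees in the paper involve considerably more than this.

```python
def ordering_width(order, arcs):
    """Cutwidth of a vertex ordering of a digraph: the maximum, over all
    prefixes of the ordering, of the number of arcs going from the suffix
    back into the prefix ("backward" arcs crossing the cut)."""
    pos = {v: i for i, v in enumerate(order)}
    width = 0
    for t in range(1, len(order)):   # cut between positions t-1 and t
        crossing = sum(1 for u, v in arcs if pos[u] >= t and pos[v] < t)
        width = max(width, crossing)
    return width

def outdegree_ordering(vertices, arcs):
    """A degree ordering: sort vertices by nonincreasing outdegree
    (ties broken arbitrarily)."""
    outdeg = {v: 0 for v in vertices}
    for u, _ in arcs:
        outdeg[u] += 1
    return sorted(vertices, key=lambda v: outdeg[v], reverse=True)

# A small semi-complete digraph: the cyclic tournament on 3 vertices.
arcs = [(0, 1), (1, 2), (2, 0)]
order = outdegree_ordering([0, 1, 2], arcs)
print(order, ordering_width(order, arcs))
```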
Optimizing Tree Decompositions in MSO
The classic algorithm of Bodlaender and Kloks solves the following problem in linear fixed-parameter time: given a tree decomposition of a graph of (possibly suboptimal) width k, compute an optimum-width tree decomposition of the graph. In this work, we prove that this problem can also be solved in MSO in the following sense: for every positive integer k, there is an MSO transduction from tree decompositions of width k to tree decompositions of optimum width. Together with our recent results, this implies that for every k there exists an MSO transduction which inputs a graph of treewidth k, and nondeterministically outputs its tree decomposition of optimum width.
Finding Hamiltonian Cycle in Graphs of Bounded Treewidth: Experimental Evaluation
The notion of treewidth, introduced by Robertson and Seymour in their seminal Graph Minors series, turned out to have tremendous impact on graph algorithmics. Many hard computational problems on graphs turn out to be efficiently solvable in graphs of bounded treewidth: graphs that can be swept with separators of bounded size. These efficient algorithms usually follow the dynamic programming paradigm.
In recent years, we have seen a rapid and quite unexpected development of involved techniques for solving various computational problems in graphs of bounded treewidth. One of the most surprising directions is the development of algorithms for connectivity problems that have only single-exponential dependency (i.e., 2^{O(t)}) on the treewidth in the running time bound, as opposed to the slightly superexponential dependency (i.e., 2^{O(t log t)}) stemming from more naive approaches. In this work, we perform a thorough experimental evaluation of these approaches in the context of one of the most classic connectivity problems, namely Hamiltonian Cycle.
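The gap between the two bounds comes from the size of the per-bag state space: a naive dynamic program for Hamiltonian Cycle tracks, roughly, a pairing of path endpoints inside a bag, and the number of such pairings grows like t^{Theta(t)} = 2^{Theta(t log t)}, whereas the newer techniques manage with 2^{O(t)} states. The back-of-the-envelope Python snippet below illustrates this growth using the number of perfect matchings on t endpoints as a rough proxy; it is not code from the experimental evaluation.

```python
import math

def naive_states(t):
    """(t-1)!! for even t: the number of ways to pair up t path endpoints in
    a bag, used here as a rough proxy for the state space of the naive
    Hamiltonian Cycle dynamic program."""
    return math.prod(range(1, t, 2))

# For a 2^{O(t)} algorithm, log2(states)/t stays bounded by a constant;
# for the naive t^{Theta(t)} state space this ratio keeps growing with t.
for t in (8, 16, 32, 64):
    s = naive_states(t)
    print(f"t={t:3d}  states={s:.3e}  log2(states)/t={math.log2(s)/t:.2f}")
```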
Everything you always wanted to know about the parameterized complexity of Subgraph Isomorphism (but were afraid to ask)
Given two graphs H and G, the Subgraph Isomorphism problem asks if H is isomorphic to a subgraph of G. While NP-hard in general, algorithms exist for various parameterized versions of the problem. However, the literature contains very little guidance on which combinations of parameters can or cannot be exploited algorithmically. Our goal is to systematically investigate the possible parameterized algorithms that can exist for Subgraph Isomorphism.
We develop a framework involving 10 relevant parameters for each of H and G (such as treewidth, pathwidth, genus, maximum degree, number of vertices, number of components, etc.), and ask if an algorithm with running time f_1(p_1, p_2, ..., p_l) * n^{f_2(p_{l+1}, ..., p_k)} exists, where each of p_1, ..., p_k is one of the 10 parameters depending only on H or G. We show that all the questions arising in this framework are answered by a set of 11 maximal positive results (algorithms) and a set of 17 maximal negative results (hardness proofs); some of these results already appear in the literature, while others are new in this paper.
On the algorithmic side, our study reveals, for example, that an unexpected combination of bounded degree, genus, and feedback vertex set number of G gives rise to a highly nontrivial algorithm for Subgraph Isomorphism. On the hardness side, we present W[1]-hardness proofs under extremely restricted conditions, such as when H is a bounded-degree tree of constant pathwidth and G is a planar graph of bounded pathwidth.
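As a baseline for the kind of running times this framework classifies, the trivial algorithm tries every injective map from V(H) into V(G), giving a bound of roughly n^{|V(H)|}, i.e. an n^{f_2(...)} dependence on a parameter of H alone. The brute-force sketch below (function name and representation are illustrative, not from the paper) implements exactly this baseline and nothing more.

```python
from itertools import permutations

def subgraph_isomorphic(H_edges, H_vertices, G_edges, G_vertices):
    """Brute-force Subgraph Isomorphism: try every injective map from V(H)
    into V(G) and check that every edge of H is mapped onto an edge of G.
    Running time is roughly |V(G)|^{|V(H)|}."""
    G_adj = {frozenset(e) for e in G_edges}
    for image in permutations(G_vertices, len(H_vertices)):
        phi = dict(zip(H_vertices, image))
        if all(frozenset((phi[u], phi[v])) in G_adj for u, v in H_edges):
            return True
    return False

# A triangle is a subgraph of K_4, but not of a path on 3 vertices.
K4_edges = [(a, b) for a in range(4) for b in range(a + 1, 4)]
print(subgraph_isomorphic([(0, 1), (1, 2), (2, 0)], [0, 1, 2], K4_edges, range(4)))          # True
print(subgraph_isomorphic([(0, 1), (1, 2), (2, 0)], [0, 1, 2], [(0, 1), (1, 2)], range(3)))  # False
```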