
    Closing the Gap for Pseudo-Polynomial Strip Packing

    Two-dimensional packing problems are a fundamental class of optimization problems, and Strip Packing is one of the most natural and famous among them. Indeed, it can be defined in a single sentence: given a set of axis-parallel rectangular items and a strip of bounded width and infinite height, the objective is to find a packing of the items into the strip that minimizes the packing height. We speak of pseudo-polynomial Strip Packing if we consider algorithms whose running time is pseudo-polynomial in the width of the strip. It is known that there is no pseudo-polynomial time algorithm for Strip Packing with a ratio better than 5/4 unless P = NP. The best algorithm so far has a ratio of 4/3 + epsilon. In this paper, we close the gap between the inapproximability result and the best known algorithm by presenting an algorithm with approximation ratio 5/4 + epsilon. The algorithm relies on a new structural result, which is the main accomplishment of this paper. It states that each optimal solution can be transformed, with bounded loss in the objective, such that it has one of a polynomial number of different forms, thus making the problem tractable by standard techniques, i.e., dynamic programming. To show the conceptual strength of the approach, we extend our result to other problems, e.g., Strip Packing with 90-degree rotations and Contiguous Moldable Task Scheduling, and present algorithms with approximation ratio 5/4 + epsilon for these problems as well.
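
    As a small concrete illustration of the objective (not taken from the paper; the instance below is made up), any feasible packing height is at least the larger of the total item area divided by the strip width and the height of the tallest item:

        # Illustrative only: the trivial lower bound on the optimal Strip Packing height.
        def strip_packing_lower_bound(items, strip_width):
            """items: list of (width, height) pairs of axis-parallel rectangles."""
            total_area = sum(w * h for w, h in items)
            area_bound = total_area / strip_width      # no packing can beat the area bound
            tallest = max(h for _, h in items)         # the tallest item must fit vertically
            return max(area_bound, tallest)

        items = [(3, 2), (2, 5), (4, 1), (1, 4)]       # hypothetical instance
        print(strip_packing_lower_bound(items, strip_width=5))  # -> 5 (the tallest item)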

    A Tight Lower Bound for Steiner Orientation

    In the STEINER ORIENTATION problem, the input is a mixed graph G (it has both directed and undirected edges) and a set of k terminal pairs T. The question is whether we can orient the undirected edges in a way such that there is a directed s⇝t path for each terminal pair (s,t) ∈ T. Arkin and Hassin [DAM'02] showed that the STEINER ORIENTATION problem is NP-complete. They also gave a polynomial time algorithm for the special case when k = 2. From the viewpoint of exact algorithms, Cygan, Kortsarz and Nutov [ESA'12, SIDMA'13] designed an XP algorithm running in n^{O(k)} time for all k ≥ 1. Pilipczuk and Wahlström [SODA'16] showed that the STEINER ORIENTATION problem is W[1]-hard parameterized by k. As a byproduct of their reduction, they were able to show that under the Exponential Time Hypothesis (ETH) of Impagliazzo, Paturi and Zane [JCSS'01] the STEINER ORIENTATION problem does not admit an f(k)⋅n^{o(k/log k)} algorithm for any computable function f. That is, the n^{O(k)} algorithm of Cygan et al. is almost optimal. In this paper, we give a short and easy proof that the n^{O(k)} algorithm of Cygan et al. is asymptotically optimal, even if the input graph has genus 1. Formally, we show that the STEINER ORIENTATION problem is W[1]-hard parameterized by the number k of terminal pairs and, under ETH, cannot be solved in f(k)⋅n^{o(k)} time for any function f, even if the underlying undirected graph has genus 1. We give a reduction from the GRID TILING problem, which has turned out to be very useful in proving W[1]-hardness of several problems on planar graphs. As a result of our work, the main remaining open question is whether STEINER ORIENTATION admits the “square-root phenomenon” on planar graphs (graphs with genus 0): can one obtain an algorithm running in time f(k)⋅n^{O(√k)} for PLANAR STEINER ORIENTATION, or does the lower bound of f(k)⋅n^{o(k)} also translate to planar graphs?
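
    To make the decision problem concrete, the following brute-force checker (illustrative only, exponential in the number of undirected edges, and not the n^{O(k)} algorithm discussed above) tries every orientation of the undirected edges and tests all terminal pairs:

        from itertools import product

        def reachable(arcs, s, t):
            """Graph search over directed arcs given as (u, v) pairs."""
            seen, stack = {s}, [s]
            while stack:
                u = stack.pop()
                for a, b in arcs:
                    if a == u and b not in seen:
                        seen.add(b)
                        stack.append(b)
            return t in seen

        def steiner_orientation(directed, undirected, terminals):
            """directed: set of arcs; undirected: list of edges (u, v); terminals: list of (s, t)."""
            for choice in product((0, 1), repeat=len(undirected)):
                arcs = set(directed)
                for (u, v), c in zip(undirected, choice):
                    arcs.add((u, v) if c == 0 else (v, u))
                if all(reachable(arcs, s, t) for s, t in terminals):
                    return True
            return False

        # Orienting the single undirected edge (2, 3) as 2 -> 3 serves both terminal pairs.
        print(steiner_orientation({(1, 2), (3, 4)}, [(2, 3)], [(1, 4), (1, 3)]))  # -> True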

    Succinct Data Structures for Chordal Graphs

    We study the problem of approximate shortest path queries in chordal graphs and give an n log n + o(n log n) bit data structure that answers approximate distance queries to within an additive constant of 1 in O(1) time. We study the problem of succinctly storing a static chordal graph to answer adjacency, degree, neighbourhood and shortest path queries. Let G be a chordal graph with n vertices. We design a data structure using the information-theoretically minimal n^2/4 + o(n^2) bits of space to support the following queries:
    - whether two vertices u, v are adjacent, in f(n) time for any f(n) ∈ ω(1);
    - the degree of a vertex, in O(1) time;
    - the vertices adjacent to u, in O(f(n)^2) time per neighbour;
    - the length of the shortest path from u to v, in O(n f(n)) time.
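
    The phrase "information-theoretically minimal" rests on a counting argument: the number of labelled chordal graphs on n vertices is known to be 2^{n^2/4 + o(n^2)} (a standard asymptotic recalled here for intuition, not a statement taken from the paper), so any encoding must use at least

        \[
            \log_2\!\left( 2^{\,n^2/4 + o(n^2)} \right) \;=\; \frac{n^2}{4} + o(n^2) \ \text{bits}
        \]

    in the worst case, which the data structure above matches up to lower-order terms.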

    On the Parameterized Complexity of the Expected Coverage Problem

    The MAXIMUM COVERING LOCATION PROBLEM (MCLP) is a well-studied problem in the field of operations research. Given a network with positive or negative demands on the nodes and a positive integer k, MCLP seeks to find k potential facility centers in the network such that the neighborhood coverage is maximized. We study the variant of MCLP where the edges of the network are subject to random failures due to disruptive events. One of the popular models capturing the unreliable nature of facility location is the linear reliability ordering (LRO) model. In this model, with every edge e of the network we associate its survival probability 0 ≤ p_e ≤ 1, or equivalently, its failure probability 1 − p_e. The failure correlation in LRO is the following: if an edge e fails, then every edge e′ with p_{e′} ≤ p_e surely fails. The task is to identify the positions of k facilities that maximize the expected coverage. We refer to this problem as the EXPECTED COVERAGE problem. We study the EXPECTED COVERAGE problem from the parameterized complexity perspective and obtain the following results.
    1. For the parameter pathwidth, we show that the EXPECTED COVERAGE problem is W[1]-hard. We find this result a bit surprising, because the variant of the problem with non-negative demands is fixed-parameter tractable (FPT) parameterized by the treewidth of the input graph.
    2. We complement the lower bound by proving that EXPECTED COVERAGE is FPT when parameterized by the treewidth and the maximum vertex degree. We give an algorithm that solves the problem in time 2^{O(tw log Δ)}⋅n^{O(1)}, where tw is the treewidth, Δ is the maximum vertex degree, and n the number of vertices of the input graph. In particular, since Δ ≤ n, the problem is solvable in time n^{O(tw)}, that is, it is in XP parameterized by treewidth.
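
    One convenient way to realize the LRO correlation (an illustrative coupling, not a construction taken from the paper) is to draw a single uniform threshold and let an edge survive exactly when its survival probability is at least that threshold; then each edge e survives with probability p_e, and whenever e fails, every edge with survival probability at most p_e fails as well:

        import random

        def sample_surviving_edges(edge_probs):
            """edge_probs: dict mapping an edge to its survival probability p_e."""
            u = random.random()                        # one shared threshold couples all edges
            return {e for e, p in edge_probs.items() if u <= p}

        edge_probs = {("a", "b"): 0.9, ("b", "c"): 0.6, ("c", "d"): 0.3}   # hypothetical values
        print(sample_surviving_edges(edge_probs))      # always a prefix of the reliability order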

    Dominator Coloring and CD Coloring in Almost Cluster Graphs

    In this paper, we study two popular variants of Graph Coloring: Dominator Coloring and CD Coloring. In both problems, we are given a graph G and a natural number ℓ as input, and the goal is to properly color the vertices with at most ℓ colors subject to specific constraints. In Dominator Coloring, we require, for each v ∈ V(G), a color c such that v dominates all vertices colored c. In CD Coloring, we require, for each color c, a vertex v ∈ V(G) which dominates all vertices colored c. These problems, defined due to their applications in social and genetic networks, have been studied extensively in the last 15 years. While it is known that both problems are fixed-parameter tractable (FPT) when parameterized by (t, ℓ), where t is the treewidth of G, we consider strictly structural parameterizations which naturally arise out of the problems' applications. We prove that Dominator Coloring is FPT when parameterized by the size of a graph's cluster vertex deletion (CVD) set and that CD Coloring is FPT parameterized by CVD set size plus the number of remaining cliques. En route, we design simpler and faster FPT algorithms when the problems are parameterized by the size of a graph's twin cover, a special CVD set. When the parameter is the size of a graph's clique modulator, we design a randomized single-exponential time algorithm for the problems. These algorithms use an inclusion-exclusion based polynomial sieving technique and add to the growing number of applications of this powerful algebraic technique.
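
    The two domination requirements are easy to confuse, so the following checkers (illustrative, with a made-up example; not code from the paper) spell out the difference for a given coloring:

        def dominates(adj, v, vertex_set):
            """v dominates a set if every vertex in it is v itself or a neighbour of v."""
            return all(u == v or u in adj[v] for u in vertex_set)

        def color_classes(coloring):
            classes = {}
            for v, c in coloring.items():
                classes.setdefault(c, set()).add(v)
            return classes

        def is_proper(adj, coloring):
            return all(coloring[u] != coloring[v] for v in adj for u in adj[v])

        def is_dominator_coloring(adj, coloring):
            """Proper, and every vertex dominates at least one whole color class."""
            classes = color_classes(coloring).values()
            return is_proper(adj, coloring) and all(
                any(dominates(adj, v, cl) for cl in classes) for v in adj)

        def is_cd_coloring(adj, coloring):
            """Proper, and every color class is dominated by some single vertex."""
            classes = color_classes(coloring).values()
            return is_proper(adj, coloring) and all(
                any(dominates(adj, v, cl) for v in adj) for cl in classes)

        adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}          # the path a - b - c
        coloring = {"a": 1, "b": 2, "c": 3}
        print(is_dominator_coloring(adj, coloring), is_cd_coloring(adj, coloring))  # -> True True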

    On the Complexity of the Smallest Grammar Problem over Fixed Alphabets

    In the smallest grammar problem, we are given a word w and we want to compute a preferably small context-free grammar G for the singleton language {w} (where the size of a grammar is the sum of the sizes of its rules, and the size of a rule is measured by the length of its right side). It is known that, for unbounded alphabets, the decision variant of this problem is NP-hard and the optimisation variant does not allow a polynomial-time approximation scheme, unless P = NP. We settle the long-standing open problem of whether these hardness results also hold for the more realistic case of a constant-size alphabet. More precisely, it is shown that the smallest grammar problem remains NP-complete (and its optimisation version is APX-hard) even if the alphabet is fixed and has size at least 17. The corresponding reduction is robust in the sense that it also works for an alternative size measure of grammars that is commonly used in the literature (i.e., a size measure that also takes the number of rules into account), and it also allows us to conclude that even computing the number of rules required by a smallest grammar is a hard problem. On the other hand, if the number of nonterminals (or, equivalently, the number of rules) is bounded by a constant, then the smallest grammar problem can be solved in polynomial time, which is shown by encoding it as a problem on graphs with interval structure. However, treating the number of rules as a parameter (in terms of parameterised complexity) yields W[1]-hardness. Furthermore, we present an O(3^{|w|}) exact exponential-time algorithm based on dynamic programming. These three main questions are also investigated for 1-level grammars, i.e., grammars for which only the start rule contains nonterminals on the right side, thus investigating the impact of the “hierarchical depth” of grammars on the complexity of the smallest grammar problem. In this regard, we obtain for 1-level grammars similar, but slightly stronger, results.
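
    As a concrete instance of the size measure used above (the sum of the lengths of all right-hand sides), the following sketch (word and grammar invented for illustration) expands a straight-line grammar and reports its size:

        def expand(grammar, symbol):
            """Expand a nonterminal of a straight-line grammar into the word it derives."""
            if symbol not in grammar:                  # terminal symbol
                return symbol
            return "".join(expand(grammar, s) for s in grammar[symbol])

        def grammar_size(grammar):
            """Size = sum of the lengths of all right-hand sides."""
            return sum(len(rhs) for rhs in grammar.values())

        # S -> A A A A and A -> a b generate w = "abababab" with size 4 + 2 = 6 < |w| = 8.
        grammar = {"S": ["A", "A", "A", "A"], "A": ["a", "b"]}
        print(expand(grammar, "S"), grammar_size(grammar))       # -> abababab 6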

    Enabling Scalability: Graph Hierarchies and Fault Tolerance

    In this dissertation, we explore two techniques for building scalable algorithms. First, we look at different graph problems and show how to exploit the input graph's inherent hierarchy to obtain scalable graph algorithms. The second technique takes a step back from concrete algorithmic problems: we consider the case of node failures in large distributed systems and present techniques to recover from them quickly.

    In the first part of the dissertation, we investigate how hierarchies in graphs can be used to scale algorithms to large inputs. We develop algorithms for three graph problems based on two approaches to building hierarchies. The first approach reduces instance sizes for NP-hard problems by applying so-called reduction rules. These rules can be applied in polynomial time. They either find parts of the input that can be solved in polynomial time, or they identify structures that can be contracted (reduced) into smaller structures without loss of information for the specific problem. After solving the reduced instance using an exponential-time algorithm, the previously contracted structures can be uncontracted to obtain an exact solution for the original input. In addition to serving as a simple preprocessing procedure, reduction rules can also be used in branch-and-reduce algorithms, where they are applied after each branching step to build a hierarchy of problem kernels of increasing computational hardness. We develop reduction-based algorithms for the classical NP-hard problems Maximum Independent Set and Maximum Cut. The second approach is used for route planning in road networks, where we build a hierarchy of road segments based on their importance for long-distance shortest paths. By only considering important road segments when we are far away from the source and the destination, we can substantially speed up shortest path queries.

    In the second part of this dissertation, we take a step back from concrete graph problems and look at more general problems in high performance computing (HPC). Due to the ever increasing size and complexity of HPC clusters, we expect hardware and software failures to become more common in massively parallel computations. We present two techniques that allow applications to recover from failures and resume computation. Both techniques are based on in-memory storage of redundant information and a data distribution that enables fast recovery. The first technique can be used for general purpose distributed processing frameworks: we identify data that is redundantly available on multiple machines and only introduce additional work for the remaining data that is available on only one machine. The second technique is a checkpointing library engineered for fast recovery, using a data distribution method that achieves balanced communication loads. Both techniques have in common that they work in settings where computation after a failure is continued with fewer machines than before. This is in contrast to many previous approaches that, in particular for checkpointing, focus on systems that keep spare resources available to replace failed machines.

    Overall, we present different techniques that enable scalable algorithms. While some of these techniques are specific to graph problems, we also present tools for fault tolerant algorithms and applications in a distributed setting. To show that they can be helpful in many different domains, we evaluate them for graph problems and other applications such as phylogenetic tree inference.
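
    To make the notion of a reduction rule concrete, here is one classical rule for Maximum Independent Set, the degree-one rule (shown only as an example of the kind of rule meant; it is not claimed to be part of the dissertation's specific rule set): a vertex with exactly one neighbour can always be taken into the solution, after which it and its neighbour are deleted.

        def reduce_degree_one(adj):
            """Repeatedly take a degree-1 vertex into the solution and delete it with its neighbour.

            adj: dict mapping each vertex to the set of its neighbours (copied, not modified).
            Returns (vertices forced into the independent set, remaining reduced graph).
            """
            adj = {v: set(ns) for v, ns in adj.items()}
            forced, changed = set(), True
            while changed:
                changed = False
                for v in list(adj):
                    if v in adj and len(adj[v]) == 1:
                        (u,) = adj[v]
                        forced.add(v)                  # taking a degree-1 vertex is always safe
                        for x in (v, u):               # delete v and its neighbour u
                            for w in adj.pop(x, set()):
                                adj.get(w, set()).discard(x)
                        changed = True
            return forced, adj

        # On the path a - b - c - d the rule alone solves the instance, forcing a and c.
        forced, rest = reduce_degree_one({"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}})
        print(sorted(forced), rest)                    # -> ['a', 'c'] {}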