
    Approximate Hypergraph Coloring under Low-discrepancy and Related Promises

    A hypergraph is said to be $\chi$-colorable if its vertices can be colored with $\chi$ colors so that no hyperedge is monochromatic. $2$-colorability is a fundamental property (called Property B) of hypergraphs and is extensively studied in combinatorics. Algorithmically, however, given a $2$-colorable $k$-uniform hypergraph, it is NP-hard to find a $2$-coloring miscoloring fewer than a fraction $2^{-k+1}$ of hyperedges (which is achieved by a random $2$-coloring), and the best algorithms to color the hypergraph properly require $\approx n^{1-1/k}$ colors, approaching the trivial bound of $n$ as $k$ increases. In this work, we study the complexity of approximate hypergraph coloring, for both the maximization (finding a $2$-coloring with the fewest miscolored edges) and minimization (finding a proper coloring using the fewest colors) versions, when the input hypergraph is promised to have the following properties, which are stronger than $2$-colorability: (A) Low discrepancy: If the hypergraph has discrepancy $\ell \ll \sqrt{k}$, we give an algorithm to color it with $\approx n^{O(\ell^2/k)}$ colors. However, for the maximization version, we prove NP-hardness of finding a $2$-coloring miscoloring less than a $2^{-O(k)}$ (resp. $k^{-O(k)}$) fraction of the hyperedges when $\ell = O(\log k)$ (resp. $\ell = 2$). Assuming the UGC, we improve the latter hardness factor to $2^{-O(k)}$ for almost discrepancy-$1$ hypergraphs. (B) Rainbow colorability: If the hypergraph has a $(k-\ell)$-coloring such that each hyperedge is polychromatic with all these colors, we give a $2$-coloring algorithm that miscolors at most a $k^{-\Omega(k)}$ fraction of the hyperedges when $\ell \ll \sqrt{k}$, and complement this with a matching UG hardness result showing that when $\ell = \sqrt{k}$, it is hard to even beat the $2^{-k+1}$ bound achieved by a random coloring.
    Comment: Approx 2015
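    As a sanity check on the $2^{-k+1}$ baseline mentioned above: a uniformly random $2$-coloring leaves any fixed hyperedge of size $k$ monochromatic with probability $2 \cdot 2^{-k} = 2^{-k+1}$, so that is exactly the expected fraction of miscolored hyperedges. A minimal simulation sketch in Python (the random hypergraph and the function name are illustrative, not from the paper):

        import random

        def random_coloring_miscolor_fraction(hyperedges, n, trials=200):
            """Average fraction of monochromatic hyperedges under a
            uniformly random 2-coloring of n vertices."""
            total = 0.0
            for _ in range(trials):
                color = [random.randrange(2) for _ in range(n)]
                mono = sum(1 for e in hyperedges
                           if len({color[v] for v in e}) == 1)
                total += mono / len(hyperedges)
            return total / trials

        # A k-uniform hypergraph with random hyperedges; each hyperedge is
        # monochromatic with probability 2 * 2**(-k) = 2**(-k+1).
        k, n = 6, 40
        edges = [random.sample(range(n), k) for _ in range(2000)]
        print(random_coloring_miscolor_fraction(edges, n))  # ~ 2**(-5) = 0.03125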

    A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms

    Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area from both the algorithmic and hardness perspectives, with an emphasis on new techniques and potential future research directions.

    Strengths and Limitations of Linear Programming Relaxations

    Many of the currently best-known approximation algorithms for NP-hard optimization problems are based on Linear Programming (LP) and Semi-definite Programming (SDP) relaxations. Given its power, this class of algorithms seems to contain the most favourable candidates for outperforming the current state-of-the-art approximation guarantees for NP-hard problems, for which there still exists a gap between the inapproximability results and the approximation guarantees that we know how to achieve in polynomial time. In this thesis, we address both the power and the limitations of these relaxations, as well as the connection between the shortcomings of these relaxations and the inapproximability of the underlying problem. In the first part, we study the limitations of LP relaxations of well-known graph problems such as the Vertex Cover problem and the Independent Set problem. We prove that any small LP relaxation for the aforementioned problems cannot have an integrality gap strictly better than $2$ and $\omega(1)$, respectively. Furthermore, our lower bound for the Independent Set problem also holds for any SDP relaxation. Prior to our work, it was only known that such LP relaxations cannot have an integrality gap better than $1.5$ for the Vertex Cover problem, and better than $2$ for the Independent Set problem. In the second part, we study the so-called knapsack cover inequalities that are used in the current best relaxations for numerous combinatorial optimization problems of covering type. In spite of their widespread use, these inequalities yield LP relaxations of exponential size, over which it is not known how to optimize exactly in polynomial time. We address this issue and obtain LP relaxations of quasi-polynomial size that are at least as strong as that given by the knapsack cover inequalities. In the last part, we show a close connection between structural hardness for $k$-partite graphs and tight inapproximability results for scheduling problems with precedence constraints. This connection is inspired by a family of integrality gap instances of a certain LP relaxation. Assuming the hardness of an optimization problem on $k$-partite graphs, we obtain a hardness of $2-\varepsilon$ for the problem of minimizing the makespan for scheduling with preemption on identical parallel machines, and a super-constant inapproximability for the problem of scheduling on related parallel machines. Prior to this result, it was only known that the first problem does not admit a PTAS, and that the second problem is NP-hard to approximate within a factor strictly better than 2, assuming the Unique Games Conjecture.
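    For intuition on why a factor of $2$ is the natural barrier for Vertex Cover, the gap already shows up in the basic LP relaxation (the thesis, of course, rules out much stronger small relaxations): on the complete graph $K_n$, setting every $x_v = \frac{1}{2}$ is feasible with value $n/2$, while any integral cover needs $n-1$ vertices, so the ratio tends to $2$. A small sketch, assuming scipy is available:

        import itertools
        from scipy.optimize import linprog

        def vertex_cover_lp(n_vertices, edges):
            """Standard LP relaxation of Vertex Cover:
            min sum_v x_v  s.t.  x_u + x_v >= 1 per edge, 0 <= x_v <= 1."""
            c = [1.0] * n_vertices
            A_ub, b_ub = [], []
            for u, v in edges:               # x_u + x_v >= 1  <=>  -x_u - x_v <= -1
                row = [0.0] * n_vertices
                row[u] = row[v] = -1.0
                A_ub.append(row)
                b_ub.append(-1.0)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n_vertices)
            return res.fun

        n = 10
        edges = list(itertools.combinations(range(n), 2))  # complete graph K_n
        lp_opt = vertex_cover_lp(n, edges)                 # n/2, all x_v = 1/2
        print(lp_opt, n - 1, (n - 1) / lp_opt)             # gap 1.8, -> 2 as n grows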

    Dagstuhl Reports: Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young

    Non-Uniform Robust Network Design in Planar Graphs

    Robust optimization is concerned with constructing solutions that remain feasible even when a limited number of resources is removed from the solution. Most studies of robust combinatorial optimization to date have assumed that every resource is equally vulnerable and that the set of scenarios is implicitly given by a single budget constraint. This paper studies a robustness model of a different kind. We focus on bulk-robustness, a model recently introduced \cite{bulk} to address the need to model non-uniform failure patterns in systems. We significantly extend the techniques used in \cite{bulk} to design approximation algorithms for bulk-robust network design problems in planar graphs. Our techniques use an augmentation framework, combined with linear programming (LP) rounding that depends on a planar embedding of the input graph. A connection to cut covering problems and the dominating set problem in circle graphs is established. Our methods use few of the specifics of bulk-robust optimization, so it is conceivable that they can be adapted to solve other robust network design problems.
    Comment: 17 pages, 2 figures

    Partitioning Hypergraphs is Hard: Models, Inapproximability, and Applications

    We study the balanced $k$-way hypergraph partitioning problem, with a special focus on its practical applications to manycore scheduling. Given a hypergraph on $n$ nodes, our goal is to partition the node set into $k$ parts of size at most $(1+\epsilon)\cdot\frac{n}{k}$ each, while minimizing the cost of the partitioning, defined as the number of cut hyperedges, possibly also weighted by the number of partitions they intersect. We show that this problem cannot be approximated to within a factor of $n^{1/\text{poly}\log\log n}$ of the optimal solution in polynomial time if the Exponential Time Hypothesis holds, even for hypergraphs of maximum degree 2. We also study the hardness of the partitioning problem from a parameterized complexity perspective, and in the more general case where we have multiple balance constraints. Furthermore, we consider two extensions of the partitioning problem that are motivated by practical considerations. First, we introduce the concept of hyperDAGs to model precedence-constrained computations as hypergraphs, and we analyze the adaptation of the balanced partitioning problem to this case. Second, we study the hierarchical partitioning problem to model hierarchical NUMA (non-uniform memory access) effects in modern computer architectures, and we show that ignoring this hierarchical aspect of the communication cost can yield significantly weaker solutions.
    Comment: Published in the 35th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2023)
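    To make the objective concrete, the following sketch evaluates a candidate partition against the balance constraint and cut cost described above. The $\lambda_e - 1$ ("connectivity") weighting used here is one common variant and an assumption of this sketch, since the abstract only says cut hyperedges may be weighted by the number of parts they intersect; the unweighted cost simply counts cut hyperedges.

        from collections import Counter

        def partition_cost(hyperedges, part, k, eps, weighted=True):
            """Cut cost of a k-way partition `part` (node -> block id).
            Each cut hyperedge contributes lambda_e - 1 (number of blocks
            it intersects, minus one) if weighted, else 1."""
            n = len(part)
            cap = (1 + eps) * n / k
            if any(s > cap for s in Counter(part.values()).values()):
                raise ValueError("balance constraint violated")
            cost = 0
            for e in hyperedges:
                blocks = {part[v] for v in e}
                if len(blocks) > 1:
                    cost += len(blocks) - 1 if weighted else 1
            return cost

        # 6 nodes split into two balanced blocks; two hyperedges are cut.
        part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
        edges = [(0, 1, 2), (2, 3), (0, 3, 4, 5)]
        print(partition_cost(edges, part, k=2, eps=0.0))  # 2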

    On Tree-Constrained Matchings and Generalizations

    We consider the following Tree-Constrained Bipartite Matching problem: Given two rooted trees $T_1=(V_1,E_1)$, $T_2=(V_2,E_2)$ and a weight function $w: V_1\times V_2 \mapsto \mathbb{R}_+$, find a maximum-weight matching $\mathcal{M}$ between nodes of the two trees, such that none of the matched nodes is an ancestor of another matched node in either of the trees. This generalization of the classical bipartite matching problem appears, for example, in the computational analysis of live cell video data. We show that the problem is $\mathcal{APX}$-hard and thus, unless $\mathcal{P} = \mathcal{NP}$, disprove a previous claim that it is solvable in polynomial time. Furthermore, we give a $2$-approximation algorithm based on a combination of the local ratio technique and a careful use of the structure of basic feasible solutions of a natural LP relaxation, which we also show to have an integrality gap of $2-o(1)$. In the second part of the paper, we consider a natural generalization of the problem, where trees are replaced by partially ordered sets (posets). We show that the local ratio technique gives a $2k\rho$-approximation for the $k$-dimensional matching generalization of the problem, in which the maximum number of incomparable elements below (or above) any given element in each poset is bounded by $\rho$. We finally give an almost matching integrality gap example, and an inapproximability result showing that the dependence on $\rho$ is most likely unavoidable.
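    The ancestry constraint is simple to state in code, which may help in parsing the problem definition. A minimal feasibility test (the parent-pointer representation and all names here are assumptions for illustration, not from the paper):

        def ancestors(parent, v):
            """Proper ancestors of v, given parent pointers (root maps to None)."""
            out = set()
            while parent[v] is not None:
                v = parent[v]
                out.add(v)
            return out

        def is_feasible(matching, parent1, parent2):
            """True iff no matched node is a proper ancestor of another
            matched node, in either tree."""
            for side, parent in ((0, parent1), (1, parent2)):
                matched = {pair[side] for pair in matching}
                if any(ancestors(parent, v) & matched for v in matched):
                    return False
            return True

        # Both trees: root 0 with children 1 and 2.
        p1 = p2 = {0: None, 1: 0, 2: 0}
        print(is_feasible([(1, 1), (2, 2)], p1, p2))  # True: leaves are incomparable
        print(is_feasible([(0, 1), (1, 2)], p1, p2))  # False: 0 is an ancestor of 1 in T_1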