70 research outputs found
Approximate Hypergraph Coloring under Low-discrepancy and Related Promises
A hypergraph is said to be $\chi$-colorable if its vertices can be colored
with $\chi$ colors so that no hyperedge is monochromatic. $2$-colorability is a
fundamental property (called Property B) of hypergraphs and is extensively
studied in combinatorics. Algorithmically, however, given a $2$-colorable
$k$-uniform hypergraph, it is NP-hard to find a $2$-coloring miscoloring fewer
than a $2^{-k+1}$ fraction of hyperedges (which is achieved by a random
$2$-coloring), and the best algorithms to color the hypergraph properly require
$\approx n^{1-1/k}$ colors, approaching the trivial bound of $n$ as $k$
increases.
In this work, we study the complexity of approximate hypergraph coloring, for
both the maximization (finding a $2$-coloring with fewest miscolored edges) and
minimization (finding a proper coloring using the fewest number of colors)
versions, when the input hypergraph is promised to have the following
properties, which are stronger than $2$-colorability:
(A) Low-discrepancy: If the hypergraph has discrepancy $\ell \ll \sqrt{k}$,
we give an algorithm to color it with $\approx n^{O(\ell^2/k)}$ colors.
However, for the maximization version, we prove NP-hardness of finding a
$2$-coloring miscoloring a smaller than $2^{-O(k)}$ (resp. $k^{-O(k)}$)
fraction of the hyperedges when $\ell = O(\log k)$ (resp. $\ell = 2$). Assuming
the UGC, we improve the latter hardness factor to $2^{-O(k)}$ for almost
discrepancy-$1$ hypergraphs.
(B) Rainbow colorability: If the hypergraph has a $(k-\ell)$-coloring such
that each hyperedge is polychromatic with all these colors, we give a
$2$-coloring algorithm that miscolors at most a $k^{-\Omega(k)}$ fraction of the
hyperedges when $\ell \ll \sqrt{k}$, and complement this with a matching UG
hardness result showing that when $\ell = \lceil\sqrt{k}\rceil$, it is hard to
even beat the $2^{-k+1}$ bound achieved by a random coloring.
Comment: APPROX 2015
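A quick sanity check on the $2^{-k+1}$ baseline quoted in this abstract (a standard calculation, not taken from the paper): for a fixed hyperedge $e$ of a $k$-uniform hypergraph, a uniformly random $2$-coloring makes $e$ monochromatic with probability
\[
  \Pr[e \text{ is monochromatic}] \;=\; 2 \cdot 2^{-k} \;=\; 2^{-k+1},
\]
since each of the two colors is assigned to all $k$ vertices of $e$ with probability $2^{-k}$. By linearity of expectation, a random $2$-coloring therefore miscolors a $2^{-k+1}$ fraction of the hyperedges in expectation, which is exactly the benchmark the hardness results refer to.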
A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms
Parameterization and approximation are two popular ways of coping with
NP-hard problems. More recently, the two have also been combined to derive many
interesting results. We survey developments in the area from both the
algorithmic and hardness perspectives, with an emphasis on new techniques and
potential future research directions.
Strengths and Limitations of Linear Programming Relaxations
Many of the currently best-known approximation algorithms for NP-hard optimization problems are based on Linear Programming (LP) and Semi-definite Programming (SDP) relaxations. Given its power, this class of algorithms seems to contain the most favourable candidates for outperforming the current state-of-the-art approximation guarantees for NP-hard problems, for which there still exists a gap between the inapproximability results and the approximation guarantees that we know how to achieve in polynomial time. In this thesis, we address both the power and the limitations of these relaxations, as well as the connection between the shortcomings of these relaxations and the inapproximability of the underlying problem.
In the first part, we study the limitations of LP relaxations of well-known graph problems such as the Vertex Cover problem and the Independent Set problem. We prove that any small LP relaxation for the aforementioned problems cannot have an integrality gap strictly better than $2$ and $n^{1-\varepsilon}$, respectively. Furthermore, our lower bound for the Independent Set problem also holds for any SDP relaxation. Prior to our work, it was only known that such LP relaxations cannot have an integrality gap better than $1.5$ for the Vertex Cover problem, and better than $n^{1/2-\varepsilon}$ for the Independent Set problem.
In the second part, we study the so-called knapsack cover inequalities, which are used in the current best relaxations for numerous combinatorial optimization problems of covering type. In spite of their widespread use, these inequalities yield LP relaxations of exponential size, over which it is not known how to optimize exactly in polynomial time. We address this issue and obtain LP relaxations of quasi-polynomial size that are at least as strong as those given by the knapsack cover inequalities.
In the last part, we show a close connection between structural hardness for k-partite graphs and tight inapproximability results for scheduling problems with precedence constraints. This connection is inspired by a family of integrality gap instances of a certain LP relaxation. Assuming the hardness of an optimization problem on k-partite graphs, we obtain a hardness of $2-\varepsilon$ for the problem of minimizing the makespan for scheduling with preemption on identical parallel machines, and a super-constant inapproximability for the problem of scheduling on related parallel machines. Prior to this result, it was only known that the first problem does not admit a PTAS, and the second problem is NP-hard to approximate within a factor strictly better than $2$, assuming the Unique Games Conjecture.
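For context, the knapsack cover inequalities mentioned above have the following standard form (stated for a single covering constraint; this is textbook material rather than a definition specific to the thesis). For a constraint $\sum_{i \in N} a_i x_i \ge D$ over $x \in \{0,1\}^N$ and any $A \subseteq N$ with residual demand $D(A) = D - \sum_{i \in A} a_i > 0$, the inequality
\[
  \sum_{i \in N \setminus A} \min\{a_i,\, D(A)\}\, x_i \;\ge\; D(A)
\]
is valid: even if every item in $A$ is picked, the remaining items must still cover the residual demand, and truncating the coefficients at $D(A)$ cuts off fractional solutions that the plain constraint admits. Since there is one such inequality for every subset $A$, the resulting relaxation has exponential size, which is the issue the second part of the thesis addresses.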
Dagstuhl Reports : Volume 1, Issue 2, February 2011
Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzè, Martin C. Rinard, Westley Weimer and Andreas Zeller
Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young
Non-Uniform Robust Network Design in Planar Graphs
Robust optimization is concerned with constructing solutions that remain
feasible even when a limited number of resources is removed from the solution.
Most studies of robust combinatorial optimization to date have assumed
that every resource is equally vulnerable, and that the set of scenarios is
implicitly given by a single budget constraint. This paper studies a robustness
model of a different kind. We focus on \textbf{bulk-robustness}, a model
recently introduced~\cite{bulk} for addressing the need to model non-uniform
failure patterns in systems.
We significantly extend the techniques used in~\cite{bulk} to design
approximation algorithms for bulk-robust network design problems in planar
graphs. Our techniques use an augmentation framework, combined with linear
programming (LP) rounding that depends on a planar embedding of the input
graph. A connection to cut covering problems and the dominating set problem in
circle graphs is established. Our methods use few of the specifics of
bulk-robust optimization, hence it is conceivable that they can be adapted to
solve other robust network design problems.
Comment: 17 pages, 2 figures
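To make the bulk-robustness model concrete, here is a minimal brute-force sketch in Python of its simplest (spanning-connectivity) variant: the input lists explicit failure scenarios (edge sets that may fail together), and a solution is feasible if it stays connected after the removal of any single scenario. This only illustrates the model, not the paper's LP-based algorithms; all function names are our own.

    def is_connected(nodes, edges):
        """Check whether the given edge set connects all nodes (DFS)."""
        adj = {v: [] for v in nodes}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == len(nodes)

    def is_bulk_robust(nodes, solution, scenarios):
        """solution: set of edges (u, v); scenarios: list of edge sets that
        may fail together. Feasible iff the solution stays connected after
        removing any single scenario."""
        return all(
            is_connected(nodes, [e for e in solution if e not in bad])
            for bad in scenarios
        )

    # Tiny example: a 4-cycle survives the loss of any single edge.
    nodes = {1, 2, 3, 4}
    cycle = {(1, 2), (2, 3), (3, 4), (4, 1)}
    assert is_bulk_robust(nodes, cycle, [{e} for e in cycle])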
Partitioning Hypergraphs is Hard: Models, Inapproximability, and Applications
We study the balanced $k$-way hypergraph partitioning problem, with a special
focus on its practical applications to manycore scheduling. Given a hypergraph
on $n$ nodes, our goal is to partition the node set into $k$ parts of size at
most $(1+\varepsilon)\cdot\frac{n}{k}$ each, while minimizing the cost of the
partitioning, defined as the number of cut hyperedges, possibly also weighted
by the number of partitions they intersect. We show that this problem cannot be
approximated to within an $n^{1/\operatorname{poly}\log\log n}$ factor of the optimal
solution in polynomial time if the Exponential Time Hypothesis holds, even for
hypergraphs of maximal degree 2. We also study the hardness of the partitioning
problem from a parameterized complexity perspective, and in the more general
case when we have multiple balance constraints.
Furthermore, we consider two extensions of the partitioning problem that are
motivated from practical considerations. Firstly, we introduce the concept of
hyperDAGs to model precedence-constrained computations as hypergraphs, and we
analyze the adaptation of the balanced partitioning problem to this case.
Secondly, we study the hierarchical partitioning problem to model hierarchical
NUMA (non-uniform memory access) effects in modern computer architectures, and
we show that ignoring this hierarchical aspect of the communication cost can
yield significantly weaker solutions.
Comment: Published in the 35th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2023).
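The two cost functions mentioned in this abstract are easy to restate in code (a plain illustration with our own function and variable names; the convention of charging $\lambda - 1$ for a hyperedge meeting $\lambda$ blocks is the standard connectivity metric, which the phrase "weighted by the number of partitions they intersect" refers to):

    from collections import Counter

    def cut_costs(hyperedges, part):
        """hyperedges: iterable of node collections; part: dict node -> block id.
        Returns (cut-net cost, connectivity cost)."""
        cut_net = connectivity = 0
        for e in hyperedges:
            blocks = {part[v] for v in e}
            if len(blocks) > 1:
                cut_net += 1                      # count each cut hyperedge once
                connectivity += len(blocks) - 1   # weight by #blocks touched minus 1
        return cut_net, connectivity

    def is_balanced(part, k, eps):
        """Each of the k blocks may hold at most (1 + eps) * n / k nodes."""
        n = len(part)
        return all(s <= (1 + eps) * n / k for s in Counter(part.values()).values())

    # Example: 4 nodes, 2 blocks, one of two hyperedges cut.
    part = {1: 0, 2: 0, 3: 1, 4: 1}
    print(cut_costs([[1, 2], [2, 3, 4]], part))  # -> (1, 1)
    print(is_balanced(part, k=2, eps=0.0))       # -> True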
On Tree-Constrained Matchings and Generalizations
We consider the following \textsc{Tree-Constrained Bipartite Matching} problem: Given two rooted trees $T_1 = (V_1, E_1)$, $T_2 = (V_2, E_2)$ and a weight function $w: V_1 \times V_2 \to \mathbb{R}_+$, find a maximum weight matching between nodes of the two trees, such that none of the matched nodes is an ancestor of another matched node in either of the trees. This generalization of the classical bipartite matching problem appears, for example, in the computational analysis of live cell video data. We show that the problem is $\mathcal{APX}$-hard and thus, unless $\mathcal{P} = \mathcal{NP}$, disprove a previous claim that it is solvable in polynomial time. Furthermore, we give a $2$-approximation algorithm based on a combination of the local ratio technique and a careful use of the structure of basic feasible solutions of a natural LP-relaxation, which we also show to have an integrality gap of $2 - o(1)$.
In the second part of the paper, we consider a natural generalization of the problem, where trees are replaced by partially ordered sets (posets). We show that the local ratio technique gives a $2k\rho$-approximation for the $k$-dimensional matching generalization of the problem, in which the maximum number of incomparable elements below (or above) any given element in each poset is bounded by $\rho$. We finally give an almost matching integrality gap example, and an inapproximability result showing that the dependence on $\rho$ is most likely unavoidable.
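For concreteness, the ancestry constraint in this problem is straightforward to check: a matching is feasible iff no matched node is a strict ancestor of another matched node in the same tree. A small Python checker (an illustrative sketch; the parent-pointer representation and all names are our own, not from the paper):

    def strict_ancestors(parent, v):
        """All strict ancestors of v; parent[root] is None."""
        out = set()
        while parent[v] is not None:
            v = parent[v]
            out.add(v)
        return out

    def is_feasible(matching, parent1, parent2):
        """matching: list of (u, v) pairs with u in tree 1 and v in tree 2.
        Feasible iff no matched node is a strict ancestor of another
        matched node in the same tree."""
        for side, parent in ((0, parent1), (1, parent2)):
            chosen = {pair[side] for pair in matching}
            if any(strict_ancestors(parent, v) & chosen for v in chosen):
                return False
        return True

    # Example tree 1: 1 -> {2, 3}; tree 2: a -> {b, c}.
    parent1 = {1: None, 2: 1, 3: 1}
    parent2 = {"a": None, "b": "a", "c": "a"}
    print(is_feasible([(2, "b"), (3, "c")], parent1, parent2))  # True
    print(is_feasible([(1, "b"), (2, "c")], parent1, parent2))  # False: 1 is 2's ancestor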