Improved Hardness for Cut, Interdiction, and Firefighter Problems
We study variants of the classic s-t cut problem and prove the following improved hardness results assuming the Unique Games Conjecture (UGC).
* For Length-Bounded Cut and Shortest Path Interdiction, we show that both problems are hard to approximate within any constant factor, even if we allow bicriteria approximation. If we want to cut vertices or the graph is directed, our hardness ratio for Length-Bounded Cut matches the best approximation ratio up to a constant. Previously, the best hardness ratio was 1.1377 for Length-Bounded Cut and 2 for Shortest Path Interdiction.
* For any constant k >= 2 and epsilon > 0, we show that Directed Multicut with k source-sink pairs is hard to approximate within a factor k - epsilon. This matches the trivial k-approximation algorithm. By a simple reduction, our result for k = 2 implies that Directed Multiway Cut with two terminals (also known as s-t Bicut) is hard to approximate within a factor 2 - epsilon, matching the trivial 2-approximation algorithm.
* Assuming a variant of the UGC introduced by Bansal and Khot, we prove that it is hard to approximate Resource Minimization Fire Containment within any constant factor. Previously, the best hardness ratio was 2. For directed layered graphs with b layers, our hardness ratio Omega(log b) matches the best approximation algorithm.
Our results are based on a general method of converting an integrality gap instance to a length-control dictatorship test for variants of the s-t cut problem, which may be useful for other problems.
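The trivial k-approximation for Directed Multicut mentioned above is simple enough to sketch: cut each source-sink pair independently with a minimum s_i-t_i cut and take the union of the cuts; each per-pair cut costs at most OPT, so the union costs at most k times OPT. A minimal Python sketch, assuming unit edge capacities and an adjacency-list encoding (the helper names and graph format are illustrative, not from the paper):

```python
from collections import defaultdict, deque

def min_edge_cut(adj, s, t):
    """Minimum s-t edge cut in a unit-capacity digraph via Ford-Fulkerson."""
    cap = defaultdict(int)
    for u, vs in adj.items():
        for v in vs:
            cap[(u, v)] += 1

    def bfs():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            if u == t:
                return parent
            for (a, b), c in list(cap.items()):
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    q.append(b)
        return None

    # Augment unit flow along shortest residual paths until none remain.
    while (parent := bfs()) is not None:
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u

    # The cut: original edges leaving the residual-reachable side.
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for (a, b), c in cap.items():
            if a == u and c > 0 and b not in reach:
                reach.add(b)
                q.append(b)
    return {(u, v) for u, vs in adj.items() for v in vs
            if u in reach and v not in reach}

def multicut_k_approx(adj, pairs):
    """Union of per-pair minimum cuts: the trivial k-approximation."""
    cut = set()
    for s, t in pairs:
        cut |= min_edge_cut(adj, s, t)
    return cut
```

Since every individual min cut is a lower bound on the optimal multicut, the union is within a factor k of optimal, which is exactly the ratio the hardness result above shows to be essentially best possible.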
Directed Multicut with linearly ordered terminals
Motivated by an application in network security, we investigate the following
"linear" case of Directed Multicut. Let G be a directed graph with some
distinguished vertices t_1, ..., t_k. What is the size of the smallest edge
cut which eliminates all paths from t_i to t_j for all i < j? We show that
this problem is fixed-parameter tractable when parametrized by the cutset
size. Comment: 12 pages, 1 figure
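The cut property being asked for — no surviving path from t_i to t_j for any i < j — can be checked directly by breadth-first search. A small sketch, assuming an edge-list encoding of the graph (names are illustrative):

```python
from collections import deque

def is_linear_multicut(edges, terminals, cut):
    """Check that removing the edges in `cut` leaves no path t_i -> t_j
    for any i < j, where `terminals` lists t_1, ..., t_k in order."""
    adj = {}
    for u, v in edges:
        if (u, v) not in cut:
            adj.setdefault(u, []).append(v)

    def reachable(s):
        seen, q = {s}, deque([s])
        while q:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        return seen

    # For each terminal, no strictly later terminal may be reachable.
    for i, t in enumerate(terminals):
        if reachable(t) & set(terminals[i + 1:]):
            return False
    return True
```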
Pseudo-Separation for Assessment of Structural Vulnerability of a Network
Based upon the idea that network functionality is impaired if two nodes in a
network are sufficiently separated in terms of a given metric, we introduce two
combinatorial \emph{pseudocut} problems generalizing the classical min-cut and
multi-cut problems. We expect the pseudocut problems will find broad relevance
to the study of network reliability. We comprehensively analyze the
computational complexity of the pseudocut problems and provide three
approximation algorithms for these problems.
Motivated by applications in communication networks with strict
Quality-of-Service (QoS) requirements, we demonstrate the utility of the
pseudocut problems by proposing a targeted vulnerability assessment for the
structure of communication networks using QoS metrics; we perform experimental
evaluations of our proposed approximation algorithms in this context.
Network Design Problems with Bounded Distances via Shallow-Light Steiner Trees
In a directed graph with non-correlated edge lengths and costs, the
\emph{network design problem with bounded distances} asks for a cost-minimal
spanning subgraph subject to a length bound for all node pairs. We give a
bi-criteria approximation for this problem. This improves on the currently
best known linear approximation bound, at the cost of violating the distance
bound by a bounded factor.
In the course of proving this result, the related problem of \emph{directed
shallow-light Steiner trees} arises as a subproblem. In the context of directed
graphs, approximations to this problem have been elusive. We present the first
non-trivial result: an approximation algorithm whose guarantee is parametrized
by the set of terminals.
Finally, we show how to apply our results to obtain an approximation algorithm
for \emph{light-weight directed spanners}, for which no non-trivial
approximation algorithm was known before. All running times are polynomial in
the input size for any fixed value of the accuracy parameter.
Approximability of Connected Factors
Finding a d-regular spanning subgraph (or d-factor) of a graph is easy by
Tutte's reduction to the matching problem. By the same reduction, it is easy to
find a minimal or maximal d-factor of a graph. However, if we require that the
d-factor is connected, these problems become NP-hard - finding a minimal
connected 2-factor is just the traveling salesman problem (TSP).
Given a complete graph with edge weights that satisfy the triangle
inequality, we consider the problem of finding a minimal connected d-factor.
We give a 3-approximation for all d and improve this to an
(r+1)-approximation for even d, where r is the approximation ratio of the TSP.
This yields a 2.5-approximation for even d. The same algorithm yields an
(r+1)-approximation for the directed version of the problem, where r is the
approximation ratio of the asymmetric TSP. We also show that none of these
minimization problems can be approximated better than the corresponding TSP.
Finally, for the decision problem of deciding whether a given graph contains
a connected d-factor, we extend known hardness results. Comment: To appear in the proceedings of WAOA 201
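The object in question is easy to verify even though it is hard to find: a connected d-factor is a spanning subgraph that is d-regular and connected, and both conditions can be checked in linear time. A small Python sketch, assuming an undirected edge-list encoding (illustrative, not from the paper):

```python
def is_connected_d_factor(vertices, edges, d):
    """Check that `edges` forms a spanning, d-regular, connected subgraph
    on `vertices` (i.e., a connected d-factor)."""
    deg = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        adj[u].append(v)
        adj[v].append(u)
    # d-regularity: every vertex has degree exactly d.
    if any(deg[v] != d for v in vertices):
        return False
    # Connectivity: DFS from an arbitrary vertex must reach all vertices.
    stack, seen = [next(iter(vertices))], set()
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u])
    return len(seen) == len(vertices)
```

For d = 2 this is exactly the check that a set of edges forms a Hamiltonian cycle, which is why the minimization problem contains the TSP.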
The Densest k-Subhypergraph Problem
The Densest k-Subgraph (DkS) problem, and its corresponding minimization
problem Smallest k-Edge Subgraph (SkES), have come to play a central role
in approximation algorithms. This is due both to their practical importance
and to their usefulness as a tool for solving and establishing approximation
bounds for other problems. These two problems are not well understood, and it
is widely believed that they do not admit a subpolynomial approximation
ratio (although the best known hardness results do not rule this out).
In this paper we generalize both DkS and SkES from graphs to hypergraphs.
We consider the Densest k-Subhypergraph problem (given a hypergraph, find a
subset S of k vertices so as to maximize the number of hyperedges contained
in S) and define the Minimum k-Union problem (given a hypergraph, choose k
of the hyperedges so as to minimize the number of vertices in their union).
We focus in particular on the case where all hyperedges have size 3, as this
is the simplest non-graph setting. For this case we provide approximation
algorithms for Densest k-Subhypergraph and for Minimum k-Union, as well as
an approximation algorithm for Minimum k-Union in general hypergraphs.
Finally, we examine the interesting special case of interval hypergraphs
(instances where the vertices are a subset of the natural numbers and the
hyperedges are intervals of the line) and prove that both problems admit an
exact polynomial time solution on these instances. Comment: 21 pages
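The objective of Densest k-Subhypergraph can be made concrete with an exhaustive-search baseline: enumerate every k-vertex subset and count the hyperedges it fully contains. This is exponential in general (which is why the paper seeks approximation algorithms), but it pins down the problem statement; names and encoding are illustrative:

```python
from itertools import combinations

def densest_k_subhypergraph(vertices, hyperedges, k):
    """Exhaustive search for the k-vertex set containing the most
    hyperedges. Exponential time; only suitable for tiny instances."""
    best_set, best_count = None, -1
    for S in combinations(sorted(vertices), k):
        s = set(S)
        # A hyperedge is "contained in S" iff all its vertices lie in S.
        count = sum(1 for e in hyperedges if set(e) <= s)
        if count > best_count:
            best_set, best_count = s, count
    return best_set, best_count
```

Minimum k-Union is the mirror image: enumerate k-subsets of the hyperedges and minimize the size of their union.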
Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal
The Odd Cycle Transversal problem (OCT) asks whether a given graph can be
made bipartite by deleting at most k of its vertices. In a breakthrough
result Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an
O(4^k kmn) time algorithm for it, the first algorithm with polynomial
runtime of uniform degree for every fixed k. It is known that this implies a
polynomial-time compression algorithm that turns OCT instances into equivalent
instances of size at most O(4^k), a so-called kernelization. Since then
the existence of a polynomial kernel for OCT, i.e., a kernelization with size
bounded polynomially in k, has turned into one of the main open questions in
the study of kernelization.
This work provides the first (randomized) polynomial kernelization for OCT.
We introduce a novel kernelization approach based on matroid theory, where we
encode all relevant information about a problem instance into a matroid with a
representation of size polynomial in k. For OCT, the matroid is built to
allow us to simulate the computation of the iterative compression step of the
algorithm of Reed, Smith, and Vetta, applied (for only one round) to an
approximate odd cycle transversal which it is aiming to shrink to size k. The
process is randomized with one-sided error exponentially small in k, where
the result can contain false positives but no false negatives, and the size
guarantee is cubic in the size of the approximate solution. Combined with an
O(sqrt(log n))-approximation (Agarwal et al., STOC 2005), we get a
reduction of the instance to size O(k^{4.5}), implying a randomized
polynomial kernelization. Comment: Minor changes to agree with SODA 2012 version of the paper
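The problem being kernelized can be stated concretely with a brute-force baseline: try all vertex sets of size at most k and test whether deleting one makes the graph bipartite. This is exponential in n, which is precisely what the FPT algorithm and the kernelization above avoid; the sketch below is illustrative only:

```python
from itertools import combinations

def is_bipartite(vertices, edges):
    """2-color the graph by DFS; fail on a monochromatic edge."""
    color = {}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    for s in vertices:
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    stack.append(w)
                elif color[w] == color[u]:
                    return False
    return True

def odd_cycle_transversal(vertices, edges, k):
    """Brute force: return a set of at most k vertices whose deletion
    makes the graph bipartite, or None if no such set exists."""
    vs = sorted(vertices)
    for size in range(k + 1):
        for X in combinations(vs, size):
            xs = set(X)
            rem_v = [v for v in vs if v not in xs]
            rem_e = [(u, w) for u, w in edges
                     if u not in xs and w not in xs]
            if is_bipartite(rem_v, rem_e):
                return xs
    return None
```

A kernelization would first shrink the instance itself to size polynomial in k, after which even such an exhaustive search runs in time depending only on k.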