The Complexity of Packing Edge-Disjoint Paths
We introduce and study the complexity of Path Packing: given a graph G and a list of paths, the task is to embed the paths edge-disjointly in G. This generalizes the well-known Hamiltonian Path problem.
Since Hamiltonian Path is efficiently solvable on graphs of small treewidth, we study how this result translates to the much more general Path Packing. On the positive side, we give an FPT algorithm on trees parameterized by the number of paths. Further, we give an XP algorithm for the combined parameters maximum degree, number of connected components, and number of vertices of degree at least three. Surprisingly, the latter result is almost tight with respect to both running time and parameterization: we show an ETH lower bound almost matching our running time, and if two of the three values are constant while the third is unbounded, the problem becomes NP-hard.
Further, we study restrictions on the given list of paths. On the positive side, we present an FPT algorithm parameterized by the sum of the lengths of the paths. Packing paths of length two is polynomial-time solvable, while packing paths of length three is NP-hard. Finally, even the special case Exact Path Packing, where the paths have to cover every edge of G exactly once, is NP-hard already for two paths on 4-regular graphs.
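To make the problem statement concrete, here is a minimal brute-force sketch (not one of the paper's algorithms); the function name `path_packing` and the representation of paths by their required edge counts are our own illustrative choices.

```python
# A minimal brute-force sketch of the Path Packing problem defined above:
# given a graph G and a list of required path lengths (edge counts), decide
# whether G contains pairwise edge-disjoint simple paths of exactly those
# lengths. Exponential time; names and input format are illustrative only.

def path_packing(n, edges, lengths):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def place(idx, used):
        # embed the idx-th path while avoiding the already-used edges
        if idx == len(lengths):
            return True

        def grow(v, visited, left, chosen):
            if left == 0:
                return place(idx + 1, used | chosen)
            for w in adj[v]:
                e = frozenset((v, w))
                if w not in visited and e not in used and e not in chosen:
                    if grow(w, visited | {w}, left - 1, chosen | {e}):
                        return True
            return False

        return any(grow(s, {s}, lengths[idx], frozenset()) for s in adj)

    return place(0, frozenset())

# Hamiltonian Path is the special case of a single path on n-1 edges:
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(path_packing(4, square, [3]))     # True: 0-1-2-3
print(path_packing(4, square, [2, 2]))  # True: 0-1-2 and 2-3-0
print(path_packing(4, square, [3, 3]))  # False: only 4 edges exist
```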
Parameterized Complexity of Path Set Packing
In PATH SET PACKING, the input is an undirected graph $G$, a collection $\mathcal{P}$ of simple paths in $G$, and a positive integer $k$. The problem is to decide whether there exist $k$ edge-disjoint paths in $\mathcal{P}$. We study the parameterized complexity of PATH SET PACKING with respect to both natural and structural parameters. We show that the problem is W[1]-hard with respect to vertex cover number plus the maximum length of a path in $\mathcal{P}$, and W[1]-hard with respect to pathwidth plus maximum degree plus solution size. These results answer an open question raised in COCOON 2018. On the positive side, we show an FPT algorithm parameterized by feedback vertex set plus maximum degree, and also an FPT algorithm parameterized by treewidth plus maximum degree plus maximum length of a path in $\mathcal{P}$. Both positive results complement the hardness of PATH SET PACKING with respect to any subset of the parameters used in the FPT algorithms.
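As a small illustration of the problem definition (not of the paper's FPT algorithms), the following hedged sketch decides PATH SET PACKING by exhaustive branching; all names and the input encoding are hypothetical.

```python
# A hedged sketch of the PATH SET PACKING definition: given a collection
# of simple paths (each as a list of edges) and an integer k, decide
# whether some k of them are pairwise edge-disjoint. Plain exponential
# branching, meant only to make the problem concrete.

def path_set_packing(paths, k):
    paths = [frozenset(frozenset(e) for e in p) for p in paths]

    def search(i, taken, used):
        if taken == k:
            return True
        if i == len(paths) or taken + (len(paths) - i) < k:
            return False  # not enough paths left to reach k
        # branch 1: take paths[i] if it shares no edge with the selection
        if not (paths[i] & used) and search(i + 1, taken + 1, used | paths[i]):
            return True
        # branch 2: skip paths[i]
        return search(i + 1, taken, used)

    return search(0, 0, frozenset())

# Three paths in a triangle on {0, 1, 2}:
p1 = [(0, 1), (1, 2)]
p2 = [(1, 2), (2, 0)]
p3 = [(0, 1)]
print(path_set_packing([p1, p2, p3], 2))  # True: p2 and p3 are disjoint
print(path_set_packing([p1, p2, p3], 3))  # False: every other pair overlaps
```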
Distributed Connectivity Decomposition
We present time-efficient distributed algorithms for decomposing graphs with
large edge or vertex connectivity into multiple spanning or dominating trees,
respectively. As their primary applications, these decompositions allow us to
achieve information flow with size close to the connectivity by parallelizing
it along the trees. More specifically, our distributed decomposition algorithms
are as follows:
(I) A decomposition of each undirected graph with vertex-connectivity $k$ into (fractionally) vertex-disjoint weighted dominating trees with total weight $\Omega(\frac{k}{\log n})$, in $\tilde{O}(D + \sqrt{n})$ rounds.
(II) A decomposition of each undirected graph with edge-connectivity $\lambda$ into (fractionally) edge-disjoint weighted spanning trees with total weight $\lceil \frac{\lambda-1}{2} \rceil (1 - \varepsilon)$, in $\tilde{O}(D + \sqrt{n\lambda})$ rounds.
We also show round complexity lower bounds of $\tilde{\Omega}(D + \sqrt{n/k})$ and $\tilde{\Omega}(D + \sqrt{n/\lambda})$ for the above two decompositions, using techniques of [Das Sarma et al., STOC'11]. Moreover, our vertex-connectivity decomposition extends to centralized algorithms and improves the time complexity of [Censor-Hillel et al., SODA'14] from $O(n^3)$ to near-optimal $\tilde{O}(m)$.
As corollaries, we also get distributed oblivious routing broadcast with $O(1)$-competitive edge-congestion and $O(\log n)$-competitive vertex-congestion. Furthermore, the vertex-connectivity decomposition leads to a near-time-optimal $O(\log n)$-approximation of vertex connectivity: centralized $\tilde{O}(m)$ and distributed $\tilde{O}(D + \sqrt{n})$. The former moves toward the 1974 conjecture of Aho, Hopcroft, and Ullman postulating an $O(m)$ centralized exact algorithm, while the latter is the first distributed vertex connectivity approximation.
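For intuition about the object computed in (II), here is a purely illustrative, centralized sketch that peels off edge-disjoint spanning trees greedily. It is emphatically not the paper's distributed algorithm (which packs fractionally weighted trees in few rounds); the greedy peel is order-sensitive and may find fewer trees than an optimal matroid-union packing.

```python
# Greedy peeling of edge-disjoint spanning trees via union-find.
# Illustrates the decomposition target only; not the paper's method.

def peel_spanning_trees(n, edges):
    remaining = list(edges)
    trees = []
    while True:
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        tree, leftover = [], []
        for u, v in remaining:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                tree.append((u, v))
            else:
                leftover.append((u, v))
        if len(tree) == n - 1:      # found another full spanning tree
            trees.append(tree)
            remaining = leftover
        else:
            return trees

# K4 (edge order chosen so that the greedy peel succeeds twice):
k4 = [(0, 1), (1, 2), (2, 3), (0, 2), (0, 3), (1, 3)]
for t in peel_spanning_trees(4, k4):
    print(t)   # [(0, 1), (1, 2), (2, 3)] then [(0, 2), (0, 3), (1, 3)]
```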
Pre-Reduction Graph Products: Hardnesses of Properly Learning DFAs and Approximating EDP on DAGs
The study of graph products is a major research topic and typically concerns the term $f(G \ast H)$, e.g., to show that $f(G \ast H) = f(G) f(H)$. In this paper, we study graph products in a non-standard form $f(\mathcal{R}[G \ast H])$ where $\mathcal{R}$ is a "reduction", a transformation of any graph into an instance of an intended optimization problem. We resolve some open problems as applications.
(1) A tight $n^{1-\epsilon}$-approximation hardness for the minimum consistent deterministic finite automaton (DFA) problem, where $n$ is the sample size. Due to Board and Pitt [Theoretical Computer Science 1992], this implies the hardness of properly learning DFAs assuming $NP \neq RP$ (the weakest possible assumption).
(2) A tight $n^{1/2-\epsilon}$ hardness for the edge-disjoint paths (EDP) problem on directed acyclic graphs (DAGs), where $n$ denotes the number of vertices.
(3) A tight hardness of packing vertex-disjoint $k$-cycles for large $k$.
(4) An alternative (and perhaps simpler) proof for the hardness of properly learning DNF, CNF and intersection of halfspaces [Alekhnovich et al., FOCS 2004 and J. Comput. Syst. Sci. 2008].
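As a concrete reference point for the product term $G \ast H$ (which the paper composes with a reduction $\mathcal{R}$), here is one standard choice, the tensor (categorical) product; the abstract does not pin down which product is used, so this construction is purely illustrative.

```python
# Tensor product of two graphs given as edge lists. Vertices of the
# product are pairs (u, x); (u, x) is adjacent to (v, y) iff uv is an
# edge of G and xy is an edge of H (in either orientation).

def tensor_product(g_edges, h_edges):
    prod = set()
    for u, v in g_edges:
        for x, y in h_edges:
            prod.add(frozenset(((u, x), (v, y))))
            prod.add(frozenset(((u, y), (v, x))))
    return prod

# Sanity check: K2 x K2 is a perfect matching on four vertices
# (two disjoint edges), a classic small example.
for e in tensor_product([(0, 1)], [(0, 1)]):
    print(sorted(e))   # [(0, 0), (1, 1)] and [(0, 1), (1, 0)]
```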
Narrow sieves for parameterized paths and packings
We present randomized algorithms for some well-studied, hard combinatorial
problems: the k-path problem, the p-packing of q-sets problem, and the
q-dimensional p-matching problem. Our algorithms solve these problems with high
probability in time exponential only in the parameter (k, p, q) and using
polynomial space; the constant bases of the exponentials are significantly
smaller than in previous works. For example, for the k-path problem the
improvement is from 2 to 1.66. We also show how to detect if a d-regular graph
admits an edge coloring with d colors in time within a polynomial factor of
O(2^{(d-1)n/2}).
Our techniques build upon and generalize some recently published ideas by I.
Koutis (ICALP 2009), R. Williams (IPL 2009), and A. Björklund (STACS 2010, FOCS 2010).
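For contrast with the sieving approach, here is a hedged sketch of the classic color-coding baseline for k-path (Alon, Yuster, Zwick), whose 2^k-type exponential base is the kind of constant the narrow sieves improve (to 1.66 for k-path). This is background, not the paper's algebraic method.

```python
# Color-coding for k-path: one trial errs only on the "no" side;
# repeating trials boosts the success probability.

import random

def colorful_k_path_trial(adj, k):
    # randomly k-color the vertices, then DP over (vertex, color subset):
    # bitmask m is in states[v] iff some simple path ending at v uses
    # exactly the colors in m, each once
    color = {v: random.randrange(k) for v in adj}
    states = {v: {1 << color[v]} for v in adj}
    for _ in range(k - 1):
        new = {v: set(s) for v, s in states.items()}
        for v in adj:
            for w in adj[v]:
                bit = 1 << color[w]
                for mask in states[v]:
                    if not mask & bit:
                        new[w].add(mask | bit)
        states = new
    full = (1 << k) - 1
    return any(full in s for s in states.values())

def has_k_path(adj, k, trials=300):
    # a fixed k-vertex path is colorful with probability k!/k^k ~ e^-k,
    # so O(e^k) independent trials suffice for constant error
    return any(colorful_k_path_trial(adj, k) for _ in range(trials))

# the 5-cycle contains a path on 4 vertices but none on 6
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(has_k_path(c5, 4))  # True (with high probability)
print(has_k_path(c5, 6))  # False
```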
Lossy Kernelization
In this paper we propose a new framework for analyzing the performance of
preprocessing algorithms. Our framework builds on the notion of kernelization
from parameterized complexity. However, as opposed to the original notion of
kernelization, our definitions combine well with approximation algorithms and
heuristics. The key new definition is that of a polynomial size
$\alpha$-approximate kernel. Loosely speaking, a polynomial size $\alpha$-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance $(I,k)$ of a parameterized problem and outputs another instance $(I',k')$ of the same problem, such that $|I'| + k' \leq k^{O(1)}$. Additionally, for every $c \geq 1$, a $c$-approximate solution $s'$ to the pre-processed instance $(I',k')$ can be turned in polynomial time into a $(c \cdot \alpha)$-approximate solution $s$ to the original instance $(I,k)$.
Our main technical contributions are $\alpha$-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless $NP \subseteq coNP/poly$. Our approximate
kernels simultaneously beat both the lower bounds on the (normal) kernel size,
and the hardness of approximation lower bounds for all three problems. On the
negative side we prove that Longest Path parameterized by the length of the
path and Set Cover parameterized by the universe size do not admit even an
$\alpha$-approximate kernel of polynomial size, for any $\alpha \geq 1$, unless $NP \subseteq coNP/poly$. In order to prove this lower bound we need to combine
in a non-trivial way the techniques used for showing kernelization lower bounds
with the methods for showing hardness of approximation.

Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.
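To make the reduce/lift shape of the definition concrete, here is a toy sketch using the classic high-degree rule for Vertex Cover, which yields an exact (i.e., 1-approximate) kernel step; the lossy framework relaxes the lifting guarantee from "ratio preserved" to "ratio times alpha". Function names are our own, not the paper's.

```python
# Reduce: if some vertex v has degree > k, every vertex cover of size
# <= k must contain v; take it, delete its edges, decrease the budget.
# Lift: a cover of the reduced instance plus the forced vertices covers
# the original instance, with no loss in solution quality.

def reduce_vc(edges, k):
    edges = set(frozenset(e) for e in edges)
    forced = set()
    while k > 0:
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        forced.add(high)
        edges = {e for e in edges if high not in e}
        k -= 1
    return edges, k, forced

def lift_vc(reduced_cover, forced):
    return set(reduced_cover) | forced

# a star K_{1,5} with budget k = 1: the center is forced, nothing remains
star = [(0, i) for i in range(1, 6)]
rest, k2, forced = reduce_vc(star, 1)
print(rest, k2, forced)        # set() 0 {0}
print(lift_vc(set(), forced))  # {0}
```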