Algorithms and Bounds for Very Strong Rainbow Coloring
A well-studied coloring problem is to assign colors to the edges of a graph
so that, for every pair of vertices, all edges of at least one shortest
path between them receive different colors. The minimum number of colors
necessary in such a coloring is the strong rainbow connection number
(\src(G)) of the graph. When proving upper bounds on \src(G), it is natural
to prove that a coloring exists where, for \emph{every} shortest path between
every pair of vertices in the graph, all edges of the path receive different
colors. Therefore, we introduce and formally define this more restricted edge
coloring number, which we call \emph{very strong rainbow connection number}
(\vsrc(G)).
In this paper, we give upper bounds on \vsrc(G) for several graph classes,
some of which are tight. These immediately imply new upper bounds on \src(G)
for these classes, showing that the study of \vsrc(G) enables meaningful
progress on bounding \src(G). Then we study the complexity of computing \vsrc(G),
particularly for graphs of bounded treewidth, and show that this is an
interesting problem in its own right. We prove that \vsrc(G) can be
computed in polynomial time on cactus graphs; in contrast, this question is
still open for \src(G). We also observe that deciding whether \vsrc(G) = k
is fixed-parameter tractable in k and the treewidth of G. Finally, on
general graphs, we prove that there is no polynomial-time algorithm to decide
whether \vsrc(G) \leq 3, nor to approximate \vsrc(G) within a factor
n^{1-\epsilon}, unless P = NP.
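The \vsrc condition is easy to state operationally: every shortest path between every pair of vertices must use pairwise distinct edge colors. Below is a minimal Python sketch of a checker for this condition (the graph representation and function names are my own, not from the paper):

```python
from itertools import combinations
from collections import deque

def all_shortest_paths(adj, u, v):
    # BFS distances from u, then walk back from v along strictly decreasing levels
    # to enumerate every shortest u-v path (feasible only on small graphs).
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    paths = []
    def backtrack(x, path):
        if x == u:
            paths.append(path[::-1])
            return
        for y in adj[x]:
            if dist.get(y, -1) == dist[x] - 1:
                backtrack(y, path + [y])
    if v in dist:
        backtrack(v, [v])
    return paths

def is_very_strong_rainbow(adj, coloring):
    # coloring maps each edge frozenset({x, y}) to a color.
    for u, v in combinations(adj, 2):
        for path in all_shortest_paths(adj, u, v):
            cols = [coloring[frozenset(e)] for e in zip(path, path[1:])]
            if len(cols) != len(set(cols)):
                return False  # some shortest path repeats a color
    return True
```

On the 4-cycle, the 2-coloring that alternates colors around the cycle passes the check, so \vsrc(C_4) \leq 2, while a monochromatic coloring fails.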
Solving Partition Problems Almost Always Requires Pushing Many Vertices Around
A fundamental graph problem is to recognize whether the vertex set of a graph G can be bipartitioned into sets A and B such that G[A] and G[B] satisfy properties Pi_A and Pi_B, respectively. This so-called (Pi_A,Pi_B)-Recognition problem generalizes, among others, the recognition of 3-colorable, bipartite, split, and monopolar graphs. A powerful algorithmic technique that can be used to obtain fixed-parameter algorithms for many cases of (Pi_A,Pi_B)-Recognition, as well as several other problems, is the pushing process. For bipartition problems, the process starts with an "almost correct" bipartition (A',B'), and pushes appropriate vertices from A' to B' and vice versa to eventually arrive at a correct bipartition.
In this paper, we study whether (Pi_A,Pi_B)-Recognition problems for which the pushing process yields fixed-parameter algorithms also admit polynomial problem kernels. In our study, we focus on the first level above triviality, where Pi_A is the set of P_3-free graphs (disjoint unions of cliques, or cluster graphs), the parameter is the number of clusters in the cluster graph G[A], and Pi_B is characterized by a set H of connected forbidden induced subgraphs. We prove that, under the assumption that NP is not a subset of coNP/poly, (Pi_A,Pi_B)-Recognition admits a polynomial kernel if and only if H contains a graph of order at most 2. In both the kernelization and the lower bound results, we make crucial use of the pushing process.
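To make the parameter concrete: G[A] must be a cluster graph, i.e., P_3-free, which holds exactly when every connected component of G[A] is a clique. A small Python sketch (representation and names my own) that validates this and returns the number of clusters, the parameter used above:

```python
def cluster_count(adj, A):
    # Returns the number of clusters if G[A] is a cluster graph (P_3-free),
    # and None otherwise. adj maps each vertex to its neighbors in G.
    A = set(A)
    seen = set()
    clusters = 0
    for s in A:
        if s in seen:
            continue
        # collect the connected component of s inside G[A]
        comp, stack = {s}, [s]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y in A and y not in comp:
                    comp.add(y)
                    stack.append(y)
        seen |= comp
        clusters += 1
        # the component is a clique iff every vertex sees all the others in it
        for x in comp:
            if comp - {x} - set(adj[x]):
                return None  # an induced P_3 exists through x
    return clusters
```

A path on three vertices fails the check, while a triangle plus an isolated vertex yields two clusters.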
Complexity Framework for Forbidden Subgraphs
For any finite set H={H1,…,Hp} of graphs, a graph is H-subgraph-free if it does not contain any of H1,…,Hp as a subgraph. Similar to known meta-classifications for the minor and topological minor relations, we give a meta-classification for the subgraph relation. Our framework classifies whether problems are "efficiently solvable" or "computationally hard" for H-subgraph-free graphs. The conditions are that the problem should be efficiently solvable on graphs of bounded treewidth, computationally hard on subcubic graphs, and computational hardness is preserved under edge subdivision. We show that all problems satisfying these conditions are efficiently solvable if H contains a disjoint union of one or more paths and subdivided claws, and are computationally hard otherwise. To illustrate the broad applicability of our framework, we study partitioning, covering and packing problems, network design problems and width parameter problems. We apply the framework to obtain a dichotomy between polynomial-time solvability and NP-completeness. For other problems we obtain a dichotomy between almost-linear-time solvability and having no subquadratic-time algorithm (conditioned on some hardness hypotheses). Along the way we unify and strengthen known results from the literature.
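Since the set H is fixed and finite, H-subgraph-freeness itself can be tested by brute force over injective maps of each constant-size pattern. A naive Python sketch of just the definition (not the framework's machinery; names my own):

```python
from itertools import permutations

def contains_subgraph(g_adj, h_edges, h_vertices):
    # Brute force: try all injective maps from V(H) into V(G) that send
    # edges of H to edges of G (subgraph, not induced subgraph).
    # Feasible only because each pattern in H has constant size.
    g_vs = list(g_adj)
    for image in permutations(g_vs, len(h_vertices)):
        phi = dict(zip(h_vertices, image))
        if all(phi[b] in g_adj[phi[a]] for a, b in h_edges):
            return True
    return False

def is_H_subgraph_free(g_adj, patterns):
    # patterns: list of (vertex list, edge list) pairs describing H1, ..., Hp.
    return not any(contains_subgraph(g_adj, e, v) for v, e in patterns)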
Reconstructing the degree sequence of a sparse graph from a partial deck
The deck of a graph G is the multiset of cards {G − v : v ∈
V (G)}. Myrvold (1992) showed that the degree sequence of
a graph on n ≥ 7 vertices can be reconstructed from any
deck missing one card. We prove that the degree sequence
of a graph with average degree d can be reconstructed from
any deck missing O(n/d^3) cards. In particular, in the case of
graphs that can be embedded on a fixed surface (e.g. planar
graphs), the degree sequence can be reconstructed even when
a linear number of the cards are missing.
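The starting point for such results is a classical counting identity: every edge of G survives in exactly n − 2 cards, so the full deck determines the edge count m of G, and then deg(v) = m − e(G − v). A short Python sketch of this computation from a complete deck (function name my own):

```python
def degree_sequence_from_deck(card_edge_counts):
    # card_edge_counts[i] = number of edges of the card G - v_i.
    # Each edge of G appears in exactly n - 2 cards, so summing the
    # cards' edge counts gives (n - 2) * m, recovering m; then
    # deg(v_i) = m - e(G - v_i).
    n = len(card_edge_counts)
    total = sum(card_edge_counts)
    assert total % (n - 2) == 0, "not a valid complete deck"
    m = total // (n - 2)
    return sorted(m - e for e in card_edge_counts)
```

For the path on 4 vertices, the cards have edge counts [2, 1, 1, 2], giving m = 3 and degree sequence [1, 1, 2, 2]. The paper's contribution is showing how much of the deck can be discarded while this information remains recoverable.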
Finding Large Set Covers Faster via the Representation Method
The worst-case fastest known algorithm for the Set Cover problem on universes with n elements still essentially is the simple O*(2^n)-time dynamic programming algorithm, and no non-trivial consequences of an O*((2-eps)^n)-time algorithm are known. Motivated by this chasm, we study the following natural question: Which instances of Set Cover can we solve faster than the simple dynamic programming algorithm? Specifically, we give a Monte Carlo algorithm that determines the existence of a set cover of a given size in time exponentially faster than the simple dynamic programming algorithm. Our approach is also applicable to Set Cover instances with exponentially many sets: By reducing the task of finding the chromatic number of a given n-vertex graph to Set Cover in the natural way, we show there is a randomized algorithm, faster than O*(2^n), that given integer k, outputs NO if the chromatic number exceeds k and YES with constant probability if it is at most k. On a high level, our results are inspired by the `representation method' of Howgrave-Graham and Joux~[EUROCRYPT'10] and obtained by only evaluating a randomly sampled subset of the table entries of a dynamic programming algorithm.
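The baseline being improved upon is the standard dynamic program over subsets of the universe. A minimal Python sketch of that O*(2^n) algorithm (not the paper's sampling-based method):

```python
def min_set_cover(n, sets):
    # Classic O*(2^n) dynamic program over bitmask-encoded element subsets:
    # dp[mask] = fewest sets whose union covers at least the elements in mask.
    masks = [sum(1 << e for e in s) for s in sets]
    INF = float('inf')
    full = (1 << n) - 1
    dp = [INF] * (1 << n)
    dp[0] = 0
    for mask in range(1 << n):
        if dp[mask] == INF:
            continue
        for s in masks:
            new = mask | s  # extend the cover by one more set
            if dp[mask] + 1 < dp[new]:
                dp[new] = dp[mask] + 1
    return dp[full]
```

The paper's insight is that, for the question of covers of a prescribed size, one need not evaluate the whole table: a randomly sampled subset of entries suffices.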
A short note on Merlin-Arthur protocols for subset sum
In the subset sum problem we are given n positive integers along with a target integer t. A solution is a subset of these integers summing to t. In this short note we show that for a given subset sum instance there is a proof of what the number of solutions is that can be constructed, and probabilistically verified, with at most constant error probability. Here, the O* notation omits factors polynomial in the input size.
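The quantity being certified, the number of solutions, can itself be computed directly by a simple pseudopolynomial dynamic program; the point of an MA protocol is that a prover's certificate lets a verifier confirm the count far more cheaply than recomputing it. A Python sketch of the count itself (names my own):

```python
def count_subset_sums(nums, t):
    # counts[s] = number of subsets of the processed prefix summing to s;
    # this is exactly the count whose value the MA proof certifies.
    counts = {0: 1}
    for x in nums:
        nxt = dict(counts)
        for s, c in counts.items():
            if s + x <= t:  # sums above t can never reach back down
                nxt[s + x] = nxt.get(s + x, 0) + c
        counts = nxt
    return counts.get(t, 0)
```

For instance, [1, 2, 3] has two subsets summing to 3 ({3} and {1, 2}), and [2, 2, 2] has three subsets summing to 4.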
Improving Schroeppel and Shamir's Algorithm for Subset Sum via Orthogonal Vectors
We present an O*(2^{0.5n})-time and O*(2^{0.249999n})-space randomized algorithm for solving worst-case Subset Sum instances with n integers. This is the first improvement over the long-standing O*(2^{n/2})-time and O*(2^{n/4})-space algorithm due to Schroeppel and Shamir (FOCS 1979). We breach this gap in two steps: (1) We present a space-efficient reduction to the Orthogonal Vectors Problem (OV), one of the most central problems in Fine-Grained Complexity. The reduction is established via an intricate combination of the method of Schroeppel and Shamir and the representation technique introduced by Howgrave-Graham and Joux (EUROCRYPT 2010) for designing Subset Sum algorithms for the average-case regime. (2) We provide an algorithm for OV that detects an orthogonal pair among N given vectors in {0,1}^d with support size d/4 in time O~(N · 2^d / binom(d, d/4)). Our algorithm for OV is based on and refines the representative families framework developed by Fomin, Lokshtanov, Panolan and Saurabh (J. ACM 2016). Our reduction uncovers a curious tight relation between Subset Sum and OV, because any improvement of our algorithm for OV would imply an improvement over the runtime of Schroeppel and Shamir, which is also a long-standing open problem.
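For context, the classical meet-in-the-middle idea of Horowitz and Sahni, whose space usage Schroeppel and Shamir later reduced, can be sketched in a few lines of Python (a didactic O*(2^{n/2})-time version, not the paper's algorithm):

```python
from bisect import bisect_left

def subset_sums(nums):
    # all 2^len(nums) subset sums of the given list
    sums = [0]
    for x in nums:
        sums += [s + x for s in sums]
    return sums

def subset_sum_mitm(nums, t):
    # Horowitz-Sahni meet-in-the-middle: split the input in halves,
    # sort one half's subset sums, and for each sum s of the other
    # half binary-search for the complement t - s.
    half = len(nums) // 2
    left = subset_sums(nums[:half])
    right = sorted(subset_sums(nums[half:]))
    for s in left:
        i = bisect_left(right, t - s)
        if i < len(right) and right[i] == t - s:
            return True
    return False
```

Storing a full half's sums costs O*(2^{n/2}) space; Schroeppel and Shamir generate each half's sums in sorted order from quarters using priority queues, which is what brings space down to O*(2^{n/4}).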
Detecting Feedback Vertex Sets of Size k in O*(2.7^k) Time
In the Feedback Vertex Set problem, one is given an undirected graph G and an integer k, and one needs to determine whether there exists a set of at most k vertices that intersects all cycles of G (a so-called feedback vertex set). Feedback Vertex Set is one of the most central problems in parameterized complexity: It served as an excellent test bed for many important algorithmic techniques in the field such as Iterative Compression~[Guo et al. (JCSS'06)], Randomized Branching~[Becker et al. (J. Artif. Intell. Res'00)] and Cut\&Count~[Cygan et al. (FOCS'11)]. In particular, there has been a long race for the smallest base c in run times of the type O*(c^k), where the O* notation omits factors polynomial in the input size. This race seemed to be run in 2011, when a randomized O*(3^k)-time algorithm based on Cut\&Count was introduced. In this work, we show the contrary and give an O*(2.7^k)-time randomized algorithm. Our algorithm combines all mentioned techniques with substantial new ideas: First, we show that, given a feedback vertex set of size k of bounded average degree, a tree decomposition of width (1-Omega(1))k can be found in polynomial time. Second, we give a randomized branching strategy inspired by the one from~[Becker et al. (J. Artif. Intell. Res'00)] to reduce to the aforementioned bounded average degree setting. Third, we obtain significant run time improvements by employing fast matrix multiplication.
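While finding a small feedback vertex set is the hard part, verifying a candidate is easy: S is a feedback vertex set iff G − S is acyclic. A small Python sketch of the check via union-find cycle detection (assumes orderable vertex labels; names my own):

```python
def is_feedback_vertex_set(adj, S):
    # G - S is acyclic iff adding its edges one by one to a union-find
    # structure never joins two vertices already in the same component.
    S = set(S)
    parent = {v: v for v in adj if v not in S}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u in parent:
        for w in adj[u]:
            if w in S or w < u:  # process each undirected edge once
                continue
            ru, rw = find(u), find(w)
            if ru == rw:
                return False  # this edge closes a cycle in G - S
            parent[ru] = rw
    return True
```

On a triangle, the empty set fails the check while any single vertex suffices.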
Role coloring bipartite graphs
A k-role coloring of a graph G is an assignment of colors to the vertices of G such that every color is used at least once and, if any two vertices are assigned the same color, then their neighborhoods are assigned the same set of colors. By definition, every graph on n vertices admits an n-role coloring. While for every graph on n vertices it is trivial to decide if it admits a 1-role coloring, determining whether a graph admits a k-role coloring is a notoriously hard problem for k greater than or equal to 2. In fact, it is known that k-Role Coloring is NP-complete for k at least 2 on general graphs. There has been extensive research on the complexity of k-Role Coloring on various hereditary graph classes. Furthering this direction of research, we show that k-Role Coloring is NP-complete on bipartite graphs for k at least 3 (while it is trivial for k = 2). We complement the hardness result by characterizing 3-role-colorable bipartite chain graphs, leading to a polynomial-time algorithm for 3-Role Coloring for this class of graphs. We further show that 2-Role Coloring is NP-complete for graphs that are d vertices or edges away from the class of bipartite graphs, even when d = 1.
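Checking a proposed k-role coloring is straightforward; the hardness lies entirely in finding one. A minimal Python sketch of the validity check (representation my own, colors assumed to be 1..k):

```python
def is_role_coloring(adj, color, k):
    # Valid iff every color 1..k is used and any two vertices with the
    # same color see the same *set* of colors in their neighborhoods.
    if set(color.values()) != set(range(1, k + 1)):
        return False  # some color unused (or out of range)
    seen = {}  # color -> frozenset of neighborhood colors it must induce
    for v in adj:
        nb = frozenset(color[u] for u in adj[v])
        if seen.setdefault(color[v], nb) != nb:
            return False  # two same-colored vertices disagree
    return True
```

On the path a-b-c, coloring the endpoints 1 and the middle 2 is a valid 2-role coloring, whereas coloring a and b with 1 and c with 2 is not, since a and b see different color sets.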