Largest reduced neighborhood clique cover number revisited
Let $G$ be a graph and $t \ge 0$ an integer. The largest reduced neighborhood clique cover number of $G$, denoted by $\hat\beta_t(G)$, is the largest, over all $t$-shallow minors of $G$, of the smallest number of cliques that can cover any closed neighborhood of a vertex in such a minor. It is known that $\hat\beta_t(G) \le s_t$, where $G$ is an incomparability graph and $s_t$ is the number of leaves in a largest $t$-shallow minor which is isomorphic to an induced star on $s_t$ leaves. In this paper we give an overview of the properties of $\hat\beta_t(G)$, including the connections to the greatest reduced average density of $G$, or $\nabla_t(G)$, introduce the class of graphs with bounded neighborhood clique cover number, and derive a simple lower and an upper bound for this important graph parameter. We announce two conjectures, one for the value of $\hat\beta_t(G)$, and another for a separator theorem (with respect to a certain measure) for an interesting class of graphs, namely the class of incomparability graphs, which we suspect to have a polynomially bounded neighborhood clique cover number when the size of a largest induced star is bounded.

Comment: The results in this paper were presented at the 48th Southeastern Conference in Combinatorics, Graph Theory and Computing, Florida Atlantic University, Boca Raton, March 201
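To make the central definition concrete, here is a small brute-force sketch (our own illustration, not a method from the paper) that computes the minimum number of cliques needed to cover the closed neighborhood of a vertex in a tiny graph; the exponential search is only viable for very small vertex sets.

```python
from itertools import combinations

def is_clique(adj, verts):
    """Check that every pair in verts is adjacent."""
    return all(v in adj[u] for u, v in combinations(verts, 2))

def min_clique_cover(adj, verts):
    """Smallest number of cliques covering all of verts (brute force).

    Enumerates all cliques inside verts, then tries covers of
    increasing size k. Exponential, so only for toy instances.
    """
    verts = list(verts)
    cliques = [set(s) for r in range(1, len(verts) + 1)
               for s in combinations(verts, r) if is_clique(adj, s)]
    for k in range(1, len(verts) + 1):
        for choice in combinations(cliques, k):
            if set().union(*choice) == set(verts):
                return k
    return len(verts)

def closed_neighborhood(adj, v):
    return adj[v] | {v}

# 5-cycle: the closed neighborhood of 0 is {4, 0, 1}; edges 4-0 and 0-1
# exist but 4 and 1 are not adjacent, so two cliques are needed.
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(min_clique_cover(C5, closed_neighborhood(C5, 0)))  # → 2
```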
Structural Rounding: Approximation Algorithms for Graphs Near an Algorithmically Tractable Class
We develop a framework for generalizing approximation algorithms from the structural graph algorithm literature so that they apply to graphs somewhat close to that class (a scenario we expect is common when working with real-world networks) while still guaranteeing approximation ratios. The idea is to edit a given graph via vertex- or edge-deletions to put the graph into an algorithmically tractable class, apply known approximation algorithms for that class, and then lift the solution to apply to the original graph. We give a general characterization of when an optimization problem is amenable to this approach, and show that it includes many well-studied graph problems, such as Independent Set, Vertex Cover, Feedback Vertex Set, Minimum Maximal Matching, Chromatic Number, (l-)Dominating Set, Edge (l-)Dominating Set, and Connected Dominating Set.
To enable this framework, we develop new editing algorithms that find the approximately-fewest edits required to bring a given graph into one of a few important graph classes (in some cases these are bicriteria algorithms which simultaneously approximate both the number of editing operations and the target parameter of the family). For bounded degeneracy, we obtain an O(r log n)-approximation and a bicriteria (4,4)-approximation which also extends to a smoother bicriteria trade-off. For bounded treewidth, we obtain a bicriteria (O(log^{1.5} n), O(sqrt{log w}))-approximation, and for bounded pathwidth, we obtain a bicriteria (O(log^{1.5} n), O(sqrt{log w} * log n))-approximation. For treedepth 2 (related to bounded expansion), we obtain a 4-approximation. We also prove complementary hardness-of-approximation results assuming P != NP: in particular, these problems are all log-factor inapproximable, except the last, which is not approximable below some constant factor 2 (assuming UGC).
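As a concrete instance of the edit-solve-lift pattern for Vertex Cover under vertex deletions (a simplified sketch of the general idea, not the paper's algorithms): delete a set X of vertices, cover the remaining graph with any known approximation, then lift by adding X back, since every edge of the original graph either avoids X or touches it.

```python
def vertex_cover_greedy(edges):
    """Standard 2-approximate vertex cover via a maximal matching."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched |= {u, v}
            cover |= {u, v}
    return cover

def structural_rounding_vc(edges, deleted):
    """Edit-solve-lift sketch: solve on G - X, then lift by adding X.

    Any vertex cover C' of G - X together with X covers every edge of
    G, so the lift costs at most |X| extra vertices on top of the
    approximation guarantee for the edited graph.
    """
    deleted = set(deleted)
    remaining = [(u, v) for u, v in edges
                 if u not in deleted and v not in deleted]
    return vertex_cover_greedy(remaining) | deleted

# Toy graph: a 4-cycle with one chord; pretend deleting vertex 2 puts
# the graph into a tractable class.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
cover = structural_rounding_vc(edges, deleted={2})
assert all(u in cover or v in cover for u, v in edges)
print(sorted(cover))
```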
Phase Transitions of the Typical Algorithmic Complexity of the Random Satisfiability Problem Studied with Linear Programming
Here we study the NP-complete $K$-SAT problem. Although the worst-case complexity of NP-complete problems is conjectured to be exponential, there exist parametrized random ensembles of problems where solutions can typically be found in polynomial time for suitable ranges of the parameter. In fact, random $K$-SAT, with the ratio $\alpha$ of clauses to variables as control parameter, can be solved quickly for small enough values of $\alpha$. It shows a phase transition between a satisfiable phase and an unsatisfiable phase. For branch and bound algorithms, which operate in the space of feasible Boolean configurations, the empirically hardest problems are located only close to this phase transition. Here we study $K$-SAT and the related optimization problem MAX-SAT by a linear programming approach, which is widely used for practical problems and allows for polynomial run time. In contrast to branch and bound, it operates outside the space of feasible configurations. On the other hand, finding a solution within polynomial time is not guaranteed. We investigated several variants, like including artificial objective functions, so-called cutting-plane approaches, and a mapping to the NP-complete vertex-cover problem. We observed several easy-hard transitions, from where the problems are typically solvable (in polynomial time) using the given algorithms to where they are not solvable in polynomial time. For the related vertex-cover problem on random graphs, these easy-hard transitions can be identified with structural properties of the graphs, like percolation transitions. For the present random $K$-SAT problem we have investigated numerous structural properties, which also exhibit clear transitions, but they appear not to be correlated to the easy-hard transitions observed here. This renders the behaviour of random $K$-SAT more complex than, e.g., the vertex-cover problem.

Comment: 11 pages, 5 figures
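The standard LP relaxation behind such an approach (a sketch of the textbook encoding, not necessarily the exact variant used in the paper; the solver itself is omitted) turns each clause into one linear constraint over relaxed variables in [0, 1]:

```python
def sat_lp_constraints(clauses, n_vars):
    """Encode K-SAT clauses (DIMACS-style signed ints) as A x <= b.

    A clause with positive literals P and negated literals N relaxes to
        sum_{i in P} x_i + sum_{j in N} (1 - x_j) >= 1,
    i.e.  -sum_{i in P} x_i + sum_{j in N} x_j <= |N| - 1,
    with every x_i in [0, 1]. Feeding (A, b) to any LP solver then
    checks feasibility of the relaxation.
    """
    A, b = [], []
    for clause in clauses:
        row = [0.0] * n_vars
        negs = 0
        for lit in clause:
            if lit > 0:
                row[lit - 1] -= 1.0
            else:
                row[-lit - 1] += 1.0
                negs += 1
        A.append(row)
        b.append(float(negs - 1))
    return A, b

def satisfies(A, b, x):
    """Check x against every row of A x <= b (with float tolerance)."""
    return all(sum(a * xi for a, xi in zip(row, x)) <= bi + 1e-9
               for row, bi in zip(A, b))

# (x1 or not x2) and (x2 or x3): the fractional point (1/2, 1/2, 1/2)
# is feasible, as it is for every clause with at least two literals.
A, b = sat_lp_constraints([[1, -2], [2, 3]], n_vars=3)
print(satisfies(A, b, [0.5, 0.5, 0.5]))  # → True
```

Note that LP feasibility only refutes satisfiability when the relaxation is infeasible; a fractional feasible point proves nothing by itself, which is why the abstract's cutting planes and artificial objectives matter.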
Edge-decompositions of graphs with high minimum degree
A fundamental theorem of Wilson states that, for every graph $F$, every sufficiently large $F$-divisible clique has an $F$-decomposition. Here a graph $G$ is $F$-divisible if $e(F)$ divides $e(G)$ and the greatest common divisor of the degrees of $F$ divides the greatest common divisor of the degrees of $G$, and $G$ has an $F$-decomposition if the edges of $G$ can be covered by edge-disjoint copies of $F$. We extend this result to graphs $G$ which are allowed to be far from complete. In particular, together with a result of Dross, our results imply that every sufficiently large $K_3$-divisible graph of minimum degree at least $9n/10$ has a $K_3$-decomposition. This significantly improves previous results towards the long-standing conjecture of Nash-Williams that every sufficiently large $K_3$-divisible graph with minimum degree at least $3n/4$ has a $K_3$-decomposition. We also obtain the asymptotically correct minimum degree thresholds of $2n/3 + o(n)$ for the existence of a $C_4$-decomposition, and of $n/2 + o(n)$ for the existence of a $C_{2\ell}$-decomposition, where $\ell \ge 3$. Our main contribution is a general `iterative absorption' method which turns an approximate or fractional decomposition into an exact one. In particular, our results imply that in order to prove an asymptotic version of Nash-Williams' conjecture, it suffices to show that every $K_3$-divisible graph with minimum degree at least $(3/4 + o(1))n$ has an approximate $K_3$-decomposition.

Comment: 41 pages. This version includes some minor corrections, updates and improvements.
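The divisibility condition in Wilson's theorem is easy to check directly; a small sketch (our own illustration) using only degree sequences and the handshaking lemma:

```python
from math import gcd
from functools import reduce

def is_F_divisible(G_degrees, F_degrees):
    """Wilson-style divisibility check from degree sequences.

    G is F-divisible if e(F) divides e(G) and the gcd of F's degrees
    divides the gcd of G's degrees; e(H) = sum(degrees) / 2 by the
    handshaking lemma.
    """
    eF = sum(F_degrees) // 2
    eG = sum(G_degrees) // 2
    gF = reduce(gcd, F_degrees)
    gG = reduce(gcd, G_degrees)
    return eG % eF == 0 and gG % gF == 0

# K_7 is K_3-divisible: 21 edges (divisible by 3) and all degrees even,
# consistent with the existence of a triangle decomposition of K_7.
print(is_F_divisible([6] * 7, [2] * 3))  # → True
```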
Linear Time Subgraph Counting, Graph Degeneracy, and the Chasm at Size Six
We consider the problem of counting all k-vertex subgraphs in an input graph, for any constant k. This problem (denoted SUB-CNT_k) has been studied extensively in both theory and practice. In a classic result, Chiba and Nishizeki (SICOMP 85) gave linear time algorithms for clique and 4-cycle counting for bounded degeneracy graphs. This is a rich class of sparse graphs that contains, for example, all minor-free families and preferential attachment graphs. The techniques from this result have inspired a number of recent practical algorithms for SUB-CNT_k. Towards a better understanding of the limits of these techniques, we ask: for what values of k can SUB-CNT_k be solved in linear time?
We discover a chasm at k=6. Specifically, we prove that for k < 6, SUB-CNT_k can be solved in linear time. Assuming a standard conjecture in fine-grained complexity, we prove that for all k >= 6, SUB-CNT_k cannot be solved even in near-linear time.
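The flavor of the degeneracy-based counting cited above can be sketched for triangles (a common variant of the Chiba-Nishizeki approach, not the paper's own code): orient each edge along a degeneracy ordering, so out-degrees are bounded by the degeneracy d and the pairwise intersections cost O(m * d) overall.

```python
def degeneracy_order(adj):
    """Repeatedly remove a minimum-degree vertex (simple O(n^2) version)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return order

def count_triangles(adj):
    """Count triangles by orienting edges along a degeneracy order.

    Each vertex keeps only its later neighbors, so every triangle is
    counted exactly once, with its vertices in order.
    """
    pos = {v: i for i, v in enumerate(degeneracy_order(adj))}
    out = {v: {u for u in adj[v] if pos[u] > pos[v]} for v in adj}
    count = 0
    for v in adj:
        for u in out[v]:
            # common later neighbors of v and u close a triangle
            count += len(out[v] & out[u])
    return count

# K_4 contains 4 triangles.
K4 = {i: {j for j in range(4) if j != i} for i in range(4)}
print(count_triangles(K4))  # → 4
```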
On the decomposition threshold of a given graph
We study the $F$-decomposition threshold $\delta_F$ for a given graph $F$. Here an $F$-decomposition of a graph $G$ is a collection of edge-disjoint copies of $F$ in $G$ which together cover every edge of $G$. (Such an $F$-decomposition can only exist if $G$ is $F$-divisible, i.e. if $e(F)$ divides $e(G)$ and each vertex degree of $G$ can be expressed as a linear combination of the vertex degrees of $F$.)
The $F$-decomposition threshold $\delta_F$ is the smallest value ensuring that every $F$-divisible graph $G$ on $n$ vertices with $\delta(G) \ge (\delta_F + o(1))n$ has an $F$-decomposition. Our main results imply the following for a given graph $F$, where $\delta_F^*$ is the fractional version of $\delta_F$ and $\chi := \chi(F)$:
(i) $\delta_F \le \max\{\delta_F^*, 1 - 1/(\chi + 1)\}$;
(ii) if $\chi \ge 5$, then $\delta_F \in \{\delta_F^*, 1 - 1/\chi, 1 - 1/(\chi + 1)\}$;
(iii) we determine $\delta_F$ if $F$ is bipartite.
In particular, (i) implies that $\delta_{K_r} = \delta_{K_r}^*$. Our proof involves further developments of the recent `iterative' absorbing approach.

Comment: Final version, to appear in the Journal of Combinatorial Theory, Series B