
    Largest reduced neighborhood clique cover number revisited

    Let $G$ be a graph and $t\ge 0$. The largest reduced neighborhood clique cover number of $G$, denoted by ${\hat\beta}_t(G)$, is the largest, over all $t$-shallow minors $H$ of $G$, of the smallest number of cliques that can cover any closed neighborhood of a vertex in $H$. It is known that ${\hat\beta}_t(G)\le s_t$, where $G$ is an incomparability graph and $s_t$ is the number of leaves in a largest $t$-shallow minor which is isomorphic to an induced star on $s_t$ leaves. In this paper we give an overview of the properties of ${\hat\beta}_t(G)$, including the connections to the greatest reduced average density of $G$, or $\bigtriangledown_t(G)$, introduce the class of graphs with bounded neighborhood clique cover number, and derive a simple lower and an upper bound for this important graph parameter. We announce two conjectures, one for the value of ${\hat\beta}_t(G)$, and another for a separator theorem (with respect to a certain measure) for an interesting class of graphs, namely the class of incomparability graphs, which we suspect to have a polynomially bounded neighborhood clique cover number when the size of a largest induced star is bounded.
    Comment: The results in this paper were presented at the 48th Southeastern Conference in Combinatorics, Graph Theory and Computing, Florida Atlantic University, Boca Raton, March 201
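    As a rough illustration of the underlying parameter (the $t=0$ case only, not the shallow-minor version studied in the paper), the minimum clique cover of each closed neighborhood $N[v]$ equals the chromatic number of the complement of $G[N[v]]$. The brute-force sketch below is an assumption of this note (it reads "any closed neighborhood" as a maximum over vertices), uses networkx, and is feasible only for very small graphs.

```python
from itertools import product
import networkx as nx

def chromatic_number(H):
    """Exact chromatic number by exhaustive search (tiny graphs only)."""
    nodes = list(H.nodes)
    for k in range(1, len(nodes) + 1):
        for colouring in product(range(k), repeat=len(nodes)):
            colour = dict(zip(nodes, colouring))
            if all(colour[u] != colour[v] for u, v in H.edges):
                return k
    return 0

def neighborhood_clique_cover_number(G):
    """Max over vertices v of the minimum number of cliques covering N[v]."""
    best = 0
    for v in G.nodes:
        sub = G.subgraph(set(G[v]) | {v})
        # minimum clique cover of sub = chromatic number of its complement
        best = max(best, chromatic_number(nx.complement(sub)))
    return best

print(neighborhood_clique_cover_number(nx.cycle_graph(5)))  # C_5: prints 2
```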

    Structural Rounding: Approximation Algorithms for Graphs Near an Algorithmically Tractable Class

    We develop a framework for generalizing approximation algorithms from the structural graph algorithm literature so that they apply to graphs somewhat close to that class (a scenario we expect is common when working with real-world networks) while still guaranteeing approximation ratios. The idea is to edit a given graph via vertex- or edge-deletions to put the graph into an algorithmically tractable class, apply known approximation algorithms for that class, and then lift the solution to apply to the original graph. We give a general characterization of when an optimization problem is amenable to this approach, and show that it includes many well-studied graph problems, such as Independent Set, Vertex Cover, Feedback Vertex Set, Minimum Maximal Matching, Chromatic Number, (l-)Dominating Set, Edge (l-)Dominating Set, and Connected Dominating Set. To enable this framework, we develop new editing algorithms that find the approximately-fewest edits required to bring a given graph into one of a few important graph classes (in some cases these are bicriteria algorithms which simultaneously approximate both the number of editing operations and the target parameter of the family). For bounded degeneracy, we obtain an O(r log n)-approximation and a bicriteria (4,4)-approximation which also extends to a smoother bicriteria trade-off. For bounded treewidth, we obtain a bicriteria (O(log^{1.5} n), O(sqrt{log w}))-approximation, and for bounded pathwidth, we obtain a bicriteria (O(log^{1.5} n), O(sqrt{log w} * log n))-approximation. For treedepth 2 (related to bounded expansion), we obtain a 4-approximation. We also prove complementary hardness-of-approximation results assuming P != NP: in particular, these problems are all log-factor inapproximable, except the last, which is not approximable below some constant factor (2 assuming UGC).
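    As a purely schematic sketch (not the authors' code), the edit-solve-lift pattern described above might look as follows for Vertex Cover with vertex deletions; `edit_to_class` and `solve_in_class` are hypothetical placeholders for the paper's editing and class-specific approximation algorithms.

```python
def structural_rounding_vertex_cover(G, edit_to_class, solve_in_class):
    """Edit G into a tractable class, solve there, then lift the solution.

    G is assumed to be a networkx graph; edit_to_class returns a set of
    vertices whose deletion places the remainder in the target class, and
    solve_in_class returns an (approximate) vertex cover of that remainder.
    """
    # 1. Edit: delete approximately as few vertices as possible.
    deleted = edit_to_class(G)
    H = G.subgraph(set(G.nodes) - deleted)
    # 2. Solve: run the class-specific approximation on the edited graph.
    cover = set(solve_in_class(H))
    # 3. Lift: for Vertex Cover, re-adding the deleted vertices restores
    #    feasibility on the original graph (every removed edge is covered).
    return cover | deleted
```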

    Phase Transitions of the Typical Algorithmic Complexity of the Random Satisfiability Problem Studied with Linear Programming

    Here we study the NP-complete $K$-SAT problem. Although the worst-case complexity of NP-complete problems is conjectured to be exponential, there exist parametrized random ensembles of problems where solutions can typically be found in polynomial time for suitable ranges of the parameter. In fact, random $K$-SAT, with $\alpha=M/N$ as control parameter, can be solved quickly for small enough values of $\alpha$. It shows a phase transition between a satisfiable phase and an unsatisfiable phase. For branch and bound algorithms, which operate in the space of feasible Boolean configurations, the empirically hardest problems are located only close to this phase transition. Here we study $K$-SAT ($K=3,4$) and the related optimization problem MAX-SAT by a linear programming approach, which is widely used for practical problems and allows for polynomial run time. In contrast to branch and bound, it operates outside the space of feasible configurations. On the other hand, finding a solution within polynomial time is not guaranteed. We investigated several variants, like including artificial objective functions, so-called cutting-plane approaches, and a mapping to the NP-complete vertex-cover problem. We observed several easy-hard transitions, from where the problems are typically solvable (in polynomial time) using the given algorithms to where they are not solvable in polynomial time. For the related vertex-cover problem on random graphs these easy-hard transitions can be identified with structural properties of the graphs, like percolation transitions. For the present random $K$-SAT problem we have investigated numerous structural properties, which also exhibit clear transitions, but they appear not to be correlated with the easy-hard transitions observed here. This renders the behaviour of random $K$-SAT more complex than that of, e.g., the vertex-cover problem.
    Comment: 11 pages, 5 figures
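    For concreteness, a minimal sketch (not the paper's implementation, which also includes cutting planes and a vertex-cover mapping) of the basic LP relaxation of $K$-SAT might look as follows, using scipy; clauses are given as lists of signed integers, a DIMACS-like convention assumed here.

```python
import numpy as np
from scipy.optimize import linprog

def sat_lp_relaxation(clauses, n_vars):
    """LP relaxation of SAT: x_i in [0,1], each clause constrained so that
    the total value of its literals is at least 1.  An infeasible LP proves
    the formula unsatisfiable; an integral optimum is a satisfying assignment."""
    A_ub, b_ub = [], []
    for clause in clauses:
        row = np.zeros(n_vars)
        rhs = -1.0
        for lit in clause:
            i = abs(lit) - 1
            if lit > 0:
                row[i] -= 1.0   # positive literal contributes x_i
            else:
                row[i] += 1.0   # negative literal contributes 1 - x_i
                rhs += 1.0
        A_ub.append(row)        # encodes  sum_pos x_i + sum_neg (1 - x_i) >= 1
        b_ub.append(rhs)
    return linprog(np.zeros(n_vars), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                   bounds=[(0.0, 1.0)] * n_vars, method="highs")

# (x1 or x2 or not x3) and (not x1 or x3): prints a feasible relaxed assignment
print(sat_lp_relaxation([[1, 2, -3], [-1, 3]], 3).x)
```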

    Edge-decompositions of graphs with high minimum degree

    A fundamental theorem of Wilson states that, for every graph $F$, every sufficiently large $F$-divisible clique has an $F$-decomposition. Here a graph $G$ is $F$-divisible if $e(F)$ divides $e(G)$ and the greatest common divisor of the degrees of $F$ divides the greatest common divisor of the degrees of $G$, and $G$ has an $F$-decomposition if the edges of $G$ can be covered by edge-disjoint copies of $F$. We extend this result to graphs $G$ which are allowed to be far from complete. In particular, together with a result of Dross, our results imply that every sufficiently large $K_3$-divisible graph of minimum degree at least $9n/10+o(n)$ has a $K_3$-decomposition. This significantly improves previous results towards the long-standing conjecture of Nash-Williams that every sufficiently large $K_3$-divisible graph with minimum degree at least $3n/4$ has a $K_3$-decomposition. We also obtain the asymptotically correct minimum degree thresholds of $2n/3+o(n)$ for the existence of a $C_4$-decomposition, and of $n/2+o(n)$ for the existence of a $C_{2\ell}$-decomposition, where $\ell\ge 3$. Our main contribution is a general `iterative absorption' method which turns an approximate or fractional decomposition into an exact one. In particular, our results imply that in order to prove an asymptotic version of Nash-Williams' conjecture, it suffices to show that every $K_3$-divisible graph with minimum degree at least $3n/4+o(n)$ has an approximate $K_3$-decomposition.
    Comment: 41 pages. This version includes some minor corrections, updates and improvements
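    As a small illustrative check (an assumption of this note, using networkx), the $K_3$-divisibility condition in the Nash-Williams setting above amounts to requiring that 3 divide $e(G)$ and that every vertex degree be even:

```python
import networkx as nx

def is_k3_divisible(G):
    """K_3-divisibility: e(K_3) = 3 divides e(G), and the gcd of the degrees
    of K_3 (namely 2) divides every degree of G, i.e. all degrees are even."""
    return (G.number_of_edges() % 3 == 0
            and all(d % 2 == 0 for _, d in G.degree()))

print(is_k3_divisible(nx.complete_graph(7)))  # K_7: 21 edges, degrees 6 -> True
print(is_k3_divisible(nx.complete_graph(6)))  # K_6: 15 edges, degrees 5 -> False
```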

    Linear Time Subgraph Counting, Graph Degeneracy, and the Chasm at Size Six

    We consider the problem of counting all k-vertex subgraphs in an input graph, for any constant k. This problem (denoted SUB-CNT_k) has been studied extensively in both theory and practice. In a classic result, Chiba and Nishizeki (SICOMP 85) gave linear time algorithms for clique and 4-cycle counting for bounded degeneracy graphs. This is a rich class of sparse graphs that contains, for example, all minor-free families and preferential attachment graphs. The techniques from this result have inspired a number of recent practical algorithms for SUB-CNT_k. Towards a better understanding of the limits of these techniques, we ask: for what values of k can SUB-CNT_k be solved in linear time? We discover a chasm at k=6. Specifically, we prove that for k < 6, SUB-CNT_k can be solved in linear time. Assuming a standard conjecture in fine-grained complexity, we prove that for all k >= 6, SUB-CNT_k cannot be solved even in near-linear time.
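    As a hedged illustration of the kind of degeneracy-ordering technique the abstract refers to (a standard triangle-counting sketch, not code from the paper), assuming networkx for the input graph:

```python
import networkx as nx

def degeneracy_ordering(G):
    """Repeatedly remove a minimum-degree vertex (simple quadratic version)."""
    deg = dict(G.degree())
    order = []
    while deg:
        v = min(deg, key=deg.get)
        order.append(v)
        for u in G[v]:
            if u in deg:
                deg[u] -= 1
        del deg[v]
    return order

def count_triangles(G):
    """Count triangles by charging each one to its earliest vertex in a
    degeneracy ordering; the counting loop costs O(m * degeneracy)."""
    pos = {v: i for i, v in enumerate(degeneracy_ordering(G))}
    later = {v: {u for u in G[v] if pos[u] > pos[v]} for v in G}
    return sum(1 for v in G
                 for u in later[v]
                 for w in later[v]
                 if pos[u] < pos[w] and w in later[u])

print(count_triangles(nx.complete_graph(5)))  # K_5 has 10 triangles
```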

    On the decomposition threshold of a given graph

    We study the $F$-decomposition threshold $\delta_F$ for a given graph $F$. Here an $F$-decomposition of a graph $G$ is a collection of edge-disjoint copies of $F$ in $G$ which together cover every edge of $G$. (Such an $F$-decomposition can only exist if $G$ is $F$-divisible, i.e. if $e(F)\mid e(G)$ and each vertex degree of $G$ can be expressed as a linear combination of the vertex degrees of $F$.) The $F$-decomposition threshold $\delta_F$ is the smallest value ensuring that every $F$-divisible graph $G$ on $n$ vertices with $\delta(G)\ge(\delta_F+o(1))n$ has an $F$-decomposition. Our main results imply the following for a given graph $F$, where $\delta_F^\ast$ is the fractional version of $\delta_F$ and $\chi:=\chi(F)$: (i) $\delta_F\le \max\{\delta_F^\ast,1-1/(\chi+1)\}$; (ii) if $\chi\ge 5$, then $\delta_F\in\{\delta_F^{\ast},1-1/\chi,1-1/(\chi+1)\}$; (iii) we determine $\delta_F$ if $F$ is bipartite. In particular, (i) implies that $\delta_{K_r}=\delta^\ast_{K_r}$. Our proof involves further developments of the recent `iterative' absorbing approach.
    Comment: Final version, to appear in the Journal of Combinatorial Theory, Series
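    The $F$-divisibility condition in the definition above can be checked directly; the sketch below (an illustration assuming networkx, not code from the paper) tests whether $e(F)$ divides $e(G)$ and whether every degree of $G$ is a multiple of the gcd of the degrees of $F$:

```python
from functools import reduce
from math import gcd
import networkx as nx

def is_F_divisible(G, F):
    """e(F) | e(G), and every degree of G is an integer combination of the
    degrees of F, i.e. a multiple of their greatest common divisor."""
    if G.number_of_edges() % F.number_of_edges() != 0:
        return False
    g = reduce(gcd, (d for _, d in F.degree()))
    return all(d % g == 0 for _, d in G.degree())

# C_4-divisibility of K_{4,4}: 16 edges (divisible by 4), all degrees 4 (even)
print(is_F_divisible(nx.complete_bipartite_graph(4, 4), nx.cycle_graph(4)))  # True
```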