
    Cut Size Statistics of Graph Bisection Heuristics

    We investigate the statistical properties of the cut sizes generated by heuristic algorithms that approximately solve the graph bisection problem. On an ensemble of sparse random graphs, we find empirically that the distribution of the cut sizes found by "local" algorithms becomes peaked as the number of vertices in the graphs becomes large. Evidence is given that this distribution tends towards a Gaussian whose mean and variance scale linearly with the number of vertices of the graphs. Given the distribution of cut sizes associated with each heuristic, we provide a ranking procedure that takes into account both the quality of the solutions and the speed of the algorithms. This procedure is demonstrated for a selection of local graph bisection heuristics. Comment: 17 pages, 5 figures; submitted to SIAM Journal on Optimization; also available at http://ipnweb.in2p3.fr/~martin
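
    A minimal sketch of the kind of experiment described above, in Python: run a local bisection heuristic over an ensemble of sparse random graphs, record the cut sizes, and inspect the empirical mean and variance of the resulting distribution. Kernighan-Lin and the G(n, c/n) ensemble are stand-ins chosen for illustration; they are not necessarily the heuristics or the ensemble studied in the paper.

        # Hedged sketch: sample cut sizes found by one local heuristic
        # (Kernighan-Lin, as a stand-in) on sparse random graphs.
        import statistics
        import networkx as nx
        from networkx.algorithms.community import kernighan_lin_bisection

        def cut_size_samples(n, avg_degree, trials, seed=0):
            """Cut sizes found on `trials` independent G(n, avg_degree/n) graphs."""
            sizes = []
            for t in range(trials):
                g = nx.gnp_random_graph(n, avg_degree / n, seed=seed + t)
                a, b = kernighan_lin_bisection(g, seed=seed + t)
                sizes.append(nx.cut_size(g, a, b))
            return sizes

        samples = cut_size_samples(n=200, avg_degree=4, trials=50)
        print(statistics.mean(samples), statistics.pvariance(samples))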

    The minimum bisection in the planted bisection model

    In the planted bisection model a random graph $G(n, p_+, p_-)$ with $n$ vertices is created by partitioning the vertices randomly into two classes of equal size (up to $\pm 1$). Any two vertices that belong to the same class are linked by an edge with probability $p_+$ and any two that belong to different classes with probability $p_- < p_+$, independently. The planted bisection model has been used extensively to benchmark graph partitioning algorithms. If $p_\pm = 2d_\pm/n$ for numbers $0 \leq d_- < d_+$ that remain fixed as $n \to \infty$, then w.h.p. the "planted" bisection (the one used to construct the graph) will not be a minimum bisection. In this paper we derive an asymptotic formula for the minimum bisection width under the assumption that $d_+ - d_- > c\sqrt{d_+ \ln d_+}$ for a certain constant $c > 0$.
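
    The planted bisection model itself is easy to instantiate; the short Python sketch below (parameter values are illustrative only) builds such a graph with networkx's planted_partition_graph and reports the width of the planted cut, which in the sparse regime above is w.h.p. not the minimum bisection width.

        # Hedged sketch of the planted bisection model G(n, p_+, p_-).
        import networkx as nx

        n = 2000                          # total number of vertices
        d_plus, d_minus = 10.0, 2.0       # fixed expected degrees, d_- < d_+
        p_plus, p_minus = 2 * d_plus / n, 2 * d_minus / n

        g = nx.planted_partition_graph(2, n // 2, p_in=p_plus, p_out=p_minus, seed=1)
        left, right = g.graph["partition"]    # the planted classes
        print("planted bisection width:", nx.cut_size(g, left, right))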

    An FPT 2-Approximation for Tree-Cut Decomposition

    The tree-cut width of a graph is a graph parameter defined by Wollan [J. Comb. Theory, Ser. B, 110:47-66, 2015] with the help of tree-cut decompositions. In certain cases, tree-cut width appears to be more adequate than treewidth as an invariant that, when bounded, can accelerate the resolution of intractable problems. While designing algorithms for problems with bounded tree-cut width, it is important to have a parametrically tractable way to compute the exact value of this parameter or, at least, some constant approximation of it. In this paper we give a parameterized 2-approximation algorithm for the computation of tree-cut width; for an input $n$-vertex graph $G$ and an integer $w$, our algorithm either confirms that the tree-cut width of $G$ is more than $w$ or returns a tree-cut decomposition of $G$ certifying that its tree-cut width is at most $2w$, in time $2^{O(w^2 \log w)} \cdot n^2$. Prior to this work, no constructive parameterized algorithms, even approximate ones, existed for computing the tree-cut width of a graph. As a consequence of the Graph Minors series by Robertson and Seymour, only the existence of a decision algorithm was known. Comment: 17 pages, 3 figures

    Significant Subgraph Mining with Multiple Testing Correction

    The problem of finding itemsets that are statistically significantly enriched in a class of transactions is complicated by the need to correct for multiple hypothesis testing. Pruning untestable hypotheses was recently proposed as a strategy for this task of significant itemset mining. It was shown to lead to greater statistical power, i.e., the discovery of more truly significant itemsets, than the standard Bonferroni correction on real-world datasets. An open question, however, is whether this strategy of excluding untestable hypotheses also leads to greater statistical power in subgraph mining, in which the number of hypotheses is much larger than in itemset mining. Here we answer this question by an empirical investigation on eight popular graph benchmark datasets. We propose a new efficient search strategy, which always returns the same solution as the state-of-the-art approach and is approximately two orders of magnitude faster. Moreover, we exploit the dependence between subgraphs by considering the effective number of tests and thereby further increase the statistical power. Comment: 18 pages, 5 figures; accepted to the 2015 SIAM International Conference on Data Mining (SDM15)
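
    The core idea the abstract builds on, pruning hypotheses that can never reach significance, can be sketched in a few lines of Python. The sketch below shows only the generic Tarone-style testability correction for one-sided Fisher tests; it is not the paper's subgraph-mining search strategy.

        # Hedged sketch: minimum attainable p-value and a Tarone-style threshold.
        from scipy.stats import hypergeom

        def min_attainable_pvalue(x, n, n1):
            """Smallest one-sided Fisher p-value a pattern with support x can reach
            when n transactions contain n1 positives (as many occurrences as
            possible fall in the positive class)."""
            a_max = min(x, n1)                        # most enriched contingency table
            return hypergeom.sf(a_max - 1, n, n1, x)  # P(X >= a_max), X ~ Hypergeom(n, n1, x)

        def tarone_threshold(supports, n, n1, alpha=0.05):
            """Threshold alpha/k, with k the smallest integer such that at most k
            patterns remain testable at level alpha/k."""
            k = 1
            while sum(min_attainable_pvalue(x, n, n1) <= alpha / k for x in supports) > k:
                k += 1
            return alpha / k

        supports = [3, 5, 8, 8, 12, 20, 20, 40]       # toy pattern supports
        print(tarone_threshold(supports, n=100, n1=40))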

    On Semidefinite Programming Relaxations of Association Schemes With Application to Combinatorial Optimization Problems

    AMS classification: 90C22, 20Cxx, 70-08. Keywords: traveling salesman problem; maximum bisection; semidefinite programming; association schemes

    On the complexity of computing the $k$-restricted edge-connectivity of a graph

    The \emph{$k$-restricted edge-connectivity} of a graph $G$, denoted by $\lambda_k(G)$, is defined as the minimum size of an edge set whose removal leaves exactly two connected components, each containing at least $k$ vertices. This graph invariant, which can be seen as a generalization of a minimum edge-cut, has been extensively studied from a combinatorial point of view. However, very little is known about the complexity of computing $\lambda_k(G)$. Very recently, the notion of a \emph{good edge separation} of a graph was defined in the parameterized complexity community, and it happens to be essentially the same as the $k$-restricted edge-connectivity. Motivated by the relevance of this invariant from both combinatorial and algorithmic points of view, in this article we initiate a systematic study of its computational complexity, with special emphasis on its parameterized complexity for several choices of the parameters. We provide a number of NP-hardness and W[1]-hardness results, as well as FPT algorithms. Comment: 16 pages, 4 figures
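
    For small graphs the definition can be checked directly; the Python sketch below computes $\lambda_k(G)$ by brute force over all bipartitions (exponential time, illustration only, not one of the article's algorithms).

        # Hedged sketch: brute-force lambda_k(G) straight from the definition.
        from itertools import combinations
        import networkx as nx

        def restricted_edge_connectivity(g, k):
            """Minimum cut over bipartitions (A, B) with both sides connected and of size >= k."""
            nodes = list(g.nodes)
            best = None
            for size in range(k, len(nodes) - k + 1):
                for side in combinations(nodes, size):
                    a, b = set(side), set(nodes) - set(side)
                    if nx.is_connected(g.subgraph(a)) and nx.is_connected(g.subgraph(b)):
                        cut = nx.cut_size(g, a, b)
                        best = cut if best is None else min(best, cut)
            return best   # None if no valid bipartition exists

        print(restricted_edge_connectivity(nx.petersen_graph(), k=3))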