
    Heuristics for Sparsest Cut Approximations in Network Flow Applications

    The Maximum Concurrent Flow Problem (MCFP) is a polynomially bounded problem that has been used over the years in a variety of applications. Sometimes it is used to attempt to find the Sparsest Cut, an NP-hard problem, and at other times, in its hierarchical formulation (the HMCFP), to find communities in Social Network Analysis (SNA). Though polynomially bounded, the MCFP's space requirements grow quickly, rendering it practical only for small problems. When the problem was first defined, only graphs of a few hundred nodes could be solved; a few decades later, graphs of one to two thousand nodes can still be too much for modern commodity hardware to handle. This dissertation covers three heuristic approaches to the MCFP that run significantly faster in practice than the LP formulation while using far less memory. The first two approaches are based on the Maximum Adjacency Search (MAS) and apply to both the MCFP and the HMCFP used for community detection. We compare the three approaches to the LP formulation in terms of accuracy, runtime, and memory utilization on several classes of synthetic graphs representing potential real-world applications. We find that the heuristics are often correct and run using orders of magnitude less memory and time.
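
    Since the first two heuristics build on the Maximum Adjacency Search (MAS), here is a minimal sketch of MAS itself, assuming a weighted adjacency-dict graph representation; the function name and graph format are illustrative, and the dissertation's heuristics are not reproduced.

        # Minimal sketch of Maximum Adjacency Search (MAS): repeatedly pick the
        # vertex most strongly attached (by total edge weight) to the set chosen
        # so far. Graph format and function name are illustrative assumptions.
        from collections import defaultdict
        import heapq

        def maximum_adjacency_search(adj, start):
            """adj: dict vertex -> {neighbor: weight}. Returns vertices in MAS order."""
            order, in_order = [start], {start}
            attach = defaultdict(float)        # attachment weight to the chosen set
            heap = []
            for v, w in adj[start].items():
                attach[v] += w
                heapq.heappush(heap, (-attach[v], v))
            while len(order) < len(adj):
                while heap:
                    neg_w, v = heapq.heappop(heap)
                    if v not in in_order and -neg_w == attach[v]:
                        break                  # fresh entry for an unchosen vertex
                else:
                    break                      # disconnected: no reachable vertex left
                order.append(v)
                in_order.add(v)
                for u, w in adj[v].items():
                    if u not in in_order:
                        attach[u] += w
                        heapq.heappush(heap, (-attach[u], u))
            return order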

    On Brambles, Grid-Like Minors, and Parameterized Intractability of Monadic Second-Order Logic

    Brambles were introduced as the dual notion to treewidth, one of the most central concepts in the graph minor theory of Robertson and Seymour. Recently, Grohe and Marx showed that there are graphs G in which every bramble of order larger than the square root of the treewidth has size exponential in |G|. On the positive side, they showed the existence of polynomial-sized brambles of order roughly the square root of the treewidth, up to log factors. We provide the first polynomial-time algorithm that constructs a bramble in general graphs and achieves this bound, up to log factors. We use this algorithm to construct grid-like minors, a replacement structure for grid minors recently introduced by Reed and Wood, in polynomial time. Using the grid-like minors, we introduce the notion of a perfect bramble and an algorithm to find one in polynomial time. Perfect brambles are brambles with a particularly simple structure, and they also provide us with a subgraph that has bounded degree and still large treewidth; we use them to obtain a meta-theorem on deciding certain parameterized subgraph-closed problems on general graphs in time singly exponential in the parameter. The second part of our work provides a lower bound complementing Courcelle's famous theorem, which states that every graph property expressible by a sentence in monadic second-order logic (MSO) can be decided by a linear-time algorithm on classes of graphs of bounded treewidth. Using our results from the first part, we establish a strong lower bound for the tractability of MSO on classes of colored graphs.
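
    For context, the standard definitions behind the abstract, recalled here in LaTeX (not copied from the paper): a bramble is a family of pairwise touching connected vertex sets, its order is the minimum hitting-set size, and Seymour-Thomas duality relates the maximum bramble order to treewidth.

        % Standard definitions, recalled for context; not taken from the paper itself.
        \[
          \mathcal{B} \subseteq 2^{V(G)} \text{ is a bramble} \iff
          \text{each } B \in \mathcal{B} \text{ induces a connected subgraph and any two } B_1, B_2 \in \mathcal{B} \text{ touch,}
        \]
        \[
          \operatorname{order}(\mathcal{B}) = \min\bigl\{ |H| : H \subseteq V(G),\ H \cap B \neq \emptyset \text{ for all } B \in \mathcal{B} \bigr\},
          \qquad
          \max_{\mathcal{B}} \operatorname{order}(\mathcal{B}) = \operatorname{tw}(G) + 1.
        \]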

    Combinatorial Fiedler Theory and Graph Partition

    Partition problems in graphs are extremely important in applications, as shown in the data science and machine learning literature. One approach is spectral partitioning based on a Fiedler vector, i.e., an eigenvector corresponding to the second smallest eigenvalue $a(G)$ of the Laplacian matrix $L_G$ of the graph $G$. This problem corresponds to the minimization of a quadratic form associated with $L_G$, under certain constraints involving the $\ell_2$-norm. We introduce and investigate a similar problem, but using the $\ell_1$-norm to measure distances. This leads to a new parameter $b(G)$ as the optimal value. We show that a well-known cut problem arises in this approach, namely the sparsest cut problem. We prove connectivity results and different bounds on this new parameter, relate it to Fiedler theory, and give explicit expressions for $b(G)$ for trees. We also comment on an $\ell_\infty$-norm version of the problem.
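
    As a point of reference for the classical $\ell_2$ approach the abstract starts from, here is a minimal spectral-bisection sketch: compute a Fiedler vector of the graph Laplacian and split vertices by its sign. It is illustrative only and does not implement the paper's $\ell_1$-based parameter $b(G)$.

        # Classical l2 spectral bisection via the Fiedler vector (illustrative only;
        # the paper's l1-based parameter b(G) is not implemented here).
        import numpy as np
        import networkx as nx

        def fiedler_partition(G):
            """Split the nodes of a connected graph G by the sign of a Fiedler vector."""
            nodes = list(G.nodes)
            L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
            _, eigvecs = np.linalg.eigh(L)     # eigenvalues come back in ascending order
            fiedler = eigvecs[:, 1]            # eigenvector for a(G), the second smallest
            side_a = {v for v, x in zip(nodes, fiedler) if x >= 0}
            return side_a, set(nodes) - side_a

        # Example: a 6-vertex path splits into its two halves.
        # print(fiedler_partition(nx.path_graph(6)))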

    Integrality gaps of semidefinite programs for Vertex Cover and relations to $\ell_1$ embeddability of Negative Type metrics

    We study various SDP formulations for {\sc Vertex Cover} by adding different constraints to the standard formulation. We show that {\sc Vertex Cover} cannot be approximated better than $2-o(1)$ even when we add the so-called pentagonal inequality constraints to the standard SDP formulation, en route answering an open question of Karakostas~\cite{Karakostas}. We further show the surprising fact that by strengthening the SDP with the (intractable) requirement that the metric interpretation of the solution is an $\ell_1$ metric, we get an exact relaxation (integrality gap 1), whereas if the solution is merely arbitrarily close to being $\ell_1$ embeddable, the integrality gap may be as big as $2-o(1)$. Finally, inspired by the above findings, we use ideas from the integrality gap construction of Charikar \cite{Char02} to provide a family of simple examples of negative type metrics that cannot be embedded into $\ell_1$ with distortion better than $8/7-\epsilon$. To this end we prove a new isoperimetric inequality for the hypercube.
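
    For orientation, a sketch in LaTeX of the standard Vertex Cover SDP relaxation that such strengthenings start from (a commonly used Kleinberg-Goemans-style formulation, assumed here rather than quoted from the paper); the pentagonal and $\ell_1$-type conditions discussed above are additional constraints on the vectors $v_i$.

        % Standard Vertex Cover SDP relaxation, shown for context only.
        \begin{align*}
          \min \quad & \sum_{i \in V} \frac{1 + v_0 \cdot v_i}{2} \\
          \text{s.t.} \quad & (v_0 - v_i) \cdot (v_0 - v_j) = 0 && \forall \{i,j\} \in E, \\
                      & \|v_i\|^2 = 1 && \forall i \in V \cup \{0\}.
        \end{align*}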

    Connectivity Oracles for Graphs Subject to Vertex Failures

    We introduce new data structures for answering connectivity queries in graphs subject to batched vertex failures. A deterministic structure processes a batch of $d \leq d_\star$ failed vertices in $\tilde{O}(d^3)$ time and thereafter answers connectivity queries in $O(d)$ time. It occupies space $O(d_\star m \log n)$. We develop a randomized Monte Carlo version of our data structure with update time $\tilde{O}(d^2)$, query time $O(d)$, and space $\tilde{O}(m)$ for any failure bound $d \le n$. This is the first connectivity oracle for general graphs that can efficiently deal with an unbounded number of vertex failures. We also develop a more efficient Monte Carlo edge-failure connectivity oracle. Using space $O(n \log^2 n)$, $d$ edge failures are processed in $O(d \log d \log\log n)$ time and thereafter connectivity queries are answered in $O(\log\log n)$ time, correct w.h.p. Our data structures are based on a new decomposition theorem for an undirected graph $G = (V, E)$, which is of independent interest. It states that for any terminal set $U \subseteq V$ we can remove a set $B$ of $|U|/(s-2)$ vertices such that the remaining graph contains a Steiner forest for $U - B$ with maximum degree $s$.
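
    To make the interface concrete, here is a naive baseline, assuming a simple edge-list input: it recomputes connected components for each batch of failed vertices and then answers queries. It is not the paper's oracle; it only pins down the query semantics, at O(n + m) time per batch.

        # Naive baseline for connectivity under batched vertex failures
        # (NOT the paper's data structure; recomputes components per batch).
        from collections import deque

        class NaiveVertexFailureOracle:
            def __init__(self, n, edges):
                self.adj = [[] for _ in range(n)]
                for u, v in edges:
                    self.adj[u].append(v)
                    self.adj[v].append(u)
                self.comp = []
                self.update([])                 # label components of the intact graph

            def update(self, failed):
                """Process a batch of failed vertices by relabeling components via BFS."""
                failed = set(failed)
                self.comp = [-1] * len(self.adj)
                label = 0
                for s in range(len(self.adj)):
                    if s in failed or self.comp[s] != -1:
                        continue
                    queue = deque([s])
                    self.comp[s] = label
                    while queue:
                        u = queue.popleft()
                        for v in self.adj[u]:
                            if v not in failed and self.comp[v] == -1:
                                self.comp[v] = label
                                queue.append(v)
                    label += 1

            def connected(self, u, v):
                """Are u and v connected in G minus the current failed batch?"""
                return self.comp[u] != -1 and self.comp[u] == self.comp[v]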

    On the Parameterized Complexity of Sparsest Cut and Small-set Expansion Problems

    We study the NP-hard \textsc{$k$-Sparsest Cut} problem ($k$SC) in which, given an undirected graph $G = (V, E)$ and a parameter $k$, the objective is to partition the vertex set into $k$ subsets whose maximum edge expansion is minimized. Herein, the edge expansion of a subset $S \subseteq V$ is defined as the sum of the weights of the edges exiting $S$ divided by the number of vertices in $S$. Another problem that has been investigated is the \textsc{$k$-Small-Set Expansion} problem ($k$SSE), which aims to find a subset with minimum edge expansion subject to a restriction on the size of the subset. We extend previous studies on $k$SC and $k$SSE by inspecting their parameterized complexity. On the positive side, we present two FPT algorithms for both $k$SSE and 2SC: the first is parameterized by the treewidth of the input graph and uses exponential space, and the second is parameterized by the vertex cover number of the input graph and uses polynomial space. Moreover, for the unweighted version of $k$SC with fixed $k \geq 2$, we propose two FPT algorithms parameterized by the treewidth and by the vertex cover number of the input graph. We also propose a randomized FPT algorithm for $k$SSE parameterized by $k$ and the maximum degree of the input graph combined, and show that it can be derandomized efficiently. On the negative side, we first prove that for all fixed integers $k, \tau \geq 3$, the problem $k$SC is NP-hard for graphs with vertex cover number at most $\tau$. We also show that $k$SC is W[1]-hard when parameterized by the treewidth of the input graph and the number $k$ of components combined, using a reduction from \textsc{Unary Bin Packing}. Furthermore, we prove that $k$SC remains NP-hard for graphs with maximum degree three and also for graphs with degeneracy two. Finally, we prove that the unweighted $k$SSE is W[1]-hard for the parameter $k$.
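
    The edge-expansion definition above translates directly into code; the following sketch, with an illustrative edge-list format, computes it for a given subset.

        # Direct transcription of the abstract's definition:
        # expansion(S) = (total weight of edges leaving S) / |S|.
        def edge_expansion(edges, S):
            """edges: iterable of (u, v, weight); S: non-empty set of vertices."""
            S = set(S)
            if not S:
                raise ValueError("S must be non-empty")
            cut_weight = sum(w for u, v, w in edges if (u in S) != (v in S))
            return cut_weight / len(S)

        # Example: unit-weight 4-cycle; S = {0, 1} has two exiting edges.
        edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
        print(edge_expansion(edges, {0, 1}))    # 2 / 2 = 1.0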

    Compression bounds for Lipschitz maps from the Heisenberg group to $L_1$

    We prove a quantitative bi-Lipschitz nonembedding theorem for the Heisenberg group with its Carnot-Carathéodory metric and apply it to give a lower bound on the integrality gap of the Goemans-Linial semidefinite relaxation of the Sparsest Cut problem.
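
    For context, a sketch in LaTeX of the Goemans-Linial semidefinite relaxation of Sparsest Cut in a commonly stated form with capacities c and demands D (an assumed standard form, not quoted from the paper): one optimizes over vectors whose squared Euclidean distances form a negative type metric.

        % Goemans--Linial relaxation of Sparsest Cut (standard form, for context only).
        \begin{align*}
          \min \quad & \sum_{u,v} c(u,v)\,\|x_u - x_v\|^2 \\
          \text{s.t.} \quad & \sum_{u,v} D(u,v)\,\|x_u - x_v\|^2 = 1, \\
                      & \|x_u - x_v\|^2 \le \|x_u - x_w\|^2 + \|x_w - x_v\|^2 && \forall\, u, v, w \in V.
        \end{align*}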