
    A New Regularity Lemma and Faster Approximation Algorithms for Low Threshold Rank Graphs

    Kolla and Tulsiani [KT07, Kolla11] and Arora, Barak and Steurer [ABS10] introduced the technique of subspace enumeration, which gives approximation algorithms for graph problems such as unique games and small set expansion; the running time of such algorithms is exponential in the threshold-rank of the graph. Guruswami and Sinop [GS11, GS12], and Barak, Raghavendra, and Steurer [BRS11] developed an alternative approach to the design of approximation algorithms for graphs of bounded threshold-rank, based on semidefinite programming relaxations in the Lasserre hierarchy and on novel rounding techniques. These algorithms are faster than the ones based on subspace enumeration and work on a broad class of problems. In this paper we develop a third approach to the design of such algorithms. We show, constructively, that graphs of bounded threshold-rank satisfy a weak Szemerédi regularity lemma analogous to the one proved by Frieze and Kannan [FK99] for dense graphs. The existence of efficient approximation algorithms is then a consequence of the regularity lemma, as shown by Frieze and Kannan. Applying our method to the Max Cut problem, we devise an algorithm that is faster than all previous algorithms and is easier to describe and analyze.
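
    The running times above are exponential in the threshold-rank parameter. As a concrete illustration, here is a minimal sketch (not from the paper) of one common variant of that parameter: the number of eigenvalues of the normalized adjacency matrix D^{-1/2} A D^{-1/2} that are at least a threshold tau. The function name, the choice of normalization, and the example graph are illustrative assumptions.

```python
# Minimal sketch: threshold rank of a graph at threshold tau, taken here to be
# the number of eigenvalues of D^{-1/2} A D^{-1/2} that are >= tau.
import numpy as np

def threshold_rank(adj: np.ndarray, tau: float = 0.9) -> int:
    """Count eigenvalues of the normalized adjacency matrix that are >= tau."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = 1.0 / np.sqrt(deg[nz])
    norm_adj = (adj * d_inv_sqrt).T * d_inv_sqrt   # D^{-1/2} A D^{-1/2}, symmetric
    eigvals = np.linalg.eigvalsh(norm_adj)
    return int(np.sum(eigvals >= tau))

# Example: an 8-cycle has threshold rank 1 at tau = 0.9
# (its normalized adjacency eigenvalues are cos(2*pi*k/8)).
A = np.zeros((8, 8))
for i in range(8):
    A[i, (i + 1) % 8] = A[(i + 1) % 8, i] = 1.0
print(threshold_rank(A, tau=0.9))
```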

    Combining spectral sequencing and parallel simulated annealing for the MinLA problem

    In this paper we present and analyze new sequential and parallel heuristics to approximate the Minimum Linear Arrangement problem (MinLA). The heuristics consist in obtaining a first global solution using Spectral Sequencing and improving it locally through Simulated Annealing. In order to accelerate the annealing process, we present a special neighborhood distribution that tends to favor moves with high probability to be accepted. We show how to make use of this neighborhood to parallelize the Metropolis stage on distributed memory machines by mapping partitions of the input graph to processors and performing moves concurrently. The paper reports the results obtained with this new heuristic when applied to a set of large graphs, including graphs arising from finite elements methods and graphs arising from VLSI applications. Compared to other heuristics, the measurements obtained show that the new heuristic improves the solution quality, decreases the running time and offers an excellent speedup when ran on a commodity network made of nine personal computers.Postprint (published version
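
    The two-phase structure described above can be sketched in a few lines. The following is a minimal sequential sketch, not the paper's parallel implementation: the function names, the swap neighborhood, and the cooling schedule are illustrative choices rather than the authors' settings.

```python
# Minimal sketch: spectral-sequencing initial layout + simulated-annealing refinement
# for MinLA. adj_list maps each vertex 0..n-1 to its neighbors (both directions stored).
import math
import random
import numpy as np

def minla_cost(adj_list, pos):
    """Sum of |pos[u] - pos[v]| over all edges (each edge counted once via u < v)."""
    return sum(abs(pos[u] - pos[v]) for u in adj_list for v in adj_list[u] if u < v)

def spectral_layout(adj_list, n):
    """Order vertices by the Fiedler vector of the graph Laplacian."""
    A = np.zeros((n, n))
    for u in adj_list:
        for v in adj_list[u]:
            A[u, v] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)
    order = np.argsort(vecs[:, 1])           # second eigenvector = Fiedler vector
    return {int(v): i for i, v in enumerate(order)}

def anneal(adj_list, pos, iters=20000, t0=2.0, alpha=0.9995):
    """Refine a layout by swapping positions of two vertices under the Metropolis rule."""
    verts = list(pos)
    cost = minla_cost(adj_list, pos)
    t = t0
    for _ in range(iters):
        u, v = random.sample(verts, 2)
        pos[u], pos[v] = pos[v], pos[u]
        new_cost = minla_cost(adj_list, pos)  # recomputed from scratch; a real
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                   # implementation would update incrementally
        else:
            pos[u], pos[v] = pos[v], pos[u]   # reject: undo the swap
        t *= alpha
    return pos, cost
```

    The paper's parallel version additionally maps partitions of the graph to processors and performs moves concurrently; the sketch above is purely sequential.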

    Partitioning problems in dense hypergraphs

    We study the general partitioning problem and the discrepancy problem in dense hypergraphs. Using the regularity lemma (Szemerédi, Problèmes Combinatoires et Théorie des Graphes (1978), pp. 399–402) and its algorithmic version proved in Czygrinow and Rödl (SIAM J. Comput., to appear), we give polynomial-time approximation schemes for the general partitioning problem and for the discrepancy problem.

    Streaming Lower Bounds for Approximating MAX-CUT

    We consider the problem of estimating the value of max cut in a graph in the streaming model of computation. At one extreme, there is a trivial 2-approximation for this problem that uses only O(log n) space, namely, count the number of edges and output half of this value as the estimate for the max cut value. At the other extreme, if one allows Õ(n) space, then a near-optimal solution to the max cut value can be obtained by storing an Õ(n)-size sparsifier that essentially preserves the max cut. An intriguing question is whether poly-logarithmic space suffices to obtain a non-trivial approximation to the max cut value (that is, beating the factor 2). It was recently shown that the problem of estimating the size of a maximum matching in a graph admits a non-trivial approximation in poly-logarithmic space. Our main result is that any streaming algorithm that breaks the 2-approximation barrier requires Ω̃(√n) space even if the edges of the input graph are presented in random order. Our result is obtained by exhibiting a distribution over graphs which are either bipartite or 1/2-far from being bipartite, and establishing that Ω̃(√n) space is necessary to differentiate between these two cases. Thus as a direct corollary we obtain that Ω̃(√n) space is also necessary to test if a graph is bipartite or 1/2-far from being bipartite. We also show that for any ε > 0, any streaming algorithm that obtains a (1+ε)-approximation to the max cut value when edges arrive in adversarial order requires n^{1-O(ε)} space, implying that Ω(n) space is necessary to obtain an arbitrarily good approximation to the max cut value.
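
    The trivial O(log n)-space 2-approximation mentioned at the start of the abstract can be spelled out directly; the sketch below is ours, but it implements exactly the "count the edges and output half" estimator described above.

```python
# Stream over the edges, keep only a counter, and report half the edge count.
# Every graph has a cut containing at least half of its edges, and no cut can
# exceed all of them, so m/2 <= maxcut <= m: a factor-2 estimate in O(log n) bits.
def streaming_maxcut_estimate(edge_stream):
    m = 0
    for _ in edge_stream:   # one pass; only the counter is stored
        m += 1
    return m / 2

# Example: a 4-cycle has 4 edges and max cut value 4; the estimate is 2.
print(streaming_maxcut_estimate(iter([(0, 1), (1, 2), (2, 3), (3, 0)])))
```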

    Graph removal lemmas

    The graph removal lemma states that any graph on n vertices with o(n^{v(H)}) copies of a fixed graph H may be made H-free by removing o(n^2) edges. Despite its innocent appearance, this lemma and its extensions have several important consequences in number theory, discrete geometry, graph theory and computer science. In this survey we discuss these lemmas, focusing in particular on recent improvements to their quantitative aspects.

    On the Complexity of Newman's Community Finding Approach for Biological and Social Networks

    Given a graph of interactions, a module (also called a community or cluster) is a subset of nodes whose fitness is a function of the statistical significance of the pairwise interactions of nodes in the module. The topic of this paper is a model-based community finding approach, commonly referred to as modularity clustering, that was originally proposed by Newman and has subsequently been extremely popular in practice. Various heuristic methods are currently employed for finding the optimal solution. However, the exact computational complexity of this approach is still largely unknown. To this end, we initiate a systematic study of the computational complexity of modularity clustering. Due to the specific quadratic nature of the modularity function, it is necessary to study its value on sparse graphs and dense graphs separately. Our main results include a (1+ε)-inapproximability for dense graphs and a logarithmic approximation for sparse graphs. We make use of several combinatorial properties of modularity to obtain these results. These are the first non-trivial approximability results beyond the previously known NP-hardness results.
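
    For reference, the objective being optimized is the standard Newman modularity Q = (1/2m) Σ_{ij} [A_ij − k_i k_j / (2m)] · [c_i = c_j]; the abstract does not restate it, so the sketch below (our own, with illustrative names) is only meant to make the "quadratic nature of the modularity function" concrete.

```python
# Minimal sketch: Newman modularity of a community assignment on an undirected graph.
import numpy as np

def modularity(adj: np.ndarray, communities: list[int]) -> float:
    two_m = adj.sum()                       # 2m for an undirected adjacency matrix
    k = adj.sum(axis=1)                     # vertex degrees
    c = np.asarray(communities)
    same = (c[:, None] == c[None, :])       # indicator that i and j share a community
    return float(((adj - np.outer(k, k) / two_m) * same).sum() / two_m)

# Example: two disjoint triangles, each its own community, give Q = 0.5.
A = np.zeros((6, 6))
for (u, v) in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[u, v] = A[v, u] = 1.0
print(modularity(A, [0, 0, 0, 1, 1, 1]))
```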

    The algebraic structure of the densification and the sparsification tasks for CSPs

    The tractability of certain CSPs for dense or sparse instances has been known since the 90s. Recently, the densification and the sparsification of CSPs were formulated as computational tasks and the systematic study of their computational complexity was initiated. We approach this problem by introducing the densification operator, i.e. the closure operator that, given an instance of a CSP, outputs all constraints that are satisfied by all of its solutions. According to the Galois theory of closure operators, any such operator is related to a certain implicational system (or, a functional dependency) Σ. We are specifically interested in those classes of fixed-template CSPs, parameterized by constraint languages Γ, for which the size of an implicational system Σ is polynomial in the number of variables n. We show that in the Boolean case, Σ is of polynomial size if and only if Γ is of bounded width. For such languages, Σ can be computed in log-space or in logarithmic time with a polynomial number of processors. Given an implicational system Σ, the densification task is equivalent to the computation of the closure of the input constraints. The sparsification task is equivalent to the computation of the minimal key. This leads to an O(poly(n) · N^2) algorithm for the sparsification task, where N is the number of non-redundant sparsifications of the original CSP. Finally, we give a complete classification of constraint languages over the Boolean domain for which the densification problem is tractable.
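
    Since the densification task is stated to be equivalent to computing a closure under an implicational system Σ, the standard forward-chaining closure computation (familiar from functional dependencies) gives a concrete picture; the sketch below is illustrative and not taken from the paper.

```python
# Minimal sketch: closure of a set of constraints under an implicational system.
# sigma is a list of rules (frozenset_of_premises, conclusion); we repeatedly fire
# every rule whose premises are already derived until a fixed point is reached.
def closure(constraints, sigma):
    derived = set(constraints)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in sigma:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Example: from {a} and the rules a -> b and {a, b} -> c we derive {a, b, c}.
sigma = [(frozenset({"a"}), "b"), (frozenset({"a", "b"}), "c")]
print(closure({"a"}, sigma))
```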

    Limits of Ordered Graphs and their Applications

    The emerging theory of graph limits exhibits an analytic perspective on graphs, showing that many important concepts and tools in graph theory and its applications can be described more naturally (and sometimes proved more easily) in analytic language. We extend the theory of graph limits to the ordered setting, presenting a limit object for dense vertex-ordered graphs, which we call an orderon. As a special case, this yields limit objects for matrices whose rows and columns are ordered, and for dynamic graphs that expand (via vertex insertions) over time. Along the way, we devise an ordered locality-preserving variant of the cut distance between ordered graphs, showing that two graphs are close with respect to this distance if and only if they are similar in terms of their ordered subgraph frequencies. We show that the space of orderons is compact with respect to this distance notion, which is key to a successful analysis of combinatorial objects through their limits. We derive several applications of the ordered limit theory in extremal combinatorics, sampling, and property testing in ordered graphs. In particular, we prove a new ordered analogue of the well-known result by Alon and Stav [RS&A'08] on the furthest graph from a hereditary property; this is the first known result of this type in the ordered setting. Unlike the unordered regime, here the random graph model G(n, p) with an ordering over the vertices is not always asymptotically the furthest from the property for some p. However, using our ordered limit theory, we show that random graphs generated by a stochastic block model, where the blocks are consecutive in the vertex ordering, are (approximately) the furthest. Additionally, we describe an alternative analytic proof of the ordered graph removal lemma [Alon et al., FOCS'17].
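
    To make the notion of "ordered subgraph frequencies" in the cut-distance characterization concrete, here is a minimal sampling sketch (ours, not from the paper): sample k vertices, keep them in their vertex order, and check whether the induced ordered subgraph matches a target pattern.

```python
# Minimal sketch: estimate the frequency of an ordered pattern H (a k x k 0/1
# adjacency matrix on positions 0..k-1) among order-preserving k-vertex samples.
import random
import numpy as np

def ordered_density(adj: np.ndarray, H: np.ndarray, samples: int = 20000) -> float:
    n, k = adj.shape[0], H.shape[0]
    hits = 0
    for _ in range(samples):
        S = sorted(random.sample(range(n), k))      # order-preserving sample
        if np.array_equal(adj[np.ix_(S, S)], H):
            hits += 1
    return hits / samples

# Example: frequency of "the two sampled vertices are adjacent" (k = 2)
# in a fixed 4-vertex ordered graph; here 4 of the 6 ordered pairs are edges.
G = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
H = np.array([[0, 1],
              [1, 0]])
print(ordered_density(G, H, samples=5000))
```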