
    Simple parallel and distributed algorithms for spectral graph sparsification

    We describe a simple algorithm for spectral graph sparsification, based on iterative computations of weighted spanners and uniform sampling. Leveraging the algorithms of Baswana and Sen for computing spanners, we obtain the first distributed spectral sparsification algorithm. We also obtain a parallel algorithm with improved work and time guarantees. Combining this algorithm with the parallel framework of Peng and Spielman for solving symmetric diagonally dominant linear systems, we get a parallel solver which is much closer to being practical and significantly more efficient in terms of the total work.
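
    To make the recipe above concrete, the following single-machine Python sketch runs the spanner-plus-uniform-sampling loop using networkx's Baswana-Sen spanner routine. The number of rounds, spanners per round, and sampling rate are illustrative placeholders, not the values required by the paper's analysis for a spectral guarantee.

```python
import random
import networkx as nx

def sparsify_by_spanners(G, rounds=3, num_spanners=2, keep_prob=0.5, seed=0):
    """Illustrative sketch: keep a few weighted spanners, uniformly sample the rest.

    rounds, num_spanners, and keep_prob are placeholders; the paper's analysis
    dictates the values needed for an actual spectral guarantee.
    """
    rng = random.Random(seed)
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v, d in G.edges(data=True):
        H.add_edge(u, v, weight=d.get("weight", 1.0))

    for _ in range(rounds):
        protected = set()
        for _ in range(num_spanners):
            # networkx's spanner() implements the Baswana-Sen randomized construction.
            S = nx.spanner(H, stretch=3, weight="weight", seed=rng.randint(0, 10**9))
            protected.update(frozenset(e) for e in S.edges())
        nxt = nx.Graph()
        nxt.add_nodes_from(H.nodes())
        for u, v, d in H.edges(data=True):
            if frozenset((u, v)) in protected:
                nxt.add_edge(u, v, weight=d["weight"])              # spanner edges are kept
            elif rng.random() < keep_prob:
                nxt.add_edge(u, v, weight=d["weight"] / keep_prob)  # sampled edges are reweighted
        H = nxt
    return H
```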

    Towards Resistance Sparsifiers

    We study resistance sparsification of graphs, in which the goal is to find a sparse subgraph (with reweighted edges) that approximately preserves the effective resistances between every pair of nodes. We show that every dense regular expander admits a $(1+\epsilon)$-resistance sparsifier of size $\tilde{O}(n/\epsilon)$, and conjecture that this bound holds for all graphs on $n$ nodes. In comparison, spectral sparsification is a strictly stronger notion and requires $\Omega(n/\epsilon^2)$ edges even on the complete graph. Our approach leads to the following structural question on graphs: Does every dense regular expander contain a sparse regular expander as a subgraph? Our main technical contribution, which may be of independent interest, is a positive answer to this question in a certain setting of parameters. Combining this with a recent result of von Luxburg, Radl, and Hein (JMLR, 2014) yields the aforementioned resistance sparsifiers.
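
    As a way to experiment with the definition, the sketch below computes effective resistances from the Laplacian pseudoinverse and measures the worst multiplicative distortion a candidate (reweighted) subgraph incurs; a $(1+\epsilon)$-resistance sparsifier keeps this distortion at most $1+\epsilon$. The helper names and the pinv-based computation are our own illustration, not the paper's construction.

```python
import numpy as np
import networkx as nx

def effective_resistances(G, pairs):
    """R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), via the Laplacian pseudoinverse."""
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    L = nx.laplacian_matrix(G, nodelist=nodes, weight="weight").toarray()
    Lp = np.linalg.pinv(L)
    out = {}
    for u, v in pairs:
        chi = np.zeros(len(nodes))
        chi[idx[u]], chi[idx[v]] = 1.0, -1.0
        out[(u, v)] = float(chi @ Lp @ chi)
    return out

def resistance_distortion(G, H, pairs):
    """Worst multiplicative distortion of effective resistances in H relative to G."""
    RG, RH = effective_resistances(G, pairs), effective_resistances(H, pairs)
    return max(max(RH[p] / RG[p], RG[p] / RH[p]) for p in pairs)
```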

    Quantum Speedup for Graph Sparsification, Cut Approximation and Laplacian Solving

    Graph sparsification underlies a large number of algorithms, ranging from approximation algorithms for cut problems to solvers for linear systems in the graph Laplacian. In its strongest form, "spectral sparsification" reduces the number of edges to near-linear in the number of nodes, while approximately preserving the cut and spectral structure of the graph. In this work we demonstrate a polynomial quantum speedup for spectral sparsification and many of its applications. In particular, we give a quantum algorithm that, given a weighted graph with $n$ nodes and $m$ edges, outputs a classical description of an $\epsilon$-spectral sparsifier in sublinear time $\tilde{O}(\sqrt{mn}/\epsilon)$. This contrasts with the optimal classical complexity $\tilde{O}(m)$. We also prove that our quantum algorithm is optimal up to polylog factors. The algorithm builds on a string of existing results on sparsification, graph spanners, quantum algorithms for shortest paths, and efficient constructions of $k$-wise independent random strings. Our algorithm implies a quantum speedup for solving Laplacian systems and for approximating a range of cut problems such as min cut and sparsest cut.
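
    To make the claimed separation concrete, here is a back-of-the-envelope instantiation of the stated bounds for a dense input, taking $m = \Theta(n^2)$ and constant $\epsilon$ (an illustrative choice, not part of the paper's statement):

```latex
% Quantum sparsification time vs. the optimal classical time on a dense graph,
% with m = Theta(n^2) and epsilon = Theta(1):
\tilde{O}\!\left(\frac{\sqrt{mn}}{\epsilon}\right)
  = \tilde{O}\!\left(\frac{\sqrt{n^2 \cdot n}}{\epsilon}\right)
  = \tilde{O}\!\left(n^{3/2}\right)
  \quad\text{versus}\quad
  \tilde{O}(m) = \tilde{O}(n^2).
```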

    A Matrix Hyperbolic Cosine Algorithm and Applications

    In this paper, we generalize Spencer's hyperbolic cosine algorithm to the matrix-valued setting. We apply the proposed algorithm to several problems by analyzing its computational efficiency under two special cases of matrices: one in which the matrices have a group structure, and another in which they have rank one. As an application of the former case, we present a deterministic algorithm that, given the multiplication table of a finite group of size $n$, constructs an expanding Cayley graph of logarithmic degree in near-optimal $O(n^2 \log^3 n)$ time. For the latter case, we present a fast deterministic algorithm for spectral sparsification of positive semi-definite matrices, which implies an improved deterministic algorithm for spectral graph sparsification of dense graphs. In addition, we give an elementary connection between spectral sparsification of positive semi-definite matrices and element-wise matrix sparsification. As a consequence, we obtain improved element-wise sparsification algorithms for diagonally dominant-like matrices.
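
    The flavor of the matrix-valued potential argument can be conveyed by a small sketch: greedily choose signs for symmetric matrices so as to minimize the trace of the matrix hyperbolic cosine of the running sum. The step parameter gamma and this bare-bones greedy loop are our own simplification, not the paper's full derandomized sparsification procedure.

```python
import numpy as np
from scipy.linalg import coshm

def balance_signs(mats, gamma=1.0):
    """Greedy matrix hyperbolic-cosine balancing (illustrative sketch).

    Given real symmetric matrices A_1..A_k, pick signs s_i in {+1, -1} so that
    the signed sum stays spectrally small, by greedily minimizing the potential
    trace(cosh(gamma * partial_sum)) at every step.
    """
    S = np.zeros_like(mats[0])
    signs = []
    for A in mats:
        plus = np.trace(coshm(gamma * (S + A)))    # potential if we choose +1
        minus = np.trace(coshm(gamma * (S - A)))   # potential if we choose -1
        s = 1 if plus <= minus else -1
        S = S + s * A
        signs.append(s)
    return signs, S
```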

    On Fully Dynamic Graph Sparsifiers

    We initiate the study of dynamic algorithms for graph sparsification problems and obtain fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph. Our three main results are as follows. First, we give a fully dynamic algorithm for maintaining a $(1 \pm \epsilon)$-spectral sparsifier with amortized update time $\mathrm{poly}(\log n, \epsilon^{-1})$. Second, we give a fully dynamic algorithm for maintaining a $(1 \pm \epsilon)$-cut sparsifier with worst-case update time $\mathrm{poly}(\log n, \epsilon^{-1})$. Both sparsifiers have size $n \cdot \mathrm{poly}(\log n, \epsilon^{-1})$. Third, we apply our dynamic sparsifier algorithm to obtain a fully dynamic algorithm for maintaining a $(1 + \epsilon)$-approximation to the value of the maximum flow in an unweighted, undirected, bipartite graph with amortized update time $\mathrm{poly}(\log n, \epsilon^{-1})$.
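
    For reference, the naive baseline below illustrates the fully dynamic interface (insert an edge, delete an edge, query the current sparsifier) by rebuilding from scratch after every update; the class and its placeholder rebuild step are purely illustrative, and the point of the results above is to replace this per-update rebuild with poly(log n, 1/eps) update time.

```python
import networkx as nx

class RebuildSparsifier:
    """Naive baseline for the fully dynamic interface (illustrative only).

    It rebuilds a sparsifier from scratch after every edge insertion or deletion,
    i.e. it spends far more than polylogarithmic time per update.
    """

    def __init__(self, eps=0.1):
        self.eps = eps
        self.G = nx.Graph()
        self.H = nx.Graph()   # current sparsifier

    def insert_edge(self, u, v, w=1.0):
        self.G.add_edge(u, v, weight=w)
        self._rebuild()

    def delete_edge(self, u, v):
        self.G.remove_edge(u, v)
        self._rebuild()

    def sparsifier(self):
        return self.H

    def _rebuild(self):
        # Placeholder: any static (1 +/- eps) spectral or cut sparsifier
        # (e.g. effective-resistance sampling) would be plugged in here.
        self.H = self.G.copy()
```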

    The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure

    Graph learning methods help utilize implicit relationships among data items, thereby reducing training label requirements and improving task performance. However, determining the optimal graph structure for a particular learning task remains a challenging research problem. In this work, we introduce the Graph Lottery Ticket (GLT) Hypothesis: every graph has an extremely sparse backbone, and graph learning algorithms attain comparable performance when trained on that subgraph as on the full graph. We identify and systematically study 8 key metrics of interest that directly influence the performance of graph learning algorithms. Subsequently, we define the notion of a "winning ticket" for graph structure: an extremely sparse subset of edges that can deliver a robust approximation of the entire graph's performance. We propose a straightforward and efficient algorithm for finding these GLTs in arbitrary graphs. Empirically, we observe that the performance of different graph learning algorithms can be matched or even exceeded on graphs with average degree as low as 5.
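
    A minimal sketch of the pruning step, under an assumed stand-in edge score (the paper's own metrics are not reproduced here): rank the edges and keep the top-scoring ones until a target average degree is reached.

```python
import networkx as nx

def prune_to_average_degree(G, target_avg_degree=5, score=None):
    """Keep the highest-scoring edges until the average degree hits the target.

    The default scoring function is a stand-in (it favours edges touching
    low-degree nodes so no node is starved); plug in whichever metric you
    actually want to evaluate.
    """
    if score is None:
        deg = dict(G.degree())
        score = lambda u, v: -min(deg[u], deg[v])
    k = int(target_avg_degree * G.number_of_nodes() / 2)   # avg degree = 2m / n
    ranked = sorted(G.edges(), key=lambda e: score(*e), reverse=True)
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from(ranked[:k])
    return H
```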

    On Constructing Spanners from Random Gaussian Projections

    Graph sketching is a powerful paradigm for analyzing graph structure via linear measurements, introduced by Ahn, Guha, and McGregor (SODA'12), that has since found numerous applications in streaming, distributed computing, and massively parallel algorithms, among others. Graph sketching has proven quite successful for problems such as connectivity, minimum spanning trees, edge or vertex connectivity, and cut or spectral sparsifiers. Yet the problem of approximating the shortest-path metric of a graph, and specifically computing a spanner, is notably missing from this list of successes. This has made this fundamental problem one of the longest-standing open questions in the area. We present a partial explanation for this lack of success by proving a strong lower bound for a large family of graph sketching algorithms that encompasses prior work on spanners and many (but importantly not all) of the related cut-based problems mentioned above. Our lower bound matches the algorithmic bounds of the recent result of Filtser, Kapralov, and Nouri (SODA'21), up to lower-order terms, for constructing spanners via the same graph sketching family. This establishes near-optimality of these bounds, at least restricted to this family of graph sketching techniques, and makes progress on a conjecture posed in the latter work.
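
    To illustrate what linear measurements of a graph look like, the toy example below represents a graph by its signed node-pair incidence matrix and applies one shared random Gaussian projection; edge insertions, deletions, and unions then become additions and subtractions of sketches. The dimensions and the use of plain Gaussian rows are illustrative; the sketches studied in this line of work are more structured.

```python
import numpy as np
from itertools import combinations

def node_incidence(n, edges, pair_index):
    """n x C(n,2) signed incidence matrix: row v holds +/-1 for pairs incident to v."""
    A = np.zeros((n, len(pair_index)))
    for u, v in edges:
        j = pair_index[(min(u, v), max(u, v))]
        A[u, j], A[v, j] = 1.0, -1.0
    return A

n, k = 6, 8                                   # k linear measurements per node (illustrative)
pairs = list(combinations(range(n), 2))
pair_index = {p: j for j, p in enumerate(pairs)}
rng = np.random.default_rng(0)
S = rng.standard_normal((len(pairs), k))      # one shared random projection

sketch = lambda edges: node_incidence(n, edges, pair_index) @ S   # n x k sketch

G1 = [(0, 1), (1, 2), (2, 3)]
G2 = [(3, 4), (4, 5)]
# Linearity: the sketch of an edge-disjoint union is the sum of the sketches,
# so edge insertions and deletions in a stream become additions and subtractions.
assert np.allclose(sketch(G1 + G2), sketch(G1) + sketch(G2))
```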