
    An Efficient Parallel Algorithm for Spectral Sparsification of Laplacian and SDDM Matrix Polynomials

    For "large" class C\mathcal{C} of continuous probability density functions (p.d.f.), we demonstrate that for every wCw\in\mathcal{C} there is mixture of discrete Binomial distributions (MDBD) with TNϕw/δT\geq N\sqrt{\phi_{w}/\delta} distinct Binomial distributions B(,N)B(\cdot,N) that δ\delta-approximates a discretized p.d.f. w^(i/N)w(i/N)/[=0Nw(/N)]\widehat{w}(i/N)\triangleq w(i/N)/[\sum_{\ell=0}^{N}w(\ell/N)] for all i[3:N3]i\in[3:N-3], where ϕwmaxx[0,1]w(x)\phi_{w}\geq\max_{x\in[0,1]}|w(x)|. Also, we give two efficient parallel algorithms to find such MDBD. Moreover, we propose a sequential algorithm that on input MDBD with N=2kN=2^k for kN+k\in\mathbb{N}_{+} that induces a discretized p.d.f. β\beta, B=DMB=D-M that is either Laplacian or SDDM matrix and parameter ϵ(0,1)\epsilon\in(0,1), outputs in O^(ϵ2m+ϵ4nT)\widehat{O}(\epsilon^{-2}m + \epsilon^{-4}nT) time a spectral sparsifier DM^NϵDDi=0Nβi(D1M)iD-\widehat{M}_{N} \approx_{\epsilon} D-D\sum_{i=0}^{N}\beta_{i}(D^{-1} M)^i of a matrix-polynomial, where O^()\widehat{O}(\cdot) notation hides poly(logn,logN)\mathrm{poly}(\log n,\log N) factors. This improves the Cheng et al.'s [CCLPT15] algorithm whose run time is O^(ϵ2mN2+NT)\widehat{O}(\epsilon^{-2} m N^2 + NT). Furthermore, our algorithm is parallelizable and runs in work O^(ϵ2m+ϵ4nT)\widehat{O}(\epsilon^{-2}m + \epsilon^{-4}nT) and depth O(logNpoly(logn)+logT)O(\log N\cdot\mathrm{poly}(\log n)+\log T). Our main algorithmic contribution is to propose the first efficient parallel algorithm that on input continuous p.d.f. wCw\in\mathcal{C}, matrix B=DMB=D-M as above, outputs a spectral sparsifier of matrix-polynomial whose coefficients approximate component-wise the discretized p.d.f. w^\widehat{w}. Our results yield the first efficient and parallel algorithm that runs in nearly linear work and poly-logarithmic depth and analyzes the long term behaviour of Markov chains in non-trivial settings. In addition, we strengthen the Spielman and Peng's [PS14] parallel SDD solver

    Quantum Speedup for Graph Sparsification, Cut Approximation and Laplacian Solving

    Graph sparsification underlies a large number of algorithms, ranging from approximation algorithms for cut problems to solvers for linear systems in the graph Laplacian. In its strongest form, "spectral sparsification" reduces the number of edges to near-linear in the number of nodes, while approximately preserving the cut and spectral structure of the graph. In this work we demonstrate a polynomial quantum speedup for spectral sparsification and many of its applications. In particular, we give a quantum algorithm that, given a weighted graph with $n$ nodes and $m$ edges, outputs a classical description of an $\epsilon$-spectral sparsifier in sublinear time $\tilde{O}(\sqrt{mn}/\epsilon)$. This contrasts with the optimal classical complexity $\tilde{O}(m)$. We also prove that our quantum algorithm is optimal up to polylog factors. The algorithm builds on a string of existing results on sparsification, graph spanners, quantum algorithms for shortest paths, and efficient constructions for $k$-wise independent random strings. Our algorithm implies a quantum speedup for solving Laplacian systems and for approximating a range of cut problems such as min cut and sparsest cut.
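
    The quantum algorithm itself is beyond a short snippet, but the guarantee in its output can be stated concretely. The sketch below (an illustration of the definition, not the paper's method) builds a toy Laplacian with numpy and empirically checks the defining condition of an $\epsilon$-spectral sparsifier, $(1-\epsilon)\,x^{\top}Lx \le x^{\top}\widetilde{L}x \le (1+\epsilon)\,x^{\top}Lx$, on random test vectors; the graphs and tolerance are illustrative assumptions.

```python
import numpy as np

def laplacian(n, edges):
    """Weighted graph Laplacian from a list of (u, v, weight) edges."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def is_spectral_sparsifier(L, L_sparse, eps, trials=2000, seed=0):
    """Empirically test (1-eps) x^T L x <= x^T L~ x <= (1+eps) x^T L x
    over random test vectors (a sample-based, necessary check only)."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    for _ in range(trials):
        x = rng.standard_normal(n)
        q, q_s = x @ L @ x, x @ L_sparse @ x
        if not (1 - eps) * q <= q_s <= (1 + eps) * q:
            return False
    return True

# Toy example: a triangle versus a reweighted two-edge subgraph.
n = 3
L_full = laplacian(n, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)])
L_sub  = laplacian(n, [(0, 1, 1.5), (1, 2, 1.5)])
print(is_spectral_sparsifier(L_full, L_sub, eps=0.9))
```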

    Density Independent Algorithms for Sparsifying k-Step Random Walks

    We give faster algorithms for producing sparse approximations of the transition matrices of k-step random walks on undirected and weighted graphs. These transition matrices also form graphs, and arise as intermediate objects in a variety of graph algorithms. Our improvements are based on a better understanding of processes that sample such walks, as well as tighter bounds on key weights underlying these sampling processes. On a graph with $n$ vertices and $m$ edges, our algorithm produces a graph with about $n\log n$ edges that approximates the k-step random walk graph in about $m + k^2 n\log^4 n$ time. In order to obtain this runtime bound, we also revisit "density independent" algorithms for sparsifying graphs whose runtime overhead is expressed only in terms of the number of vertices.
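
    For contrast with the runtime above, the following sketch forms the k-step random-walk graph by naive dense matrix powering. Writing its Laplacian as $D - D(D^{-1}A)^k$ is an assumption consistent with the matrix-polynomial form in the first abstract, and this dense construction is exactly what sparsification-based algorithms avoid.

```python
import numpy as np

def k_step_walk_laplacian(A, k):
    """Naive dense construction of the k-step random-walk graph:
    the Laplacian D - D (D^{-1} A)^k. Costs roughly n^3 log(k) dense
    arithmetic, versus the abstract's ~ m + k^2 n polylog(n)."""
    D = np.diag(A.sum(axis=1))
    P = np.linalg.inv(D) @ A               # random-walk transition matrix
    M = D @ np.linalg.matrix_power(P, k)   # weighted k-step adjacency
    return D - M

# Toy example: unweighted 4-cycle, 2-step walk.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(k_step_walk_laplacian(A, 2))
```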

    Better Sparsifiers for Directed Eulerian Graphs

    Spectral sparsification for directed Eulerian graphs is a key component in the design of fast algorithms for solving directed Laplacian linear systems. Directed Laplacian linear system solvers are crucial algorithmic primitives for fast computation of fundamental problems on random walks, such as computing stationary distributions, hitting and commute times, and personalized PageRank vectors. While spectral sparsification is well understood for undirected graphs and it is known that for every graph $G$, $(1+\varepsilon)$-sparsifiers with $O(n\varepsilon^{-2})$ edges exist [Batson-Spielman-Srivastava, STOC '09] (which is optimal), the best known constructions of Eulerian sparsifiers require $\Omega(n\varepsilon^{-2}\log^4 n)$ edges and are based on short-cycle decompositions [Chu et al., FOCS '18]. In this paper, we give improved constructions of Eulerian sparsifiers, specifically: 1. We show that for every directed Eulerian graph $\vec{G}$, there exists an Eulerian sparsifier with $O(n\varepsilon^{-2} \log^2 n \log^2\log n + n\varepsilon^{-4/3}\log^{8/3} n)$ edges. This result is based on combining short-cycle decompositions [Chu-Gao-Peng-Sachdeva-Sawlani-Wang, FOCS '18, SICOMP] and [Parter-Yogev, ICALP '19] with recent progress on the matrix Spencer conjecture [Bansal-Meka-Jiang, STOC '23]. 2. We give an improved analysis of the constructions based on short-cycle decompositions, giving an $m^{1+\delta}$-time algorithm for any constant $\delta > 0$ for constructing Eulerian sparsifiers with $O(n\varepsilon^{-2}\log^3 n)$ edges.
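
    As background for the abstract above, this small sketch builds a directed Laplacian under one common convention ($L = D_{\mathrm{out}} - A^{\top}$, an assumption rather than the paper's notation) and checks the Eulerian condition that weighted in-degree equals weighted out-degree at every vertex (equivalently, $L\mathbf{1}=0$ for this convention).

```python
import numpy as np

def directed_laplacian(n, arcs):
    """Directed Laplacian L = D_out - A^T for weighted arcs (u -> v, w),
    one common convention in the directed-Laplacian-solver literature."""
    A = np.zeros((n, n))
    for u, v, w in arcs:
        A[u, v] += w
    D_out = np.diag(A.sum(axis=1))
    return D_out - A.T

def is_eulerian(n, arcs):
    """A weighted digraph is Eulerian when every vertex has equal
    weighted in-degree and out-degree."""
    out_deg = np.zeros(n)
    in_deg = np.zeros(n)
    for u, v, w in arcs:
        out_deg[u] += w
        in_deg[v] += w
    return np.allclose(out_deg, in_deg)

# Directed triangle 0 -> 1 -> 2 -> 0 is Eulerian.
arcs = [(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0)]
print(is_eulerian(3, arcs))
print(directed_laplacian(3, arcs))
```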

    Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence

    We design fast algorithms for repeatedly sampling from strongly Rayleigh distributions, which include random spanning tree distributions and determinantal point processes. For a graph $G=(V, E)$, we show how to approximately sample uniformly random spanning trees from $G$ in $\widetilde{O}(|V|)$ time per sample after an initial $\widetilde{O}(|E|)$ time preprocessing. For a determinantal point process on subsets of size $k$ of a ground set of $n$ elements, we show how to approximately sample in $\widetilde{O}(k^\omega)$ time after an initial $\widetilde{O}(nk^{\omega-1})$ time preprocessing, where $\omega<2.372864$ is the matrix multiplication exponent. We even improve the state of the art for obtaining a single sample from determinantal point processes, from the prior runtime of $\widetilde{O}(\min\{nk^2, n^\omega\})$ to $\widetilde{O}(nk^{\omega-1})$. In our main technical result, we achieve the optimal limit on domain sparsification for strongly Rayleigh distributions. In domain sparsification, sampling from a distribution $\mu$ on $\binom{[n]}{k}$ is reduced to sampling from related distributions on $\binom{[t]}{k}$ for $t\ll n$. We show that for strongly Rayleigh distributions, we can achieve the optimal $t=\widetilde{O}(k)$. Our reduction involves sampling from $\widetilde{O}(1)$ domain-sparsified distributions, all of which can be produced efficiently assuming convenient access to approximate overestimates for marginals of $\mu$. Having access to marginals is analogous to having access to the mean and covariance of a continuous distribution, or knowing "isotropy" for the distribution, the key assumption behind the Kannan-Lovász-Simonovits (KLS) conjecture and optimal samplers based on it. We view our result as a moral analog of the KLS conjecture and its consequences for sampling, for discrete strongly Rayleigh measures.
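
    For a concrete baseline (not the repeated-sampling scheme of the abstract), the sketch below samples a uniformly random spanning tree with Wilson's loop-erased random-walk algorithm; the adjacency-list input and unweighted setting are simplifying assumptions.

```python
import random

def wilson_uniform_spanning_tree(adj, root=0, seed=0):
    """Sample a uniformly random spanning tree of a connected undirected
    graph via Wilson's loop-erased random-walk algorithm."""
    rng = random.Random(seed)
    n = len(adj)
    parent = [None] * n
    in_tree = [False] * n
    in_tree[root] = True
    for start in range(n):
        # Random walk from `start` until the current tree is hit; recording
        # only the latest successor of each vertex erases loops implicitly.
        u = start
        while not in_tree[u]:
            parent[u] = rng.choice(adj[u])
            u = parent[u]
        # Retrace the loop-erased path and attach it to the tree.
        u = start
        while not in_tree[u]:
            in_tree[u] = True
            u = parent[u]
    return [(v, parent[v]) for v in range(n) if v != root]

# 4-cycle as an adjacency list; each call returns one uniform spanning tree.
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(wilson_uniform_spanning_tree(adj))
```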

    On Fully Dynamic Graph Sparsifiers

    We initiate the study of dynamic algorithms for graph sparsification problems and obtain fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph. Our three main results are as follows. First, we give a fully dynamic algorithm for maintaining a $(1 \pm \epsilon)$-spectral sparsifier with amortized update time $\mathrm{poly}(\log n, \epsilon^{-1})$. Second, we give a fully dynamic algorithm for maintaining a $(1 \pm \epsilon)$-cut sparsifier with worst-case update time $\mathrm{poly}(\log n, \epsilon^{-1})$. Both sparsifiers have size $n \cdot \mathrm{poly}(\log n, \epsilon^{-1})$. Third, we apply our dynamic sparsifier algorithm to obtain a fully dynamic algorithm for maintaining a $(1 + \epsilon)$-approximation to the value of the maximum flow in an unweighted, undirected, bipartite graph with amortized update time $\mathrm{poly}(\log n, \epsilon^{-1})$.
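
    To spell out the guarantee being maintained dynamically, this sketch checks the $(1\pm\epsilon)$-cut-sparsifier condition on randomly sampled cuts of a toy static graph. The edge lists and the sampling-based test (a necessary check, not a proof) are illustrative assumptions; a dynamic algorithm would keep the sparse edge list updated under insertions and deletions instead of recomputing.

```python
import numpy as np

def cut_weight(edges, side):
    """Total weight of edges crossing the cut given by a boolean side mask."""
    return sum(w for u, v, w in edges if side[u] != side[v])

def respects_cut_sparsifier(edges, sparse_edges, n, eps, trials=5000, seed=0):
    """Sample random cuts S and check (1-eps) w(S) <= w~(S) <= (1+eps) w(S),
    the guarantee a (1 +- eps)-cut sparsifier must satisfy for every cut."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        side = rng.integers(0, 2, size=n).astype(bool)
        if side.all() or not side.any():
            continue  # skip trivial cuts
        w, w_s = cut_weight(edges, side), cut_weight(sparse_edges, side)
        if not (1 - eps) * w <= w_s <= (1 + eps) * w:
            return False
    return True

# Triangle vs. a reweighted two-edge subgraph, as a toy static example.
edges        = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
sparse_edges = [(0, 1, 1.5), (1, 2, 1.5)]
print(respects_cut_sparsifier(edges, sparse_edges, 3, eps=0.6))
```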