119 research outputs found

    An Efficient Parallel Algorithm for Spectral Sparsification of Laplacian and SDDM Matrix Polynomials

    For a "large" class $\mathcal{C}$ of continuous probability density functions (p.d.f.), we demonstrate that for every $w \in \mathcal{C}$ there is a mixture of discrete Binomial distributions (MDBD) with $T \geq N\sqrt{\phi_{w}/\delta}$ distinct Binomial distributions $B(\cdot, N)$ that $\delta$-approximates the discretized p.d.f. $\widehat{w}(i/N) \triangleq w(i/N)/[\sum_{\ell=0}^{N} w(\ell/N)]$ for all $i \in [3 : N-3]$, where $\phi_{w} \geq \max_{x \in [0,1]} |w(x)|$. We also give two efficient parallel algorithms to find such an MDBD. Moreover, we propose a sequential algorithm that, on input an MDBD with $N = 2^{k}$ for $k \in \mathbb{N}_{+}$ inducing a discretized p.d.f. $\beta$, a matrix $B = D - M$ that is either a Laplacian or an SDDM matrix, and a parameter $\epsilon \in (0,1)$, outputs in $\widehat{O}(\epsilon^{-2} m + \epsilon^{-4} n T)$ time a spectral sparsifier $D - \widehat{M}_{N} \approx_{\epsilon} D - D \sum_{i=0}^{N} \beta_{i} (D^{-1} M)^{i}$ of a matrix polynomial, where the $\widehat{O}(\cdot)$ notation hides $\mathrm{poly}(\log n, \log N)$ factors. This improves on the algorithm of Cheng et al. [CCLPT15], whose running time is $\widehat{O}(\epsilon^{-2} m N^{2} + N T)$. Furthermore, our algorithm is parallelizable and runs in work $\widehat{O}(\epsilon^{-2} m + \epsilon^{-4} n T)$ and depth $O(\log N \cdot \mathrm{poly}(\log n) + \log T)$. Our main algorithmic contribution is the first efficient parallel algorithm that, on input a continuous p.d.f. $w \in \mathcal{C}$ and a matrix $B = D - M$ as above, outputs a spectral sparsifier of a matrix polynomial whose coefficients approximate component-wise the discretized p.d.f. $\widehat{w}$. Our results yield the first efficient, parallel algorithm that runs in nearly linear work and poly-logarithmic depth and analyzes the long-term behaviour of Markov chains in non-trivial settings. In addition, we strengthen Spielman and Peng's [PS14] parallel SDD solver
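
    The following is a minimal, illustrative Python sketch (not the paper's parallel algorithm) of the MDBD idea: it discretizes a hypothetical continuous p.d.f. w on [0, 1] and fits nonnegative mixture weights over T Binomial(N, p_j) components by nonnegative least squares; the grid of success probabilities p_j and the choice of w are assumptions made purely for illustration.

        # Illustrative sketch: approximate the discretized p.d.f. \hat{w}(i/N) on {0,...,N}
        # by a mixture of T Binomial(N, p_j) distributions (an MDBD), fitting the mixture
        # weights with nonnegative least squares. Not the paper's construction.
        import numpy as np
        from scipy.stats import binom
        from scipy.optimize import nnls

        N = 64                                            # discretization level (N = 2^k in the abstract)
        w = lambda x: 1.0 + 0.5 * np.sin(2 * np.pi * x)   # hypothetical continuous p.d.f. on [0, 1]
        grid = np.arange(N + 1) / N
        w_hat = w(grid) / w(grid).sum()                   # discretized p.d.f. \hat{w}

        T = 32                                            # number of Binomial components (assumed)
        ps = (np.arange(T) + 0.5) / T                     # component success probabilities (assumed grid)
        A = np.column_stack([binom.pmf(np.arange(N + 1), N, p) for p in ps])

        alpha, _ = nnls(A, w_hat)                         # nonnegative mixture weights
        alpha /= alpha.sum()                              # renormalize to a probability mixture
        print("max pointwise error on [3 : N-3]:", np.abs(A @ alpha - w_hat)[3:N - 2].max())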

    Quantum Speedup for Graph Sparsification, Cut Approximation and Laplacian Solving

    Graph sparsification underlies a large number of algorithms, ranging from approximation algorithms for cut problems to solvers for linear systems in the graph Laplacian. In its strongest form, "spectral sparsification" reduces the number of edges to near-linear in the number of nodes, while approximately preserving the cut and spectral structure of the graph. In this work we demonstrate a polynomial quantum speedup for spectral sparsification and many of its applications. In particular, we give a quantum algorithm that, given a weighted graph with $n$ nodes and $m$ edges, outputs a classical description of an $\epsilon$-spectral sparsifier in sublinear time $\tilde{O}(\sqrt{mn}/\epsilon)$. This contrasts with the optimal classical complexity $\tilde{O}(m)$. We also prove that our quantum algorithm is optimal up to polylog factors. The algorithm builds on a string of existing results on sparsification, graph spanners, quantum algorithms for shortest paths, and efficient constructions for $k$-wise independent random strings. Our algorithm implies a quantum speedup for solving Laplacian systems and for approximating a range of cut problems such as min cut and sparsest cut.
    Comment: v2: several small improvements to the text. An extended abstract will appear in FOCS'20; v3: corrected a minor mistake in Appendix
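
    As a point of reference for what the output guarantees, here is a small, hedged Python check (purely classical and dense; unrelated to the quantum algorithm itself) of the $\epsilon$-spectral-sparsifier condition: all generalized eigenvalues of the pair $(L_H, L_G)$ on the range of $L_G$ must lie in $[1-\epsilon, 1+\epsilon]$. The example graph and the reweighting below are assumptions for illustration only.

        # Hedged sketch: verify the epsilon-spectral-sparsifier condition for two Laplacians.
        import numpy as np

        def laplacian(n, edges):
            """Weighted graph Laplacian from (u, v, weight) triples."""
            L = np.zeros((n, n))
            for u, v, w in edges:
                L[u, u] += w; L[v, v] += w
                L[u, v] -= w; L[v, u] -= w
            return L

        def is_spectral_sparsifier(L_G, L_H, eps):
            # Compare quadratic forms on the range of L_G: compute the spectrum of
            # L_G^{+/2} L_H L_G^{+/2} restricted to the nonzero eigenspace of L_G.
            vals, vecs = np.linalg.eigh(L_G)
            keep = vals > 1e-9
            S = vecs[:, keep] / np.sqrt(vals[keep])
            lam = np.linalg.eigvalsh(S.T @ L_H @ S)
            return 1 - eps <= lam.min() and lam.max() <= 1 + eps

        # Toy example: a 4-cycle plus one chord, and a candidate sparsifier that drops
        # the chord and reweights the rest (assumed numbers, for illustration).
        G = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 0.2)]
        H = [(0, 1, 1.1), (1, 2, 1.1), (2, 3, 1.1), (3, 0, 1.1)]
        print(is_spectral_sparsifier(laplacian(4, G), laplacian(4, H), eps=0.3))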

    Space-Time Sampling for Network Observability

    Designing sparse sampling strategies is an important component of resilient estimation and control in networked systems: reduced sampling requirements make network design problems more cost-effective and less fragile with respect to where and when samples are collected. We show under what conditions a coarse set of samples from a network contains the same amount of information as a finer one. Our goal is to estimate the initial condition of linear time-invariant networks from a set of noisy measurements. The observability condition is reformulated as a frame condition, in which one can easily trace the location and time stamp of each sample. We compare the estimation quality of various sampling strategies using estimation measures that depend on the spectrum of the corresponding frame operators. Using properties of the minimal polynomial of the state matrix, deterministic and randomized methods are suggested to construct observability frames. Intrinsic tradeoffs assert that collecting samples from fewer subsystems dictates taking more samples (on average) per subsystem. Three scalable algorithms are developed to generate sparse space-time sampling strategies with explicit error bounds.
    Comment: Submitted to IEEE TAC (Revised Version)
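
    A hedged numerical sketch in Python (with an assumed toy system and an assumed sampling schedule, not the paper's algorithms) of the frame viewpoint: space-time samples of the form y = C_t A^t x_0 + noise determine x_0 exactly when the frame operator sum_t (C_t A^t)^T (C_t A^t) is positive definite, and its extreme eigenvalues give the frame bounds used to compare sampling strategies.

        # Hedged sketch: build the observability frame operator for a space-time sampling
        # schedule and report its frame bounds. Toy system and schedule are assumptions.
        import numpy as np

        n = 6
        A = 0.9 * np.eye(n) + np.diag(np.full(n - 1, 0.3), k=1)   # hypothetical LTI network matrix

        # Sampling strategy: (time step, indices of sampled subsystems at that time).
        schedule = [(0, [0, 3]), (2, [1, 4]), (4, [2, 5]), (5, [0, 5])]

        F = np.zeros((n, n))
        for t, nodes in schedule:
            C = np.eye(n)[nodes]                      # row-selection of the sampled subsystems
            M = C @ np.linalg.matrix_power(A, t)
            F += M.T @ M                              # accumulate the frame operator

        eigs = np.linalg.eigvalsh(F)
        print(f"frame bounds: [{eigs[0]:.4f}, {eigs[-1]:.4f}]  observable: {eigs[0] > 1e-9}")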

    A Size-Free CLT for Poisson Multinomials and its Applications

    An $(n,k)$-Poisson Multinomial Distribution (PMD) is the distribution of the sum of $n$ independent random vectors supported on the set ${\cal B}_k = \{e_1, \ldots, e_k\}$ of standard basis vectors in $\mathbb{R}^k$. We show that any $(n,k)$-PMD is $\mathrm{poly}(k/\sigma)$-close in total variation distance to the (appropriately discretized) multi-dimensional Gaussian with the same first two moments, removing the dependence on $n$ from the Central Limit Theorem of Valiant and Valiant. Interestingly, our CLT is obtained by bootstrapping the Valiant-Valiant CLT itself through the structural characterization of PMDs shown in recent work by Daskalakis, Kamath, and Tzamos. In turn, our stronger CLT can be leveraged to obtain an efficient PTAS for approximate Nash equilibria in anonymous games, significantly improving the state of the art, and matching qualitatively the running time dependence on $n$ and $1/\varepsilon$ of the best known algorithm for two-strategy anonymous games. Our new CLT also enables the construction of covers for the set of $(n,k)$-PMDs, which are proper and whose size is shown to be essentially optimal. Our cover construction combines our CLT with the Shapley-Folkman theorem and recent sparsification results for Laplacian matrices by Batson, Spielman, and Srivastava. Our cover size lower bound is based on an algebraic geometric construction. Finally, leveraging the structural properties of the Fourier spectrum of PMDs we show that these distributions can be learned from $O_k(1/\varepsilon^2)$ samples in $\mathrm{poly}_k(1/\varepsilon)$-time, removing the quasi-polynomial dependence of the running time on $1/\varepsilon$ from the algorithm of Daskalakis, Kamath, and Tzamos.
    Comment: To appear in STOC 201
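
    A small, hedged simulation in Python (illustrating the statement only, not the proof or the learning algorithm): it draws from an assumed (n, k)-PMD, draws from the rounded Gaussian with the same first two moments, and compares one marginal empirically; all distribution parameters are arbitrary choices for illustration.

        # Hedged sketch: empirically compare an (n, k)-PMD with the rounded Gaussian
        # that matches its first two moments (one coordinate's marginal only).
        import numpy as np

        rng = np.random.default_rng(1)
        n, k, m = 200, 3, 50_000
        P = rng.dirichlet(np.ones(k), size=n)        # assumed categorical law of each of the n vectors

        # PMD samples: sum of n independent one-hot draws per sample.
        pmd = sum(rng.multinomial(1, P[i], size=m) for i in range(n))

        mean = P.sum(axis=0)
        cov = sum(np.diag(p) - np.outer(p, p) for p in P)
        gauss = np.rint(rng.multivariate_normal(mean, cov, size=m))

        # Crude total-variation estimate on coordinate 0.
        bins = np.arange(n + 2)
        h_pmd = np.histogram(pmd[:, 0], bins=bins)[0] / m
        h_gauss = np.histogram(gauss[:, 0], bins=bins)[0] / m
        print("empirical TV on coordinate 0:", 0.5 * np.abs(h_pmd - h_gauss).sum())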

    Singular Value Approximation and Sparsifying Random Walks on Directed Graphs

    In this paper, we introduce a new, spectral notion of approximation between directed graphs, which we call singular value (SV) approximation. SV-approximation is stronger than previous notions of spectral approximation considered in the literature, including spectral approximation of Laplacians for undirected graphs (Spielman-Teng, STOC 2004), standard approximation for directed graphs (Cohen et al., STOC 2017), and unit-circle approximation for directed graphs (Ahmadinejad et al., FOCS 2020). Further, SV-approximation enjoys several useful properties not possessed by previous notions of approximation, e.g., it is preserved under products of random-walk matrices and bounded matrices. We provide a nearly linear-time algorithm for SV-sparsifying (and hence UC-sparsifying) Eulerian directed graphs, as well as $\ell$-step random walks on such graphs, for any $\ell \leq \mathrm{poly}(n)$. Combined with the Eulerian scaling algorithms of Cohen et al. (FOCS 2018), given an arbitrary (not necessarily Eulerian) directed graph and a set $S$ of vertices, we can approximate the stationary probability mass of the $(S, S^c)$ cut in an $\ell$-step random walk to within a multiplicative error of $1/\mathrm{polylog}(n)$ and an additive error of $1/\mathrm{poly}(n)$ in nearly linear time. As a starting point for these results, we provide a simple black-box reduction from SV-sparsifying Eulerian directed graphs to SV-sparsifying undirected graphs; such a directed-to-undirected reduction was not known for previous notions of spectral approximation.
    Comment: FOCS 202
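
    For concreteness, here is a tiny, dense Python computation (exact and brute-force, with an assumed random graph and vertex set; it is not the nearly linear-time sparsification of the paper) of the quantity being approximated: the stationary probability mass of the (S, S^c) cut in an ℓ-step random walk on a directed graph.

        # Hedged sketch: exact stationary l-step cut mass Pr[X_0 in S, X_l in S^c]
        # for a dense toy directed graph. Brute-force linear algebra, illustration only.
        import numpy as np

        rng = np.random.default_rng(2)
        n, l = 8, 4
        A = (rng.random((n, n)) < 0.35).astype(float)     # hypothetical directed adjacency
        np.fill_diagonal(A, 1.0)                          # self-loops keep every row nonzero
        W = A / A.sum(axis=1, keepdims=True)              # row-stochastic random-walk matrix

        # Stationary distribution: left eigenvector of W for eigenvalue 1.
        vals, vecs = np.linalg.eig(W.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
        pi /= pi.sum()

        S = np.array([0, 1, 2])                           # an arbitrary vertex set S
        Sc = np.setdiff1d(np.arange(n), S)
        Wl = np.linalg.matrix_power(W, l)
        cut_mass = pi[S] @ Wl[np.ix_(S, Sc)].sum(axis=1)
        print("stationary l-step (S, S^c) cut mass:", cut_mass)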