
    Probabilistic Spectral Sparsification In Sublinear Time

    In this paper, we introduce a variant of spectral sparsification, called probabilistic $(\varepsilon,\delta)$-spectral sparsification. Roughly speaking, it preserves the value of any cut $(S, S^{c})$ up to a $1\pm\varepsilon$ multiplicative error and a $\delta|S|$ additive error. We show how to produce a probabilistic $(\varepsilon,\delta)$-spectral sparsifier with $O(n\log n/\varepsilon^{2})$ edges in $\tilde{O}(n/\varepsilon^{2}\delta)$ time for unweighted undirected graphs. This gives the fastest known sublinear-time algorithms for several cut problems on unweighted undirected graphs, such as:
    - an $\tilde{O}(n/\mathrm{OPT} + n^{3/2+t})$ time $O(\sqrt{\log n/t})$-approximation algorithm for the sparsest cut problem and the balanced separator problem;
    - an $n^{1+o(1)}/\varepsilon^{4}$ time approximate minimum $s$-$t$ cut algorithm with an $\varepsilon n$ additive error.
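    The $(\varepsilon,\delta)$ guarantee above mixes a multiplicative and an additive term, which is worth making concrete. The sketch below (hypothetical helper names, graphs as dict-of-dicts; not code from the paper) checks whether a candidate sparsifier H satisfies the guarantee on one given cut.

```python
def cut_value(adj, S):
    """Total weight of edges crossing the cut (S, S^c).

    `adj` is a dict-of-dicts adjacency map {u: {v: weight, ...}, ...}.
    """
    S = set(S)
    return sum(w for u in S for v, w in adj[u].items() if v not in S)


def meets_guarantee(G, H, S, eps, delta):
    """Probabilistic (eps, delta)-sparsification guarantee on a single cut:
    H's cut value must match G's up to a (1 +/- eps) multiplicative error
    plus a delta * |S| additive error. Hypothetical helper, for illustration.
    """
    cut_G, cut_H = cut_value(G, S), cut_value(H, S)
    slack = delta * len(S)
    return (1 - eps) * cut_G - slack <= cut_H <= (1 + eps) * cut_G + slack
```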

    Constructing Linear-Sized Spectral Sparsification in Almost-Linear Time

    We present the first almost-linear time algorithm for constructing linear-sized spectral sparsification for graphs. This improves all previous constructions of linear-sized spectral sparsification, which require $\Omega(n^{2})$ time. A key ingredient in our algorithm is a novel combination of two techniques used in the literature for constructing spectral sparsification: random sampling by effective resistance, and adaptive constructions based on barrier functions.
    Comment: 22 pages. A preliminary version of this paper is to appear in the proceedings of the 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2015).
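    Of the two ingredients named above, sampling by effective resistance is the simpler to illustrate. The sketch below is a dense, Spielman-Srivastava-style sampler (the oversampling constant and function names are assumptions): it inverts the Laplacian directly and so runs in $O(n^{3})$, showing only the sampling rule, not the paper's almost-linear time construction.

```python
import numpy as np

def sparsify_by_effective_resistance(n, edges, eps=0.5, seed=0):
    """Sample edges with probability proportional to w_e * R_eff(e) and
    reweight the kept copies so the sparsifier is unbiased in expectation.
    `edges` is a list of (u, v, w) triples over vertices 0..n-1.
    """
    rng = np.random.default_rng(seed)
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lp = np.linalg.pinv(L)                       # dense pseudoinverse: O(n^3)
    # R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v)
    scores = np.array([w * (Lp[u, u] - 2 * Lp[u, v] + Lp[v, v])
                       for u, v, w in edges])
    p = scores / scores.sum()
    q = int(np.ceil(9 * n * np.log(n + 1) / eps ** 2))  # sample count (assumed)
    counts = rng.multinomial(q, p)
    sparsifier = {}
    for (u, v, w), k, pe in zip(edges, counts, p):
        if k:  # each sampled copy of edge e contributes weight w_e / (q * p_e)
            sparsifier[(u, v)] = sparsifier.get((u, v), 0.0) + k * w / (q * pe)
    return [(u, v, w) for (u, v), w in sparsifier.items()]
```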

    Online Row Sampling

    Finding a small spectral approximation for a tall $n \times d$ matrix $A$ is a fundamental numerical primitive. For a number of reasons, one often seeks an approximation whose rows are sampled from those of $A$. Row sampling improves interpretability, saves space when $A$ is sparse, and preserves row structure, which is especially important, for example, when $A$ represents a graph. However, correctly sampling rows from $A$ can be costly when the matrix is large and cannot be stored and processed in memory. Hence, a number of recent publications focus on row sampling in the streaming setting, using little more space than what is required to store the outputted approximation [KL13, KLM+14]. Inspired by a growing body of work on online algorithms for machine learning and data analysis, we extend this work to a more restrictive online setting: we read rows of $A$ one by one and immediately decide whether each row should be kept in the spectral approximation or discarded, without ever retracting these decisions. We present an extremely simple algorithm that approximates $A$ up to multiplicative error $\epsilon$ and additive error $\delta$ using $O(d \log d \log(\epsilon\|A\|_2/\delta)/\epsilon^{2})$ online samples, with memory overhead proportional to the cost of storing the spectral approximation. We also present an algorithm that uses $O(d^{2})$ memory but only requires $O(d \log(\epsilon\|A\|_2/\delta)/\epsilon^{2})$ samples, which we show is optimal. Our methods are clean and intuitive, allow for lower memory usage than prior work, and expose new theoretical properties of leverage score based matrix approximation.
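    The online setting described above is simple enough to sketch in a few lines: keep each incoming row with probability proportional to its ridge leverage score measured against the approximation accumulated so far, and rescale kept rows to keep the estimate unbiased. The constant `c` below is an assumption standing in for the paper's oversampling factor.

```python
import numpy as np

def online_row_sample(rows, eps=0.5, delta=1e-3, seed=0):
    """Read rows one at a time; keep or discard each immediately and never
    retract the decision. Returns the sampled (rescaled) rows as a matrix B
    with B^T B approximating A^T A.
    """
    rng = np.random.default_rng(seed)
    d = len(rows[0])
    M = delta * np.eye(d)                 # running ridged estimate of A^T A
    kept = []
    c = 8.0 * np.log(d + 1) / eps ** 2    # oversampling factor (assumed)
    for a in rows:
        a = np.asarray(a, dtype=float)
        tau = a @ np.linalg.solve(M, a)   # online ridge leverage score
        p = min(1.0, c * tau)
        if rng.random() < p:
            kept.append(a / np.sqrt(p))   # rescale so E[B^T B] tracks A^T A
            M += np.outer(kept[-1], kept[-1])
    return np.array(kept)
```

    On a tall matrix, `online_row_sample` typically keeps far fewer rows than it reads while the Gram matrix of the output stays spectrally close to that of the input, mirroring the multiplicative-plus-additive guarantee above.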

    An Efficient Parallel Solver for SDD Linear Systems

    We present the first parallel algorithm for solving systems of linear equations in symmetric, diagonally dominant (SDD) matrices that runs in polylogarithmic time and nearly-linear work. The heart of our algorithm is a construction of a sparse approximate inverse chain for the input matrix: a sequence of sparse matrices whose product approximates its inverse. Whereas other fast algorithms for solving systems of equations in SDD matrices exploit low-stretch spanning trees, our algorithm only requires spectral graph sparsifiers.
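    The chain structure can be sketched with a standard identity of this kind: writing an SDD matrix as $M = D - A$ ($D$ diagonal, $A$ the off-diagonal part), one has $M^{-1} = \tfrac{1}{2}\left[D^{-1} + (I + D^{-1}A)(D - AD^{-1}A)^{-1}(I + AD^{-1})\right]$, and $D - AD^{-1}A$ is again SDD with a relatively smaller off-diagonal part. The dense, unsparsified recursion below (function name and depth are assumptions) demonstrates only this chain, not the parallel solver; the paper's contribution is keeping every level sparse.

```python
import numpy as np

def apply_inverse_chain(M, b, depth=6):
    """Apply an approximate inverse of the SDD matrix M to the vector b by
    recursing on D - A D^{-1} A. Everything is dense here, so this is
    illustrative only: it shows the chain, not the parallel algorithm.
    """
    n = len(b)
    D = np.diag(np.diag(M))
    A = D - M                                    # off-diagonal part of M
    Dinv = np.diag(1.0 / np.diag(M))
    if depth == 0:
        return Dinv @ b                          # base case: treat M as ~diagonal
    inner = D - A @ Dinv @ A                     # next SDD matrix in the chain
    y = apply_inverse_chain(inner, (np.eye(n) + A @ Dinv) @ b, depth - 1)
    return 0.5 * (Dinv @ b + (np.eye(n) + Dinv @ A) @ y)
```

    For a strictly dominant M (e.g., a graph Laplacian plus a small multiple of the identity), the normalized off-diagonal part shrinks quadratically from one level to the next, so logarithmic depth already brings the result close to `np.linalg.solve(M, b)`.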

    Large-scale semi-supervised learning with online spectral graph sparsification

    We introduce Sparse-HFS, a scalable algorithm that can compute solutions to semi-supervised learning (SSL) problems using only $O(n\,\mathrm{polylog}(n))$ space and $O(m\,\mathrm{polylog}(n))$ time.
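    For context, the harmonic function solution (HFS) that this approach scales up can be stated in a few lines: clamp the labeled nodes to their labels and solve a Laplacian system for the unlabeled ones. The dense sketch below (hypothetical function name) shows that baseline computation; the idea of Sparse-HFS is to run it against an online spectral sparsifier of the graph rather than the full Laplacian.

```python
import numpy as np

def harmonic_function_solution(L, labeled, y):
    """Classic harmonic labeling: fix f on the labeled nodes and solve
    L_uu f_u = -L_ul y_l for the unlabeled ones, so each unlabeled node
    takes the weighted average of its neighbors' values. Dense solve for
    clarity; assumes every connected component contains a labeled node.
    """
    n = L.shape[0]
    labeled = np.asarray(labeled)
    unlabeled = np.setdiff1d(np.arange(n), labeled)
    f = np.zeros(n)
    f[labeled] = y
    f[unlabeled] = np.linalg.solve(L[np.ix_(unlabeled, unlabeled)],
                                   -L[np.ix_(unlabeled, labeled)] @ f[labeled])
    return f
```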