Simple parallel and distributed algorithms for spectral graph sparsification
We describe a simple algorithm for spectral graph sparsification, based on
iterative computations of weighted spanners and uniform sampling. Leveraging
the algorithms of Baswana and Sen for computing spanners, we obtain the first
distributed spectral sparsification algorithm. We also obtain a parallel
algorithm with improved work and time guarantees. Combining this algorithm with
the parallel framework of Peng and Spielman for solving symmetric diagonally
dominant linear systems, we get a parallel solver which is much closer to being
practical and significantly more efficient in terms of the total work.

Comment: replaces "A simple parallel and distributed algorithm for spectral sparsification". Minor change
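The spanner-plus-sampling template described above can be sketched roughly as follows: repeatedly extract a spanner, keep it, and uniformly sample the remaining edges at rate 1/2 while doubling their weights. This is only an illustrative simplification; a naive greedy spanner stands in for the Baswana–Sen construction, none of the distributed or parallel machinery is modelled, and `sparsify`, `greedy_spanner`, and all parameters are names chosen here, not the paper's.

```python
import random
from collections import defaultdict

def greedy_spanner(edges, k=2):
    """Greedy (2k-1)-spanner: keep an edge only if its endpoints are
    currently farther than 2k-1 hops apart (checked by depth-limited BFS)."""
    adj = defaultdict(list)
    spanner = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        depth = {u: 0}
        frontier = [u]
        found = False
        while frontier and not found:
            nxt = []
            for x in frontier:
                if depth[x] == 2 * k - 1:
                    continue
                for y in adj[x]:
                    if y not in depth:
                        depth[y] = depth[x] + 1
                        if y == v:
                            found = True
                        nxt.append(y)
            frontier = nxt
        if not found:
            spanner.append((u, v, w))
            adj[u].append(v)
            adj[v].append(u)
    return spanner

def sparsify(edges, rounds=3, k=2, seed=0):
    """Iterative spanner extraction + uniform sampling (simplified sketch)."""
    rng = random.Random(seed)
    kept = []
    current = list(edges)
    for _ in range(rounds):
        sp = greedy_spanner(current, k)
        kept.extend(sp)
        sp_set = {(u, v) for u, v, _ in sp}
        rest = [e for e in current if (e[0], e[1]) not in sp_set]
        # sample each non-spanner edge with prob 1/2, doubling its weight
        current = [(u, v, 2 * w) for u, v, w in rest if rng.random() < 0.5]
    kept.extend(current)
    return kept
```

Because the first-round spanner is itself a connected spanning subgraph, the output stays connected while the edge count drops geometrically across rounds.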
A Matrix Hyperbolic Cosine Algorithm and Applications
In this paper, we generalize Spencer's hyperbolic cosine algorithm to the
matrix-valued setting. We apply the proposed algorithm to several problems by
analyzing its computational efficiency under two special cases of matrices: one in which the matrices have a group structure, and the other in which they have rank one. As an application of the former case, we present a deterministic
algorithm that, given the multiplication table of a finite group of size n,
constructs an expanding Cayley graph of logarithmic degree in near-optimal
O(n^2 log^3 n) time. For the latter case, we present a fast deterministic
algorithm for spectral sparsification of positive semi-definite matrices, which
implies an improved deterministic algorithm for spectral graph sparsification
of dense graphs. In addition, we give an elementary connection between spectral
sparsification of positive semi-definite matrices and element-wise matrix
sparsification. As a consequence, we obtain improved element-wise sparsification algorithms for diagonally dominant-like matrices.

Comment: 16 pages, simplified proof and corrected acknowledgment of prior work in (current) Section
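The scalar-to-matrix generalization can be made concrete with a hedged numpy sketch of the greedy potential-function idea: signs are chosen one at a time to minimize the trace of the matrix hyperbolic cosine of the running signed sum. The potential, the parameter `t`, and the function names are illustrative; this shows only the balancing template, not the paper's derandomized construction for Cayley graphs or sparsification.

```python
import numpy as np

def matrix_cosh_trace(M):
    """Tr cosh(M) for a symmetric matrix M, computed via its eigenvalues."""
    w = np.linalg.eigvalsh(M)
    return np.cosh(w).sum()

def balance_signs(mats, t=0.5):
    """Greedy signing of symmetric matrices: pick each sign to minimize
    the potential Tr cosh(t * partial_sum), a matrix analogue of
    Spencer's hyperbolic cosine argument."""
    d = mats[0].shape[0]
    S = np.zeros((d, d))
    signs = []
    for A in mats:
        plus = matrix_cosh_trace(t * (S + A))
        minus = matrix_cosh_trace(t * (S - A))
        s = 1 if plus <= minus else -1
        signs.append(s)
        S = S + s * A
    return signs, S
```

Keeping the potential small forces every eigenvalue of the signed sum to stay small, so the greedy signed sum has far smaller spectral norm than the unsigned one.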
An Efficient Parallel Solver for SDD Linear Systems
We present the first parallel algorithm for solving systems of linear
equations in symmetric, diagonally dominant (SDD) matrices that runs in
polylogarithmic time and nearly-linear work. The heart of our algorithm is a
construction of a sparse approximate inverse chain for the input matrix: a
sequence of sparse matrices whose product approximates its inverse. Whereas
other fast algorithms for solving systems of equations in SDD matrices exploit
low-stretch spanning trees, our algorithm only requires spectral graph
sparsifiers.
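The "sparse approximate inverse chain" can be illustrated with the exact identity it is built on. The dense sketch below applies the Peng–Spielman splitting recursively; each level squares the "error" matrix D^{-1}A, so a short chain already yields a good approximate inverse. In the real solver every level is sparsified to stay nearly linear; here everything is dense, the only approximation is the truncation at the bottom of the chain, and the function name and `depth` parameter are choices made for this sketch.

```python
import numpy as np

def inverse_chain_apply(D, A, b, depth):
    """Approximately apply (D - A)^{-1} to b using the exact identity
        (D - A)^{-1} = 1/2 [ D^{-1}
            + (I + D^{-1} A)(D - A D^{-1} A)^{-1}(I + A D^{-1}) ],
    recursing on the inner inverse. At the bottom of the chain the
    residual matrix is tiny, so D^{-1} alone is used. Dense for clarity;
    the actual solver sparsifies A D^{-1} A at every level."""
    dinv = 1.0 / np.diag(D)
    if depth == 0:
        return dinv * b
    y = b + A @ (dinv * b)                       # (I + A D^{-1}) b
    A_next = A @ np.diag(dinv) @ A               # squared residual matrix
    z = inverse_chain_apply(D, A_next, y, depth - 1)
    return 0.5 * (dinv * b + z + dinv * (A @ z))  # 1/2 [D^{-1} b + (I + D^{-1}A) z]
```

Since the residual norm squares at each level, `depth` only needs to grow logarithmically in the condition number for the truncation error to become negligible.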
An Efficient Parallel Algorithm for Spectral Sparsification of Laplacian and SDDM Matrix Polynomials
For a "large" class of continuous probability density functions (p.d.f.s), we demonstrate that for every approximation parameter there is a mixture of discrete Binomial distributions (MDBD), built from a bounded number of distinct Binomial distributions, that approximates a discretized p.d.f. at every discretization point. We also give two efficient parallel algorithms for finding such an MDBD.

Moreover, we propose a sequential algorithm that, on input an MDBD inducing a discretized p.d.f., a matrix that is either Laplacian or SDDM, and an accuracy parameter, outputs in nearly linear time a spectral sparsifier of the associated matrix polynomial. This improves on the algorithm of Cheng et al. [CCLPT15], which has a larger running time. Furthermore, our algorithm is parallelizable: it runs in nearly linear work and poly-logarithmic depth. Our main algorithmic contribution is the first efficient parallel algorithm that, on input a continuous p.d.f. and a matrix as above, outputs a spectral sparsifier of the matrix polynomial whose coefficients approximate, component-wise, the discretized p.d.f.

Our results yield the first efficient parallel algorithm that runs in nearly linear work and poly-logarithmic depth and analyzes the long-term behaviour of Markov chains in non-trivial settings. In addition, we strengthen Peng and Spielman's [PS14] parallel SDD solver.
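The flavour of the MDBD approximation can be conveyed with a Bernstein-style toy construction: a discretized density on {0, 1/N, ..., 1} is matched by a mixture of Binomial(N, p) distributions on a grid of p values, weighted by the density. This is only an illustrative sketch, not the paper's MDBD construction, and `mdbd_approx`, `T`, and `N` are names chosen here.

```python
import math
import numpy as np

def binomial_pmf(N, p):
    """pmf of Binomial(N, p) as a length-(N+1) vector, via log-gamma."""
    k = np.arange(N + 1)
    log_choose = np.array([math.lgamma(N + 1) - math.lgamma(i + 1)
                           - math.lgamma(N - i + 1) for i in k])
    return np.exp(log_choose + k * np.log(p) + (N - k) * np.log(1 - p))

def mdbd_approx(f, N, T):
    """Approximate the discretized density f(k/N), k = 0..N, by a mixture
    of T Binomial(N, p_j) distributions with p_j on a midpoint grid of
    (0, 1), weighted by f(p_j) (a Bernstein-style sketch, not the
    paper's MDBD)."""
    ps = (np.arange(T) + 0.5) / T
    weights = np.array([f(p) for p in ps])
    weights = weights / weights.sum()
    return sum(w * binomial_pmf(N, p) for w, p in zip(weights, ps))
```

For a smooth density the mixture tracks the normalized discretization closely, which is the property the matrix-polynomial sparsifier exploits.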
Approximate Gaussian Elimination for Laplacians: Fast, Sparse, and Simple
We show how to perform sparse approximate Gaussian elimination for Laplacian
matrices. We present a simple, nearly linear time algorithm that approximates a
Laplacian by a matrix with a sparse Cholesky factorization, the version of
Gaussian elimination for symmetric matrices. This is the first nearly linear
time solver for Laplacian systems that is based purely on random sampling, and
does not use any graph theoretic constructions such as low-stretch trees,
sparsifiers, or expanders. The crux of our analysis is a novel concentration
bound for matrix martingales where the differences are sums of conditionally
independent variables.
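One step of the elimination described above can be sketched directly: pivoting out a vertex of a Laplacian removes its star and introduces a weighted clique on its neighbours (the Schur complement). The sketch below performs the exact dense step; the algorithm in the abstract keeps the factorization sparse by randomly sampling that clique instead, which this illustration does not attempt.

```python
import numpy as np

def eliminate_vertex(L, v):
    """One step of Gaussian elimination on a Laplacian L: return the
    Schur complement after eliminating vertex v. Combinatorially this
    removes the star of v and adds a weighted clique on its neighbours;
    the sparse algorithm samples that clique rather than adding it in
    full (this sketch does the exact dense step)."""
    n = L.shape[0]
    keep = [i for i in range(n) if i != v]
    d = L[v, v]
    # Schur complement: L[keep, keep] - L[keep, v] L[v, keep] / d
    return L[np.ix_(keep, keep)] - np.outer(L[keep, v], L[v, keep]) / d
```

The result is again a Laplacian (symmetric, zero row sums, non-positive off-diagonals), which is what lets elimination be iterated vertex by vertex.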
Probabilistic Spectral Sparsification In Sublinear Time
In this paper, we introduce a variant of spectral sparsification, called
probabilistic (ε, δ)-spectral sparsification. Roughly speaking, it preserves
the value of any cut up to a 1 ± ε multiplicative error and a δ-controlled
additive error. We show how to produce a probabilistic (ε, δ)-spectral
sparsifier with a near-linear number of edges in sub-linear time for
unweighted undirected graphs. This gives the fastest known sub-linear time
algorithms for several cut problems on unweighted undirected graphs, such as:
- A sub-linear time approximation algorithm for the sparsest cut problem and the balanced separator problem.
- A sub-linear time approximate minimum s-t cut algorithm with an additive error.
- …
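The role of the multiplicative and additive errors can be illustrated with the simplest randomized baseline: uniform edge sampling with reweighting, which preserves every cut value in expectation. This is a generic sketch for intuition only, not the paper's sub-linear time algorithm, and all names are chosen here.

```python
import random

def sample_graph(edges, p, rng):
    """Keep each edge independently with probability p and reweight
    by 1/p, so every cut value is preserved in expectation."""
    return [(u, v, w / p) for u, v, w in edges if rng.random() < p]

def cut_value(edges, S):
    """Total weight of edges crossing the cut (S, V \\ S)."""
    S = set(S)
    return sum(w for u, v, w in edges if (u in S) != (v in S))
```

Concentration gives a multiplicative error for large cuts, while small cuts are only controlled up to an additive term, mirroring the two error types in the definition above.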