172 research outputs found
An Efficient Parallel Algorithm for Spectral Sparsification of Laplacian and SDDM Matrix Polynomials
For a "large" class of continuous probability density functions (p.d.f.s), we demonstrate that for every δ > 0 there is a mixture of discrete Binomial distributions (MDBD) with a bounded number of distinct Binomial distributions that δ-approximates the discretized p.d.f. pointwise. Also, we give two efficient parallel algorithms to find such an MDBD.
Moreover, we propose a sequential algorithm that, on input an MDBD inducing a discretized p.d.f., a matrix that is either Laplacian or SDDM, and an approximation parameter ε, outputs in nearly linear time a spectral sparsifier of the associated matrix polynomial, where the Õ notation hides poly-logarithmic factors. This improves on the algorithm of Cheng et al. [CCLPT15].
Furthermore, our algorithm is parallelizable, running in nearly linear work and poly-logarithmic depth. Our main algorithmic contribution is the first efficient parallel algorithm that, on input a continuous p.d.f. and a Laplacian or SDDM matrix as above, outputs a spectral sparsifier of a matrix polynomial whose coefficients approximate, component-wise, the discretized p.d.f.
Our results yield the first efficient and parallel algorithm that runs in
nearly linear work and poly-logarithmic depth and analyzes the long term
behaviour of Markov chains in non-trivial settings. In addition, we strengthen Spielman and Peng's [PS14] parallel SDD solver.
An Alon-Boppana Type Bound for Weighted Graphs and Lowerbounds for Spectral Sparsification
We prove the following Alon-Boppana type theorem for general (not necessarily regular) weighted graphs: if G is an n-node weighted undirected graph of average combinatorial degree d (that is, G has dn/2 edges) and girth g, and if λ_1 ≤ λ_2 ≤ ... ≤ λ_n are the eigenvalues of the (non-normalized) Laplacian of G, then the ratio λ_n/λ_2 obeys a lower bound of Alon-Boppana type. (The Alon-Boppana theorem implies such a bound when G is unweighted and d-regular, provided the diameter is sufficiently large.)
Our result implies a lower bound for spectral sparsifiers. A graph H is a spectral ε-sparsifier of a graph G if (1-ε)·L_G ⪯ L_H ⪯ (1+ε)·L_G, where L_G is the Laplacian matrix of G and L_H is the Laplacian matrix of H. Batson, Spielman and Srivastava proved that for every G there is an ε-sparsifier H of average degree O(1/ε^2), where the edges of H are a (weighted) subset of the edges of G. Batson, Spielman and Srivastava also show that the bound on the average degree cannot be reduced below a certain constant times 1/ε^2 when G is a clique; our Alon-Boppana-type result implies that it cannot be reduced below a (smaller) constant times 1/ε^2 when G comes from a family of expanders of super-constant degree and super-constant girth. The method of Batson, Spielman and Srivastava proves a more general result, about sparsifying sums of rank-one matrices, and their method applies to an "online" setting. We show that for the online matrix setting their bound is tight, up to lower order terms.
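For concreteness, the ε-sparsifier definition above can be exercised numerically. The sketch below uses randomized effective-resistance sampling in the style of Spielman and Srivastava, not the deterministic Batson-Spielman-Srivastava method discussed here; all function names and the toy graph are ours.

```python
import numpy as np

def laplacian(n, edges):
    # Build the weighted graph Laplacian from (u, v, weight) triples.
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def sparsify(n, edges, num_samples, rng):
    # Sample edges with probability proportional to w_e * R_e
    # (leverage scores), reweighting so the expectation matches L_G.
    L = laplacian(n, edges)
    Lp = np.linalg.pinv(L)
    lev = np.array([w * (Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]) for u, v, w in edges])
    probs = lev / lev.sum()
    counts = rng.multinomial(num_samples, probs)
    return [(u, v, w * counts[i] / (num_samples * probs[i]))
            for i, (u, v, w) in enumerate(edges) if counts[i] > 0]

rng = np.random.default_rng(0)
n = 10
edges = [(u, v, 1.0) for u in range(n) for v in range(u + 1, n)]  # complete graph K_10
H = sparsify(n, edges, 4000, rng)
```

This toy run oversamples, so it demonstrates the spectral guarantee rather than a size reduction; in applications the sample count is on the order of n·log(n)/ε^2, far below the number of edges m.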
Domain Sparsification of Discrete Distributions Using Entropic Independence
We present a framework for speeding up the time it takes to sample from discrete distributions μ defined over subsets of size k of a ground set of n elements, in the regime where k is much smaller than n. We show that if one has access to estimates of marginals P_{S∼μ}[i ∈ S], then the task of sampling from μ can be reduced to sampling from related distributions ν supported on size k subsets of a ground set of only n^{1-α}·poly(k) elements. Here, 1/α ∈ [1, k] is the parameter of entropic independence for μ. Further, our algorithm only requires sparsified distributions ν that are obtained by applying a sparse (mostly 0) external field to μ, an operation that, for many distributions μ of interest, retains algorithmic tractability of sampling from ν. This phenomenon, which we dub domain sparsification, allows us to pay a one-time cost of estimating the marginals of μ, and in return reduce the amortized cost needed to produce many samples from the distribution μ, as is often needed in upstream tasks such as counting and inference.
For a wide range of distributions where α = Ω(1), our result reduces the domain size, and as a corollary, the cost-per-sample, by a poly(n) factor. Examples include monomers in a monomer-dimer system, non-symmetric determinantal point processes, and partition-constrained Strongly Rayleigh measures. Our work significantly extends the reach of prior work of Anari and Dereziński, who obtained domain sparsification for distributions with a log-concave generating polynomial (corresponding to α = 1). As a corollary of our new analysis techniques, we also obtain a less stringent requirement on the accuracy of marginal estimates even for the case of log-concave polynomials; roughly speaking, we show that a constant-factor approximation is enough for domain sparsification, improving over the O(1/k) relative error established in prior work.
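The "sparse external field" operation the abstract describes has a simple concrete form: tilt each set's probability by the product of per-element field values, so a mostly-zero field collapses the support onto a small ground set. A toy sketch, with names of our choosing:

```python
from itertools import combinations
from math import prod

def apply_external_field(mu, field):
    # Tilt: nu(S) is proportional to mu(S) * prod_{i in S} field[i].
    # A sparse (mostly-zero) field kills every set that touches a
    # zeroed-out element, shrinking the effective ground set.
    unnorm = {S: p * prod(field.get(i, 0.0) for i in S) for S, p in mu.items()}
    Z = sum(unnorm.values())
    return {S: w / Z for S, w in unnorm.items() if w > 0}

# mu: uniform over size-2 subsets of a 5-element ground set.
subsets = [frozenset(c) for c in combinations(range(5), 2)]
mu = {S: 1 / len(subsets) for S in subsets}

# Field supported on {0, 1, 2} only: nu lives on a 3-element ground set.
nu = apply_external_field(mu, {0: 1.0, 1: 1.0, 2: 1.0})
```

The paper's contribution is showing that, under entropic independence, samples from such reduced-domain distributions ν can be combined to recover samples from μ.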
Twice-Ramanujan Sparsifiers
We prove that every graph has a spectral sparsifier with a number of edges
linear in its number of vertices. As linear-sized spectral sparsifiers of
complete graphs are expanders, our sparsifiers of arbitrary graphs can be
viewed as generalizations of expander graphs.
In particular, we prove that for every d > 1 and every undirected, weighted graph G = (V, E, w) on n vertices, there exists a weighted graph H = (V, F, w̃) with at most ⌈d(n-1)⌉ edges such that for every x ∈ R^V, x^T·L_G·x ≤ x^T·L_H·x ≤ ((d+1+2√d)/(d+1-2√d))·x^T·L_G·x, where L_G and L_H are the Laplacian matrices of G and H, respectively. Thus, H approximates G spectrally at least as well as a Ramanujan expander with dn/2 edges approximates the complete graph. We give an elementary deterministic polynomial time algorithm for constructing H.
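The benchmark in this theorem, a Ramanujan expander approximating the complete graph, can be checked on a small example: the Petersen graph is 3-regular, and its nontrivial adjacency eigenvalues satisfy the Ramanujan bound 2√(d-1). A quick numerical check (our illustration, not the paper's construction):

```python
import numpy as np

def petersen_adjacency():
    # Outer 5-cycle on 0..4, inner pentagram on 5..9, spokes i -- i+5.
    A = np.zeros((10, 10))
    for i in range(5):
        for u, v in [(i, (i + 1) % 5),          # outer cycle edge
                     (5 + i, 5 + (i + 2) % 5),  # pentagram edge
                     (i, i + 5)]:               # spoke
            A[u, v] = A[v, u] = 1.0
    return A

eigs = np.sort(np.linalg.eigvalsh(petersen_adjacency()))
d = 3
# Ramanujan condition: all eigenvalues except d itself have
# absolute value at most 2 * sqrt(d - 1).
ramanujan = np.abs(eigs[:-1]).max() <= 2 * np.sqrt(d - 1) + 1e-9
```

The theorem above guarantees sparsifiers of arbitrary graphs whose quality matches this benchmark up to a factor of roughly two, hence "twice-Ramanujan".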
Density Independent Algorithms for Sparsifying k-Step Random Walks
We give faster algorithms for producing sparse approximations of the transition matrices of k-step random walks on undirected and weighted graphs. These transition matrices also form graphs, and arise as intermediate objects in a variety of graph algorithms. Our improvements are based on a better understanding of processes that sample such walks, as well as tighter bounds on key weights underlying these sampling processes. On a graph with n vertices and m edges, our algorithm produces a graph with about n log(n) edges that approximates the k-step random walk graph in about m + k^2 n log^4(n) time. In order to obtain this runtime bound, we also revisit "density independent" algorithms for sparsifying graphs whose runtime overhead is expressed only in terms of the number of vertices.
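To see why sparsifying the k-step walk graph matters, note that powering the transition matrix P = D^{-1}A densifies it even when A is very sparse. A minimal illustration on a path graph (our example, not the paper's algorithm):

```python
import numpy as np

def k_step_walk(A, k):
    # Transition matrix of the k-step random walk: P = D^{-1} A, then P^k.
    # Even when A is sparse, P^k is typically much denser; sparsifying
    # the k-step walk graph avoids materializing that dense object.
    P = A / A.sum(axis=1, keepdims=True)
    return np.linalg.matrix_power(P, k)

# Path graph on 5 vertices.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
P3 = k_step_walk(A, 3)
```

Here P3 has strictly more nonzero entries than A; on large graphs the k-step matrix can have up to n^2 nonzeros, which is what the sampling-based sparsifiers avoid.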
Quantum Algorithms for Matrix Problems and Machine Learning
This dissertation presents a study of quantum algorithms for problems that can be posed as matrix function tasks. In Chapter 1 we demonstrate a simple unifying framework for implementing smooth functions of matrices on a quantum computer. This framework captures a variety of problems that can be solved by evaluating properties of some function of a matrix, and we identify speedups over classical algorithms for some problem classes. The analysis combines ideas from the classical theory of function approximation with the quantum algorithmic primitive of implementing linear combinations of unitary operators.
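The classical notion underlying "functions of matrices": for symmetric A with eigendecomposition A = U·diag(λ)·U^T, one defines f(A) = U·diag(f(λ))·U^T. A small classical sketch of this definition (not the quantum linear-combination-of-unitaries implementation the chapter develops):

```python
import numpy as np

def matrix_function(A, f):
    # For symmetric A = U diag(lam) U^T, define f(A) = U diag(f(lam)) U^T.
    lam, U = np.linalg.eigh(A)
    return U @ np.diag(f(lam)) @ U.T

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
expA = matrix_function(A, np.exp)  # matrix exponential via the spectrum
```

The quantum framework evaluates properties of such f(A) without diagonalizing, using polynomial approximations to f.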
In Chapter 2 we continue this study by looking at the role of sparsity of input matrices in constructing efficient quantum algorithms. We show that classically pre-processing an input matrix by spectral sparsification can be profitable for quantum Hamiltonian simulation algorithms, without compromising the simulation error or complexity. Such preprocessing incurs a one time cost linear in the size of the matrix, but can be exploited to exponentially speed up subsequent subroutines such as inversion.
In Chapter 3, we give an application of this theory of matrix functions to the problem of estimating the Rényi entropy of an unknown quantum state. We combine matrix function techniques with mixed state quantum computation in the one-clean qubit model, and are able to bound the expected runtime of our algorithm in terms of the unknown target quantity.
In addition to the theme of analysing the complexity of our algorithms, we also identify instances that are of practical relevance, leading us to some problems of machine learning. In Chapter 4 we investigate kernel based learning methods using random features. We work
with the QRAM input model suitable for big data, and show how matrix functions and the quantum Fourier transform can be used to devise a quantum algorithm for sampling random features that are optimised for given input data and choice of kernel. We obtain a potential exponential speedup over the best known classical algorithm even without explicit assumptions of sparsity or low rank.
Finally in Chapter 5 we consider the technique of beamsearch decoding used in natural language processing. We work in the query model, and show how quantum search with advice can be used to construct a quantum search decoder that can find the optimal parse (which may for instance be a best translation, or text-to-speech transcript) at least quadratically faster than the best known classical algorithms, and obtain super-quadratic speedups in the expected runtime.Science and Engineering Research Board (Department of Science and Technology), Government of Indi
Quantum Speedup for Graph Sparsification, Cut Approximation and Laplacian Solving
Graph sparsification underlies a large number of algorithms, ranging from
approximation algorithms for cut problems to solvers for linear systems in the
graph Laplacian. In its strongest form, "spectral sparsification" reduces the
number of edges to near-linear in the number of nodes, while approximately
preserving the cut and spectral structure of the graph. In this work we
demonstrate a polynomial quantum speedup for spectral sparsification and many
of its applications. In particular, we give a quantum algorithm that, given a
weighted graph with n nodes and m edges, outputs a classical description of
an ε-spectral sparsifier in sublinear time Õ(√(mn)/ε). This contrasts with
the optimal classical complexity Õ(m). We also prove that our quantum
algorithm is optimal up to polylog-factors. The algorithm builds on a string
of existing results on sparsification, graph spanners, quantum algorithms for
shortest paths, and efficient constructions of k-wise independent random
strings. Our algorithm
implies a quantum speedup for solving Laplacian systems and for approximating a
range of cut problems such as min cut and sparsest cut.
Comment: v2: several small improvements to the text. An extended abstract will appear in FOCS'20; v3: corrected a minor mistake in Appendix.
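The k-wise independent random strings mentioned above generalize the textbook pairwise-independent hash family h_{a,b}(x) = (a·x + b) mod p: for two distinct inputs, the pair of outputs is exactly uniform over random seeds (a, b). A minimal demonstration (our example, not the paper's construction):

```python
from collections import Counter

# Family h_{a,b}(x) = (a*x + b) mod p, seeds (a, b) uniform over Z_p^2.
p = 7
x1, x2 = 1, 3  # two distinct inputs
pairs = Counter(((a * x1 + b) % p, (a * x2 + b) % p)
                for a in range(p) for b in range(p))
# Pairwise independence: each output pair occurs for exactly one seed,
# since the map (a, b) -> (h(x1), h(x2)) is an invertible linear map mod p.
```

Degree-(k-1) polynomials over a finite field give the k-wise analogue with short seeds, which is what sublinear-time algorithms exploit.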