On the construction of sparse matrices from expander graphs
We revisit the asymptotic analysis of the probabilistic construction of adjacency
matrices of expander graphs proposed in [4]. With better bounds we derive a
new, reduced sample complexity for the number of nonzeros per column of these
matrices, precisely $d = \mathcal{O}(\log_s(N/s))$, as opposed to
the standard $d = \mathcal{O}(\log(N/s))$. This gives insight into
why using a small $d$ performed well in numerical experiments involving such
matrices. Furthermore, we derive quantitative sampling theorems for our
constructions which show our construction outperforming the existing
state of the art. We also use our results to compare the performance of sparse
recovery algorithms where these matrices are used for linear sketching.
Comment: 28 pages, 4 figures
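The probabilistic construction discussed above can be sketched in a few lines: each of the $N$ columns receives exactly $d$ ones in uniformly random distinct rows, yielding the adjacency matrix of a random left $d$-regular bipartite graph. This is a minimal illustration using NumPy; the function name and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def sparse_expander_matrix(m, n, d, seed=None):
    """Sample an m x n binary matrix with exactly d ones per column,
    i.e. the adjacency matrix of a random left d-regular bipartite graph.
    With high probability such a matrix is a good expander."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n), dtype=np.int8)
    for j in range(n):
        rows = rng.choice(m, size=d, replace=False)  # d distinct neighbours
        A[rows, j] = 1
    return A

# Toy sizes; the abstract's point is that a small d already suffices.
A = sparse_expander_matrix(m=64, n=256, d=8, seed=0)
```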
On Constructing Spanners from Random Gaussian Projections
Graph sketching is a powerful paradigm for analyzing graph structure via linear measurements, introduced by Ahn, Guha, and McGregor (SODA'12), that has since found numerous applications in streaming, distributed computing, and massively parallel algorithms, among others. Graph sketching has proven quite successful for various problems such as connectivity, minimum spanning trees, edge or vertex connectivity, and cut or spectral sparsifiers. Yet the problem of approximating the shortest path metric of a graph, and specifically computing a spanner, is notably missing from this list of successes. This has turned the status of this fundamental problem into one of the most longstanding open questions in this area.
We present a partial explanation of this lack of success by proving a strong lower bound for a large family of graph sketching algorithms that encompasses prior work on spanners and many (but importantly not all) of the related cut-based problems mentioned above. Our lower bound matches the algorithmic bounds of the recent result of Filtser, Kapralov, and Nouri (SODA'21), up to lower order terms, for constructing spanners via the same graph sketching family. This establishes near-optimality of these bounds, at least restricted to this family of graph sketching techniques, and makes progress on a conjecture posed in the latter work.
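The linear-measurement paradigm described above rests on one key property of the AGM-style edge-incidence encoding: summing the (signed) incidence vectors of a vertex set S cancels all edges internal to S and leaves exactly the edges crossing the cut. A toy, fully materialized (non-sketched) illustration, with all names illustrative:

```python
import itertools
import numpy as np

def incidence_vectors(n, edges):
    """AGM-style encoding: vertex v gets a vector over all O(n^2) potential
    edge slots, with +1/-1 on edges incident to v (sign fixed by orientation)."""
    slots = {e: i for i, e in enumerate(itertools.combinations(range(n), 2))}
    F = np.zeros((n, len(slots)))
    for (u, v) in edges:
        i = slots[(min(u, v), max(u, v))]
        F[u, i], F[v, i] = 1, -1   # opposite signs cancel under summation
    return F, slots

# 4-cycle 0-1-2-3-0; summing vectors of S = {0, 1} zeroes the internal
# edge (0,1) and leaves the two edges crossing the cut.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
F, slots = incidence_vectors(4, edges)
cut = F[[0, 1]].sum(axis=0)
crossing = int(np.count_nonzero(cut))
```

Linear sketches compress these vectors with a random linear map; the cancellation above survives compression because the map is linear.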
Expander $\ell_0$-Decoding
We introduce two new algorithms, Serial-$\ell_0$ and Parallel-$\ell_0$, for
solving a large underdetermined linear system of equations $y = Ax \in \mathbb{R}^m$ when it is known that $x \in \mathbb{R}^n$ has at most $k < m$
nonzero entries and that $A$ is the adjacency matrix of an unbalanced left
$d$-regular expander graph. The matrices in this class are sparse and allow a
highly efficient implementation. A number of algorithms have been designed to
work exclusively under this setting, composing the branch of combinatorial
compressed-sensing (CCS).
Serial-$\ell_0$ and Parallel-$\ell_0$ iteratively minimise $\|y - A\hat{x}\|_0$ by successfully combining two desirable features of previous CCS
algorithms: the information-preserving strategy of ER, and the parallel
updating mechanism of SMP. We are able to link these elements and guarantee
convergence in $\mathcal{O}(dn \log k)$ operations by assuming that the signal
is dissociated, meaning that all of the $2^k$ subset sums of the support of $x$
are pairwise different. However, we observe empirically that the signal need
not be exactly dissociated in practice. Moreover, we observe Serial-$\ell_0$
and Parallel-$\ell_0$ to be able to solve large-scale problems with a larger
fraction of nonzeros than other algorithms when the number of measurements is
substantially less than the signal length; in particular, they are able to
reliably solve for a $k$-sparse vector from $m$ expander
measurements with $n/m = 10^3$ and $k/m$ up to four times greater than what is
achievable by $\ell_1$-regularization from dense Gaussian measurements.
Additionally, Serial-$\ell_0$ and Parallel-$\ell_0$ are observed to be able to
solve large problem sizes in substantially less time than other algorithms for
compressed sensing. In particular, Parallel-$\ell_0$ is structured to take
advantage of massively parallel architectures.
Comment: 14 pages, 10 figures
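The dissociated-signal assumption underlying the convergence guarantee above is easy to state in code. This brute-force check, which enumerates all subset sums of the support, is purely illustrative (it is exponential in the support size and is not part of the paper's algorithms):

```python
import itertools

def is_dissociated(values):
    """Check the dissociated-signal assumption: all subset sums of the
    nonzero values of x must be pairwise different. Brute force, O(2^k)."""
    support = [v for v in values if v != 0]
    sums = [sum(s) for r in range(len(support) + 1)
            for s in itertools.combinations(support, r)]
    return len(sums) == len(set(sums))
```

For example, supports drawn from powers of two are dissociated, while any repeated value, or a value equal to a sum of others, breaks the property.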
Restricted Isometry Property for General p-Norms
The Restricted Isometry Property (RIP) is a fundamental property of a matrix
which enables sparse recovery. Informally, an $m \times n$ matrix satisfies RIP
of order $k$ for the $\ell_p$ norm if $\|Ax\|_p \approx \|x\|_p$ for every
vector $x$ with at most $k$ non-zero coordinates.
For every $1 \le p < \infty$ we obtain almost tight bounds on the minimum
number of rows $m$ necessary for the RIP property to hold. Prior to this work,
only the cases $p = 1$, $p = 1 + 1/\log k$, and $p = 2$ were studied. Interestingly,
our results show that the case $p = 2$ is a "singularity" point: the optimal
number of rows $m$ is $\widetilde{\Theta}(k^p)$ for all $p \in [1, \infty) \setminus \{2\}$, as opposed to $\widetilde{\Theta}(k)$ for $p = 2$.
We also obtain almost tight bounds for the column sparsity of RIP matrices
and discuss implications of our results for the Stable Sparse Recovery problem.
Comment: An extended abstract of this paper is to appear at the 31st
International Symposium on Computational Geometry (SoCG 2015)
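The informal RIP condition above can be probed empirically: sample random $k$-sparse vectors and record how far $\|Ax\|_p / \|x\|_p$ strays from 1. A minimal NumPy sketch (function name and parameters are illustrative; sampling random sparse vectors only gives evidence, not a certificate, since RIP quantifies over all $k$-sparse $x$):

```python
import numpy as np

def rip_ratios(A, k, p, trials=200, seed=0):
    """Empirically probe ||Ax||_p / ||x||_p over random k-sparse vectors x.
    Returns the smallest and largest observed ratio."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    ratios = []
    for _ in range(trials):
        x = np.zeros(n)
        idx = rng.choice(n, size=k, replace=False)   # random support
        x[idx] = rng.standard_normal(k)
        ratios.append(np.linalg.norm(A @ x, ord=p) / np.linalg.norm(x, ord=p))
    return min(ratios), max(ratios)

# For p = 2, a dense Gaussian matrix scaled by 1/sqrt(m) concentrates
# these ratios near 1 once m is around k log(n/k).
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 400)) / np.sqrt(100)
lo, hi = rip_ratios(A, k=5, p=2)
```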
Expander Decomposition in Dynamic Streams
In this paper we initiate the study of expander decompositions of a graph $G = (V, E)$
in the streaming model of computation. The goal is to find a
partitioning $\mathcal{C}$ of the vertices $V$ such that the subgraphs of $G$
induced by the clusters $C \in \mathcal{C}$ are good expanders, while the
number of intercluster edges is small. Expander decompositions are classically
constructed by recursively applying balanced sparse cuts to the input graph.
In this paper we give the first implementation of such a recursive sparsest cut
process using small space in the dynamic streaming model.
Our main algorithmic tool is a new type of cut sparsifier that we refer to as
a power cut sparsifier: it preserves cuts in any given vertex-induced subgraph
(or, any cluster in a fixed partition of $V$) to within a $(\delta, \epsilon)$-multiplicative/additive error with high probability. The power cut
sparsifier uses $\widetilde{O}(n/\epsilon\delta)$ space and edges, which we show is
asymptotically tight up to polylogarithmic factors in $n$ for constant
$\epsilon$.
Comment: 31 pages, 0 figures, to appear in ITCS 2023
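The classical recursive sparsest-cut process mentioned above can be illustrated offline (this toy is not streaming and uses a spectral Fiedler-vector split as a stand-in for a true balanced sparsest-cut subroutine; the conductance threshold and all names are illustrative):

```python
import numpy as np

def conductance(A, S):
    """phi(S) = cut(S, V\\S) / min(vol(S), vol(V\\S)) for adjacency matrix A."""
    S = np.asarray(sorted(S))
    T = np.setdiff1d(np.arange(A.shape[0]), S)
    cut = A[np.ix_(S, T)].sum()
    deg = A.sum(axis=1)
    return cut / max(min(deg[S].sum(), deg[T].sum()), 1)

def decompose(A, nodes, phi=0.3):
    """Toy recursive decomposition: split along the Fiedler vector while a
    cut of conductance below phi exists; otherwise declare an expander."""
    if len(nodes) <= 1:
        return [nodes]
    sub = A[np.ix_(nodes, nodes)]
    L = np.diag(sub.sum(axis=1)) - sub          # graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    f = vecs[:, 1]                              # Fiedler vector
    S = [i for i in range(len(nodes)) if f[i] < np.median(f)]
    if not S or len(S) == len(nodes) or conductance(sub, S) >= phi:
        return [nodes]                          # already a good expander
    left = [nodes[i] for i in S]
    right = [v for v in nodes if v not in left]
    return decompose(A, left, phi) + decompose(A, right, phi)

# Two triangles joined by a single edge: the decomposition cuts that edge.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
clusters = decompose(A, list(range(6)))
```

The paper's contribution is doing this recursion with small space under dynamic (insertion/deletion) streams, which the power cut sparsifier makes possible.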
On Weighted Graph Sparsification by Linear Sketching
A seminal work of [Ahn-Guha-McGregor, PODS'12] showed that one can compute a
cut sparsifier of an unweighted undirected graph by taking a near-linear number
of linear measurements on the graph. Subsequent works also studied computing
other graph sparsifiers using linear sketching, and obtained near-linear upper
bounds for spectral sparsifiers [Kapralov-Lee-Musco-Musco-Sidford, FOCS'14] and
the first non-trivial upper bounds for spanners [Filtser-Kapralov-Nouri, SODA'21].
All these linear sketching algorithms, however, only work on unweighted graphs.
In this paper, we initiate the study of weighted graph sparsification by
linear sketching by investigating a natural class of linear sketches that we
call incidence sketches, in which each measurement is a linear combination of
the weights of edges incident on a single vertex. Our results are:
1. Weighted cut sparsification: We give an algorithm that computes a $(1+\epsilon)$-cut sparsifier using $\widetilde{O}(n \epsilon^{-3})$ linear
measurements, which is nearly optimal.
2. Weighted spectral sparsification: We give an algorithm that computes a $(1+\epsilon)$-spectral sparsifier using $\widetilde{O}(n^{6/5} \mathrm{poly}(1/\epsilon))$
linear measurements. Complementing our algorithm, we then prove a superlinear
lower bound of $\Omega(n^{21/20 - o(1)})$ measurements for computing some
$O(1)$-spectral sparsifier using incidence sketches.
3. Weighted spanner computation: We focus on graphs whose largest/smallest
edge weights differ by an $O(\mathrm{poly}(n))$ factor, and prove that, for incidence
sketches, the upper bounds obtained by [Filtser-Kapralov-Nouri, SODA'21] are
optimal up to lower-order factors.
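The incidence-sketch measurement class defined above is simple to demonstrate: every measurement is a linear combination of the weights of edges touching one vertex. A toy example (all names illustrative); with all-ones coefficients the measurement is just the weighted degree:

```python
def incidence_measurement(weights, incident_edges, coeffs):
    """One incidence-sketch measurement: a linear combination of the
    weights of edges incident on a single vertex."""
    return sum(c * weights[e] for e, c in zip(incident_edges, coeffs))

# Toy weighted triangle on 3 vertices.
weights = {(0, 1): 2.0, (1, 2): 5.0, (0, 2): 1.0}
incident = {v: [e for e in weights if v in e] for v in range(3)}

# All-ones coefficients recover the weighted degree of each vertex.
wdeg = {v: incidence_measurement(weights, incident[v],
                                 [1.0] * len(incident[v]))
        for v in range(3)}
```

Real incidence sketches use far fewer, randomized measurements per vertex; the point of the definition is only that each measurement is local to a single vertex's incident edges.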