Similarity-Aware Spectral Sparsification by Edge Filtering
In recent years, spectral graph sparsification techniques that compute
ultra-sparse graph proxies have been extensively studied for accelerating
various numerical and graph-related applications. Prior nearly-linear-time
spectral sparsification methods first extract a low-stretch spanning tree
from the original graph to form the backbone of the sparsifier, and then
recover a small portion of spectrally critical off-tree edges to
significantly improve the approximation quality. However, it is not clear
how many off-tree edges should be recovered to achieve a desired spectral
similarity level in the sparsifier. Motivated by recent graph signal
processing techniques, this paper proposes a similarity-aware spectral graph
sparsification framework that leverages efficient spectral off-tree edge
embedding and filtering schemes to construct spectral sparsifiers with a
guaranteed spectral similarity (relative condition number) level. An
iterative graph densification scheme is introduced to facilitate efficient
and effective filtering of off-tree edges for highly ill-conditioned
problems. The proposed method has been validated on various graphs obtained
from public-domain sparse matrix collections relevant to VLSI CAD and finite
element analysis, as well as on social and data networks frequently studied
in machine learning and data mining applications.
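The tree-plus-off-tree-edges pipeline described in this abstract can be sketched in a few lines. This is an illustrative toy only, not the authors' implementation: a BFS tree stands in for a low-stretch spanning tree, and exact effective resistance via the Laplacian pseudoinverse stands in for the paper's fast spectral edge embedding; the graph and the one-edge recovery budget are made up for the example.

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an unweighted undirected graph."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    return L

def bfs_tree(n, edges):
    """BFS spanning tree edges (a stand-in for a low-stretch spanning tree)."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, tree, queue = {0}, [], [0]
    while queue:
        u = queue.pop(0)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                queue.append(v)
    return tree

def effective_resistance(L, u, v):
    """Effective resistance between u and v via the Laplacian pseudoinverse."""
    Lp = np.linalg.pinv(L)
    return Lp[u, u] + Lp[v, v] - 2.0 * Lp[u, v]

# Toy graph: an 8-node ring plus two chords.
n = 8
edges = [(i, (i + 1) % n) for i in range(n)] + [(0, 4), (2, 6)]
tree = bfs_tree(n, edges)
off_tree = [e for e in edges
            if e not in tree and (e[1], e[0]) not in tree]

# Rank off-tree edges by their effective resistance in the tree sparsifier;
# high-resistance edges are the spectrally critical ones worth recovering.
L_tree = laplacian(n, tree)
ranked = sorted(off_tree,
                key=lambda e: -effective_resistance(L_tree, *e))
sparsifier = tree + ranked[:1]  # recover the most critical off-tree edge
print(len(tree), len(off_tree), len(sparsifier))  # → 7 3 8
```

The similarity-aware part of the paper is precisely about deciding how many of the ranked edges to keep; here the budget of one is arbitrary.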
Towards Scalable Spectral Clustering via Spectrum-Preserving Sparsification
Eigenvalue decomposition of Laplacian matrices of large nearest-neighbor (NN) graphs is the major computational bottleneck in spectral clustering (SC). To fundamentally address this computational challenge in SC, we propose a scalable spectral sparsification framework that makes it possible to construct nearly-linear-sized ultra-sparse NN graphs with guaranteed preservation of key eigenvalues and eigenvectors of the original Laplacian. The proposed method is based on recent theoretical results in spectral graph theory and thus can robustly handle general undirected graphs. By leveraging a nearly-linear-time spectral graph topology sparsification phase and a subgraph scaling phase via stochastic gradient descent (SGD) iterations, our approach allows computing tree-like NN graphs that can serve as high-quality proxies of the original NN graphs, leading to highly scalable and accurate SC of large data sets. Our extensive experimental results on a variety of public-domain data sets show dramatically improved performance compared with state-of-the-art SC methods.
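As a concrete reminder of what "preserving key eigenvalues and eigenvectors" buys, the core SC step partitions a graph by the sign pattern of the Fiedler (second-smallest-eigenvalue) eigenvector of its Laplacian, so a spectrum-preserving sparsifier must keep that eigenvector intact for the clusters to survive. A minimal toy, using the dense eigendecomposition that is the very bottleneck the paper targets; the two-clique graph is made up for the example:

```python
import numpy as np

# Toy NN-style graph: two 4-node cliques joined by a single bridge edge.
n = 8
A = np.zeros((n, n))
for group in (range(0, 4), range(4, 8)):
    for i in group:
        for j in group:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0          # bridge between the two cliques

L = np.diag(A.sum(axis=1)) - A   # graph Laplacian

# Spectral clustering step: the sign of the Fiedler vector (eigenvector of
# the second-smallest eigenvalue) splits the graph into its two clusters.
w, V = np.linalg.eigh(L)         # eigenvalues in ascending order
labels = (V[:, 1] > 0).astype(int)
print(labels)
```

On this graph the labels separate nodes 0-3 from nodes 4-7 (the overall sign of the eigenvector, and hence which cluster is labeled 1, is arbitrary). The paper's contribution is making the `eigh` step feasible at scale by running it on an ultra-sparse proxy whose bottom eigenpairs match these.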
Data driven approach to sparsification of reaction diffusion complex network systems
Graph sparsification is an area of interest in computer science and applied
mathematics. Sparsification of a graph, in general, aims to reduce the number
of edges in the network while preserving specific properties of the graph, like
cuts and subgraph counts. Computing the sparsest cuts of a graph is known to be
NP-hard, and sparsification routines exist for generating linear-sized
sparsifiers in almost-quadratic running time.
Consequently, obtaining a sparsifier can be a computationally demanding task
and the complexity varies based on the level of sparsity required. In this
study, we extend the concept of sparsification to the realm of
reaction-diffusion complex systems. We aim to address the challenge of reducing
the number of edges in the network while preserving the underlying flow
dynamics. To tackle this problem, we adopt a relaxed approach considering only
a subset of trajectories. We map the network sparsification problem to a data
assimilation problem on a Reduced Order Model (ROM) space with constraints
targeted at preserving the eigenmodes of the Laplacian matrix under
perturbations. The Laplacian matrix (L) is the difference between the
diagonal matrix of degrees (D) and the graph's adjacency matrix (A), i.e.,
L = D - A. We
propose approximations to the eigenvalues and eigenvectors of the Laplacian
matrix subject to perturbations for computational feasibility and include a
custom function based on these approximations as a constraint on the data
assimilation framework. We demonstrate the extension of our framework to
achieve sparsity in parameter sets for Neural Ordinary Differential Equations
(neural ODEs).
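For reference, the Laplacian construction L = D - A and the effect of an edge perturbation on its spectrum, which is the quantity the constraint above is designed to control, look like this in a toy setting; the 5-node cycle and single-edge deletion are purely illustrative and not from the paper:

```python
import numpy as np

# Adjacency matrix A of a 5-node cycle.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

D = np.diag(A.sum(axis=1))   # diagonal matrix of degrees
L = D - A                    # Laplacian: L = D - A

eigvals = np.linalg.eigvalsh(L)   # ascending; eigvals[0] == 0 (connected)

# Perturb the network by deleting one edge and measure the eigenvalue
# drift that an eigenmode-preserving sparsifier tries to keep small.
Ap = A.copy()
Ap[0, 1] = Ap[1, 0] = 0.0
Lp = np.diag(Ap.sum(axis=1)) - Ap
drift = np.abs(np.linalg.eigvalsh(Lp) - eigvals)
print(drift.max())
```

The paper's data-assimilation formulation replaces this exact recomputation with approximations to the perturbed eigenvalues and eigenvectors, used as a constraint in the ROM space.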
- …