Sparse Matrix Factorization
We investigate the problem of factorizing a matrix into several sparse
matrices and propose an algorithm for this under randomness and sparsity
assumptions. This problem can be viewed as a simplification of the deep
learning problem where finding a factorization corresponds to finding edges in
different layers and values of hidden units. We prove that under certain
assumptions, for a sparse linear deep network with $n$ nodes in each layer, our
algorithm is able to recover the structure of the network and the values of the
top-layer hidden units for depths up to $\tilde{O}(n^{1/6})$. We further discuss the
relation among sparse matrix factorization, deep learning, sparse recovery, and
dictionary learning.
Comment: 20 pages
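
As a toy illustration of the problem setup described above (not the paper's recovery algorithm), one can generate a two-layer sparse linear network and form the observed product; the layer width and edge density below are arbitrary choices:

    import numpy as np
    from scipy import sparse

    rng = np.random.default_rng(0)
    n, density = 100, 0.05  # assumed layer width and edge density

    # Each sparse factor plays the role of one layer's edge weights.
    W1 = sparse.random(n, n, density=density, random_state=rng,
                       data_rvs=rng.standard_normal)
    W2 = sparse.random(n, n, density=density, random_state=rng,
                       data_rvs=rng.standard_normal)

    # Only the product is observed; sparse matrix factorization asks to
    # recover sparse W1 and W2 (up to the usual ambiguities) from M alone.
    M = (W1 @ W2).toarray()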
DOA Estimation in Partially Correlated Noise Using Low-Rank/Sparse Matrix Decomposition
We consider the problem of direction-of-arrival (DOA) estimation in unknown
partially correlated noise environments where the noise covariance matrix is
sparse. A sparse noise covariance matrix is a common model for a sparse array
of sensors consisting of several widely separated subarrays. Since interelement
spacing among sensors in a subarray is small, the noise in the subarray is in
general spatially correlated, while, due to large distances between subarrays,
the noise between them is uncorrelated. Consequently, the noise covariance
matrix of such an array has a block diagonal structure which is indeed sparse.
Moreover, in an ordinary nonsparse array, the small distance between adjacent
sensors causes noise coupling between neighboring sensors, whereas nonadjacent
sensors can be assumed to have spatially uncorrelated noise, which again makes
the array noise covariance matrix sparse. Utilizing some recently
available tools in low-rank/sparse matrix decomposition, matrix completion, and
sparse representation, we propose a novel method which can resolve possibly
correlated or even coherent sources in the aforementioned partly correlated
noise. In particular, when the sources are uncorrelated, our approach involves
solving a second-order cone program (SOCP), and if they are correlated or
coherent, one needs to solve a computationally harder convex program. We
demonstrate the effectiveness of the proposed algorithm by numerical
simulations and comparison to the Cramér-Rao bound (CRB).
Comment: in IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), 201
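
A minimal numerical sketch of the noise model just described (the subarray layout, steering model, and source parameters are illustrative assumptions, not values from the paper): the array covariance is a low-rank signal term plus a block-diagonal, hence sparse, noise covariance.

    import numpy as np
    from scipy.linalg import block_diag

    rng = np.random.default_rng(0)
    n_sub, sub_size = 3, 4   # assumed: 3 widely separated subarrays of 4 sensors
    M = n_sub * sub_size     # total number of sensors

    # Noise: correlated within each subarray, uncorrelated across subarrays,
    # so the noise covariance Q is block diagonal (and therefore sparse).
    def psd_block(k):
        G = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
        return G @ G.conj().T / k

    Q = block_diag(*[psd_block(sub_size) for _ in range(n_sub)])

    # Signal: K far-field sources give a rank-K term A diag(p) A^H
    # (half-wavelength steering used here purely for simplicity).
    thetas = np.deg2rad([-10.0, 25.0])   # assumed source DOAs
    A = np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(thetas)))
    p = np.array([1.0, 0.5])             # assumed source powers
    R = A @ np.diag(p) @ A.conj().T + Q  # low-rank plus sparse covariance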
Recovery of Low-Rank Plus Compressed Sparse Matrices with Application to Unveiling Traffic Anomalies
Given the superposition of a low-rank matrix plus the product of a known fat
compression matrix times a sparse matrix, the goal of this paper is to
establish deterministic conditions under which exact recovery of the low-rank
and sparse components becomes possible. This fundamental identifiability issue
arises with traffic anomaly detection in backbone networks, and subsumes
compressed sensing as well as the timely low-rank plus sparse matrix recovery
tasks encountered in matrix decomposition problems. Leveraging the ability of
the $\ell_1$- and nuclear norms to recover sparse and low-rank matrices, a convex
program is formulated to estimate the unknowns. Analysis and simulations
confirm that this convex program can recover the unknowns provided the low-rank
component has sufficiently low rank, the sparse component is sparse enough, and
the compression matrix possesses an isometry property when restricted to
operate on sparse vectors.
When the low-rank, sparse, and compression matrices are drawn from certain
random ensembles, it is established that exact recovery is possible with high
probability. First-order algorithms are developed to solve the nonsmooth convex
optimization problem with provable iteration complexity guarantees. Insightful
tests with synthetic and real network data corroborate the effectiveness of the
novel approach in unveiling traffic anomalies across flows and time, and its
ability to outperform existing alternatives.
Comment: 38 pages, submitted to the IEEE Transactions on Information Theory
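
Reading the model as Y = X + R A, with X low rank, A sparse, and R the known fat compression matrix, a hedged CVXPY sketch of the nuclear-plus-$\ell_1$ estimator might look as follows; the sizes, weight lam, and variable names are illustrative, not the paper's notation:

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    L, F, T, r, lam = 10, 30, 20, 2, 0.1  # assumed sizes, rank, and weight

    # Synthesize data consistent with the model Y = X0 + R @ A0.
    R = rng.standard_normal((L, F)) / np.sqrt(L)  # known fat matrix
    X0 = rng.standard_normal((L, r)) @ rng.standard_normal((r, T))
    A0 = rng.standard_normal((F, T)) * (rng.random((F, T)) < 0.05)
    Y = X0 + R @ A0

    # Nuclear norm promotes low rank; the l1 norm promotes sparsity.
    X = cp.Variable((L, T))
    A = cp.Variable((F, T))
    objective = cp.Minimize(cp.normNuc(X) + lam * cp.norm1(A))
    cp.Problem(objective, [Y == X + R @ A]).solve()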
Performance Analysis and Optimization of Sparse Matrix-Vector Multiplication on Modern Multi- and Many-Core Processors
This paper presents a low-overhead optimizer for the ubiquitous sparse
matrix-vector multiplication (SpMV) kernel. Architectural diversity among
different processors together with structural diversity among different sparse
matrices lead to bottleneck diversity. This justifies an SpMV optimizer that is
both matrix- and architecture-adaptive through runtime specialization. To this
end, we present an approach that first identifies the performance
bottlenecks of SpMV for a given sparse matrix on the target platform either
through profiling or by matrix property inspection, and then selects suitable
optimizations to tackle those bottlenecks. Our optimization pool is based on
the widely used Compressed Sparse Row (CSR) sparse matrix storage format and
has low preprocessing overheads, making our overall approach practical even in
cases where fast decision making and optimization setup are required. We
evaluate our optimizer on three x86-based computing platforms and demonstrate
that it is able to distinguish and appropriately optimize SpMV for the majority
of matrices in a representative test suite, leading to significant speedups
over the CSR and Inspector-Executor CSR SpMV kernels available in the latest
release of the Intel MKL library.
Comment: 10 pages, 7 figures, ICPP 201
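
To ground the discussion, here is a minimal reference SpMV over the CSR format in plain Python (the baseline kernel that such optimizers specialize; the naming is ours, not the paper's):

    import numpy as np

    def spmv_csr(row_ptr, col_idx, values, x):
        """y = A @ x for A stored in Compressed Sparse Row format."""
        n_rows = len(row_ptr) - 1
        y = np.zeros(n_rows)
        for i in range(n_rows):                          # one output per row
            for k in range(row_ptr[i], row_ptr[i + 1]):  # that row's nonzeros
                y[i] += values[k] * x[col_idx[k]]
        return y

    # 3x3 example: [[1, 0, 2], [0, 3, 0], [4, 0, 5]]
    row_ptr = np.array([0, 2, 3, 5])
    col_idx = np.array([0, 2, 1, 0, 2])
    values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    print(spmv_csr(row_ptr, col_idx, values, np.ones(3)))  # -> [3. 3. 9.]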
