Sparse Solution of Underdetermined Linear Equations via Adaptively Iterative Thresholding
Finding the sparsest solution of an underdetermined system of linear equations
has attracted considerable attention in recent years. Among a large
number of algorithms, iterative thresholding algorithms are recognized as one
of the most efficient and important classes of algorithms. This is mainly due
to their low computational complexities, especially for large scale
applications. The aim of this paper is to provide guarantees on the global
convergence of a wide class of iterative thresholding algorithms. Since the
thresholds of the considered algorithms are set adaptively at each iteration,
we call them adaptively iterative thresholding (AIT) algorithms. As the main
result, we show that as long as the measurement matrix satisfies a certain coherence property, AIT
algorithms can find the correct support set within finite iterations, and then
converge to the original sparse solution exponentially fast once the correct
support set has been identified. Meanwhile, we also demonstrate that AIT
algorithms are robust to the algorithmic parameters. In addition, it should be
pointed out that most of the existing iterative thresholding algorithms such as
hard, soft, half and smoothly clipped absolute deviation (SCAD) algorithms are
included in the class of AIT algorithms studied in this paper.
Comment: 33 pages, 1 figure
Computational Methods for Sparse Solution of Linear Inverse Problems
The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
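One of the best-known greedy methods in this family is orthogonal matching pursuit: repeatedly pick the dictionary atom most correlated with the current residual, then re-fit all chosen atoms by least squares. A minimal sketch (the random dictionary and all names are illustrative, not from the survey):

```python
import numpy as np

def omp(D, s, k):
    """Greedy k-term sparse approximation of signal s over dictionary D."""
    residual, support = s.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], s, rcond=None)
        residual = s - D[:, support] @ coef          # re-fit, update residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

For a signal that truly is a combination of a few incoherent atoms, k greedy steps typically recover it exactly; for general signals the result is a k-term approximation.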
Uniform Sampling for Matrix Approximation
Random sampling has become a critical tool in solving massive matrix
problems. For linear regression, a small, manageable set of data rows can be
randomly selected to approximate a tall, skinny data matrix, improving
processing time significantly. For theoretical performance guarantees, each row
must be sampled with probability proportional to its statistical leverage
score. Unfortunately, leverage scores are difficult to compute.
A simple alternative is to sample rows uniformly at random. While this often
works, uniform sampling will eliminate critical row information for many
natural instances. We take a fresh look at uniform sampling by examining what
information it does preserve. Specifically, we show that uniform sampling
yields a matrix that, in some sense, well approximates a large fraction of the
original. While this weak form of approximation is not enough for solving
linear regression directly, it is enough to compute a better approximation.
This observation leads to simple iterative row sampling algorithms for matrix
approximation that run in input-sparsity time and preserve row structure and
sparsity at all intermediate steps. In addition to an improved understanding of
uniform sampling, our main proof introduces a structural result of independent
interest: we show that every matrix can be made to have low coherence by
reweighting a small subset of its rows.
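The leverage-score sampling that the abstract contrasts with uniform sampling is easy to state concretely: row i is sampled with probability proportional to its leverage score, and kept rows are reweighted so the sketch is unbiased. A small sketch under illustrative conventions (the thin-QR route to exact scores and the 1/sqrt(num * p) reweighting are one standard choice, not code from the paper):

```python
import numpy as np

def leverage_scores(A):
    """Exact statistical leverage scores: squared row norms of a thin-QR Q.

    The scores sum to rank(A), so for a full-rank tall matrix they sum to
    the number of columns.
    """
    Q, _ = np.linalg.qr(A)           # reduced QR, Q has orthonormal columns
    return np.sum(Q ** 2, axis=1)

def leverage_sample(A, num, rng):
    """Sample num rows with replacement, proportional to leverage scores.

    Each kept row is rescaled by 1/sqrt(num * p_i) so that S.T @ S is an
    unbiased estimator of A.T @ A.
    """
    p = leverage_scores(A)
    p /= p.sum()
    idx = rng.choice(A.shape[0], size=num, p=p)
    return A[idx] / np.sqrt(num * p[idx])[:, None]
```

Uniform sampling corresponds to replacing `p` with the constant vector; it fails exactly when a few rows carry leverage far above average, which is the failure mode the abstract's reweighting result addresses.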