Matrix Recipes for Hard Thresholding Methods
In this paper, we present and analyze a new set of low-rank recovery
algorithms for linear inverse problems within the class of hard thresholding
methods. We provide strategies for setting up these algorithms from basic
ingredients in different configurations to achieve complexity vs. accuracy
tradeoffs. Moreover, we study acceleration schemes via memory-based techniques
and randomized, ϵ-approximate matrix projections to decrease the
computational costs in the recovery process. For most of the configurations, we
present theoretical analysis that guarantees convergence under mild problem
conditions. Simulation results demonstrate notable performance improvements as
compared to state-of-the-art algorithms both in terms of reconstruction
accuracy and computational complexity.
Comment: 26 pages.
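The workhorse iteration behind this class of methods is a gradient step on the data-fit term followed by a projection onto the set of rank-r matrices. Below is a minimal Python/NumPy sketch of that basic recipe, assuming the linear operator is given explicitly as a matrix acting on the vectorized unknown; the function names, fixed step size, and iteration count are illustrative, not the paper's tuned configurations:

```python
import numpy as np

def svd_hard_threshold(X, r):
    """Project X onto the set of matrices of rank at most r via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def matrix_iht(A, b, shape, r, step=1.0, n_iter=200):
    """Basic matrix iterative hard thresholding for b = A vec(X), rank(X) <= r.

    A is an (m, n1*n2) matrix acting on the vectorized unknown; the fixed
    step size stands in for the adaptive choices studied in the paper.
    """
    n1, n2 = shape
    X = np.zeros((n1, n2))
    for _ in range(n_iter):
        residual = A @ X.ravel() - b
        grad = (A.T @ residual).reshape(n1, n2)   # gradient of 0.5*||A vec(X) - b||^2
        X = svd_hard_threshold(X - step * grad, r)
    return X
```

The memory-based and randomized ϵ-approximate variants studied in the paper would add momentum terms and replace the exact SVD projection with a cheaper approximate one.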
Randomized Low-Memory Singular Value Projection
Affine rank minimization algorithms typically rely on calculating the
gradient of a data error followed by a singular value decomposition at every
iteration. Because these two steps are expensive, heuristic approximations are
often used to reduce computational burden. To this end, we propose a recovery
scheme that merges the two steps with randomized approximations, and as a
result, operates on space proportional to the degrees of freedom in the
problem. We theoretically establish the estimation guarantees of the algorithm
as a function of approximation tolerance. While the theoretical approximation
requirements are overly pessimistic, we demonstrate that in practice the
algorithm performs well on the quantum tomography recovery problem.
Comment: 13 pages. This version has a revised theorem and a new numerical experiment.
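A standard way to realize such randomized approximations is a randomized range finder in the spirit of Halko, Martinsson and Tropp: multiply by a thin Gaussian test matrix, orthonormalize, and compute a small SVD. The sketch below is illustrative (the function name and oversampling default are assumptions); the paper's scheme additionally merges this with the gradient step and keeps only low-rank factors to stay within space proportional to the degrees of freedom:

```python
import numpy as np

def randomized_rank_r_projection(X, r, oversample=10, seed=None):
    """Approximate projection of X onto rank-r matrices via a randomized
    range finder, avoiding a full SVD of X."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((X.shape[1], r + oversample))
    Q, _ = np.linalg.qr(X @ Omega)   # orthonormal basis for the approximate range
    B = Q.T @ X                      # small (r + oversample) x n2 factor
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return ((Q @ Ub[:, :r]) * s[:r]) @ Vt[:r]
```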
A variational approach to stable principal component pursuit
We introduce a new convex formulation for stable principal component pursuit
(SPCP) to decompose noisy signals into low-rank and sparse representations. For
numerical solutions of our SPCP formulation, we first develop a convex
variational framework and then accelerate it with quasi-Newton methods. We
show, via synthetic and real data experiments, that our approach offers
advantages over the classical SPCP formulations in scalability and practical
parameter selection.
Comment: 10 pages, 5 figures.
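For orientation, a classical SPCP-style decomposition can be computed with simple proximal steps: singular value thresholding for the low-rank part and entrywise soft thresholding for the sparse part. The sketch below alternates the two closed-form proximal updates on a penalized variant of the objective; it is not the paper's variational quasi-Newton method, and the default lambda is the common robust-PCA heuristic:

```python
import numpy as np

def soft_threshold(X, t):
    """Entrywise prox of t*||.||_1 at X."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    """Singular value thresholding: prox of t*||.||_* at X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def spcp_penalized(M, lam=None, mu=1.0, n_iter=300):
    """Alternating minimization of ||L||_* + lam*||S||_1 + (mu/2)*||M - L - S||_F^2.

    Each subproblem has a closed-form prox: SVT in L, soft thresholding in S.
    """
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))   # common robust-PCA default
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, 1.0 / mu)
        S = soft_threshold(M - L, lam / mu)
    return L, S
```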
Guarantees of Riemannian Optimization for Low Rank Matrix Completion
We study Riemannian optimization methods on the embedded manifold of low
rank matrices for the problem of matrix completion, i.e., recovering a
low rank matrix from a subset of its entries. Assume $m$ entries of an $n\times n$
rank $r$ matrix are sampled independently and uniformly with replacement. We
first prove that with high probability the Riemannian gradient descent and
conjugate gradient descent algorithms initialized by one step hard thresholding
are guaranteed to converge linearly to the measured matrix provided
\begin{align*} m\geq C_\kappa n^{1.5}r\log^{1.5}(n), \end{align*} where
$C_\kappa$ is a numerical constant depending on the condition number of the
underlying matrix. The sampling complexity has been further improved to
\begin{align*} m\geq C_\kappa nr^2\log^{2}(n) \end{align*} via the resampled
Riemannian gradient descent initialization. The analysis of the new
initialization procedure relies on an asymmetric restricted isometry property
of the sampling operator and the curvature of the low rank matrix manifold.
Numerical simulations show that the algorithms are able to recover a low rank
matrix from nearly the minimum number of measurements.
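A compact illustration of the algorithm being analyzed: hold the iterate in thin-SVD form, project the Euclidean gradient of the data-fit term onto the tangent space of the rank-$r$ manifold, take a step, and retract by truncated SVD. The sketch below uses dense linear algebra and a fixed step size, so it is only meant for small examples; the paper's versions use exact line search and the resampled initialization discussed above:

```python
import numpy as np

def truncated_svd(X, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r], s[:r], Vt[:r]

def riemannian_gd_completion(M_obs, mask, r, step=1.0, n_iter=500):
    """Riemannian gradient descent on the manifold of rank-r matrices.

    M_obs holds the observed entries (zeros elsewhere); mask is a boolean
    array marking observed positions. Dense linear algebra throughout.
    """
    p = mask.mean()                         # sampling rate
    U, s, Vt = truncated_svd(M_obs / p, r)  # one-step hard thresholding init
    X = (U * s) @ Vt
    for _ in range(n_iter):
        G = mask * (X - M_obs)              # Euclidean gradient of the data fit
        # Project G onto the tangent space at X = U diag(s) Vt:
        # P_T(G) = U U^T G + G V V^T - U U^T G V V^T.
        GV = G @ Vt.T
        xi = U @ (U.T @ G) + GV @ Vt - U @ (U.T @ GV) @ Vt
        # Retract back onto the manifold via truncated SVD.
        U, s, Vt = truncated_svd(X - step * xi, r)
        X = (U * s) @ Vt
    return X
```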
Structured random measurements in signal processing
Compressed sensing and its extensions have recently triggered interest in
randomized signal acquisition. A key finding is that random measurements
provide sparse signal reconstruction guarantees for efficient and stable
algorithms with a minimal number of samples. While this was first shown for
(unstructured) Gaussian random measurement matrices, applications require
a certain structure in the measurements, leading to structured random measurement
matrices. Near optimal recovery guarantees for such structured measurements
have been developed over the past years in a variety of contexts. This article
surveys the theory in three scenarios: compressed sensing (sparse recovery),
low rank matrix recovery, and phaseless estimation. The random measurement
matrices to be considered include random partial Fourier matrices, partial
random circulant matrices (subsampled convolutions), matrix completion, and
phase estimation from magnitudes of Fourier type measurements. The article
concludes with a brief discussion of the mathematical techniques for the
analysis of such structured random measurements.
Comment: 22 pages, 2 figures.
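Two of the structured ensembles surveyed here are easy to instantiate numerically: a random partial Fourier matrix (randomly selected DFT coefficients) and a partial random circulant matrix (a subsampled circular convolution with a random sign sequence), both applied in O(n log n) time via the FFT. A small sketch for real-valued signals, with illustrative function names and normalizations:

```python
import numpy as np

def partial_fourier_measurements(x, m, seed=None):
    """Random partial Fourier measurements: m randomly selected DFT coefficients."""
    rng = np.random.default_rng(seed)
    rows = rng.choice(len(x), size=m, replace=False)
    return np.fft.fft(x, norm="ortho")[rows]

def partial_circulant_measurements(x, m, seed=None):
    """Partial random circulant measurements (subsampled convolution): circularly
    convolve x with a random sign sequence via the FFT, then subsample."""
    rng = np.random.default_rng(seed)
    n = len(x)
    g = rng.choice([-1.0, 1.0], size=n)                     # random generator sequence
    conv = np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)).real  # circular convolution g * x
    rows = rng.choice(n, size=m, replace=False)
    return conv[rows]
```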
Alternating Projections and Douglas-Rachford for Sparse Affine Feasibility
The problem of finding a vector with the fewest nonzero elements that
satisfies an underdetermined system of linear equations is an NP-complete
problem that is typically solved numerically via convex heuristics or
nicely-behaved nonconvex relaxations. In this work we consider elementary
methods based on projections for solving a sparse feasibility problem without
employing convex heuristics. In a recent paper, Bauschke, Luke, Phan and Wang
(2014) showed that, locally, the fundamental method of alternating projections
must converge linearly to a solution to the sparse feasibility problem with an
affine constraint. In this paper we apply different analytical tools that allow
us to show global linear convergence of alternating projections under familiar
constraint qualifications. These analytical tools can also be applied to other
algorithms. This is demonstrated with the prominent Douglas-Rachford algorithm
where we establish local linear convergence of this method applied to the
sparse affine feasibility problem.
Comment: 29 pages, 2 figures, 37 references. Much expanded version from the last submission. Title changed to reflect new developments.
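Both projections in the sparse affine feasibility problem have simple closed forms: onto the sparsity set, keep the s largest-magnitude entries; onto the affine set {x : Ax = b}, apply the pseudoinverse correction. A minimal sketch of the alternating projections iteration under these assumptions (the initialization and iteration count are illustrative, and none of the paper's constraint qualifications are checked):

```python
import numpy as np

def project_sparse(x, s):
    """Projection onto {x : ||x||_0 <= s}: keep the s largest-magnitude entries."""
    y = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -s)[-s:]
    y[idx] = x[idx]
    return y

def project_affine(x, A, b, A_pinv):
    """Orthogonal projection onto the affine set {x : Ax = b}."""
    return x - A_pinv @ (A @ x - b)

def alternating_projections(A, b, s, n_iter=500):
    """Alternate between the sparsity set and the affine constraint set."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                  # least-norm point of the affine set
    for _ in range(n_iter):
        x = project_affine(project_sparse(x, s), A, b, A_pinv)
    return x
```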