A Simplified Approach to Recovery Conditions for Low Rank Matrices
Recovering sparse vectors and low-rank matrices from noisy linear
measurements has been the focus of much recent research. Various reconstruction
algorithms have been studied, including l1 and nuclear norm minimization as
well as lp minimization with p < 1. These algorithms are known to
succeed if certain conditions on the measurement map are satisfied. Proofs of
robust recovery for matrices have so far been much more involved than in the
vector case.
In this paper, we show how several robust classes of recovery conditions can
be extended from vectors to matrices in a simple and transparent way, leading
to the best known restricted isometry and nullspace conditions for matrix
recovery. Our results rely on the ability to "vectorize" matrices through the
use of a key singular value inequality.
Comment: 6 pages. This is a modified version of a paper submitted to ISIT
2011; Proc. Intl. Symp. Info. Theory (ISIT), Aug 2011
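The "vectorization" idea above can be illustrated numerically: the nuclear norm of a matrix is exactly the l1 norm of its singular-value vector, so conditions phrased for l1 vector recovery can be transported to the matrix setting. A minimal sketch (the function name `nuclear_norm` is ours, not from the paper):

```python
import numpy as np

def nuclear_norm(X):
    # Nuclear norm ||X||_* = sum of singular values,
    # i.e. the l1 norm of the singular-value vector sigma(X).
    return np.linalg.svd(X, compute_uv=False).sum()

# For a diagonal matrix the singular values are the absolute diagonal
# entries, so the nuclear norm is just their sum.
X = np.diag([3.0, 4.0])
sigma = np.linalg.svd(X, compute_uv=False)
assert np.isclose(nuclear_norm(X), np.linalg.norm(sigma, 1))
```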
Iterative Log Thresholding
Sparse reconstruction approaches using the re-weighted l1-penalty have been
shown, both empirically and theoretically, to provide a significant improvement
in recovering sparse signals in comparison to the l1-relaxation. However,
numerical optimization of such penalties involves solving problems with
l1-norms in the objective many times. Using the direct link of reweighted
l1-penalties to the concave log-regularizer for sparsity, we derive a simple
prox-like algorithm for the log-regularized formulation. The proximal splitting
step of the algorithm has a closed form solution, and we call the algorithm
'log-thresholding' in analogy to soft thresholding for the l1-penalty.
We establish convergence results, and demonstrate that log-thresholding
provides more accurate sparse reconstructions compared to both soft and hard
thresholding. Furthermore, the approach extends directly to optimization over
matrices with a rank-promoting penalty (i.e. the nuclear norm penalty and its
re-weighted version), where we suggest a singular-value log-thresholding
approach.
Comment: 5 pages, 4 figures
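For scalar arguments the contrast between the two prox operators can be written down directly. Below is a hedged sketch (our own naming and implementation, assuming a log penalty of the form lam*log(eps + |x|)): the prox reduces to comparing the positive roots of a quadratic against zero by objective value.

```python
import math

def soft_threshold(y, lam):
    # Prox of lam*|x|: sign(y) * max(|y| - lam, 0).
    return math.copysign(max(abs(y) - lam, 0.0), y)

def log_threshold(y, lam, eps):
    # Prox of the log penalty lam*log(eps + |x|):
    #   minimize 0.5*(x - y)**2 + lam*log(eps + |x|) over x.
    # Stationary points with sign(x) = sign(y) solve
    #   x**2 - (|y| - eps)*x + (lam - eps*|y|) = 0;
    # we compare the real positive roots (if any) against x = 0
    # by objective value and keep the minimizer.
    a = abs(y)
    f = lambda x: 0.5 * (x - a) ** 2 + lam * math.log(eps + x)
    candidates = [0.0]
    disc = (a - eps) ** 2 - 4.0 * (lam - eps * a)
    if disc >= 0.0:
        for r in ((a - eps + math.sqrt(disc)) / 2.0,
                  (a - eps - math.sqrt(disc)) / 2.0):
            if r > 0.0:
                candidates.append(r)
    return math.copysign(min(candidates, key=f), y)
```

Note how log-thresholding shrinks large entries less than soft thresholding (less bias) while still mapping small entries exactly to zero, which is the source of the accuracy gains claimed above.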
Schatten-p Quasi-Norm Regularized Matrix Optimization via Iterative Reweighted Singular Value Minimization
In this paper we study general Schatten-p quasi-norm (SPQN) regularized
matrix minimization problems. In particular, we first introduce a class of
first-order stationary points for them, and show that the first-order
stationary points introduced in [11] for an SPQN regularized vector
minimization problem are equivalent to those of an SPQN regularized matrix
minimization reformulation. We also show that any local minimizer of the SPQN
regularized matrix minimization problems must be a first-order stationary
point. Moreover, we derive lower bounds for nonzero singular values of the
first-order stationary points and hence also of the local minimizers of the
SPQN regularized matrix minimization problems. The iterative reweighted
singular value minimization (IRSVM) methods are then proposed to solve these
problems, whose subproblems are shown to have a closed-form solution. In
contrast to the analogous methods for the SPQN regularized vector
minimization problems, the convergence analysis of these methods is
significantly more challenging. We develop a novel approach to establishing the
convergence of these methods, which makes use of the expression of a specific
solution of their subproblems and avoids the intricate issue of finding the
explicit expression for the Clarke subdifferential of the objective of their
subproblems. In particular, we show that any accumulation point of the sequence
generated by the IRSVM methods is a first-order stationary point of the
problems. Our computational results demonstrate that the IRSVM methods
generally outperform some recently developed state-of-the-art methods in terms
of solution quality and/or speed.
Comment: This paper has been withdrawn by the author due to major revision and
correction
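To illustrate why closed-form subproblems make such methods attractive, here is a hedged sketch (our own simplification, not the paper's exact IRSVM update) of one reweighted singular-value shrinkage step: a weighted nuclear-norm subproblem is solved in closed form by soft-thresholding each singular value with its own weight.

```python
import numpy as np

def weighted_svt(Y, weights_fn, lam):
    # One illustrative reweighted singular-value shrinkage step:
    # take the SVD of Y, soft-threshold each singular value with its
    # own weight, and reconstruct. In reweighted schemes the weights
    # come from the previous iterate's singular values.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = weights_fn(s)                      # per-singular-value weights
    s_new = np.maximum(s - lam * w, 0.0)   # weighted soft-thresholding
    return U @ np.diag(s_new) @ Vt

# Hypothetical weight rule mimicking a Schatten-p style reweighting:
# smaller singular values receive larger weights (eps avoids division
# by zero), so they are shrunk more aggressively toward zero.
p, eps = 0.5, 1e-6
weights = lambda s: (s + eps) ** (p - 1.0)
```

With this weight rule, a single step suppresses small singular values much more strongly than large ones, driving the iterates toward low rank.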