MAGMA: Multi-level accelerated gradient mirror descent algorithm for large-scale convex composite minimization
Composite convex optimization models arise in several applications, and are
especially prevalent in inverse problems with a sparsity inducing norm and in
general convex optimization with simple constraints. The most widely used
algorithms for convex composite models are accelerated first order methods,
however they can take a large number of iterations to compute an acceptable
solution for large-scale problems. In this paper we propose to speed up first
order methods by taking advantage of the structure present in many applications
and in image processing in particular. Our method is based on multi-level
optimization methods and exploits the fact that many applications that give
rise to large scale models can be modelled using varying degrees of fidelity.
We use Nesterov's acceleration techniques together with the multi-level
approach to achieve an O(1/√ε) convergence rate, where ε
denotes the desired accuracy. The proposed method has a better
convergence rate than any other existing multi-level method for convex
problems, and in addition has the same rate as accelerated methods, which is
known to be optimal for first-order methods. Moreover, as our numerical
experiments show, on large-scale face recognition problems our algorithm is
several times faster than the state of the art.
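As a point of reference for the accelerated first-order methods the abstract builds on, here is a minimal Nesterov/FISTA-style iteration on a smooth least-squares objective. This is an illustrative baseline only, with all names chosen for the example; the multi-level (coarse-model) machinery that MAGMA adds on top is not reproduced here.

```python
import numpy as np

def accelerated_gradient(A, b, num_iters=200):
    """Nesterov-accelerated gradient descent on f(x) = 0.5*||Ax - b||^2.

    Illustrative baseline; MAGMA additionally builds lower-fidelity
    (coarse) models of the problem to reduce per-iteration cost.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()                           # extrapolated point
    t = 1.0
    for _ in range(num_iters):
        grad = A.T @ (A @ y - b)
        x_next = y - grad / L              # gradient step from the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)   # momentum/extrapolation
        x, t = x_next, t_next
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_hat = accelerated_gradient(A, b)
```

The extrapolation step is what yields the O(1/k²) objective decrease (equivalently, O(1/√ε) iterations to reach accuracy ε) instead of the O(1/k) rate of plain gradient descent.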
Successive Concave Sparsity Approximation for Compressed Sensing
In this paper, based on a successively accuracy-increasing approximation of
the ℓ0 norm, we propose a new algorithm for recovery of sparse vectors
from underdetermined measurements. The approximations are realized with a
certain class of concave functions that aggressively induce sparsity and their
closeness to the ℓ0 norm can be controlled. We prove that the series of
the approximations asymptotically coincides with the ℓ1 and ℓ0
norms when the approximation accuracy changes from the worst fitting to the
best fitting. When measurements are noise-free, an optimization scheme is
proposed which leads to a number of weighted ℓ1 minimization programs,
whereas, in the presence of noise, we propose two iterative thresholding
methods that are computationally appealing. A convergence guarantee for the
iterative thresholding method is provided, and, for a particular function in
the class of the approximating functions, we derive the closed-form
thresholding operator. We further present some theoretical analyses via the
restricted isometry, null space, and spherical section properties. Our
extensive numerical simulations indicate that the proposed algorithm closely
follows the performance of the oracle estimator for a range of sparsity levels
wider than those of the state-of-the-art algorithms.
Comment: Submitted to IEEE Trans. on Signal Processing
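As a rough illustration of the iterative-thresholding template such methods follow, here is a standard ISTA-style loop using the soft-thresholding (ℓ1 proximal) operator; in the paper's scheme, a closed-form operator derived for its concave penalty would take the place of `soft_threshold`. All names, parameters, and problem sizes below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(z, tau):
    """Soft-thresholding, the l1 proximal operator. The paper derives an
    analogous closed-form operator for one of its concave penalties."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def iterative_thresholding(A, b, lam=0.05, num_iters=2000):
    """Generic iterative-thresholding skeleton for recovering a sparse x
    from underdetermined measurements b = A x. Illustrative sketch only."""
    L = np.linalg.norm(A, 2) ** 2          # step-size scale (Lipschitz constant)
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        # gradient step on 0.5*||Ax - b||^2, then shrink toward sparsity
        x = soft_threshold(x + A.T @ (b - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # underdetermined: 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]             # 3-sparse ground truth
b = A @ x_true
x_hat = iterative_thresholding(A, b)
```

Each iteration costs only two matrix-vector products plus an elementwise threshold, which is why such methods are described as computationally appealing.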
Compressed matched filter for non-Gaussian noise
We consider estimation of a deterministic unknown parameter vector in a
linear model with non-Gaussian noise. In the Gaussian case, dimensionality
reduction via a linear matched filter provides a simple low dimensional
sufficient statistic which can be easily communicated and/or stored for future
inference. Such a statistic is usually unknown in the general non-Gaussian
case. Instead, we propose a hybrid matched filter coupled with a randomized
compressed sensing procedure, which together create a low dimensional
statistic. We also derive a complementary algorithm for robust reconstruction
given this statistic. Our recovery method is based on the fast iterative
shrinkage and thresholding algorithm which is used for outlier rejection given
the compressed data. We demonstrate the advantages of the proposed framework
using synthetic simulations.
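A hedged sketch of the FISTA-based outlier-rejection idea: model the measurements as b = A x + s + w with a sparse outlier vector s, and apply the shrinkage step only to s in each accelerated proximal iteration. This is an illustrative reconstruction under those assumptions, not the paper's exact compressed-statistic pipeline, and all names are invented for the example.

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def robust_recover(A, b, lam=0.5, num_iters=1000):
    """FISTA-style recovery of x from b = A x + s + noise, with sparse
    outliers s, by solving min_{x,s} 0.5*||A x + s - b||^2 + lam*||s||_1.
    Illustrative sketch only.
    """
    m, n = A.shape
    B = np.hstack([A, np.eye(m)])          # joint variable z = [x; s]
    L = np.linalg.norm(B, 2) ** 2
    z = np.zeros(n + m)
    y = z.copy()
    t = 1.0
    for _ in range(num_iters):
        grad = B.T @ (B @ y - b)
        z_next = y - grad / L
        z_next[n:] = soft_threshold(z_next[n:], lam / L)  # shrink only the outlier block
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = z_next + ((t - 1) / t_next) * (z_next - z)    # Nesterov extrapolation
        z, t = z_next, t_next
    return z[:n], z[n:]

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
b[[5, 30]] += 10.0                         # two gross (non-Gaussian) corruptions
x_hat, s_hat = robust_recover(A, b)
```

The ℓ1 penalty on s lets the gross corruptions be absorbed into the outlier estimate rather than biasing x, which is the sense in which the thresholding step performs outlier rejection.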