A new steplength selection for scaled gradient methods with application to image deblurring
Gradient methods are frequently used in large scale image deblurring problems
since they avoid the onerous computation of the Hessian matrix of the objective
function. Second order information is typically sought by a clever choice of
the steplength parameter defining the descent direction, as in the case of the
well-known Barzilai and Borwein rules. In a recent paper, a strategy for the
steplength selection approximating the inverse of some eigenvalues of the
Hessian matrix has been proposed for gradient methods applied to unconstrained
minimization problems. In the quadratic case, this approach is based on a
Lanczos process applied every m iterations to the matrix of the m most recent
back gradients, but the idea can be extended to a general objective function. In
this paper we extend this rule to the case of scaled gradient projection
methods applied to non-negatively constrained minimization problems, and we
test the effectiveness of the proposed strategy in image deblurring problems in
both the presence and the absence of an explicit edge-preserving regularization
term.
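To make the steplength idea concrete, here is a minimal sketch of a gradient projection method with a Barzilai-Borwein (BB1) steplength on a non-negatively constrained quadratic. The function name, test problem, initial steplength, and safeguard threshold are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bb_gradient_projection(A, b, x0, iters=500):
    """Gradient projection for min 0.5 x^T A x - b^T x s.t. x >= 0,
    with a BB1 steplength (A symmetric positive definite)."""
    x = np.maximum(x0.astype(float), 0.0)
    g = A @ x - b
    alpha = 1.0  # initial steplength (illustrative choice)
    for _ in range(iters):
        x_new = np.maximum(x - alpha * g, 0.0)  # project onto x >= 0
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        # BB1 rule: s^T s / s^T y is a Rayleigh-quotient estimate of an
        # inverse eigenvalue of the Hessian A -- cheap second-order
        # information without forming or inverting the Hessian.
        denom = s @ y
        if denom > 1e-12:  # safeguard against tiny/negative curvature
            alpha = (s @ s) / denom
        x, g = x_new, g_new
    return x

# Toy problem: A = diag(1, 2, 3), b = (1, 2, 3), minimizer x* = (1, 1, 1)
x = bb_gradient_projection(np.diag([1.0, 2.0, 3.0]),
                           np.array([1.0, 2.0, 3.0]), np.zeros(3))
```

The paper's Lanczos-based rule goes further, recovering several inverse Hessian eigenvalues at once from the stored back gradients rather than one Rayleigh quotient per step.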
A Singular Value Thresholding Algorithm for Matrix Completion
This paper introduces a novel algorithm to approximate the matrix with minimum
nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood
as the convex relaxation of a rank minimization problem and arises in many important
applications, such as the task of recovering a large matrix from a small subset of its entries (the
famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable
to large problems of this kind with over a million unknown entries. This paper develops a simple
first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in
which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices
{X^k,Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values
of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix
completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix;
the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both these facts allow
the algorithm to make use of very minimal storage space and keep the computational cost of each
iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence
of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000
matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate
that our approach is amenable to very large scale problems by recovering matrices of rank about
10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are
connected with the recent literature on linearized Bregman iterations for ℓ_1 minimization, and we
develop a framework in which one can understand these algorithms in terms of well-known Lagrange
multiplier algorithms.
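The core iteration described above can be sketched in a few lines. This is a toy illustration on a tiny matrix, not the paper's large-scale implementation (which exploits sparsity of Y^k and low rank of X^k for storage); the function names, parameter values, and test matrix are assumptions:

```python
import numpy as np

def shrink(Y, tau):
    """Soft-threshold the singular values of Y by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def svt_complete(M, mask, tau, delta, iters):
    """SVT-style iteration:
    X^k = shrink(Y^{k-1}, tau);  Y^k = Y^{k-1} + delta * P_Omega(M - X^k),
    where P_Omega keeps only the observed entries (mask == 1)."""
    Y = np.zeros_like(M, dtype=float)
    X = Y
    for _ in range(iters):
        X = shrink(Y, tau)
        Y = Y + delta * mask * (M - X)  # only observed entries drive Y
    return X

# Toy completion: rank-1 matrix with one hidden entry
M = np.outer([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
mask = np.ones((3, 3))
mask[2, 2] = 0.0  # entry (2, 2) is unobserved
X = svt_complete(M, mask, tau=5.0, delta=1.2, iters=5000)
```

Note that Y^k is supported only on the observed entries, which is exactly the sparsity the abstract highlights as one of the two features enabling minimal storage.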
Fast Image Recovery Using Variable Splitting and Constrained Optimization
We propose a new fast algorithm for solving one of the standard formulations
of image restoration and reconstruction which consists of an unconstrained
optimization problem whose objective includes a data-fidelity
term and a non-smooth regularizer. This formulation accommodates both wavelet-based
regularization (with orthogonal or frame-based representations) and
total-variation regularization. Our approach is based on a variable splitting
to obtain an equivalent constrained optimization formulation, which is then
addressed with an augmented Lagrangian method. The proposed algorithm is an
instance of the so-called "alternating direction method of multipliers", for
which convergence has been proved. Experiments on a set of image restoration
and reconstruction benchmark problems show that the proposed algorithm is
faster than current state-of-the-art methods.

Comment: Submitted; 11 pages, 7 figures, 6 tables
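A stripped-down sketch of the variable splitting / ADMM pattern the paper builds on, applied here to the simpler problem min_x 0.5||Ax - b||^2 + lam ||x||_1 with the split x = z (a toy stand-in for the wavelet or TV regularizers; all names and parameters are illustrative):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, b, lam, rho=1.0, iters=200):
    """ADMM for min 0.5||Ax - b||^2 + lam||x||_1 via the split x = z,
    i.e. the constrained form min 0.5||Ax - b||^2 + lam||z||_1 s.t. x = z,
    handled with an augmented Lagrangian (scaled dual variable u)."""
    n = A.shape[1]
    Atb = A.T @ b
    # Small-n sketch: explicit inverse of the quadratic subproblem matrix.
    # In imaging, this solve is where FFT/frame structure is exploited.
    L = np.linalg.inv(A.T @ A + rho * np.eye(n))
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = L @ (Atb + rho * (z - u))  # quadratic (data-fidelity) subproblem
        z = soft(x + u, lam / rho)     # proximal step for the regularizer
        u = u + x - z                  # multiplier (dual) update
    return z

# Toy check with A = I: the minimizer is soft(b, lam)
z = admm_l1(np.eye(2), np.array([3.0, 0.5]), lam=1.0)
```

The alternation is the point: each subproblem is easy on its own (a linear solve and a thresholding), whereas the original non-smooth objective is not.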
Sparse Recovery via Differential Inclusions
In this paper, we recover sparse signals from their noisy linear measurements
by solving nonlinear differential inclusions, which is based on the notion of
inverse scale space (ISS) developed in applied mathematics. Our goal here is to
bring this idea to address a challenging problem in statistics, \emph{i.e.}
finding the oracle estimator which is unbiased and sign-consistent using
dynamics. We call our dynamics \emph{Bregman ISS} and \emph{Linearized Bregman
ISS}. A well-known shortcoming of the LASSO and other convex regularization
approaches lies in the bias of their estimators. However, we show that under proper
conditions, there exists a bias-free and sign-consistent point on the solution
paths of such dynamics, which corresponds to a signal that is an unbiased
estimate of the true signal and whose entries have the same signs as the true
signal, \emph{i.e.} the oracle estimator. Therefore, their solution
paths are regularization paths better than the LASSO regularization path, since
the points on the latter path are biased when sign-consistency is reached. We
also show how to efficiently compute their solution paths in both continuous
and discretized settings: the full solution paths can be exactly computed piece
by piece, and a discretization leads to \emph{Linearized Bregman iteration},
which is a simple iterative thresholding rule and easy to parallelize.
Theoretical guarantees such as sign-consistency and minimax-optimal error
bounds are established in both continuous and discrete settings for specific
points on the paths. Early-stopping rules for identifying these points are
given. The key treatment relies on the development of differential inequalities
for differential inclusions and their discretizations, which extends the
previous results and leads to exponentially fast recovery of sparse signals
before any wrong ones are selected.

Comment: In Applied and Computational Harmonic Analysis, 201
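The Linearized Bregman iteration mentioned above is simple enough to sketch. Assuming a consistent system Ax = b and a unit stepsize (fine here since ||A|| = 1; in general the residual step must be scaled by roughly 1/||A||^2), one discretization reads:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, lam, iters=100):
    """Linearized Bregman iteration (a discretization of Bregman ISS):
    z^{k+1} = z^k + A^T (b - A x^k);  x^{k+1} = soft(z^{k+1}, lam)."""
    z = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = z + A.T @ (b - A @ x)  # accumulate the residual in a dual variable
        x = soft(z, lam)           # simple thresholding, easy to parallelize
    return x

# Toy check with A = I, b = (3, 0.5)
x = linearized_bregman(np.eye(2), np.array([3.0, 0.5]), lam=1.0)
```

In this toy run the iteration returns b itself: unlike the LASSO solution soft(b, lam) = (2, 0), the recovered point carries no shrinkage bias, which is exactly the contrast with the LASSO path drawn in the abstract.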