107 research outputs found
An iterative algorithm for sparse and constrained recovery with applications to divergence-free current reconstructions in magneto-encephalography
We propose an iterative algorithm for the minimization of an ℓ1-norm
penalized least squares functional, under additional linear constraints. The
algorithm is fully explicit: it uses only matrix multiplications with the three
matrices present in the problem (in the linear constraint, in the data misfit
part, and in the penalty term of the functional). None of the three matrices
needs to be invertible. Convergence is proven in a finite-dimensional setting. We apply the
algorithm to a synthetic problem in magneto-encephalography where it is used
for the reconstruction of divergence-free current densities subject to a
sparsity promoting penalty on the wavelet coefficients of the current
densities. We discuss the effects of imposing zero divergence and of imposing
joint sparsity (of the vector components of the current density) on the current
density reconstruction.
Comment: 21 pages, 3 figures
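The constrained algorithm itself is not reproduced in the abstract, but its core building block, iterative soft-thresholding for an ℓ1-penalized least squares functional, can be sketched as follows. This is a minimal unconstrained version; the function names and the fixed step size are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def soft_threshold(x, tau):
    """Componentwise soft-thresholding: the proximal map of tau*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    soft-thresholding with a fixed step size 1/||A||^2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data misfit, then soft-thresholding
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x
```

The additional linear constraint and the wavelet-domain penalty of the paper would replace the plain thresholding step; the soft-thresholding map above is the common ingredient.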
Variable metric inexact line-search based methods for nonsmooth optimization
We develop a new proximal-gradient method for minimizing the sum of a
differentiable, possibly nonconvex, function plus a convex, possibly
nondifferentiable, function. The key features of the proposed method are the
definition of a suitable descent direction, based on the proximal operator
associated to the convex part of the objective function, and an Armijo-like
rule to determine the step size along this direction ensuring the sufficient
decrease of the objective function. In this frame, we especially address the
possibility of adopting a metric which may change at each iteration and an
inexact computation of the proximal point defining the descent direction. For
the more general nonconvex case, we prove that all limit points of the
sequence of iterates are stationary, while for convex objective functions we prove the
convergence of the whole sequence to a minimizer, under the assumption that a
minimizer exists. In the latter case, assuming also that the gradient of the
smooth part of the objective function is Lipschitz, we also give a convergence
rate estimate, showing the O(1/k) complexity with respect to the function
values. We also discuss verifiable sufficient conditions for the inexact
proximal point, and we present the results of numerical experiments on a convex
total variation based image restoration problem, showing that the proposed
approach is competitive with another state-of-the-art method.
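As a rough illustration of the class of methods described above, here is a minimal sketch of a proximal-gradient iteration with an Armijo-like backtracking line search along the proximal descent direction, for the special case of an ℓ1 convex part, a fixed Euclidean metric, and exact proximal points. The paper's method is more general; all names and parameter values here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Componentwise soft-thresholding: the proximal map of tau*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_grad_linesearch(grad_f, f, lam, x0, alpha=1.0, beta=0.5, sigma=1e-4,
                         n_iter=100):
    """Minimize F(x) = f(x) + lam*||x||_1 with a proximal descent direction
    d = prox_{alpha*lam*||.||_1}(x - alpha*grad_f(x)) - x and an Armijo-like
    backtracking line search on the step size along d."""
    x = x0.astype(float)
    F = lambda z: f(z) + lam * np.abs(z).sum()
    for _ in range(n_iter):
        d = soft_threshold(x - alpha * grad_f(x), alpha * lam) - x
        # sufficient-decrease bound built from a model of F at x
        h = grad_f(x) @ d + lam * (np.abs(x + d).sum() - np.abs(x).sum())
        t = 1.0
        while F(x + t * d) > F(x) + sigma * t * h and t > 1e-12:
            t *= beta   # backtrack until sufficient decrease holds
        x = x + t * d
    return x
```

A variable metric would replace the Euclidean prox above with a scaled one, and an inexact inner solver would replace the closed-form soft-thresholding.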
Convergence analysis of a primal-dual optimization-by-continuation algorithm
We present a numerical iterative optimization algorithm for the minimization
of a cost function consisting of a linear combination of three convex terms,
one of which is differentiable, a second one is prox-simple and the third one
is the composition of a linear map and a prox-simple function. The algorithm's
special feature lies in its ability to approximate, in a single iteration run,
the minimizers of the cost function for many different values of the parameters
determining the relative weight of the three terms in the cost function. A
proof of convergence of the algorithm, based on an inexact variable metric
approach, is also provided. As a special case, one recovers a generalization of
the primal-dual algorithm of Chambolle and Pock, and also of the
proximal-gradient algorithm. Finally, we show how it is related to a
primal-dual iterative algorithm based on inexact proximal evaluations of the
non-smooth terms of the cost function.
Comment: 22 pages, 2 figures
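For reference, the basic (unaccelerated) Chambolle-Pock primal-dual iteration for min_x f(x) + g(Kx), of which the abstract's algorithm recovers a generalization, can be sketched as follows. The proximal operators are passed in as closures; all names are illustrative assumptions.

```python
import numpy as np

def chambolle_pock(prox_tau_f, prox_sigma_gstar, K, x0, tau, sigma, n_iter=500):
    """Basic primal-dual iteration for min_x f(x) + g(K x).
    prox_tau_f:        proximal operator of tau*f
    prox_sigma_gstar:  proximal operator of sigma*g* (convex conjugate of g)
    Convergence requires tau * sigma * ||K||^2 < 1."""
    x = x0.copy()
    xbar = x0.copy()
    y = np.zeros(K.shape[0])
    for _ in range(n_iter):
        y = prox_sigma_gstar(y + sigma * (K @ xbar))   # dual update
        x_new = prox_tau_f(x - tau * (K.T @ y))        # primal update
        xbar = 2.0 * x_new - x                         # extrapolation
        x = x_new
    return x
```

For example, with g = λ‖·‖₁ the dual prox is simply a clip to the ℓ∞-ball of radius λ; the continuation idea in the paper then amounts to tracing the minimizers over many values of λ in one run.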
Practical error estimates for sparse recovery in linear inverse problems
The effectiveness of using model sparsity as a priori information when
solving linear inverse problems is studied. We investigate the reconstruction
quality of such a method in the non-idealized case and compute some typical
recovery errors (depending on the sparsity of the desired solution, the number
of data, the noise level on the data, and various properties of the measurement
matrix); they are compared to known theoretical bounds and illustrated on a
magnetic tomography example.
Comment: 11 pages, 5 figures
Tomographic inversion using ℓ1-norm regularization of wavelet coefficients
We propose the use of ℓ1 regularization in a wavelet basis for the
solution of linearized seismic tomography problems, allowing for the
possibility of sharp discontinuities superimposed on a smoothly varying
background. An iterative method is used to find a sparse solution that
contains no more fine-scale structure than is necessary to fit the data to
within its assigned errors.
Comment: 19 pages, 14 figures. Submitted to GJI July 2006. This preprint does
not use GJI style files (which gives wrong received/accepted dates).
Corrected typos.
On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
We consider a variable metric linesearch based proximal gradient method for
the minimization of the sum of a smooth, possibly nonconvex function plus a
convex, possibly nonsmooth term. We prove convergence of this iterative
algorithm to a critical point if the objective function satisfies the
Kurdyka-Lojasiewicz property at each point of its domain, under the assumption
that a limit point exists. The proposed method is applied to a wide collection
of image processing problems, and our numerical tests show that the algorithm
is flexible, robust, and competitive when compared with recently proposed
approaches for the optimization problems arising in the considered
applications.
Wavelets and wavelet-like transforms on the sphere and their application to geophysical data inversion
Many flexible parameterizations exist to represent data on the sphere. In
addition to the venerable spherical harmonics, we have the Slepian basis,
harmonic splines, wavelets and wavelet-like Slepian frames. In this paper we
focus on the latter two: spherical wavelets developed for geophysical
applications on the cubed sphere, and the Slepian "tree", a new construction
that combines a quadratic concentration measure with wavelet-like
multiresolution. We discuss the basic features of these mathematical tools, and
illustrate their applicability in parameterizing large-scale global geophysical
(inverse) problems.
Comment: 15 pages, 11 figures, submitted to the Proceedings of the SPIE 2011
conference Wavelets and Sparsity XI
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints
Regularization of ill-posed linear inverse problems via ℓ1 penalization
has been proposed for cases where the solution is known to be (almost) sparse.
One way to obtain the minimizer of such an ℓ1-penalized functional is via
an iterative soft-thresholding algorithm. We propose an alternative
implementation using ℓ1-constraints: a gradient method with projection
onto ℓ1-balls. The corresponding algorithm again uses iterative
soft-thresholding, now with a variable thresholding parameter. We also propose
accelerated versions of this iterative method, using ingredients of the
(linear) steepest descent method. We prove convergence in norm for one of these
projected gradient methods, without and with acceleration.
Comment: 24 pages, 5 figures. v2: added reference, some amendments, 27 pages
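The projection onto an ℓ1-ball that this approach relies on can be computed exactly by a sort-based algorithm. Below is a minimal sketch of the projection together with a plain (unaccelerated) projected gradient loop; the fixed step size and the function names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1-ball of given radius,
    via sorting: find the soft-threshold level theta whose result
    has l1-norm exactly equal to the radius."""
    if np.abs(v).sum() <= radius:
        return v.copy()                       # already inside the ball
    u = np.sort(np.abs(v))[::-1]              # magnitudes, descending
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > cssv - radius)[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(A, y, radius, n_iter=300):
    """Minimize 0.5*||A x - y||^2 subject to ||x||_1 <= radius."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1_ball(x - step * A.T @ (A @ x - y), radius)
    return x
```

The projection acts as soft-thresholding with a data-dependent threshold theta, which is the "variable thresholding parameter" mentioned in the abstract.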