
    On starting and stopping criteria for nested primal-dual iterations

    The importance of an adequate inner loop starting point (as opposed to a sufficient inner loop stopping rule) is discussed in the context of a numerical optimization algorithm consisting of nested primal-dual proximal-gradient iterations. While the number of inner iterations is fixed in advance, convergence of the whole algorithm is still guaranteed by virtue of a warm-start strategy for the inner loop, showing that inner loop "starting rules" can be just as effective as "stopping rules" for guaranteeing convergence. The algorithm itself is applicable to the numerical solution of convex optimization problems defined by the sum of a differentiable term and two possibly non-differentiable terms. One of the latter terms should take the form of the composition of a linear map and a proximable function, while the differentiable term needs an accessible gradient. The algorithm reduces to the classical proximal gradient algorithm in certain special cases, and it also generalizes other existing algorithms. In addition, under some conditions of strong convexity, we show a linear rate of convergence. Comment: 18 pages, no figures.
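
    To make the warm-start idea concrete, below is a minimal numpy sketch on the model problem min_x 0.5||Ax - b||^2 + lam*||Dx||_1: the outer loop is a proximal-gradient step, the prox of lam*||D.||_1 is approximated by a fixed number of projected dual-ascent steps, and the dual variable p is carried over (warm-started) between outer iterations. All names and step-size choices are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def nested_pg(A, b, D, lam, tau, sigma, n_outer=200, n_inner=5):
    """Outer proximal-gradient loop with a warm-started, fixed-length inner
    dual loop approximating the prox of x -> lam * ||D x||_1 (sketch only)."""
    x = np.zeros(A.shape[1])
    p = np.zeros(D.shape[0])  # inner dual variable, kept across outer iterations
    for _ in range(n_outer):
        z = x - tau * (A.T @ (A @ x - b))              # forward (gradient) step
        for _ in range(n_inner):                       # fixed inner iteration count
            p = np.clip(p + sigma * (D @ (z - D.T @ p)), -tau * lam, tau * lam)
        x = z - D.T @ p                                # approximate prox evaluation
    return x
```

    Carrying p over means each inner loop starts close to the previous inner solution, which is precisely the "starting rule" the abstract contrasts with a stopping rule.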

    An iterative algorithm for sparse and constrained recovery with applications to divergence-free current reconstructions in magneto-encephalography

    We propose an iterative algorithm for the minimization of an ℓ1-norm penalized least squares functional under additional linear constraints. The algorithm is fully explicit: it uses only matrix multiplications with the three matrices present in the problem (in the linear constraint, in the data misfit part, and in the penalty term of the functional). None of the three matrices needs to be invertible. Convergence is proven in a finite-dimensional setting. We apply the algorithm to a synthetic problem in magneto-encephalography, where it is used for the reconstruction of divergence-free current densities subject to a sparsity-promoting penalty on the wavelet coefficients of the current densities. We discuss the effects of imposing zero divergence and of imposing joint sparsity (of the vector components of the current density) on the current density reconstruction. Comment: 21 pages, 3 figures.
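
    As a hedged illustration of such a fully explicit scheme, the sketch below uses a Condat-Vu-style primal-dual iteration for min_x 0.5||Kx - y||^2 + lam*||Wx||_1 subject to Bx = c: only products with K, W, B and their transposes appear, and nothing is inverted. It is an illustrative stand-in, not necessarily the paper's exact iteration, and the step sizes tau and sigma are left to the caller.

```python
import numpy as np

def constrained_l1_ls(K, y, W, B, c, lam, tau, sigma, n_iter=500):
    """Primal-dual sketch for min 0.5||Kx-y||^2 + lam*||Wx||_1  s.t.  Bx = c.
    Fully explicit: only multiplications by K, W, B and their transposes."""
    x = np.zeros(K.shape[1])
    p = np.zeros(W.shape[0])   # dual variable for the l1 term
    q = np.zeros(B.shape[0])   # dual variable for the equality constraint
    for _ in range(n_iter):
        x_new = x - tau * (K.T @ (K @ x - y) + W.T @ p + B.T @ q)
        x_bar = 2 * x_new - x                              # extrapolation
        p = np.clip(p + sigma * (W @ x_bar), -lam, lam)    # prox of (lam*||.||_1)*
        q = q + sigma * (B @ x_bar - c)                    # conjugate of the constraint
        x = x_new
    return x
```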

    Variable metric inexact line-search based methods for nonsmooth optimization

    We develop a new proximal-gradient method for minimizing the sum of a differentiable, possibly nonconvex, function plus a convex, possibly nondifferentiable, function. The key features of the proposed method are the definition of a suitable descent direction, based on the proximal operator associated with the convex part of the objective function, and an Armijo-like rule to determine the step size along this direction, ensuring sufficient decrease of the objective function. In this framework, we especially address the possibility of adopting a metric which may change at each iteration and an inexact computation of the proximal point defining the descent direction. For the more general nonconvex case, we prove that all limit points of the sequence of iterates are stationary, while for convex objective functions we prove convergence of the whole sequence to a minimizer, under the assumption that a minimizer exists. In the latter case, assuming also that the gradient of the smooth part of the objective function is Lipschitz continuous, we give a convergence rate estimate, showing O(1/k) complexity with respect to the function values. We also discuss verifiable sufficient conditions for the inexact proximal point, and we present the results of numerical experiments on a convex total-variation-based image restoration problem, showing that the proposed approach is competitive with another state-of-the-art method.
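
    The descent-direction-plus-linesearch mechanism can be sketched as follows for the special case where the convex term is lam*||x||_1, so the proximal operator is a soft threshold; f and grad_f are user-supplied callables, and the Armijo constants are assumed values for illustration.

```python
import numpy as np

def soft(v, t):                       # prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linesearch_pg(f, grad_f, lam, x0, alpha=1.0, beta=1e-4, delta=0.5, n_iter=100):
    """Proximal-gradient sketch with an Armijo-like search along the proximal
    descent direction, for min f(x) + lam*||x||_1 (f possibly nonconvex)."""
    x = x0.copy()
    g = lambda v: lam * np.sum(np.abs(v))
    for _ in range(n_iter):
        grad = grad_f(x)
        d = soft(x - alpha * grad, alpha * lam) - x      # descent direction
        Delta = grad @ d + g(x + d) - g(x)               # predicted decrease (<= 0)
        t = 1.0
        while f(x + t * d) + g(x + t * d) > f(x) + g(x) + beta * t * Delta:
            t *= delta                                   # backtrack
        x = x + t * d
    return x
```

    Searching along the fixed direction d, rather than recomputing the proximal point at each trial step, keeps every backtracking trial cheap; this is the point of the linesearch-based construction.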

    Convergence analysis of a primal-dual optimization-by-continuation algorithm

    We present a numerical iterative optimization algorithm for the minimization of a cost function consisting of a linear combination of three convex terms, one of which is differentiable, a second one is prox-simple, and the third one is the composition of a linear map and a prox-simple function. The algorithm's special feature lies in its ability to approximate, in a single iteration run, the minimizers of the cost function for many different values of the parameters determining the relative weight of the three terms in the cost function. A proof of convergence of the algorithm, based on an inexact variable metric approach, is also provided. As a special case, one recovers a generalization of the primal-dual algorithm of Chambolle and Pock, and also of the proximal-gradient algorithm. Finally, we show how it is related to a primal-dual iterative algorithm based on inexact proximal evaluations of the non-smooth terms of the cost function. Comment: 22 pages, 2 figures.
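
    For reference, here is a minimal sketch of the classical Chambolle-Pock iteration that the paper's scheme generalizes, on the toy problem min_x 0.5||x - b||^2 + lam*||Dx||_1. The continuation feature (tracing the minimizers for many penalty weights in a single run) is not reproduced here, and the step sizes are assumed to satisfy tau*sigma*||D||^2 <= 1.

```python
import numpy as np

def chambolle_pock(D, b, lam, tau, sigma, theta=1.0, n_iter=300):
    """Classical Chambolle-Pock iteration for min 0.5||x-b||^2 + lam*||Dx||_1."""
    x = np.zeros_like(b)
    x_bar = x.copy()
    y = np.zeros(D.shape[0])
    for _ in range(n_iter):
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)      # prox of (lam*||.||_1)*
        x_new = (x - tau * (D.T @ y) + tau * b) / (1 + tau)  # prox of 0.5||.-b||^2
        x_bar = x_new + theta * (x_new - x)                  # extrapolation
        x = x_new
    return x
```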

    Practical error estimates for sparse recovery in linear inverse problems

    The effectiveness of using model sparsity as a priori information when solving linear inverse problems is studied. We investigate the reconstruction quality of such a method in the non-idealized case and compute some typical recovery errors (depending on the sparsity of the desired solution, the number of data, the noise level on the data, and various properties of the measurement matrix); they are compared to known theoretical bounds and illustrated on a magnetic tomography example. Comment: 11 pages, 5 figures.
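
    A small numpy experiment in this spirit is sketched below: recover a synthetic sparse vector with a plain ISTA solver and report the relative recovery error. The problem sizes, noise level, and penalty weight are assumed values for illustration, not those of the paper.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Plain ISTA for min 0.5||Ax-y||^2 + lam*||x||_1 (illustrative solver)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - (A.T @ (A @ x - y)) / L
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k, noise = 200, 80, 10, 0.01         # assumed experiment sizes
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + noise * rng.standard_normal(m)
x_hat = ista(A, y, lam=0.01)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```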

    Tomographic inversion using ℓ1-norm regularization of wavelet coefficients

    We propose the use of ℓ1 regularization in a wavelet basis for the solution of linearized seismic tomography problems Am = d, allowing for the possibility of sharp discontinuities superimposed on a smoothly varying background. An iterative method is used to find a sparse solution m that contains no more fine-scale structure than is necessary to fit the data d to within its assigned errors. Comment: 19 pages, 14 figures. Submitted to GJI July 2006. This preprint does not use GJI style files (which give wrong received/accepted dates). Corrected typos.
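
    In outline, writing the model as m = Sw for a wavelet synthesis matrix S (here an assumed, user-supplied dense matrix), the iteration amounts to ISTA on the coefficients w, as in the minimal sketch below; this illustrates the approach and is not the paper's exact solver.

```python
import numpy as np

def wavelet_ista(A, S, d, lam, n_iter=500):
    """ISTA on wavelet coefficients w for min 0.5||A S w - d||^2 + lam*||w||_1,
    where the model is m = S w; sparsity is imposed on w, not on m directly."""
    M = A @ S                               # forward operator in coefficient space
    L = np.linalg.norm(M, 2) ** 2           # Lipschitz constant of the gradient
    w = np.zeros(S.shape[1])
    for _ in range(n_iter):
        v = w - M.T @ (M @ w - d) / L       # gradient step on the data misfit
        w = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
    return S @ w                            # return the model m
```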

    On the convergence of a linesearch based proximal-gradient method for nonconvex optimization

    We consider a variable metric linesearch-based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm is flexible, robust, and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.
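
    The variable-metric ingredient can be sketched for the case where the convex term is lam*||x||_1 and the metric is diagonal: the scaled proximal step is then a componentwise soft threshold with per-coordinate thresholds alpha*lam/(d_k)_i. The metric callback and step size below are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

def vm_prox_grad(grad_f, metric, lam, x0, alpha=1.0, n_iter=200):
    """Variable-metric proximal-gradient sketch for min f(x) + lam*||x||_1.
    `metric(x, k)` returns a positive diagonal d_k (an assumed user choice,
    e.g. a split-gradient or Barzilai-Borwein-type scaling)."""
    x = x0.copy()
    for k in range(n_iter):
        d = metric(x, k)                         # diagonal of the metric D_k
        v = x - alpha * grad_f(x) / d            # scaled gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - alpha * lam / d, 0.0)
    return x
```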

    Sparse and stable Markowitz portfolios

    We consider the problem of portfolio selection within the classical Markowitz mean-variance optimizing framework, which has served as the basis for modern portfolio theory for more than 50 years. Efforts to translate this theoretical foundation into a viable portfolio construction algorithm have been plagued by technical difficulties stemming from the instability of the original optimization problem with respect to the available data. Often, instabilities of this type disappear when a regularizing constraint or penalty term is incorporated in the optimization procedure. This approach seems not to have been used in portfolio design until very recently. To provide such a stabilization, we propose to add to the Markowitz objective function a penalty which is proportional to the sum of the absolute values of the portfolio weights. This penalty stabilizes the optimization problem, automatically encourages sparse portfolios, and facilitates an effective treatment of transaction costs. We implement our methodology using as our securities two sets of portfolios constructed by Fama and French: the 48 industry portfolios and 100 portfolios formed on size and book-to-market. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve portfolio comprising equal investments in each available asset. In addition to their excellent performance, these portfolios have only a small number of active positions, a desirable feature for small investors, for whom the fixed overhead portion of the transaction cost is not negligible. JEL Classification: G11, C00. Keywords: Penalized Regression, Portfolio Choice, Sparse Portfolio.
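
    As a deliberately rough sketch of the mechanism: the mean-variance problem can be recast as a penalized regression on the return history R, with the budget constraint sum(w) = 1 enforced only approximately through an appended penalty row of weight c. This illustrates the ℓ1 sparsification, not the paper's exact algorithm, and all parameter values are assumptions.

```python
import numpy as np

def sparse_markowitz(R, rho, tau, c=10.0, n_iter=2000):
    """Sketch: min ||rho*1 - R w||^2 + tau*||w||_1 over portfolio weights w,
    with sum(w) = 1 imposed approximately via a heavy appended penalty row."""
    T, n = R.shape
    M = np.vstack([R, c * np.ones((1, n))])       # augmented design matrix
    y = np.concatenate([rho * np.ones(T), [c]])   # targets: desired return, budget
    L = np.linalg.norm(M, 2) ** 2                 # Lipschitz constant
    w = np.ones(n) / n                            # start from the naive portfolio
    for _ in range(n_iter):
        v = w - M.T @ (M @ w - y) / L
        w = np.sign(v) * np.maximum(np.abs(v) - tau / L, 0.0)  # soft threshold
    return w
```

    Larger tau drives more weights exactly to zero, which is how the penalty produces portfolios with few active positions.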

    Wavelets and wavelet-like transforms on the sphere and their application to geophysical data inversion

    Many flexible parameterizations exist to represent data on the sphere. In addition to the venerable spherical harmonics, we have the Slepian basis, harmonic splines, wavelets, and wavelet-like Slepian frames. In this paper we focus on the latter two: spherical wavelets developed for geophysical applications on the cubed sphere, and the Slepian "tree", a new construction that combines a quadratic concentration measure with wavelet-like multiresolution. We discuss the basic features of these mathematical tools and illustrate their applicability in parameterizing large-scale global geophysical (inverse) problems. Comment: 15 pages, 11 figures, submitted to the Proceedings of the SPIE 2011 conference Wavelets and Sparsity XI.