
    New convergence results for the scaled gradient projection method

    The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation has shown that SGP, equipped with a suitable choice of the scaling matrix, is a very effective tool for solving large-scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak, though very general, convergence theorem had been provided, establishing that any limit point of the sequence generated by SGP is stationary. Here, under the only assumptions that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, provided that the sequence of scaling matrices satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove an O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule also performs well from the computational point of view.
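
    As a point of reference for the scheme described above, the sketch below shows a scaled gradient projection iteration of the form x_{k+1} = P_{>=0}(x_k - alpha * D_k * grad f(x_k)), with the eigenvalues of the diagonal scaling D_k confined to a shrinking interval [1/L_k, L_k] with L_k -> 1, one illustrative instance of the kind of "simple and implementable condition" mentioned, not the paper's actual rule. The fixed steplength, the non-negativity constraint and the iterate-based scaling are all simplifying assumptions of the sketch; the SGP method itself also includes a line search.

```python
import numpy as np

def sgp_sketch(grad_f, x0, alpha=1e-2, n_iter=200, L0=1e3):
    """Minimal sketch of a scaled gradient projection iteration on the
    non-negative orthant.  The diagonal scaling is clipped to [1/L_k, L_k]
    with L_k -> 1 and sum_k (L_k**2 - 1) < inf, an illustrative choice of
    bounded-scaling condition (not the paper's selection rule)."""
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)
    for k in range(n_iter):
        g = grad_f(x)
        L_k = np.sqrt(1.0 + L0 / (k + 1) ** 1.1)   # shrinking bound, L_k -> 1
        d = np.clip(x, 1.0 / L_k, L_k)             # hypothetical diagonal scaling
        x = np.maximum(x - alpha * d * g, 0.0)     # scaled step + projection
    return x
```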

    A new steplength selection for scaled gradient methods with application to image deblurring

    Gradient methods are frequently used in large-scale image deblurring problems since they avoid the onerous computation of the Hessian matrix of the objective function. Second-order information is typically sought through a clever choice of the steplength parameter defining the descent direction, as in the case of the well-known Barzilai and Borwein rules. In a recent paper, a steplength selection strategy approximating the inverses of some eigenvalues of the Hessian matrix was proposed for gradient methods applied to unconstrained minimization problems. In the quadratic case, this approach is based on a Lanczos process applied every m iterations to the matrix formed by the m most recent gradients, but the idea can be extended to a general objective function. In this paper we extend this rule to the case of scaled gradient projection methods applied to non-negatively constrained minimization problems, and we test the effectiveness of the proposed strategy on image deblurring problems both in the presence and in the absence of an explicit edge-preserving regularization term.
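
    For context, the Barzilai and Borwein rules cited above can be sketched as follows; the Lanczos-based rule studied in the paper replaces these two scalars with the inverses of Ritz values of the Hessian computed from the m most recent gradients, and is not reproduced here. Variable names are illustrative.

```python
import numpy as np

def bb_steplengths(x_prev, x_curr, g_prev, g_curr):
    """Classical Barzilai-Borwein steplengths: both approximate the inverse
    of an eigenvalue of an (average) Hessian through the secant pair
    s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    s = x_curr - x_prev
    y = g_curr - g_prev
    bb1 = (s @ s) / (s @ y)   # "long" BB steplength
    bb2 = (s @ y) / (y @ y)   # "short" BB steplength
    return bb1, bb2
```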

    Accelerated gradient methods for the X-ray imaging of solar flares

    In this paper we present new optimization strategies for the reconstruction of X-ray images of solar flares from the data collected by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The imaging concept of the satellite is based on rotating modulation collimator instruments, which allow the use of both Fourier imaging approaches and reconstruction techniques based on the straightforward inversion of the modulated count profiles. Although in the last decade greater attention has been devoted to the former strategies because of their very limited computational cost, here we consider the latter model and investigate the effectiveness of different accelerated gradient methods for the solution of the corresponding constrained minimization problem. Moreover, regularization is introduced either through an early stopping of the iterative procedure or through a Tikhonov term added to the discrepancy function, by means of a discrepancy principle accounting for the Poisson nature of the noise affecting the data.
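
    As a small illustration of the data-fit term such a discrepancy principle works with, the sketch below evaluates the generalized Kullback-Leibler divergence between observed and predicted counts, normalized in the way Poisson discrepancy principles typically are; the exact criterion used in the paper is not reproduced, and the normalization shown is only an assumption of the sketch.

```python
import numpy as np

def poisson_discrepancy(y, model, eps=1e-12):
    """Generalized Kullback-Leibler divergence between observed counts y and
    predicted counts `model` (e.g. A @ x + background), i.e. the natural
    discrepancy for Poisson noise, returned in the normalized form 2*KL/N
    that discrepancy principles usually monitor (values near 1 suggest the
    residual is at the noise level)."""
    y = np.asarray(y, dtype=float)
    t = np.asarray(model, dtype=float)
    kl = np.sum(t - y) + np.sum(np.where(y > 0, y * np.log((y + eps) / (t + eps)), 0.0))
    return 2.0 * kl / y.size
```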

    Variable metric inexact line-search based methods for nonsmooth optimization

    We develop a new proximal-gradient method for minimizing the sum of a differentiable, possibly nonconvex, function and a convex, possibly nondifferentiable, function. The key features of the proposed method are the definition of a suitable descent direction, based on the proximal operator associated with the convex part of the objective function, and an Armijo-like rule to determine the step size along this direction ensuring a sufficient decrease of the objective function. In this framework, we especially address the possibility of adopting a metric which may change at each iteration and an inexact computation of the proximal point defining the descent direction. For the more general nonconvex case, we prove that all limit points of the iterate sequence are stationary, while for convex objective functions we prove convergence of the whole sequence to a minimizer, under the assumption that a minimizer exists. In the latter case, assuming also that the gradient of the smooth part of the objective function is Lipschitz continuous, we also give a convergence rate estimate, showing O(1/k) complexity with respect to the function values. We also discuss verifiable sufficient conditions for the inexact proximal point, and we present the results of numerical experiments on a convex total-variation-based image restoration problem, showing that the proposed approach is competitive with another state-of-the-art method.
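
    A simplified, non-scaled version of the step just described can be sketched as below: the descent direction is obtained from an (here exact) proximal point and the steplength from an Armijo-like backtracking; the variable metric and the inexact proximal computation, which are the paper's actual focus, are omitted. Function names and defaults are illustrative.

```python
import numpy as np

def prox_grad_armijo_step(x, f, grad_f, g, prox_g, alpha=1.0, beta=1e-4, delta=0.5):
    """One forward-backward step with Armijo-like backtracking along
    d = prox_{alpha*g}(x - alpha*grad_f(x)) - x.  `prox_g(z, alpha)` must
    return argmin_u g(u) + ||u - z||**2 / (2*alpha)."""
    gx = grad_f(x)
    y = prox_g(x - alpha * gx, alpha)            # proximal point (exact here)
    d = y - x                                    # descent direction
    decrease = gx @ d + g(y) - g(x)              # <= 0 by construction
    lam = 1.0
    while f(x + lam * d) + g(x + lam * d) > f(x) + g(x) + beta * lam * decrease:
        lam *= delta                             # backtrack
        if lam < 1e-12:                          # safeguard for the sketch
            break
    return x + lam * d
```

    For instance, with g the l1 norm, prox_g is the componentwise soft-thresholding operator np.sign(z) * np.maximum(np.abs(z) - alpha, 0).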

    Inexact Bregman iteration with an application to Poisson data reconstruction

    This work deals with the solution of image restoration problems by an iterative regularization method based on the Bregman iteration. Each iteration of this scheme requires the exact computation of the minimizer of a function. However, in some image reconstruction applications, it is either impossible or extremely expensive to obtain exact solutions of these subproblems. In this paper, we propose an inexact version of the iterative procedure, where the inexactness in the inner subproblem solution is controlled by a criterion that preserves the convergence of the Bregman iteration and its features in image restoration problems. In particular, the method allows one to obtain accurate reconstructions even when only an overestimate of the regularization parameter is known. The introduction of inexactness in the iterative scheme makes it possible to address image reconstruction problems from data corrupted by Poisson noise, exploiting recent advances in specialized algorithms for the numerical minimization of the generalized Kullback–Leibler divergence combined with a regularization term. The results of several numerical experiments allow an evaluation of the proposed approach.
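
    The outer scheme being made inexact can be sketched as the standard Bregman iteration below, where the inner minimization is a black box that, in the variant proposed in the paper, is solved only approximately under a controlled accuracy criterion (the criterion itself is not reproduced here; names and defaults are illustrative).

```python
import numpy as np

def bregman_iteration(solve_inner, grad_H, u0, lam=1.0, n_outer=20):
    """Skeleton of a Bregman iteration for a regularizer J and a smooth
    data-fidelity H:
        u_{k+1} ~ argmin_u  J(u) - <p_k, u> + lam * H(u)
        p_{k+1} = p_k - lam * grad_H(u_{k+1})
    `solve_inner(p, lam)` performs the (possibly inexact) inner solve."""
    u = np.asarray(u0, dtype=float).copy()
    p = np.zeros_like(u)
    for _ in range(n_outer):
        u = solve_inner(p, lam)         # inner subproblem (possibly inexact)
        p = p - lam * grad_H(u)         # Bregman subgradient update
    return u
```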

    A comparison of edge-preserving approaches for differential interference contrast microscopy

    In this paper we address the problem of estimating the phase from color images acquired with differential interference contrast microscopy. In particular, we consider the nonlinear and nonconvex optimization problem obtained by regularizing a least-squares-like discrepancy term with an edge-preserving functional, given by either the hypersurface potential or the total variation. We investigate the analytical properties of the resulting objective functions, proving the existence of minimum points, and we propose effective optimization tools able to obtain, in both the smooth and the nonsmooth case, accurate reconstructions with a reduced computational demand.
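
    For concreteness, the two edge-preserving functionals mentioned above can be written, in their common discrete 2-D form, as in the sketch below; the boundary handling and the smoothing parameter delta are assumptions of the sketch, not taken from the paper.

```python
import numpy as np

def _forward_differences(u):
    """Forward differences of a 2-D image with replicated last row/column."""
    dx = np.diff(u, axis=0, append=u[-1:, :])
    dy = np.diff(u, axis=1, append=u[:, -1:])
    return dx, dy

def total_variation(u):
    """Isotropic (nonsmooth) total variation of a 2-D image."""
    dx, dy = _forward_differences(u)
    return np.sum(np.sqrt(dx**2 + dy**2))

def hypersurface_potential(u, delta=1e-2):
    """Hypersurface (HS) potential: a smooth, edge-preserving relaxation of
    the total variation obtained by adding delta**2 under the square root."""
    dx, dy = _forward_differences(u)
    return np.sum(np.sqrt(dx**2 + dy**2 + delta**2))
```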

    Scaling techniques for gradient projection-type methods in astronomical image deblurring

    The aim of this paper is to present a computational study of scaling techniques in gradient projection-type (GP-type) methods for the deblurring of astronomical images corrupted by Poisson noise. In this case, the imaging problem is formulated as a non-negatively constrained minimization problem in which the objective function is the sum of a fit-to-data term, the Kullback–Leibler divergence, and a Tikhonov regularization term. The considered GP-type methods are formulated by a common iteration formula, in which the scaling matrix and the step-length parameter characterize the different algorithms. Within this formulation, both first-order and Newton-like methods are analysed, with particular attention to the implementation features and behaviours relevant for the image restoration problem. The numerical experiments show that suitable scaling strategies enable the GP methods to quickly approximate accurate reconstructions, and are therefore useful for designing effective image deblurring algorithms.
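
    A minimal instance of the common iteration formula referred to above, specialized to the Kullback-Leibler plus Tikhonov model, is sketched below; the diagonal split-gradient scaling shown is only one of the scalings such a study compares, and the dense-matrix operator, background term and numerical safeguards are assumptions of the sketch.

```python
import numpy as np

def kl_tikhonov_gradient(x, A, y, bg, beta):
    """Gradient of  KL(y, A @ x + bg) + (beta/2) * ||x||**2  for a linear
    operator stored as a dense matrix A."""
    t = A @ x + bg
    return A.T @ (1.0 - y / t) + beta * x

def scaled_gp_step(x, A, y, bg, beta, alpha):
    """One GP-type iteration  x <- P_{>=0}(x - alpha * D * grad), with the
    split-gradient diagonal scaling D = diag(x / (A^T 1 + beta * x)) often
    used for this Poisson model (illustrative choice)."""
    denom = A.T @ np.ones_like(y) + beta * x
    d = x / np.maximum(denom, 1e-12)              # diagonal scaling
    g = kl_tikhonov_gradient(x, A, y, bg, beta)
    return np.maximum(x - alpha * d * g, 0.0)     # projection onto x >= 0
```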