
    A new steplength selection for scaled gradient methods with application to image deblurring

    Gradient methods are frequently used in large-scale image deblurring problems, since they avoid the onerous computation of the Hessian matrix of the objective function. Second-order information is typically sought through a clever choice of the steplength parameter defining the descent direction, as in the case of the well-known Barzilai and Borwein rules. In a recent paper, a steplength selection strategy approximating the inverses of some eigenvalues of the Hessian matrix was proposed for gradient methods applied to unconstrained minimization problems. In the quadratic case, this approach is based on a Lanczos process applied every m iterations to the matrix of the m most recent gradients, but the idea can be extended to a general objective function. In this paper we extend this rule to scaled gradient projection methods applied to non-negatively constrained minimization problems, and we test the effectiveness of the proposed strategy on image deblurring problems, both in the presence and in the absence of an explicit edge-preserving regularization term.
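
    As a concrete (if simpler) instance of the steplength idea this abstract builds on, the following sketch runs a gradient method with the classical BB1 rule on a strictly convex quadratic; it is not the paper's Lanczos-based rule, and all names and defaults are illustrative.

```python
import numpy as np

def bb_gradient_descent(A, b, x0, max_iter=200, tol=1e-8):
    """Gradient method with the Barzilai-Borwein (BB1) steplength for the
    quadratic f(x) = 0.5 x^T A x - b^T x, with A symmetric positive definite.

    The BB1 step alpha = (s^T s)/(s^T y), with s = x_k - x_{k-1} and
    y = g_k - g_{k-1} = A s, is the inverse of a Rayleigh quotient of A,
    i.e. it approximates the inverse of a Hessian eigenvalue -- the kind
    of second-order information the abstract refers to.
    """
    x = x0.copy()
    g = A @ x - b                      # gradient of the quadratic
    alpha = 1.0 / np.linalg.norm(g)    # conservative first steplength
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        if np.linalg.norm(g_new) < tol:
            return x_new
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)      # BB1 steplength
        x, g = x_new, g_new
    return x
```

    The Lanczos-based rule of the abstract can be seen as a sweep-wise refinement of this idea: every m iterations it extracts Ritz values of the Hessian from the recent gradients and uses their inverses as the next steplengths.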

    The Popescu–Gabriel theorem for triangulated categories

    The Popescu–Gabriel theorem states that each Grothendieck abelian category is a localization of a module category. In this paper, we prove an analogue in which Grothendieck abelian categories are replaced by triangulated categories that are well generated (in the sense of Neeman) and algebraic (in the sense of Keller). The role of module categories is played by derived categories of small differential graded categories. An analogous result for topological triangulated categories has recently been obtained by A. Heider.
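
    For orientation, here is a schematic side-by-side rendering of the two statements; the notation is ours, and the precise hypotheses (in particular on the localizing subcategories) are those of the paper, not reproduced here.

```latex
% Schematic comparison, assuming: Mod A = category of modules over a ring A,
% D(B) = derived category of a small dg category B, and each quotient a
% localization with respect to a suitable localizing subcategory.
\begin{align*}
  \text{Popescu--Gabriel:} \quad
    &\mathcal{A} \simeq (\operatorname{Mod} A)/\mathcal{S},
    &&\mathcal{A}\ \text{Grothendieck abelian};\\
  \text{Triangulated analogue:} \quad
    &\mathcal{T} \simeq \mathcal{D}(\mathcal{B})/\mathcal{N},
    &&\mathcal{T}\ \text{well generated and algebraic}.
\end{align*}
```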

    Variable metric inexact line-search based methods for nonsmooth optimization

    We develop a new proximal-gradient method for minimizing the sum of a differentiable, possibly nonconvex, function and a convex, possibly nondifferentiable, function. The key features of the proposed method are the definition of a suitable descent direction, based on the proximal operator associated with the convex part of the objective function, and an Armijo-like rule to determine the stepsize along this direction, ensuring a sufficient decrease of the objective function. Within this framework, we especially address the possibility of adopting a metric which may change at each iteration and an inexact computation of the proximal point defining the descent direction. In the more general nonconvex case, we prove that all limit points of the sequence of iterates are stationary, while for convex objective functions we prove convergence of the whole sequence to a minimizer, under the assumption that a minimizer exists. In the latter case, assuming also that the gradient of the smooth part of the objective function is Lipschitz continuous, we give a convergence rate estimate, showing O(1/k) complexity with respect to the function values. We also discuss verifiable sufficient conditions for the inexact proximal point, and we present the results of numerical experiments on a convex total-variation-based image restoration problem, showing that the proposed approach is competitive with another state-of-the-art method.
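
    To make the structure of one iteration concrete, here is a minimal sketch assuming a fixed Euclidean metric and an exactly computed proximal point (the variable metric and the inexactness, which are the paper's key features, are omitted); all function names and tolerances are illustrative.

```python
import numpy as np

def prox_grad_armijo(f, grad_f, g, prox_g, x0, alpha=1.0,
                     sigma=1e-4, beta=0.5, max_iter=500, tol=1e-10):
    """Line-search proximal-gradient sketch for min f(x) + g(x),
    with f smooth (possibly nonconvex) and g convex.

    prox_g(v, t) should return argmin_u g(u) + ||u - v||^2 / (2 t).
    """
    x = x0.copy()
    for _ in range(max_iter):
        # Descent direction via the proximal operator of the convex part.
        y = prox_g(x - alpha * grad_f(x), alpha)
        d = y - x
        if np.linalg.norm(d) < tol:
            break
        # Model decrease; nonpositive by the definition of the prox point.
        delta = grad_f(x) @ d + g(y) - g(x)
        # Armijo-like backtracking: shrink the step until the objective
        # drops by at least a fraction sigma of the model decrease.
        t = 1.0
        while f(x + t * d) + g(x + t * d) > f(x) + g(x) + sigma * t * delta:
            t *= beta
            if t < 1e-12:
                break
        x = x + t * d
    return x
```

    A variable metric would replace the Euclidean prox with a scaled one, and inexactness would relax the prox subproblem to a controlled accuracy, as discussed in the paper.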

    On the filtering effect of iterative regularization algorithms for linear least-squares problems

    Many real-world applications are addressed through a linear least-squares formulation, whose solution is computed by means of an iterative approach. A large body of work in the optimization field provides fast methods for computing the solution, involving choices of adaptive parameters and scaling matrices. However, in the presence of an ill-conditioned model and real data, the need for a regularized solution instead of the least-squares one shifts the focus towards iterative algorithms able to combine fast execution with stable behaviour with respect to the restoration error. In this paper we analyze some classical and recent gradient approaches for the linear least-squares problem by looking at the way they filter the singular values, showing in particular the effects of scaling matrices and non-negativity constraints in recovering the correct filters of the solution.
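
    As a hedged illustration of this filtering viewpoint, the classical Landweber iteration (gradient descent with a fixed steplength tau) admits closed-form filter factors; the function below is ours, not the paper's.

```python
import numpy as np

def landweber_filters(singular_values, tau, k):
    """Filter factors after k Landweber iterations on min ||Ax - b||^2.

    Writing A = U diag(sigma) V^T, the k-th iterate (from x_0 = 0) is
        x_k = sum_i phi_i * (u_i^T b / sigma_i) * v_i,
    with phi_i = 1 - (1 - tau * sigma_i^2)^k. Early iterations keep the
    components of large singular values (phi ~ 1) and damp the small,
    noise-dominated ones (phi ~ 0), so the iteration count itself acts
    as a regularization parameter.
    """
    s = np.asarray(singular_values, dtype=float)
    return 1.0 - (1.0 - tau * s**2) ** k

# Example: with tau = 1 / sigma_max^2, ten iterations pass the large
# singular values essentially unchanged and filter out the small ones.
s = np.logspace(0, -4, 6)
print(landweber_filters(s, tau=1.0, k=10))
```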