
    On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence

    We introduce a framework for quasi-Newton forward--backward splitting algorithms (proximal quasi-Newton methods) with a metric induced by diagonal ± rank-r symmetric positive definite matrices. This special type of metric allows for a highly efficient evaluation of the proximal mapping. The key to this efficiency is a general proximal calculus in the new metric. Using duality, we derive formulas that relate the proximal mapping in a rank-r modified metric to the proximal mapping in the original metric. We also describe efficient implementations of the proximity calculation for a large class of functions; the implementations exploit the piecewise linear nature of the dual problem. We then apply these results to accelerate composite convex minimization, which leads to elegant quasi-Newton methods for which we prove convergence. The algorithm is tested on several numerical examples and compared against a comprehensive list of alternatives from the literature. Our quasi-Newton splitting algorithm with the prescribed metric compares favorably against the state of the art. The algorithm has extensive applications, including signal processing, sparse recovery, machine learning, and classification, to name a few.
    Comment: arXiv admin note: text overlap with arXiv:1206.115
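
    For context, here is a minimal sketch of a forward-backward step under a diagonal metric, the simplest instance of the diagonal ± rank-r metrics discussed above; the paper's rank-r proximal calculus is not reproduced, and all names and the choice of metric are illustrative assumptions.

```python
import numpy as np

def prox_l1_diag(x, d, lam):
    # prox of lam*||.||_1 in the metric ||u||_D^2 = sum(d * u**2):
    # it separates coordinate-wise into soft-thresholding with
    # per-coordinate thresholds lam / d_i
    return np.sign(x) * np.maximum(np.abs(x) - lam / d, 0.0)

def fb_step_diag(x, grad_f, d, lam):
    # one forward-backward step in the variable metric D = diag(d), d > 0:
    #   x+ = prox^D_{lam*||.||_1}( x - D^{-1} grad_f(x) )
    return prox_l1_diag(x - grad_f(x) / d, d, lam)

# toy problem: min 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
grad = lambda z: A.T @ (A @ z - b)
# illustrative metric: Hessian diagonal plus ||A||_2^2, so that
# D >= A^T A and the step is a valid majorization
d = np.sum(A * A, axis=0) + np.linalg.norm(A, 2) ** 2
x = np.zeros(10)
for _ in range(300):
    x = fb_step_diag(x, grad, d, lam=0.1)
```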

    Splitting methods with variable metric for KL functions

    We study the convergence of general abstract descent methods applied to a lower semicontinuous nonconvex function f that satisfies the Kurdyka-Lojasiewicz inequality in a Hilbert space. We prove that any precompact sequence converges to a critical point of f, and we obtain new convergence rates both for the values and for the iterates. The analysis covers alternating versions of the forward-backward method with variable metric and relative errors. As an example, a nonsmooth and nonconvex version of the Levenberg-Marquardt algorithm is detailed.
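
    For reference, the Kurdyka-Lojasiewicz inequality invoked here can be stated as follows (one common formulation; the notation is ours, not necessarily the paper's):

```latex
% f satisfies the KL inequality at \bar{x}: there exist \eta > 0, a
% neighborhood U of \bar{x}, and a concave desingularizing function
% \varphi : [0,\eta) \to [0,\infty), \varphi(0) = 0, \varphi' > 0, such that
\varphi'\bigl(f(x) - f(\bar{x})\bigr)\,
\operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; 1
% for all x \in U with f(\bar{x}) < f(x) < f(\bar{x}) + \eta.
```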

    Douglas-Rachford Splitting: Complexity Estimates and Accelerated Variants

    We propose a new approach for analyzing the convergence of the Douglas-Rachford splitting method for solving convex composite optimization problems. The approach is based on a continuously differentiable function, the Douglas-Rachford Envelope (DRE), whose stationary points correspond to the solutions of the original (possibly nonsmooth) problem. By proving the equivalence between the Douglas-Rachford splitting method and a scaled gradient method applied to the DRE, results from smooth unconstrained optimization are employed to analyze the convergence properties of DRS, to tune the method, and to derive an accelerated version of it.
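
    As a reminder of the underlying iteration (standard Douglas-Rachford splitting, not the DRE-based analysis or the accelerated variant of the paper), here is a minimal sketch for min f(x) + g(x) with f(x) = 0.5*||Ax - b||^2 and g = lam*||.||_1; all names are illustrative.

```python
import numpy as np

def prox_f(z, gamma, A, b):
    # prox of f(x) = 0.5*||A x - b||^2:
    # solve (I + gamma * A^T A) x = z + gamma * A^T b
    n = A.shape[1]
    return np.linalg.solve(np.eye(n) + gamma * (A.T @ A), z + gamma * (A.T @ b))

def prox_g(z, gamma, lam):
    # prox of g = lam*||.||_1: soft-thresholding at gamma*lam
    return np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 15))
b = rng.standard_normal(30)
gamma, lam = 1.0, 0.1
z = np.zeros(15)
for _ in range(500):
    x = prox_f(z, gamma, A, b)         # prox step on the smooth part
    y = prox_g(2 * x - z, gamma, lam)  # reflected prox step on the l1 part
    z += y - x                         # fixed-point update
# on convergence, x solves min 0.5*||A x - b||^2 + lam*||x||_1
```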

    Generalized Forward-Backward Splitting with Penalization for Monotone Inclusion Problems

    We introduce a generalized forward-backward splitting method with a penalty term for solving monotone inclusion problems involving the sum of a finite number of maximally monotone operators and the normal cone to the nonempty set of zeros of another maximally monotone operator. We show weak ergodic convergence of the generated sequence of iterates to a solution of the considered monotone inclusion problem, provided that a condition formulated in terms of the Fitzpatrick function of the operator describing the normal-cone set is fulfilled. Under strong monotonicity of one of the operators, we show strong convergence of the iterates. Furthermore, we apply the proposed method to a large-scale hierarchical minimization problem: minimizing the sum of a differentiable and a nondifferentiable convex function over the set of minimizers of another differentiable convex function. We illustrate the functionality of the method through numerical experiments on constrained elastic net and generalized Heron location problems.
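
    In symbols, the problem class described above reads as follows (our notation, inferred from the abstract):

```latex
% find x in the Hilbert space H such that
0 \;\in\; \sum_{i=1}^{m} A_i(x) \;+\; N_{C}(x),
\qquad
C \;=\; \operatorname{zer} B \;=\; \{\, z \in H : 0 \in B(z) \,\},
% where A_1,\dots,A_m and B are maximally monotone operators and
% N_C denotes the normal cone to the nonempty set C.
```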

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm uses a standard line search strategy, whereas the second retains the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates. Furthermore, both are computationally attractive since each Newton iteration requires only the approximate solution of a linear system of usually small dimension.
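
    For orientation, the forward-backward envelope for min f(x) + g(x), with f smooth and g possibly nonsmooth, is commonly written as below; this is the standard definition from the FBE literature, so consult the paper for its exact normalization.

```latex
\varphi^{\mathrm{FB}}_{\gamma}(x)
  \;=\; f(x) \;-\; \frac{\gamma}{2}\,\lVert \nabla f(x) \rVert^{2}
  \;+\; g^{\gamma}\!\bigl(x - \gamma \nabla f(x)\bigr),
\qquad
g^{\gamma}(y) \;=\; \min_{z}\Bigl\{\, g(z) + \tfrac{1}{2\gamma}\,\lVert z - y \rVert^{2} \,\Bigr\}
% g^\gamma is the Moreau envelope of g; for suitably small \gamma the
% minimizers of \varphi^{FB}_\gamma coincide with those of f + g.
```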

    Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed l1/l2 Regularization

    The l1/l2 ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works in the context of blind deconvolution. Indeed, it benefits from a scale-invariance property that is much desirable in the blind context. However, the l1/l2 function raises some difficulties when solving the nonconvex and nonsmooth minimization problems that result from the use of such a penalty term in current restoration methods. In this paper, we propose a new penalty based on a smooth approximation to the l1/l2 function. In addition, we develop a proximal-based algorithm to solve variational problems involving this function, and we derive theoretical convergence results. We demonstrate the effectiveness of our method through a comparison with a recent alternating optimization strategy dealing with the exact l1/l2 term, on an application to seismic data blind deconvolution.
    Comment: 5 pages
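
    As a rough illustration of the idea, one simple way to smooth the ratio is to replace each norm by a differentiable surrogate; this sketch is our own and is not necessarily the exact penalty proposed in the paper.

```python
import numpy as np

def smoothed_l1_l2(x, alpha=1e-2, eta=1e-2):
    # smooth surrogates:
    #   sum(sqrt(x_i^2 + alpha^2) - alpha) -> ||x||_1 as alpha -> 0
    #   sqrt(||x||^2 + eta^2)              -> ||x||_2 as eta   -> 0
    l1s = np.sum(np.sqrt(x ** 2 + alpha ** 2) - alpha)
    l2s = np.sqrt(np.sum(x ** 2) + eta ** 2)
    return l1s / l2s

x = np.array([1.0, 0.0, -2.0, 0.5])
# the exact l1/l2 ratio is scale invariant; the smoothed surrogate is
# nearly so when alpha and eta are small relative to the signal scale
print(smoothed_l1_l2(x), smoothed_l1_l2(10.0 * x))
```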