465 research outputs found

    An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions

    We propose a forward-backward proximal-type algorithm with inertial/memory effects for minimizing the sum of a nonsmooth function and a smooth one in the nonconvex setting. The sequence of iterates generated by the algorithm converges to a critical point of the objective function provided an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is fulfilled, for instance, by semi-algebraic functions. We illustrate the theoretical results with two numerical experiments: the first concerns the ability to recover local optimal solutions of nonconvex optimization problems, while the second concerns the restoration of a noisy blurred image.
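A minimal NumPy sketch of one standard inertial forward-backward update of this kind (the function names, step sizes, and the toy l1-regularized least-squares instance are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def soft_threshold(v, t):
    # proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_forward_backward(grad_f, prox_g, x0, alpha, beta=0.3, iters=500):
    # One common inertial forward-backward iteration:
    #   x_{k+1} = prox_{alpha g}( x_k + beta (x_k - x_{k-1}) - alpha grad_f(x_k) )
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev) - alpha * grad_f(x)
        x_prev, x = x, prox_g(y, alpha)
    return x

# toy instance: min_x 0.5 ||Ax - b||^2 + lam ||x||_1  (f smooth, g nonsmooth)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.5
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, a: soft_threshold(v, a * lam)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad_f
x_star = inertial_forward_backward(grad_f, prox_g, np.zeros(10), alpha=0.9 / L)
```

The inertial term `beta * (x - x_prev)` is what distinguishes this from plain forward-backward splitting; step-size conditions coupling `alpha`, `beta`, and `L` are needed for the convergence theory.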

    An inertial Tseng's type proximal algorithm for nonsmooth and nonconvex optimization problems

    We investigate the convergence of a forward-backward-forward proximal-type algorithm with inertial and memory effects for minimizing the sum of a nonsmooth function and a smooth one in the absence of convexity. Convergence is obtained provided an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is fulfilled, for instance, by semi-algebraic functions.
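What distinguishes a Tseng-type (forward-backward-forward) scheme from the plain forward-backward one is an extra gradient-correction step after the proximal step. A hedged sketch of one such inertial iteration, on an illustrative toy problem (not the paper's exact algorithm or step-size rule):

```python
import numpy as np

def soft_threshold(v, t):
    # proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_tseng_fbf(grad_f, prox_g, x0, alpha, beta=0.2, iters=500):
    # Inertial extrapolation, a forward-backward step, then the extra
    # "forward" correction step characteristic of Tseng-type schemes.
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)                  # inertial point
        gy = grad_f(y)
        p = prox_g(y - alpha * gy, alpha)            # forward-backward step
        x_prev, x = x, p + alpha * (gy - grad_f(p))  # second forward step
    return x

# toy instance: min_x 0.5 ||Ax - b||^2 + lam ||x||_1
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.5
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, a: soft_threshold(v, a * lam)
alpha = 0.5 / np.linalg.norm(A, 2) ** 2
x_star = inertial_tseng_fbf(grad_f, prox_g, np.zeros(10), alpha)
```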

    Local Convergence of the Heavy-ball Method and iPiano for Non-convex Optimization

    A local convergence result for abstract descent methods is proved. The sequence of iterates is attracted by a local (or global) minimum, stays in its neighborhood, and converges within this neighborhood. This result allows algorithms to exploit local properties of the objective function. In particular, the abstract theory in this paper applies to the inertial forward-backward splitting method iPiano, a generalization of the Heavy-ball method. Moreover, it reveals an equivalence between iPiano and inertial averaged/alternating proximal minimization and projection methods. Key for this equivalence is the attraction to a local minimum within a neighborhood and the fact that, for a prox-regular function, the gradient of the Moreau envelope is locally Lipschitz continuous and expressible in terms of the proximal mapping. In a numerical feasibility problem, the inertial alternating projection method significantly outperforms its non-inertial variants.
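The Heavy-ball method mentioned above is a gradient step plus a momentum term. A minimal sketch on an assumed toy double-well objective (illustrating attraction to the local minimum in whose basin the iterates start, as in the local convergence result):

```python
def heavy_ball(grad, x0, alpha=0.1, beta=0.5, iters=2000):
    # Heavy-ball method: gradient step plus momentum from the previous iterate
    #   x_{k+1} = x_k - alpha * grad(x_k) + beta * (x_k - x_{k-1})
    x_prev, x = x0, x0
    for _ in range(iters):
        x_prev, x = x, x - alpha * grad(x) + beta * (x - x_prev)
    return x

# nonconvex double-well f(x) = x^4/4 - x^2/2 with local minima at +-1;
# started at x0 = 2, the damped momentum dynamics settle at the minimum x = 1
x_star = heavy_ball(lambda x: x**3 - x, x0=2.0)
```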

    Unifying abstract inexact convergence theorems and block coordinate variable metric iPiano

    An abstract convergence theorem for a class of generalized descent methods that explicitly models relative errors is proved. The convergence theorem generalizes and unifies several recent abstract convergence theorems. It is applicable to possibly non-smooth and non-convex lower semi-continuous functions that satisfy the Kurdyka-Łojasiewicz (KL) inequality, which comprises a huge class of problems. Most of the recent algorithms that explicitly prove convergence using the KL inequality can be cast into the abstract framework in this paper, and therefore the generated sequence converges to a stationary point of the objective function. Additional flexibility compared to related approaches is gained by a descent property that is formulated with respect to a function that is allowed to change along the iterations, a generic distance measure, and an explicit/implicit relative error condition with respect to finite linear combinations of distance terms. As an application of the gained flexibility, the convergence of a block coordinate variable metric version of iPiano (an inertial forward-backward splitting algorithm) is proved, which performs favorably on an inpainting problem with a Mumford-Shah-like regularization from image processing.

    A block inertial Bregman proximal algorithm for nonsmooth nonconvex problems with application to symmetric nonnegative matrix tri-factorization

    We propose BIBPA, a block inertial Bregman proximal algorithm for minimizing the sum of a block relatively smooth function (that is, relatively smooth with respect to each block) and block separable nonsmooth nonconvex functions. We prove that the sequence generated by BIBPA subsequentially converges to critical points of the objective under standard assumptions, and globally converges when the objective function is additionally assumed to satisfy the Kurdyka-Łojasiewicz (KŁ) property. We also provide the convergence rate when the objective satisfies the Łojasiewicz inequality. We apply BIBPA to the symmetric nonnegative matrix tri-factorization (SymTriNMF) problem, where we propose kernel functions for SymTriNMF and provide closed-form solutions for the subproblems of BIBPA.

    A forward-backward dynamical approach to the minimization of the sum of a nonsmooth convex with a smooth nonconvex function

    We address the minimization of the sum of a proper, convex, and lower semicontinuous function and a (possibly nonconvex) smooth function from the perspective of an implicit dynamical system of forward-backward type. The latter is formulated by means of the gradient of the smooth function and the proximal point operator of the nonsmooth one. The trajectory generated by the dynamical system is proved to converge asymptotically to a critical point of the objective, provided a regularization of the latter satisfies the Kurdyka-Łojasiewicz property. Convergence rates for the trajectory in terms of the Łojasiewicz exponent of the regularized objective function are also provided.
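A continuous-time forward-backward dynamical system of this flavor can be simulated by explicit Euler discretization, which yields a relaxed forward-backward iteration. A hedged sketch under assumed toy data (the flow below is one common formulation, not necessarily the paper's exact system):

```python
import numpy as np

def soft_threshold(v, t):
    # proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fb_flow_euler(grad_f, prox_g, x0, alpha, h=0.5, iters=2000):
    # Explicit Euler discretization of the forward-backward dynamical system
    #   x'(t) = prox_{alpha g}( x(t) - alpha grad_f(x(t)) ) - x(t),
    # i.e. a relaxed forward-backward iteration with relaxation parameter h.
    x = x0.copy()
    for _ in range(iters):
        x = x + h * (prox_g(x - alpha * grad_f(x), alpha) - x)
    return x

# toy instance: f(x) = 0.5 ||Ax - b||^2 smooth, g = lam ||.||_1 nonsmooth convex
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.5
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, a: soft_threshold(v, a * lam)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2
x_star = fb_flow_euler(grad_f, prox_g, np.zeros(10), alpha)
```

Equilibria of the flow are exactly the fixed points of the forward-backward map, i.e. critical points of the objective.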

    Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms

    We propose ZeroFPR, a nonmonotone linesearch algorithm for minimizing the sum of two nonconvex functions, one of which is smooth and the other possibly nonsmooth. ZeroFPR is the first algorithm that, despite being fit for fully nonconvex problems and requiring only the black-box oracle of forward-backward splitting (FBS), namely evaluations of the gradient of the smooth term and of the proximity operator of the nonsmooth one, achieves superlinear convergence rates under mild assumptions at the limit point when the linesearch directions satisfy a Dennis-Moré condition; we show that this is the case for quasi-Newton directions. Our approach is based on the forward-backward envelope (FBE), an exact and strictly continuous penalty function for the original cost. Extending previous results, we show that, despite being nonsmooth for fully nonconvex problems, the FBE still enjoys favorable first- and second-order properties which are key for the convergence results of ZeroFPR. Our theoretical results are backed up by promising numerical simulations. On large-scale problems, by computing linesearch directions using limited-memory quasi-Newton updates, our algorithm greatly outperforms FBS and its accelerated variant (AFBS).
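The forward-backward envelope can be evaluated with exactly one gradient and one prox call. A minimal sketch on an assumed convex toy instance (chosen so the classical sandwich property F(z) ≤ FBE(x) ≤ F(x) for gamma ≤ 1/L is easy to check; the fully nonconvex theory in the paper is more delicate):

```python
import numpy as np

def soft_threshold(v, t):
    # proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbe(f, grad_f, g, prox_g, x, gamma):
    # Forward-backward envelope of F = f + g at x:
    #   FBE(x) = min_z  f(x) + <grad_f(x), z - x> + g(z) + ||z - x||^2 / (2 gamma),
    # attained at the forward-backward step z = prox_{gamma g}(x - gamma grad_f(x)).
    gx = grad_f(x)
    z = prox_g(x - gamma * gx, gamma)
    d = z - x
    return f(x) + gx @ d + g(z) + d @ d / (2.0 * gamma), z

# toy instance: F(x) = 0.5 ||Ax - b||^2 + lam ||x||_1
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.5
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
g = lambda x: lam * np.linalg.norm(x, 1)
prox_g = lambda v, t: soft_threshold(v, t * lam)
gamma = 0.5 / np.linalg.norm(A, 2) ** 2   # gamma <= 1/L
x = rng.standard_normal(10)
val, z = fbe(f, grad_f, g, prox_g, x, gamma)
```

Because the FBE is a real-valued penalty sharing minimizers with F, smooth-optimization machinery such as linesearch over quasi-Newton directions can be applied to it, which is the mechanism ZeroFPR exploits.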

    Composite Optimization by Nonconvex Majorization-Minimization

    The minimization of a nonconvex composite function can model a variety of imaging tasks. A popular class of algorithms for solving such problems are majorization-minimization techniques, which iteratively approximate the composite nonconvex function by a majorizing function that is easy to minimize. Most techniques, e.g. gradient descent, utilize convex majorizers in order to guarantee that the majorizer is easy to minimize. In our work we consider a natural class of nonconvex majorizers for these functions, and show that these majorizers are still sufficient for a globally convergent optimization scheme. Numerical results illustrate that by applying this scheme, one can often obtain superior local optima compared to previous majorization-minimization methods, when the nonconvex majorizers are solved to global optimality. Finally, we illustrate the behavior of our algorithm for depth super-resolution from raw time-of-flight data.
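To make the majorization-minimization pattern concrete, here is a sketch of the classical *convex*-majorizer case the paper generalizes: with the standard quadratic majorizer, each MM subproblem has a closed-form minimizer and the scheme reduces to gradient descent (toy data and names are illustrative):

```python
import numpy as np

def mm_quadratic(f, grad_f, L, x0, iters=300):
    # Majorization-minimization with the standard quadratic majorizer
    #   Q(x | y) = f(y) + grad_f(y) @ (x - y) + (L/2) ||x - y||^2  >=  f(x),
    # whose exact minimizer is the gradient step y - grad_f(y) / L, so each
    # MM update can only decrease f.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - grad_f(x) / L      # argmin_x Q(x | current iterate)
    return x

# toy smooth objective f(x) = 0.5 ||Ax - b||^2 with L = ||A||_2^2
rng = np.random.default_rng(4)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2
x_star = mm_quadratic(f, grad_f, L, np.zeros(10))
```

The paper's point is that the majorizer need not be convex: as long as each (possibly nonconvex) majorizer is minimized globally, the same descent mechanism still yields a convergent scheme.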

    An accelerated proximal iterative hard thresholding method for ℓ₀ minimization

    In this paper, we consider a non-convex problem which is the sum of the ℓ₀-norm and a convex smooth function under a box constraint. We propose a proximal iterative hard thresholding type method with an extrapolation step used for acceleration and establish its global convergence results. Specifically, the sequence generated by the proposed method globally converges to a local minimizer of the objective function. Finally, we conduct numerical experiments to show the proposed method's effectiveness in comparison with some other efficient methods.
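The proximal map of the ℓ₀-norm is hard thresholding, which makes each iteration cheap. A hedged sketch of a proximal iterative hard thresholding loop with extrapolation (the box constraint treated in the paper is omitted, and the toy sparse-recovery instance is an assumption):

```python
import numpy as np

def hard_threshold(v, t):
    # proximal map of t * ||.||_0: keep v_i only when v_i^2 / 2 > t
    return np.where(v * v > 2.0 * t, v, 0.0)

def accel_piht(grad_f, x0, alpha, lam, beta=0.2, iters=300):
    # Proximal iterative hard thresholding with an extrapolation
    # (acceleration) step; the box constraint is not modeled here.
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)
        x_prev, x = x, hard_threshold(y - alpha * grad_f(y), alpha * lam)
    return x

# toy instance: min_x 0.5 ||Ax - b||^2 + lam ||x||_0 with a sparse ground truth
rng = np.random.default_rng(5)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [2.0, -3.0]
b = A @ x_true
grad_f = lambda x: A.T @ (A @ x - b)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2
x_star = accel_piht(grad_f, np.zeros(10), alpha, lam=0.1)
```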

    Penalty schemes with inertial effects for monotone inclusion problems

    We introduce a penalty-term-based splitting algorithm with inertial effects designed for solving monotone inclusion problems involving the sum of maximally monotone operators and the convex normal cone to the (nonempty) set of zeros of a monotone and Lipschitz continuous operator. We show weak ergodic convergence of the generated sequence of iterates to a solution of the monotone inclusion problem, provided a condition expressed via the Fitzpatrick function of the operator describing the underlying set of the normal cone is verified. Under strong monotonicity assumptions we can even show strong nonergodic convergence of the iterates. This approach constitutes the starting point for investigating, from a similar perspective, monotone inclusion problems involving linear compositions of parallel-sum operators and, further, for the minimization of a complexly structured convex objective function subject to the set of minima of another convex and differentiable function.