
    Characterizations of Super-regularity and its Variants

    Convergence of projection-based methods for nonconvex set feasibility problems has been established for sets with ever weaker regularity assumptions. What has not kept pace with these developments are analogous results for convergence of optimization problems with correspondingly weak assumptions on the value functions. Indeed, one of the earliest classes of nonconvex sets for which convergence results were obtainable, the class of so-called super-regular sets introduced by Lewis, Luke and Malick (2009), has no functional counterpart. In this work, we close this gap in the theory by establishing the equivalence between a property slightly stronger than super-regularity, which we call Clarke super-regularity, and subsmoothness of sets as introduced by Aussel, Daniilidis and Thibault (2004). The bridge to functions shows that the approximately convex functions studied by Ngai, Luc and Théra (2000) are those which have Clarke super-regular epigraphs. Further classes of regularity of functions, based on the corresponding regularity of their epigraphs, are also discussed.
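    For reference, the notion of approximate convexity mentioned above can be recalled as follows; this is the standard formulation of Ngai, Luc and Théra, restated here in generic notation rather than quoted from the paper.

```latex
% A function f : X -> R \cup {+\infty} is approximately convex at \bar{x} if,
% for every \epsilon > 0, there exists \delta > 0 such that for all
% x, y \in B(\bar{x}, \delta) and all t \in [0, 1]:
\[
  f\bigl(t x + (1 - t) y\bigr)
  \;\le\;
  t f(x) + (1 - t) f(y) + \epsilon\, t (1 - t)\, \lVert x - y \rVert .
\]
```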

    A Bregman Method for Structure Learning on Sparse Directed Acyclic Graphs

    We develop a Bregman proximal gradient method for structure learning on linear structural causal models. While the problem is non-convex, has high curvature and is in fact NP-hard, Bregman gradient methods allow us to neutralize at least part of the impact of curvature by measuring smoothness against a highly nonlinear kernel. This allows the method to take longer steps and significantly improves convergence. Each iteration requires solving a Bregman proximal step, which is convex and efficiently solvable for our particular choice of kernel. We test our method on various synthetic and real data sets.
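    As a rough illustration of the generic Bregman proximal gradient template referred to above (not the paper's kernel, nor its acyclicity-constrained structure-learning objective), the sketch below applies the method to a toy least-squares loss with an l1-type penalty on the positive orthant, using the Boltzmann-Shannon entropy kernel, for which the Bregman step has a closed-form multiplicative update; all function names and parameter choices are illustrative assumptions.

```python
import numpy as np

def bregman_prox_grad(A, b, lam=0.1, iters=500, x0=None):
    """Bregman proximal gradient on the positive orthant with the
    Boltzmann-Shannon entropy kernel h(x) = sum(x * log x)."""
    n = A.shape[1]
    x = np.full(n, 0.5) if x0 is None else x0.copy()   # strictly positive start
    step = 1.0 / np.linalg.norm(A, 2) ** 2             # illustrative step size
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                       # gradient of f(x) = 0.5*||Ax - b||^2
        # Bregman step: argmin_{x>0} lam*sum(x) + <grad, x> + (1/step)*D_h(x, x_k);
        # with the entropy kernel this has a closed-form multiplicative update.
        x = x * np.exp(-step * (grad + lam))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 10))
    x_true = np.abs(rng.standard_normal(10))
    b = A @ x_true
    x_hat = bregman_prox_grad(A, b)
    print("residual norm:", np.linalg.norm(A @ x_hat - b))
```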

    Bregman Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems

    In this paper, we consider an accelerated method for solving nonconvex and nonsmooth minimization problems. We propose a Bregman Proximal Gradient algorithm with extrapolation (BPGe). This algorithm extends and accelerates the Bregman Proximal Gradient algorithm (BPG), which circumvents the restrictive global Lipschitz gradient continuity assumption needed in Proximal Gradient algorithms (PG). The BPGe algorithm is more general than the recently introduced Proximal Gradient algorithm with extrapolation (PGe) and, owing to the extrapolation step, converges faster than the BPG algorithm. Analyzing convergence, we prove that, for properly chosen parameters, any limit point of the sequence generated by BPGe is a stationary point of the problem. Moreover, assuming the Kurdyka-Łojasiewicz property, we prove that the whole sequence generated by BPGe converges to a stationary point. Finally, to illustrate the potential of the new method, we apply BPGe to two important practical problems that arise in many fundamental applications and that do not satisfy the global Lipschitz gradient continuity assumption: Poisson linear inverse problems and quadratic inverse problems. In these tests the accelerated BPGe algorithm shows faster convergence than BPG, yielding an interesting new algorithm.
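    The extrapolation mechanism described above can be sketched as follows, with the Bregman step abstracted into a user-supplied oracle; the kernel, step size, and fixed extrapolation weight used in the usage example are illustrative assumptions, not the algorithm exactly as specified in the paper.

```python
import numpy as np

def bpge(x0, grad_f, bregman_step, theta=0.3, iters=200):
    """Sketch of a Bregman proximal gradient iteration with extrapolation.

    grad_f(y)          -- gradient of the smooth part at y
    bregman_step(y, g) -- argmin_x  g_ns(x) + <g, x> + (1/step) * D_h(x, y)
    theta              -- fixed extrapolation weight (illustrative choice)
    """
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + theta * (x - x_prev)              # extrapolation step
        x_prev, x = x, bregman_step(y, grad_f(y))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 15))
    b = A @ rng.standard_normal(15)
    lam = 0.05
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    grad_f = lambda y: A.T @ (A @ y - b)
    # For illustration only: with the Euclidean kernel h = 0.5*||.||^2 and
    # g_ns = lam*||.||_1, the Bregman step reduces to soft-thresholding.
    soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    bstep = lambda y, g: soft(y - step * g, step * lam)
    x_hat = bpge(np.zeros(15), grad_f, bstep)
    print("objective:",
          0.5 * np.linalg.norm(A @ x_hat - b) ** 2 + lam * np.abs(x_hat).sum())
```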

    On Inexact Solution of Auxiliary Problems in Tensor Methods for Convex Optimization

    In this paper we study the auxiliary problems that appear in $p$th-order tensor methods for unconstrained minimization of convex functions with $\nu$-Hölder continuous $p$th derivatives. This type of auxiliary problem corresponds to the minimization of a $(p+\nu)$th-order regularization of the $p$th-order Taylor approximation of the objective. For the case $p=3$, we consider the use of Gradient Methods with Bregman distance. When the regularization parameter is sufficiently large, we prove that the referred methods take at most $\mathcal{O}(\log(\epsilon^{-1}))$ iterations to find either a suitable approximate stationary point of the tensor model or an $\epsilon$-approximate stationary point of the original objective function.
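    In generic notation (with the regularization constant written schematically rather than as the paper's exact choice), the auxiliary problem described above is the minimization of a regularized Taylor model:

```latex
% p-th order Taylor model of f around the current iterate x_k:
\[
  \Phi_{x_k,p}(x) \;=\; f(x_k) + \sum_{i=1}^{p} \frac{1}{i!}\, D^i f(x_k)[x - x_k]^i ,
\]
% and the auxiliary problem solved at each iteration is the minimization of a
% (p+\nu)-order regularization of this model,
\[
  \min_{x}\; \Omega_{x_k,H}(x) \;=\; \Phi_{x_k,p}(x) + \frac{H}{p+\nu}\, \|x - x_k\|^{p+\nu},
\]
% where H > 0 is the regularization parameter; for p = 3 and \nu = 1 this is a
% fourth-order regularization of the third-order Taylor approximation.
```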