
    The generalized Bregman distance

    Recently, a new kind of distance has been introduced for the graphs of two point-to-set operators, one of which is maximally monotone. When both operators are the subdifferential of a proper lower semicontinuous convex function, this kind of distance specializes under modest assumptions to the classical Bregman distance. We name this new kind of distance the generalized Bregman distance, and we shed light on it with examples that utilize the other two most natural representative functions: the Fitzpatrick function and its conjugate. We provide sufficient conditions for convexity, coercivity, and supercoercivity: properties which are essential for implementation in proximal point type algorithms. We establish these results for both the left and right variants of this new kind of distance. We construct examples closely related to the Kullback-Leibler divergence, which was previously considered in the context of Bregman distances and whose importance in information theory is well known. In so doing, we demonstrate how to compute a difficult Fitzpatrick conjugate function, and we discover natural occurrences of the Lambert W function, whose importance in optimization is of growing interest. © 2021 Society for Industrial and Applied Mathematics.
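For context, the classical Bregman distance that this work generalizes can be sketched in a few lines. This is an illustrative aside, not part of the paper: it evaluates D_f(x, y) = f(x) - f(y) - ⟨∇f(y), x - y⟩ for the negative entropy, which recovers the (generalized) Kullback-Leibler divergence mentioned in the abstract.

```python
import math

def bregman_distance(f, grad_f, x, y):
    """Classical Bregman distance D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>."""
    inner = sum(g * (xi - yi) for g, xi, yi in zip(grad_f(y), x, y))
    return f(x) - f(y) - inner

# Negative entropy f(x) = sum x_i log x_i, with gradient components log x_i + 1.
neg_entropy = lambda x: sum(xi * math.log(xi) for xi in x)
grad_neg_entropy = lambda x: [math.log(xi) + 1 for xi in x]

x, y = [0.2, 0.8], [0.5, 0.5]
d = bregman_distance(neg_entropy, grad_neg_entropy, x, y)

# For negative entropy, D_f specializes to the generalized KL divergence:
# sum x_i log(x_i / y_i) - sum x_i + sum y_i.
kl = sum(xi * math.log(xi / yi) - xi + yi for xi, yi in zip(x, y))
```

Since x and y above are probability vectors, the two sums cancel and the value is the ordinary KL divergence.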

    Generalized Bregman envelopes and proximity operators

    Every maximally monotone operator can be associated with a family of convex functions, called the Fitzpatrick family or family of representative functions. Surprisingly, in 2017, Burachik and Martínez-Legaz showed that the well-known Bregman distance is a particular case of a general family of distances, each one induced by a specific maximally monotone operator and a specific choice of one of its representative functions. For the family of generalized Bregman distances, sufficient conditions for convexity, coercivity, and supercoercivity have recently been furnished. Motivated by these advances, we introduce in the present paper the generalized left and right envelopes and proximity operators, and we provide asymptotic results in the parameters. Certain results extend readily from the more specific Bregman context, while others extend only in certain generalized cases. To illustrate, we construct examples from the Bregman generalizing case, together with the natural “extreme” cases that highlight the importance of which generalized Bregman distance is chosen. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
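The envelopes and proximity operators studied here generalize the classical Moreau envelope and proximity operator, which correspond to the Bregman distance induced by f = ½‖·‖². A minimal sketch of that Euclidean special case, for the absolute value function (an illustrative example, not the paper's construction):

```python
def soft_threshold(y, lam):
    """Proximity operator of f(x) = |x| in the Euclidean case:
    prox_{lam f}(y) = argmin_x |x| + (1/(2*lam)) * (x - y)**2."""
    if y > lam:
        return y - lam
    if y < -lam:
        return y + lam
    return 0.0

def moreau_envelope(y, lam):
    """Moreau envelope of |x|: the penalized objective evaluated at the prox point."""
    x = soft_threshold(y, lam)
    return abs(x) + (x - y) ** 2 / (2 * lam)

print(soft_threshold(3.0, 1.0))   # 2.0
print(moreau_envelope(3.0, 1.0))  # |2| + 1/2 = 2.5
```

The envelope here is the Huber function, a smooth under-approximation of |y|; replacing the quadratic term by a generalized Bregman distance gives the left and right envelopes the abstract introduces.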

    Zero duality gap conditions via abstract convexity

    Using tools provided by the theory of abstract convexity, we extend conditions for zero duality gap to the context of non-convex and nonsmooth optimization. Mimicking the classical setting, an abstract convex function is the upper envelope of a family of abstract affine functions (being conventional vertical translations of the abstract linear functions). We establish new conditions for zero duality gap under no topological assumptions on the space of abstract linear functions. In particular, we prove that the zero duality gap property can be fully characterized in terms of an inclusion involving (abstract) ε-subdifferentials. This result is new even for the classical convex setting. Endowing the space of abstract linear functions with the topology of pointwise convergence, we extend several fundamental facts of functional/convex analysis. This includes (i) the classical Banach–Alaoglu–Bourbaki theorem, (ii) the subdifferential sum rule, and (iii) a constraint qualification for zero duality gap which extends a fact established by Borwein, Burachik and Yao (2014) for the conventional convex case. As an application, we show with a specific example how our results can be exploited to show zero duality gap for a family of non-convex, non-differentiable problems. © 2021 Informa UK Limited, trading as Taylor & Francis Group.
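The zero duality gap property being characterized can be illustrated on a tiny classical convex problem (an aside, not the paper's abstract-convexity machinery): for min x² subject to x ≥ 1, the Lagrangian dual attains the primal optimal value, so the gap is zero.

```python
# Primal: min x^2 subject to x >= 1. Optimal point x* = 1, value 1.
primal_value = 1.0

# Lagrangian L(x, lam) = x^2 + lam*(1 - x); the inner minimizer is x = lam/2,
# which gives the (concave) dual function g(lam) = lam - lam^2/4.
def dual(lam):
    return lam - lam ** 2 / 4

# Maximize g over lam >= 0 by a coarse grid search; the maximum is at lam = 2.
best_dual = max(dual(l / 1000) for l in range(0, 10001))

print(primal_value - best_dual)  # duality gap: 0.0
```

Strong duality holds here because the problem is convex and strictly feasible (Slater's condition); the paper's contribution is characterizing when such a zero gap persists without convexity.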

    An update rule and a convergence result for a penalty function method

    We use a primal-dual scheme to devise a new update rule for a penalty function method applicable to general optimization problems, including nonsmooth and nonconvex ones. The update rule we introduce uses dual information in a simple way. Numerical test problems show that our update rule has certain advantages over the classical one. We study the relationship between exact penalty parameters and dual solutions. Under the differentiability of the dual function at the least exact penalty parameter, we establish convergence of the minimizers of the sequential penalty functions to a solution of the original problem. Numerical experiments are then used to illustrate some of the theoretical results.
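The baseline that such update rules improve on is the classical scheme of simply growing the penalty parameter by a fixed factor. A minimal sketch of that classical quadratic-penalty loop (this is the textbook rule, not the paper's dual-based update; the problem and step sizes are illustrative assumptions):

```python
def penalty_method(f_grad, h, h_grad, x0, c0=1.0, growth=10.0, rounds=6):
    """Classical quadratic penalty sketch for min f(x) s.t. h(x) <= 0:
    minimize f(x) + c*max(0, h(x))**2, growing c by a fixed factor each round."""
    x, c = x0, c0
    for _ in range(rounds):
        # Inner solve by plain gradient descent on the penalized objective.
        for _ in range(5000):
            g = f_grad(x) + 2 * c * max(0.0, h(x)) * h_grad(x)
            x -= 0.01 / c * g  # shrink the step as c grows, for stability
        c *= growth
    return x

# Example: minimize (x - 2)^2 subject to x <= 1, i.e. h(x) = x - 1 <= 0.
x_star = penalty_method(
    f_grad=lambda x: 2 * (x - 2),
    h=lambda x: x - 1,
    h_grad=lambda x: 1.0,
    x0=0.0,
)
# The penalized minimizers (2 + c)/(1 + c) approach the solution x* = 1 as c grows.
```

The fixed-factor rule can overshoot the least exact penalty parameter by orders of magnitude, which is the inefficiency a dual-information-based update targets.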

    A primal--dual algorithm as applied to optimal control problems

    We propose a primal--dual technique that applies to infinite dimensional equality constrained problems, in particular those arising from optimal control. As an application of our general framework, we solve a control-constrained double integrator optimal control problem and the challenging control-constrained free flying robot optimal control problem by means of our primal--dual scheme. The algorithm we use is an epsilon-subgradient method that can also be interpreted as a penalty function method. We provide extensive comparisons of our approach with a traditional numerical approach.
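The shape of a primal--dual subgradient scheme can be seen on a finite-dimensional toy problem (a sketch under simplifying assumptions, far from the paper's infinite-dimensional control setting): the primal variable minimizes the Lagrangian, and the multiplier takes a diminishing subgradient step on the dual.

```python
# Toy primal-dual subgradient scheme for: min x^2 subject to x >= 1.
# Saddle point: (x, lam) = (1, 2).
lam = 0.0
for k in range(1, 5001):
    x = lam / 2.0                      # argmin_x of L(x, lam) = x^2 + lam*(1 - x)
    subgrad = 1.0 - x                  # (sub)gradient of the dual at lam
    lam = max(0.0, lam + subgrad / k)  # diminishing step 1/k, projected to lam >= 0
# The iterates drift toward the saddle point (x, lam) = (1, 2).
```

With an inexact inner minimization this becomes an epsilon-subgradient step on the dual, which is the interpretation the abstract mentions.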

    Douglas--Rachford algorithm for control-constrained minimum-energy control problems

    Splitting and projection-type algorithms have been applied to many optimization problems due to their simplicity and efficiency, but the application of these algorithms to optimal control is less common. In this paper we utilize the Douglas--Rachford (DR) algorithm to solve control-constrained minimum-energy optimal control problems. Instead of the traditional approach, where one discretizes the problem and solves it using large-scale finite-dimensional numerical optimization techniques, we split the problem into two subproblems and use the DR algorithm to find an optimal point in the intersection of the solution sets of these two subproblems, hence giving a solution to the original problem. We derive general expressions for the projections and propose a numerical approach. We obtain analytic closed-form expressions for the projectors of pure, under-, critically- and over-damped harmonic oscillators. We illustrate the working of our approach by solving not only these example problems but also a challenging machine tool manipulator problem. Through numerical case studies, we explore and propose desirable ranges of values of an algorithmic parameter which yield a smaller number of iterations.
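The DR iteration used for such two-set feasibility splittings is short enough to sketch on a toy problem in the plane (an illustrative example, not the paper's control-constrained setting): find a point in the intersection of a line and a ball, given only the two projectors.

```python
import math

def proj_A(p):
    """Projection onto the line A = {x : x1 = x2}."""
    m = (p[0] + p[1]) / 2.0
    return (m, m)

def proj_B(p, c=(1.0, 0.0), r=1.0):
    """Projection onto the ball B of radius r centred at c."""
    d = math.hypot(p[0] - c[0], p[1] - c[1])
    if d <= r:
        return p
    t = r / d
    return (c[0] + t * (p[0] - c[0]), c[1] + t * (p[1] - c[1]))

def reflect(proj, p):
    """Reflection R = 2*P - I associated with a projector P."""
    q = proj(p)
    return (2 * q[0] - p[0], 2 * q[1] - p[1])

x = (5.0, -3.0)  # arbitrary starting point
for _ in range(500):
    y = reflect(proj_B, reflect(proj_A, x))
    x = ((x[0] + y[0]) / 2.0, (x[1] + y[1]) / 2.0)  # DR step: x <- (I + R_B R_A)x / 2

shadow = proj_A(x)  # the shadow sequence converges to a point of A ∩ B
```

Here A and B intersect (the line passes through the ball), so the shadow sequence converges to a feasible point; in the paper the two "sets" are solution sets of the split optimal control subproblems.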