
    Alternating proximal-gradient steps for (stochastic) nonconvex-concave minimax problems

    Minimax problems of the form $\min_x \max_y \Psi(x,y)$ have attracted increased interest, largely due to advances in machine learning, in particular generative adversarial networks. These are typically trained using variants of stochastic gradient descent for the two players. Although convex-concave problems are well understood, with many efficient solution methods to choose from, theoretical guarantees outside of this setting are sometimes lacking even for the simplest algorithms. In particular, this is the case for alternating gradient descent ascent, where the two agents take turns updating their strategies. To partially close this gap in the literature, we prove a novel global convergence rate for the stochastic version of this method for finding a critical point of $g(\cdot) := \max_y \Psi(\cdot,y)$ in a setting which is not convex-concave.
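    To make the alternating update order concrete, the sketch below runs stochastic alternating gradient descent ascent on a toy saddle problem; the quadratic objective, step sizes, and Gaussian gradient noise are illustrative assumptions and not the nonconvex-concave setting analyzed in the paper.

```python
import numpy as np

# Toy objective Psi(x, y) = 0.5*||x||^2 + x.T @ A @ y - 0.5*||y||^2
# (an illustrative convex-concave choice, not the paper's setting).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

def grad_x(x, y, noise=0.01):
    # Stochastic gradient of Psi w.r.t. x; Gaussian noise mimics sampling error.
    return x + A @ y + noise * rng.standard_normal(x.shape)

def grad_y(x, y, noise=0.01):
    # Stochastic gradient of Psi w.r.t. y.
    return A.T @ x - y + noise * rng.standard_normal(y.shape)

def alternating_sgda(x, y, step_x=0.05, step_y=0.05, iters=2000):
    # The two players take turns: x takes a descent step first,
    # then y takes an ascent step using the freshly updated x.
    for _ in range(iters):
        x = x - step_x * grad_x(x, y)
        y = y + step_y * grad_y(x, y)   # uses the new x (alternating order)
    return x, y

x_out, y_out = alternating_sgda(np.ones(5), np.ones(5))
print(x_out, y_out)
```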

    A variable smoothing algorithm for solving convex optimization problems

    In this article we propose a method for solving unconstrained optimization problems with convex and Lipschitz continuous objective functions. By making use of the Moreau envelopes of the functions occurring in the objective, we smooth the latter to a convex and differentiable function with Lipschitz continuous gradient, using both variable and constant smoothing parameters. The resulting problem is solved via an accelerated first-order method, which allows us to approximately recover optimal solutions of the initial optimization problem with a rate of convergence of order O
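    As a rough illustration of the smoothing idea, the sketch below minimizes the fully nonsmooth objective f(x) = ||Ax - b||_1 by replacing the l1 norm with its Moreau envelope (whose gradient is computed from the soft-thresholding prox) and applying a Nesterov-type accelerated gradient step; the fixed smoothing parameter mu and the problem data are illustrative assumptions, whereas the paper also employs variable smoothing parameters.

```python
import numpy as np

def moreau_env_l1_grad(z, mu):
    # Gradient of the Moreau envelope of ||.||_1 with parameter mu:
    # grad = (z - prox_{mu ||.||_1}(z)) / mu, with soft-thresholding as the prox.
    prox = np.sign(z) * np.maximum(np.abs(z) - mu, 0.0)
    return (z - prox) / mu

def smoothed_accelerated_min(A, b, mu=1e-3, iters=500):
    # Minimize the smoothed surrogate of f(x) = ||Ax - b||_1 with Nesterov acceleration.
    # The surrogate gradient A.T @ grad_env(Ax - b) is Lipschitz with constant ||A||^2 / mu.
    L = np.linalg.norm(A, 2) ** 2 / mu
    x = x_prev = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        y = x + (t - 1.0) / (t + 2.0) * (x - x_prev)   # momentum extrapolation
        g = A.T @ moreau_env_l1_grad(A @ y - b, mu)
        x_prev, x = x, y - g / L
        t += 1.0
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = A @ rng.standard_normal(10)
print(smoothed_accelerated_min(A, b))
```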

    Enlargements of positive sets

    In this paper we introduce the notion of enlargement of a positive set in SSD spaces. To a maximally positive set $A$ we associate a family of enlargements $\mathcal{E}(A)$ and characterize the smallest and biggest elements of this family with respect to the inclusion relation. We also emphasize the existence of a bijection between the subfamily of closed enlargements of $\mathcal{E}(A)$ and the family of so-called representative functions of $A$. We show that the extremal elements of the latter family are two functions recently introduced and studied by Stephen Simons. In this way we extend to SSD spaces some former results given for monotone and maximally monotone sets in Banach spaces.
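    For orientation (and not part of the abstract itself), the classical ε-enlargement of a monotone operator T on a Banach space X, which the SSD-space construction generalizes, reads:

```latex
% Classical \varepsilon-enlargement of a monotone operator T : X \rightrightarrows X^*
T^{\varepsilon}(x) := \left\{\, x^* \in X^* :
    \langle x - y,\, x^* - y^* \rangle \ge -\varepsilon
    \ \text{ for all } (y, y^*) \in \operatorname{gr} T \,\right\},
    \qquad \varepsilon \ge 0 .
```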

    Closedness type regularity conditions for surjectivity results involving the sum of two maximal monotone operators

    In this note we provide regularity conditions of closedness type which guarantee some surjectivity results concerning the sum of two maximal monotone operators by using representative functions. The first regularity condition we give guarantees the surjectivity of the monotone operator $S(\cdot + p)+T(\cdot)$, where $p\in X$ and $S$ and $T$ are maximal monotone operators on the reflexive Banach space $X$. Then, this is used to obtain sufficient conditions for the surjectivity of $S+T$ and for the situation when $0$ belongs to the range of $S+T$. Several special cases are discussed, some of them delivering interesting byproducts.
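    As a reminder of the main tool mentioned above, the prototypical representative function of a maximally monotone operator S is its Fitzpatrick function; the formula below is the standard Banach-space definition, not a claim about this particular note's construction:

```latex
% Fitzpatrick function of S : X \rightrightarrows X^* (the prototypical representative function)
\varphi_S(x, x^*) := \sup_{(y, y^*) \in \operatorname{gr} S}
    \big( \langle y, x^* \rangle + \langle x, y^* \rangle - \langle y, y^* \rangle \big),
% which satisfies \varphi_S(x, x^*) \ge \langle x, x^* \rangle,
% with equality if and only if (x, x^*) \in \operatorname{gr} S.
```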