
    Second-order subdifferential calculus with applications to tilt stability in optimization

    The paper concerns the second-order generalized differentiation theory of variational analysis and new applications of this theory to some problems of constrained optimization in finite-dimensional spaces. The main attention is paid to the so-called (full and partial) second-order subdifferentials of extended-real-valued functions, which are dual-type constructions generated by coderivatives of first-order subdifferential mappings. We develop an extended second-order subdifferential calculus and analyze the basic second-order qualification condition ensuring the fulfillment of the principal second-order chain rule for strongly and fully amenable compositions. The calculus results obtained in this way, together with the computation of second-order subdifferentials for piecewise linear-quadratic functions and their major specifications, are then applied to the study of tilt stability of local minimizers for important classes of problems in constrained optimization that include, in particular, problems of nonlinear programming and certain classes of extended nonlinear programs described in composite terms.
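    For readers unfamiliar with the construction, the second-order subdifferential referred to above is usually defined as the coderivative of the first-order subdifferential mapping; a minimal sketch of this standard definition (notation assumed here, not quoted from the paper):

```latex
% Second-order subdifferential of f at \bar{x} relative to \bar{y} \in \partial f(\bar{x}),
% defined as the coderivative D^* of the first-order subdifferential mapping \partial f.
% Standard construction in this framework; the notation is assumed, not taken from the abstract.
\[
  \partial^{2} f(\bar{x},\bar{y})(u) \;:=\; \bigl(D^{*}\partial f\bigr)(\bar{x},\bar{y})(u),
  \qquad u \in \mathbb{R}^{n}.
\]
```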

    Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization

    In this paper, we present a new stochastic algorithm, namely the stochastic block mirror descent (SBMD) method, for solving large-scale nonsmooth and stochastic optimization problems. The basic idea of this algorithm is to incorporate the block-coordinate decomposition and an incremental block averaging scheme into the classic (stochastic) mirror-descent method, in order to significantly reduce the cost per iteration of the latter algorithm. We establish the rate of convergence of the SBMD method along with its associated large-deviation results for solving general nonsmooth and stochastic optimization problems. We also introduce different variants of this method and establish their rates of convergence for solving strongly convex, smooth, and composite optimization problems, as well as certain nonconvex optimization problems. To the best of our knowledge, all these developments related to the SBMD method are new in the stochastic optimization literature. Moreover, some of our results also seem to be new for block coordinate descent methods for deterministic optimization.
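    As a rough illustration of the idea described in the abstract, the following Python sketch updates one randomly chosen coordinate block per iteration with a stochastic gradient step (Euclidean prox-function) and maintains a step-size-weighted running average of the iterates. This is an assumed simplification for exposition, not the authors' exact scheme or step-size policy.

```python
import numpy as np

def sbmd(stoch_grad, x0, blocks, steps, n_iters, seed=0):
    """Illustrative stochastic block mirror descent with a Euclidean
    prox-function: per iteration, one random block is updated using a
    stochastic (sub)gradient, and the iterates are averaged with the
    step sizes as weights (a simple incremental averaging scheme)."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    x_avg = x.copy()
    weight_sum = 0.0
    for k in range(n_iters):
        g = stoch_grad(x)                          # unbiased stochastic (sub)gradient at x
        idx = blocks[rng.integers(len(blocks))]    # pick one coordinate block uniformly
        x[idx] -= steps[k] * g[idx]                # block mirror-descent step (Euclidean case)
        weight_sum += steps[k]
        x_avg += (steps[k] / weight_sum) * (x - x_avg)   # incremental weighted average
    return x_avg
```

    Here `blocks` would be a list of index arrays partitioning the coordinates (e.g. `[np.arange(0, 50), np.arange(50, 100)]`) and `stoch_grad` a sampled gradient oracle; both are placeholders introduced only for this sketch.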

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second one retains the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates. Furthermore, they are computationally attractive since each Newton iteration requires the approximate solution of a linear system of usually small dimension.
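    A minimal sketch of evaluating a forward-backward envelope for a composite objective g + h, assuming g is L-smooth and the parameter gamma is chosen below 1/L; the names `g`, `grad_g`, `h`, and `prox_h` are placeholders for user-supplied problem data, and the snippet is meant only to illustrate the smooth reformulation, not the proposed Newton-CG algorithms.

```python
import numpy as np

def fbe(x, g, grad_g, h, prox_h, gamma):
    """Illustrative evaluation of the forward-backward envelope (FBE) of g + h at x:
       FBE(x) = g(x) - (gamma/2)*||grad_g(x)||^2 + h(z) + ||z - y||^2 / (2*gamma),
    where y = x - gamma*grad_g(x) is the forward (gradient) step and
    z = prox_{gamma*h}(y) is the backward (proximal) step."""
    grad = grad_g(x)
    y = x - gamma * grad               # forward step on the smooth part
    z = prox_h(y, gamma)               # backward (proximal) step on the nonsmooth part
    moreau = h(z) + np.dot(z - y, z - y) / (2.0 * gamma)   # Moreau envelope of h at y
    return g(x) - 0.5 * gamma * np.dot(grad, grad) + moreau
```

    For instance, with g(x) = 0.5*||Ax - b||^2 and h(x) = lam*||x||_1, `prox_h` reduces to componentwise soft thresholding; this particular instance is an assumed example, not one taken from the abstract.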