
    Second-order subdifferential calculus with applications to tilt stability in optimization

    The paper concerns the second-order generalized differentiation theory of variational analysis and new applications of this theory to problems of constrained optimization in finite-dimensional spaces. The main attention is paid to the so-called (full and partial) second-order subdifferentials of extended-real-valued functions, which are dual-type constructions generated by coderivatives of first-order subdifferential mappings. We develop an extended second-order subdifferential calculus and analyze the basic second-order qualification condition ensuring the fulfillment of the principal second-order chain rule for strongly and fully amenable compositions. The calculus results obtained in this way, together with the computation of second-order subdifferentials for piecewise linear-quadratic functions and their major specifications, are then applied to the study of tilt stability of local minimizers for important classes of constrained optimization problems, including problems of nonlinear programming and certain classes of extended nonlinear programs described in composite terms.
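    For orientation, the coderivative-based construction mentioned above is standardly written as follows (a restatement of Mordukhovich's well-known definition, not quoted from the abstract): given an extended-real-valued function f and a subgradient \bar{y} \in \partial f(\bar{x}),

        \partial^2 f(\bar{x}, \bar{y})(u) := (D^* \partial f)(\bar{x}, \bar{y})(u), \qquad u \in \mathbb{R}^n,

    where \partial f is the first-order (limiting) subdifferential and D^* denotes the coderivative of the set-valued mapping \partial f at the point (\bar{x}, \bar{y}).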

    A Path Algorithm for Constrained Estimation

    Many least squares problems involve affine equality and inequality constraints. Although a variety of methods exist for solving such problems, most statisticians find constrained estimation challenging. The current paper proposes a new path following algorithm for quadratic programming based on exact penalization. Similar penalties arise in l_1 regularization in model selection. Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to infinity, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered at a finite value of the penalty constant. The exact path following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the lasso and generalized lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following.
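    To make the exact-penalty idea concrete, here is a minimal NumPy sketch (a toy illustration of exact penalization, not the paper's sweep-operator algorithm; the function name and example values are hypothetical). It projects a point y onto the nonnegative orthant by minimizing 0.5*(x - y)^2 + rho*max(0, -x) coordinatewise, which has a closed-form minimizer, and shows the constrained solution being recovered at a finite value of rho:

    import numpy as np

    # Toy exact-penalty path: minimize 0.5*(x - y)**2 + rho*max(0, -x)
    # per coordinate. Once rho >= -y_i for every violated coordinate,
    # the constrained solution max(y, 0) is recovered at a finite rho.
    def exact_penalty_min(y, rho):
        # For x < 0 the stationary point is x = y + rho (valid when y + rho < 0);
        # otherwise the minimum sits at the kink x = 0 or at a feasible y >= 0.
        return np.where(y + rho < 0, y + rho, np.maximum(y, 0.0))

    y = np.array([-2.0, -0.5, 1.0])        # unconstrained minimizer
    for rho in [0.0, 0.5, 1.0, 2.0, 3.0]:  # follow the path as rho grows
        print(rho, exact_penalty_min(y, rho))
    # By rho = 2.0 the path has hit and stays at the constrained solution [0, 0, 1].

    Each violated coordinate's path hits its constraint at rho = -y_i and slides along it thereafter, mirroring the hit-and-slide behavior the abstract describes as the penalty constant increases.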