
    A Path Algorithm for Constrained Estimation

    Many least squares problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current paper proposes a new path following algorithm for quadratic programming based on exact penalization. Similar penalties arise in $\ell_1$ regularization in model selection. Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to $\infty$, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the lasso and generalized lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following.
    Comment: 26 pages, 5 figures
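    To see the contrast between exact and classical penalties on the smallest possible example, the sketch below (a toy problem of my own choosing, not the paper's sweep-operator algorithm) minimizes 0.5(x - 3)^2 subject to x <= 1 by brute-force grid search under both penalty types:

```python
# Minimal sketch, assuming a hand-picked 1-D toy problem:
#   minimize 0.5 * (x - 3)**2  subject to  x <= 1,
# whose constrained solution is x = 1. Contrast the exact (absolute-value)
# penalty with the classical (squared) penalty as the constant rho grows.
import numpy as np

x = np.linspace(-1.0, 4.0, 200001)            # dense grid over the variable
f = 0.5 * (x - 3.0) ** 2                      # unconstrained objective
violation = np.maximum(0.0, x - 1.0)          # constraint violation max(0, x - 1)

for rho in [0.5, 1.0, 2.0, 4.0]:
    x_exact = x[np.argmin(f + rho * violation)]           # exact penalty
    x_quad = x[np.argmin(f + 0.5 * rho * violation**2)]   # classical squared penalty
    print(f"rho={rho:4.1f}  exact penalty -> x={x_exact:.3f}  "
          f"squared penalty -> x={x_quad:.3f}")

# The exact penalty lands on x = 1.000 once rho >= 2, a *finite* value,
# while the squared penalty gives x = (3 + rho)/(1 + rho), which only
# approaches 1 as rho tends to infinity -- the distinction the abstract draws.
```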

    Adaptive Relaxed ADMM: Convergence Theory and Practical Implementation

    Many modern computer vision and machine learning applications rely on solving difficult optimization problems that involve non-differentiable objective functions and constraints. The alternating direction method of multipliers (ADMM) is a widely used approach to solve such problems. Relaxed ADMM is a generalization of ADMM that often achieves better performance, but its efficiency depends strongly on algorithm parameters that must be chosen by an expert user. We propose an adaptive method that automatically tunes the key algorithm parameters to achieve optimal performance without user oversight. Inspired by recent work on adaptivity, the proposed adaptive relaxed ADMM (ARADMM) is derived by assuming a Barzilai-Borwein style linear gradient. A detailed convergence analysis of ARADMM is provided, and numerical results on several applications demonstrate fast practical convergence.
    Comment: CVPR 2017
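    To make the tuned parameters concrete, here is a minimal sketch of over-relaxed ADMM applied to the lasso, a test problem of my own choosing rather than one from the paper; the penalty rho and relaxation alpha are fixed by hand below, and adapting exactly these two knobs automatically is what ARADMM contributes:

```python
# Sketch of over-relaxed ADMM for the lasso
#   minimize 0.5*||A x - b||^2 + lam*||z||_1  subject to  x - z = 0,
# with hand-set penalty rho and relaxation alpha (ARADMM would tune both).
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def relaxed_admm_lasso(A, b, lam, rho=1.0, alpha=1.5, iters=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Cache the Cholesky factor reused by every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        x_hat = alpha * x + (1.0 - alpha) * z   # over-relaxation step
        z = soft_threshold(x_hat + u, lam / rho)
        u = u + x_hat - z                       # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(relaxed_admm_lasso(A, b, lam=1.0), 3))
```

    With alpha = 1 this reduces to standard ADMM; values in (1, 2) often accelerate convergence, which is why choosing alpha (and rho) well matters enough to automate.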

    Exact Penalization and Necessary Optimality Conditions for Multiobjective Optimization Problems with Equilibrium Constraints

    A calmness condition for a general multiobjective optimization problem with equilibrium constraints is proposed. Some exact penalization properties for two classes of multiobjective penalty problems are established and shown to be equivalent to the calmness condition. Subsequently, a Mordukhovich stationarity (M-stationarity) necessary optimality condition based on the exact penalization results is obtained. Moreover, applications are given to a multiobjective optimization problem with complementarity constraints and to a multiobjective optimization problem with weak vector variational inequality constraints.
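    For orientation, the exact-penalty construction in the multiobjective setting can be written in the following generic form; this is a paraphrase of classical exact-penalty theory in notation of my own choosing, not the paper's formulation (equilibrium constraints make the paper's penalty terms more involved):

```latex
% Illustrative generic form (my notation): penalize a constrained
% multiobjective problem  min F(x)  s.t.  g(x) \in \Lambda  by adding a
% scaled constraint-violation distance to every objective component.
\[
  \min_{x}\; F(x) + \rho\, d\bigl(g(x), \Lambda\bigr)\, e,
  \qquad e = (1,\dots,1)^{\mathsf T}\in\mathbb{R}^{m},
  \quad F(x) = \bigl(f_1(x),\dots,f_m(x)\bigr).
\]
% Exactness means the (weakly) efficient points of this penalized problem
% coincide with those of the original for every finite \rho above some
% threshold; the abstract's result is that this property is equivalent
% to calmness of the constraint system.
```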