A quasi-Newton proximal splitting method
A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise-linear nature of the dual problem. The second part of the paper applies this result to the acceleration of convex minimization problems and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications, including signal processing, sparse recovery, and machine learning and classification.
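As a rough illustration of the idea (not the paper's actual algorithm), the sketch below runs a scaled proximal-gradient loop in which the proximity operator of the l1 norm is evaluated in a diagonal metric; with a diagonal scaling the prox separates into per-coordinate soft-thresholding. The paper's method uses richer quasi-Newton metrics and exploits the piecewise-linear dual, which is not reproduced here; the function names prox_l1_diag and quasi_newton_prox_gradient and the fixed diagonal scaling are illustrative assumptions.

```python
import numpy as np

def prox_l1_diag(y, lam, d):
    """Proximity operator of lam*||x||_1 in the metric induced by diag(d):
       argmin_x lam*||x||_1 + 0.5*(x - y)^T diag(d) (x - y).
       With a diagonal metric this separates into per-coordinate
       soft-thresholding with thresholds lam / d_i."""
    thresh = lam / d
    return np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)

def quasi_newton_prox_gradient(grad_f, lam, x0, d, n_iter=100):
    """Hypothetical scaled (variable-metric) proximal-gradient loop:
       x_{k+1} = prox^{D}_{lam||.||_1}( x_k - D^{-1} grad_f(x_k) ),
       where D = diag(d) plays the role of a diagonal quasi-Newton
       Hessian approximation (kept fixed here for simplicity)."""
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_l1_diag(x - grad_f(x) / d, lam, d)
    return x

if __name__ == "__main__":
    # Toy LASSO instance: f(x) = 0.5*||A x - b||^2, g(x) = lam*||x||_1.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    b = A @ (rng.standard_normal(100) * (rng.random(100) < 0.1))
    grad_f = lambda x: A.T @ (A @ x - b)
    d = np.full(100, np.linalg.norm(A, 2) ** 2)  # safe diagonal scaling
    x_hat = quasi_newton_prox_gradient(grad_f, lam=0.1, x0=np.zeros(100),
                                       d=d, n_iter=500)
    print("nonzeros:", np.count_nonzero(np.abs(x_hat) > 1e-6))
```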
Diagonal quasi-Newton updating formula using log-determinant norm
The quasi-Newton method has been widely used for solving unconstrained optimization problems. Its popularity stems from the fact that only the gradient of the objective function is required at each iterate. Since second derivatives (the Hessian) are not required, quasi-Newton methods are sometimes more efficient than Newton's method, especially when computing the Hessian is expensive. On the other hand, standard quasi-Newton methods require storage of a full matrix that approximates the (inverse) Hessian, so they may not be suitable for large-scale problems. In this paper, we develop a diagonal quasi-Newton updating formula, based on the log-determinant norm, that satisfies the weaker secant equation. The Lagrange multiplier associated with the weaker secant relation is approximated using the Newton-Raphson method. An executable code is developed to test the efficiency of the proposed method against some standard conjugate-gradient methods. Numerical results show that the proposed method performs better than the conjugate-gradient methods.
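A minimal sketch of one plausible reading of this construction, assuming the update minimizes a log-determinant (Bregman) distance from the previous diagonal subject to the weak secant equation s^T B s = s^T y, with the scalar Lagrange multiplier found by Newton-Raphson; the exact objective, safeguards, and line-search details of the paper are not reproduced, and the function name diag_qn_update is illustrative.

```python
import numpy as np

def diag_qn_update(b_prev, s, y, newton_iters=20):
    """Hypothetical diagonal quasi-Newton update (a sketch, not the paper's
    exact formula): choose B = diag(b) minimizing a log-determinant (Bregman)
    distance to the previous diagonal b_prev subject to the weak secant
    equation s^T B s = s^T y.  Stationarity gives
        b_i = b_prev_i / (1 + mu * b_prev_i * s_i**2),
    and the scalar Lagrange multiplier mu solves
        g(mu) = sum_i b_i(mu) * s_i**2 - s^T y = 0
    which is handled here by Newton-Raphson."""
    sy = float(s @ y)
    if sy <= 0:                # skip the update if curvature is non-positive
        return b_prev
    w = b_prev * s**2          # per-coordinate weights b_prev_i * s_i^2
    mu = 0.0
    for _ in range(newton_iters):
        denom = 1.0 + mu * w
        g = np.sum(w / denom) - sy        # weak secant residual
        dg = -np.sum(w**2 / denom**2)     # derivative of g w.r.t. mu
        step = g / dg
        mu -= step
        if abs(step) < 1e-12:
            break
    b_new = b_prev / (1.0 + mu * b_prev * s**2)
    return np.maximum(b_new, 1e-10)       # keep the diagonal positive

if __name__ == "__main__":
    # One update on a toy quadratic f(x) = 0.5 * x^T diag(h) x.
    h = np.array([1.0, 4.0, 9.0])
    grad = lambda x: h * x
    x0, x1 = np.ones(3), np.array([0.5, 0.9, 0.99])
    s, y = x1 - x0, grad(x1) - grad(x0)
    b = diag_qn_update(np.ones(3), s, y)
    print("updated diagonal:", b)
    print("s^T B s =", s @ (b * s), " s^T y =", s @ y)
```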