SCORE: approximating curvature information under self-concordant regularization
In this paper, we propose the SCORE (self-concordant regularization)
framework for unconstrained minimization problems, which incorporates
second-order information into the Newton-decrement framework for convex
optimization. We propose the generalized Gauss-Newton with Self-Concordant
Regularization (GGN-SCORE) algorithm, which updates the minimization variables
each time it receives a new input batch. The proposed algorithm exploits the
structure of the second-order information in the Hessian matrix, thereby
reducing computational overhead. GGN-SCORE demonstrates how convergence can be
sped up while model generalization is also improved for problems that involve
regularized minimization under the SCORE framework. Numerical experiments show
the efficiency of our method and its fast convergence, which compare favorably
against baseline first-order and quasi-Newton methods. Additional experiments
on non-convex (overparameterized) neural network training problems show
similar convergence behaviour, highlighting the promise of the proposed
algorithm for non-convex optimization.
Comment: 21 pages, 12 figures
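The abstract gives no pseudocode, so the following is only a rough sketch of the generalized Gauss-Newton idea the method builds on: the Hessian is approximated by J^T Q J (Jacobian of the model outputs, Hessian of the loss with respect to those outputs), plus the curvature of the regularizer. All names here (ggn_step, Q, reg_hess, damping) are illustrative assumptions, and the sketch omits the self-concordance machinery that is specific to GGN-SCORE.

import numpy as np

def ggn_step(J, Q, grad, reg_hess, damping=1e-4):
    # One generalized Gauss-Newton update direction (illustrative sketch).
    #   J        : (m, n) Jacobian of model outputs w.r.t. parameters
    #   Q        : (m, m) Hessian of the loss w.r.t. model outputs
    #   grad     : (n,)   gradient of the regularized objective
    #   reg_hess : (n, n) Hessian of the regularizer
    n = J.shape[1]
    # GGN curvature matrix: J^T Q J plus regularizer curvature and damping.
    G = J.T @ Q @ J + reg_hess + damping * np.eye(n)
    # Solving the Newton-type system G d = -grad gives the update direction.
    return np.linalg.solve(G, -grad)

In the batch setting described above, J, Q, and grad would be recomputed on each incoming batch before taking the step.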
Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as the
solution of an optimization problem. In this light, the sheer non-linear,
non-convex, and large-scale nature of many of these inversions gives rise to
some very challenging optimization problems. The inverse problem community has
long been developing various techniques for solving such optimization tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which may have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and machine learning
communities, we hope that this survey can serve as a bridge between these two
communities and encourage cross-fertilization of ideas.
Comment: 13 pages
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second one retains the global efficiency
estimates of the corresponding first-order methods while achieving fast
asymptotic convergence rates. Furthermore, the methods are computationally
attractive, since each Newton iteration requires only the approximate solution
of a linear system of usually small dimension.
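As a rough illustration (not the authors' implementation), here is a minimal sketch of evaluating the FBE of f + g at a point, using the standard definition FBE_gamma(x) = f(x) + g(z) - grad_f(x)^T (x - z) + ||x - z||^2 / (2*gamma), where z is the forward-backward (proximal gradient) step; prox_l1 is shown as one common proximal operator. Function names are assumptions made for this sketch.

import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbe_value(x, f, grad_f, g, prox_g, gamma):
    # Forward-backward envelope of f + g at x (illustrative sketch).
    gf = grad_f(x)
    z = prox_g(x - gamma * gf, gamma)  # forward-backward step
    # FBE_gamma(x) = f(x) + g(z) - gf^T (x - z) + ||x - z||^2 / (2*gamma)
    r = x - z
    return f(x) + g(z) - gf @ r + (r @ r) / (2.0 * gamma)

The FBE is continuously differentiable when f is twice differentiable with Lipschitz-continuous gradient and gamma is chosen small enough, which is what allows Newton-type machinery to be applied to the nonsmooth composite problem.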