Trust Region Methods For Nonconvex Stochastic Optimization Beyond Lipschitz Smoothness
In many important machine learning applications, the standard assumption of
having a globally Lipschitz continuous gradient may fail to hold. This paper
delves into a more general $(L_0, L_1)$-smoothness setting, which gains
particular significance within the realms of deep neural networks and
distributionally robust optimization (DRO). We demonstrate the significant
advantage of trust region methods for stochastic nonconvex optimization under
such a generalized smoothness assumption. We show that first-order trust region
methods can recover the normalized and clipped stochastic gradient methods as
special cases, and we then provide a unified analysis showing their convergence to
first-order stationary conditions. Motivated by the important application of
DRO, we propose a generalized high-order smoothness condition, under which
second-order trust region methods can achieve a complexity of
$\mathcal{O}(\epsilon^{-3.5})$ for convergence to second-order stationary
points. By incorporating variance reduction, the second-order trust region
method obtains an even better complexity of $\mathcal{O}(\epsilon^{-3})$,
matching the optimal bound for standard smooth optimization. To the best of our
knowledge, this is the first work to show convergence beyond the first-order
stationary condition for generalized smooth optimization. Preliminary
experiments show that our proposed algorithms perform favorably compared with
existing methods.
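
A minimal sketch (our illustration, not code from the paper) of the first-order reduction mentioned above: the linear trust-region subproblem, minimize $g^\top d$ subject to $\|d\| \le r$, is solved on the boundary by the normalized-gradient step $d = -r\,g/\|g\|$, and a gradient-dependent choice of the radius reproduces clipping. The function names, the Euclidean norm, and the radius rule are our assumptions.

    import numpy as np

    def tr_first_order_step(x, g, radius):
        # Solve the linear trust-region subproblem:
        #   minimize <g, d>  subject to  ||d|| <= radius.
        # The minimizer sits on the boundary, d = -radius * g / ||g||,
        # i.e. a normalized-gradient step of length `radius`.
        g_norm = np.linalg.norm(g)
        if g_norm == 0.0:
            return x
        return x - radius * g / g_norm

    def clipped_sgd_step(x, g, lr, clip):
        # Clipped SGD recovered as a special case: with radius
        # lr * min(||g||, clip), the update equals -lr * g when
        # ||g|| <= clip, and the clipped step -lr * clip * g / ||g||
        # otherwise.
        g_norm = np.linalg.norm(g)
        return tr_first_order_step(x, g, lr * min(g_norm, clip))

Taking a constant radius $r$ instead yields normalized SGD with step length $r$.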
Homotopy techniques in linear programming
In this note, we consider the solution of a linear program, using suitably adapted homotopy techniques of nonlinear programming and equation solving that move through the interior of the polytope of feasible solutions. The homotopy is defined by means of a quadratic regularizing term in an appropriate metric. We also briefly discuss algorithmic implications and connections with the affine variant of Karmarkar's method.
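
One plausible form of such a homotopy (our sketch under assumptions; the note does not spell out its exact formulation): from a strictly interior feasible point $x^{(0)}$ and a positive definite metric $D$, follow the path of regularized minimizers as the parameter $\mu$ decreases to zero,

    \[
      x(\mu) \;=\; \operatorname*{arg\,min}_{Ax = b,\; x \ge 0}
        \; c^{\top} x \;+\; \frac{\mu}{2}\,\bigl(x - x^{(0)}\bigr)^{\top} D\,\bigl(x - x^{(0)}\bigr),
        \qquad \mu \downarrow 0 .
    \]

For large $\mu$ the quadratic term dominates and $x(\mu)$ stays near the interior point $x^{(0)}$, while at $\mu = 0$ the path ends at an optimal solution of the linear program. If, at a current iterate $x_k$, one chooses the metric $D = \operatorname{diag}(x_k)^{-2}$ (this specific choice is our assumption), the resulting step direction coincides with the affine-scaling direction underlying the affine variant of Karmarkar's method.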