159 research outputs found
Regularized Newton Method with Global Convergence
We present a Newton-type method that converges fast from any initialization
and for arbitrary convex objectives with Lipschitz Hessians. We achieve this by
merging the ideas of cubic regularization with a certain adaptive
Levenberg--Marquardt penalty. In particular, we show that the iterates given by
$x_{k+1} = x_k - \bigl(\nabla^2 f(x_k) + \sqrt{H\,\|\nabla f(x_k)\|}\, I\bigr)^{-1} \nabla f(x_k)$, where $H > 0$ is a constant, converge
globally with a $\mathcal{O}(1/k^2)$ rate. Our method is the first
variant of Newton's method that has both cheap iterations and provably fast
global convergence. Moreover, we prove that locally our method converges
superlinearly when the objective is strongly convex. To boost the method's
performance, we present a line search procedure that does not need
hyperparameters and is provably efficient. Comment: 21 pages, 2 figures
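The regularized update is simple enough to state in a few lines. Below is a minimal NumPy sketch of a Newton step with the gradient-norm-dependent Levenberg--Marquardt regularization described in the abstract; the test objective, the value of the constant H, and the stopping rule are illustrative assumptions, not choices taken from the paper.
```python
# Minimal sketch of the regularized Newton update described above (illustrative only).
import numpy as np

def regularized_newton(grad, hess, x0, H=1.0, max_iter=100, tol=1e-10):
    """Iterate x_{k+1} = x_k - (hess(x_k) + sqrt(H*||grad(x_k)||) I)^{-1} grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        lam = np.sqrt(H * np.linalg.norm(g))   # adaptive Levenberg--Marquardt penalty
        x = x - np.linalg.solve(hess(x) + lam * np.eye(x.size), g)
    return x

# Illustrative convex objective: f(x) = 0.5 x^T A x - b^T x with A positive definite.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A @ A.T + np.eye(5)
b = rng.standard_normal(5)
x_star = regularized_newton(lambda x: A @ x - b, lambda x: A, np.zeros(5))
print(np.linalg.norm(A @ x_star - b))   # gradient norm at the returned point
```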
Finite-sample analysis of M-estimators using self-concordance
The classical asymptotic theory for parametric $M$-estimators guarantees that, in the limit of infinite sample size, the excess risk has a chi-square type distribution, even in the misspecified case. We demonstrate how self-concordance of the loss allows one to characterize the critical sample size sufficient to guarantee a chi-square type in-probability bound for the excess risk. Specifically, we consider two classes of losses: (i) self-concordant losses in the classical sense of Nesterov and Nemirovski, i.e., whose third derivative is uniformly bounded by the $3/2$ power of the second derivative; (ii) pseudo self-concordant losses, for which this power is removed. These classes contain losses corresponding to several generalized linear models, including the logistic loss and pseudo-Huber losses. Our basic result under minimal assumptions bounds the critical sample size by $O(d \cdot d_{\mathrm{eff}})$, where $d$ is the parameter dimension and $d_{\mathrm{eff}}$ the effective dimension that accounts for model misspecification. In contrast to the existing results, we only impose local assumptions that concern the population risk minimizer $\theta_*$. Namely, we assume that the calibrated design, i.e., the design scaled by the square root of the second derivative of the loss, is subgaussian at $\theta_*$. Besides, for type-(ii) losses we require boundedness of a certain measure of curvature of the population risk at $\theta_*$. Our improved result, obtained under slightly stronger assumptions, bounds the critical sample size from above by a smaller quantity. Namely, the local assumptions must hold in the neighborhood of $\theta_*$ given by the Dikin ellipsoid of the population risk. Interestingly, we find that, for logistic regression with Gaussian design, these conditions impose no actual restriction: the subgaussian parameter and the curvature measure remain near-constant over the Dikin ellipsoid. Finally, we extend some of these results to $\ell_1$-penalized estimators in high dimensions.
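As a rough companion to the statement above, the sketch below estimates the effective dimension for a deliberately misspecified logistic model, assuming the usual sandwich-type definition $d_{\mathrm{eff}} = \mathrm{tr}(H^{-1}G)$ with $H$ the Hessian of the risk and $G$ the gradient covariance at the fitted parameter; this exact definition and the data-generating process are illustrative assumptions, not taken from the paper.
```python
# Hedged sketch: effective dimension d_eff ~ tr(H^{-1} G) for a misspecified logistic model.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50_000, 10
X = rng.standard_normal((n, d))
# Misspecified labels: a nonlinear true model fit with a linear logistic model.
p_true = 1.0 / (1.0 + np.exp(-np.tanh(X[:, 0]) - 0.5 * X[:, 1] ** 2))
y = rng.binomial(1, p_true)

# Fit logistic regression by plain Newton iterations (empirical risk minimizer).
theta = np.zeros(d)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    grad = X.T @ (p - y) / n
    hess = (X * (p * (1 - p))[:, None]).T @ X / n
    theta -= np.linalg.solve(hess, grad)

# Sandwich quantities at theta: H = E[loss'' * x x^T], G = E[(loss')^2 * x x^T].
p = 1.0 / (1.0 + np.exp(-X @ theta))
H = (X * (p * (1 - p))[:, None]).T @ X / n
G = (X * ((p - y) ** 2)[:, None]).T @ X / n
d_eff = np.trace(np.linalg.solve(H, G))
print(f"parameter dimension d = {d}, effective dimension d_eff ~ {d_eff:.2f}")
```
Under correct specification $G \approx H$ and $d_{\mathrm{eff}} \approx d$; misspecification makes the two differ, which is exactly what the effective dimension is meant to capture.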
PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates
This paper introduces PROMISE (Preconditioned Stochastic Optimization Methods
by Incorporating Scalable Curvature Estimates), a suite of sketching-based
preconditioned stochastic gradient algorithms for solving large-scale convex
optimization problems arising in machine learning. PROMISE includes
preconditioned versions of SVRG, SAGA, and Katyusha; each algorithm comes with
a strong theoretical analysis and effective default hyperparameter values. In
contrast, traditional stochastic gradient methods require careful
hyperparameter tuning to succeed, and degrade in the presence of
ill-conditioning, a ubiquitous phenomenon in machine learning. Empirically, we
verify the superiority of the proposed algorithms by showing that, using
default hyperparameter values, they outperform or match popular tuned
stochastic gradient optimizers on a test bed of ridge and logistic
regression problems assembled from benchmark machine learning repositories. On
the theoretical side, this paper introduces the notion of quadratic regularity
in order to establish linear convergence of all proposed methods even when the
preconditioner is updated infrequently. The speed of linear convergence is
determined by the quadratic regularity ratio, which often provides a tighter
bound on the convergence rate compared to the condition number, both in theory
and in practice, and explains the fast global linear convergence of the
proposed methods. Comment: 127 pages, 31 figures
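To make the idea concrete, here is a hedged sketch of a preconditioned stochastic gradient loop in the spirit described above: a low-rank curvature estimate is computed from a subsample, refreshed only occasionally, and used to precondition minibatch gradient steps on ridge-regularized logistic regression. This is not the PROMISE package or any of its preconditioned SVRG/SAGA/Katyusha algorithms; the eigenvalue-based curvature sketch, rank, batch sizes, step size, and refresh interval are all illustrative assumptions.
```python
# Hedged sketch of a preconditioned SGD loop with an infrequently refreshed curvature estimate.
import numpy as np

rng = np.random.default_rng(0)
n, d, rank, reg = 20_000, 50, 10, 1e-3
X = rng.standard_normal((n, d)) * np.linspace(0.1, 5.0, d)   # ill-conditioned design
w_true = rng.standard_normal(d)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ w_true / np.sqrt(d))))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def curvature_estimate(w, batch):
    """Top-`rank` eigenpairs of a subsampled ridge-logistic Hessian (the curvature sketch)."""
    Xb = X[batch]
    p = sigmoid(Xb @ w)
    Hb = (Xb * (p * (1 - p))[:, None]).T @ Xb / len(batch) + reg * np.eye(d)
    lam, U = np.linalg.eigh(Hb)
    return lam[-rank:], U[:, -rank:]

def precondition(lam, U, g, rho):
    """Apply (U diag(lam) U^T + rho (I - U U^T))^{-1} to g: low-rank part plus isotropic tail."""
    Ug = U.T @ g
    return U @ (Ug / lam) + (g - U @ Ug) / rho

w, step = np.zeros(d), 0.5
for t in range(2000):
    if t % 200 == 0:                                   # curvature is refreshed only rarely
        lam, U = curvature_estimate(w, rng.choice(n, 2000, replace=False))
        rho = lam[0]                                   # tail damping = smallest kept eigenvalue
    batch = rng.choice(n, 128, replace=False)
    g = X[batch].T @ (sigmoid(X[batch] @ w) - y[batch]) / 128 + reg * w
    w -= step * precondition(lam, U, g, rho)

z = X @ w
obj = np.mean(np.log1p(np.exp(np.clip(z, -30, 30))) - y * z) + 0.5 * reg * w @ w
print("final regularized logistic loss:", obj)
```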
Extra-Newton: A First Approach to Noise-Adaptive Accelerated Second-Order Methods
This work proposes a universal and adaptive second-order method for
minimizing second-order smooth, convex functions. Our algorithm achieves
$O(\sigma/\sqrt{T})$ convergence when the oracle feedback is stochastic with
variance $\sigma^2$, and improves its convergence to $O(1/T^3)$ with
deterministic oracles, where $T$ is the number of iterations. Our method also
interpolates these rates without knowing the nature of the oracle a priori,
which is enabled by a parameter-free adaptive step-size that is oblivious to
the knowledge of smoothness modulus, variance bounds and the diameter of the
constrained set. To our knowledge, this is the first universal algorithm with
such global guarantees within the second-order optimization literature. Comment: 32 pages, 4 figures, accepted at NeurIPS 2022
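The key ingredient advertised above is a step size that adapts to the observed feedback without knowing the smoothness modulus, the variance bound, or the diameter of the constrained set. The sketch below illustrates only that adaptivity idea with an AdaGrad-norm step inside a plain first-order stochastic gradient loop; it is a simplified stand-in, not the Extra-Newton algorithm itself, and the quadratic objective, noise model, and base step size are assumptions.
```python
# Hedged illustration of a noise-adaptive, parameter-free step size (AdaGrad-norm style):
# eta_t = eta0 / sqrt(1 + sum_s ||g_s||^2) shrinks only as much as the observed gradients require,
# so the same loop handles deterministic and stochastic oracles without tuning.
import numpy as np

def adaptive_sgd(grad_oracle, x0, iters=5000, eta0=1.0):
    x, acc = np.asarray(x0, dtype=float), 0.0
    avg = np.zeros_like(x)
    for t in range(1, iters + 1):
        g = grad_oracle(x)
        acc += np.dot(g, g)                      # accumulated squared gradient norms
        x = x - eta0 / np.sqrt(1.0 + acc) * g    # step size oblivious to smoothness/variance
        avg += (x - avg) / t                     # averaged iterate
    return avg

rng = np.random.default_rng(0)
A = np.diag(np.linspace(0.5, 20.0, 20))
grad = lambda x: A @ x                                    # deterministic oracle
noisy = lambda x: A @ x + 0.5 * rng.standard_normal(20)   # stochastic oracle with noise
x0 = np.ones(20)
for name, oracle in [("deterministic", grad), ("stochastic", noisy)]:
    x_out = adaptive_sgd(oracle, x0)
    print(name, 0.5 * x_out @ A @ x_out)   # objective value f(x) = 0.5 x^T A x
```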