
    Regularized Newton Method with Global $O(1/k^2)$ Convergence

    We present a Newton-type method that converges fast from any initialization and for arbitrary convex objectives with Lipschitz Hessians. We achieve this by merging the ideas of cubic regularization with a certain adaptive Levenberg--Marquardt penalty. In particular, we show that the iterates given by $x^{k+1}=x^k - \bigl(\nabla^2 f(x^k) + \sqrt{H\|\nabla f(x^k)\|}\, \mathbf{I}\bigr)^{-1}\nabla f(x^k)$, where $H>0$ is a constant, converge globally with a $\mathcal{O}(\frac{1}{k^2})$ rate. Our method is the first variant of Newton's method that has both cheap iterations and provably fast global convergence. Moreover, we prove that locally our method converges superlinearly when the objective is strongly convex. To boost the method's performance, we present a line search procedure that does not need hyperparameters and is provably efficient. Comment: 21 pages, 2 figures
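    As a concrete reading of the displayed update, the following minimal NumPy sketch applies it to a small synthetic $\ell_2$-regularized logistic-regression problem. The data, the regularization strength, the choice $H=1$, and the fixed iteration count are illustrative assumptions, and the paper's hyperparameter-free line search is not reproduced here.

        import numpy as np

        def reg_newton_step(x, grad_f, hess_f, H):
            # One step of the update quoted above:
            #   x+ = x - (hess f(x) + sqrt(H * ||grad f(x)||) * I)^{-1} grad f(x)
            g = grad_f(x)
            M = hess_f(x) + np.sqrt(H * np.linalg.norm(g)) * np.eye(x.size)
            return x - np.linalg.solve(M, g)

        # Illustrative convex objective: l2-regularized logistic loss on synthetic
        # data (the data, the 1e-3 regularization, and H = 1.0 are assumptions).
        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 5))
        y = rng.choice([-1.0, 1.0], size=200)

        def grad_f(w):
            m = y * (X @ w)
            return -(X.T @ (y / (1.0 + np.exp(m)))) / len(y) + 1e-3 * w

        def hess_f(w):
            p = 1.0 / (1.0 + np.exp(-y * (X @ w)))
            return (X.T * (p * (1.0 - p))) @ X / len(y) + 1e-3 * np.eye(X.shape[1])

        w, H = np.zeros(5), 1.0
        for k in range(20):
            w = reg_newton_step(w, grad_f, hess_f, H)
        print(np.linalg.norm(grad_f(w)))    # gradient norm shrinks across iterations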

    Finite-sample analysis of M-estimators using self-concordance

    The classical asymptotic theory for parametric $M$-estimators guarantees that, in the limit of infinite sample size, the excess risk has a chi-square type distribution, even in the misspecified case. We demonstrate how self-concordance of the loss allows us to characterize the critical sample size sufficient to guarantee a chi-square type in-probability bound for the excess risk. Specifically, we consider two classes of losses: (i) self-concordant losses in the classical sense of Nesterov and Nemirovski, i.e., whose third derivative is uniformly bounded by the $3/2$ power of the second derivative; (ii) pseudo self-concordant losses, for which the power is removed. These classes contain losses corresponding to several generalized linear models, including the logistic loss and pseudo-Huber losses. Our basic result under minimal assumptions bounds the critical sample size by $O(d \cdot d_{\text{eff}})$, where $d$ is the parameter dimension and $d_{\text{eff}}$ is the effective dimension that accounts for model misspecification. In contrast to the existing results, we only impose local assumptions that concern the population risk minimizer $\theta_*$. Namely, we assume that the calibrated design, i.e., the design scaled by the square root of the second derivative of the loss, is subgaussian at $\theta_*$. Besides, for type-(ii) losses we require boundedness of a certain measure of curvature of the population risk at $\theta_*$. Our improved result bounds the critical sample size from above as $O(\max\{d_{\text{eff}}, d \log d\})$ under slightly stronger assumptions. Namely, the local assumptions must hold in the neighborhood of $\theta_*$ given by the Dikin ellipsoid of the population risk. Interestingly, we find that, for logistic regression with Gaussian design, there is no actual restriction of conditions: the subgaussian parameter and curvature measure remain near-constant over the Dikin ellipsoid. Finally, we extend some of these results to $\ell_1$-penalized estimators in high dimensions.
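    As a quick illustration of the pseudo self-concordance property invoked above, the following NumPy check verifies numerically that the logistic loss $\ell(t)=\log(1+e^{-t})$ satisfies $|\ell'''(t)| \le \ell''(t)$ on a grid of points (the grid and tolerance are arbitrary choices), whereas the classical $3/2$-power condition $|\ell'''| \le C(\ell'')^{3/2}$ holds for no fixed constant $C$.

        import numpy as np

        # Logistic loss l(t) = log(1 + exp(-t)); with s = sigmoid(t):
        #   l''(t)  = s * (1 - s)
        #   l'''(t) = s * (1 - s) * (1 - 2 s)
        # Pseudo self-concordance: |l'''(t)| <= l''(t) for all t (constant 1).
        t = np.linspace(-30.0, 30.0, 4001)
        s = 1.0 / (1.0 + np.exp(-t))
        d2 = s * (1.0 - s)
        d3 = d2 * (1.0 - 2.0 * s)

        assert np.all(np.abs(d3) <= d2 + 1e-12)
        # The classical ratio |l'''| / (l'')**1.5 grows without bound in the tails,
        # which is why the logistic loss is only *pseudo* self-concordant.
        print(np.max(np.abs(d3) / np.maximum(d2, 1e-300)),        # stays <= 1
              np.max(np.abs(d3) / np.maximum(d2, 1e-300) ** 1.5))  # large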

    PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates

    This paper introduces PROMISE (\textbf{Pr}econditioned Stochastic \textbf{O}ptimization \textbf{M}ethods by \textbf{I}ncorporating \textbf{S}calable Curvature \textbf{E}stimates), a suite of sketching-based preconditioned stochastic gradient algorithms for solving large-scale convex optimization problems arising in machine learning. PROMISE includes preconditioned versions of SVRG, SAGA, and Katyusha; each algorithm comes with a strong theoretical analysis and effective default hyperparameter values. In contrast, traditional stochastic gradient methods require careful hyperparameter tuning to succeed, and degrade in the presence of ill-conditioning, a ubiquitous phenomenon in machine learning. Empirically, we verify the superiority of the proposed algorithms by showing that, using default hyperparameter values, they outperform or match popular tuned stochastic gradient optimizers on a test bed of 51 ridge and logistic regression problems assembled from benchmark machine learning repositories. On the theoretical side, this paper introduces the notion of quadratic regularity in order to establish linear convergence of all proposed methods even when the preconditioner is updated infrequently. The speed of linear convergence is determined by the quadratic regularity ratio, which often provides a tighter bound on the convergence rate compared to the condition number, both in theory and in practice, and explains the fast global linear convergence of the proposed methods. Comment: 127 pages, 31 figures
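    PROMISE's algorithms themselves are not reproduced here; the sketch below only illustrates the general idea of a scalable, sketch-based curvature estimate, using a randomized Nyström approximation of a ridge-regression Hessian to precondition plain minibatch SGD. The data, sketch rank, damping rho, and step size are illustrative assumptions rather than the paper's defaults, and the variance-reduced methods (SVRG, SAGA, Katyusha) are replaced by vanilla SGD for brevity.

        import numpy as np

        def nystrom_preconditioner(hess_mv, dim, rank, rho, rng):
            # Rank-`rank` randomized Nystrom approximation U diag(lam) U^T of a PSD
            # Hessian accessed only through matrix-vector products, plus a shift rho.
            # Returns a function applying (U diag(lam) U^T + rho I)^{-1} to a vector.
            Omega = np.linalg.qr(rng.standard_normal((dim, rank)))[0]
            Y = np.column_stack([hess_mv(Omega[:, j]) for j in range(rank)])
            nu = 1e-10 * np.linalg.norm(Y)              # small shift for stability
            Y_nu = Y + nu * Omega
            C = np.linalg.cholesky(Omega.T @ Y_nu)
            B = np.linalg.solve(C, Y_nu.T).T            # B = Y_nu C^{-T}
            U, sigma, _ = np.linalg.svd(B, full_matrices=False)
            lam = np.maximum(sigma**2 - nu, 0.0)

            def apply_inv(g):
                # (U diag(lam) U^T + rho I)^{-1} g, using orthonormal columns of U
                return g / rho + U @ ((U.T @ g) * (1.0 / (lam + rho) - 1.0 / rho))

            return apply_inv

        # Illustrative use: preconditioned minibatch SGD on a synthetic,
        # ill-conditioned ridge-regression problem (all sizes are assumptions).
        rng = np.random.default_rng(0)
        n, d, mu = 500, 50, 1e-2
        A = rng.standard_normal((n, d)) * np.logspace(0, -3, d)
        b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

        hess_mv = lambda v: A.T @ (A @ v) / n + mu * v       # ridge Hessian (constant)
        P_inv = nystrom_preconditioner(hess_mv, d, rank=20, rho=mu, rng=rng)

        obj = lambda w: 0.5 * np.mean((A @ w - b) ** 2) + 0.5 * mu * w @ w
        w = np.zeros(d)
        for t in range(300):
            i = rng.integers(n, size=32)
            g = A[i].T @ (A[i] @ w - b[i]) / len(i) + mu * w  # minibatch gradient
            w -= 0.5 * P_inv(g)                               # preconditioned step
        print(obj(np.zeros(d)), obj(w))                       # objective drops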

    Extra-Newton: A First Approach to Noise-Adaptive Accelerated Second-Order Methods

    This work proposes a universal and adaptive second-order method for minimizing second-order smooth, convex functions. Our algorithm achieves $O(\sigma / \sqrt{T})$ convergence when the oracle feedback is stochastic with variance $\sigma^2$, and improves its convergence to $O(1/T^3)$ with deterministic oracles, where $T$ is the number of iterations. Our method also interpolates these rates without knowing the nature of the oracle a priori, which is enabled by a parameter-free adaptive step-size that is oblivious to the knowledge of the smoothness modulus, variance bounds, and the diameter of the constrained set. To our knowledge, this is the first universal algorithm with such global guarantees within the second-order optimization literature. Comment: 32 pages, 4 figures, accepted at NeurIPS 2022
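    The Extra-Newton update itself is not reproduced here; as a generic illustration of the kind of parameter-free adaptive step-size the abstract refers to (one needing no smoothness modulus, variance bound, or diameter), the sketch below runs an AdaGrad-norm style rule on a toy quadratic. This is a standard first-order example of such adaptivity, not the paper's second-order method, and the problem and constants are arbitrary.

        import numpy as np

        def adagrad_norm(grad, x0, iters):
            # Gradient descent with the AdaGrad-norm step size
            #   eta_t = 1 / sqrt(1 + sum of past squared gradient norms),
            # which needs no smoothness, variance, or diameter constants.
            x, acc = np.array(x0, dtype=float), 0.0
            for _ in range(iters):
                g = grad(x)
                acc += np.dot(g, g)
                x -= g / np.sqrt(1.0 + acc)
            return x

        # Toy convex quadratic (assumed problem, not from the paper).
        Q = np.diag([10.0, 1.0, 0.1])
        x = adagrad_norm(lambda x: Q @ x, np.ones(3), 500)
        print(np.linalg.norm(x))   # approaches the minimizer at the origin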