
Stability and Deviation Optimal Risk Bounds with Convergence Rate $O(1/n)$

The sharpest known high probability generalization bounds for uniformly stable algorithms (Feldman, Vondrák, 2018, 2019), (Bousquet, Klochkov, Zhivotovskiy, 2020) contain a generally inevitable sampling error term of order $\Theta(1/\sqrt{n})$. When applied to excess risk bounds, this leads to suboptimal results in several standard stochastic convex optimization problems. We show that if the so-called Bernstein condition is satisfied, the term $\Theta(1/\sqrt{n})$ can be avoided, and high probability excess risk bounds of order up to $O(1/n)$ are possible via uniform stability. Using this result, we show a high probability excess risk bound with the rate $O(\log n/n)$ for strongly convex and Lipschitz losses, valid for \emph{any} empirical risk minimization method. This resolves a question of Shalev-Shwartz, Shamir, Srebro, and Sridharan (2009). We discuss how $O(\log n/n)$ high probability excess risk bounds are possible for projected gradient descent in the case of strongly convex and Lipschitz losses without the usual smoothness assumption.

Comment: 12 pages; presented at NeurIPS
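To make the rates concrete, here is a hedged sketch of the quantitative forms involved; constants and the precise logarithmic factors are omitted, and the loss bound $M$, strong convexity parameter $\lambda$, and Lipschitz constant $L$ are notation introduced here for illustration rather than taken from the abstract. The earlier uniform-stability bounds referenced above are, roughly, of the form
\[
R(A_S) - \widehat{R}_S(A_S) \;\lesssim\; \gamma \,\log n \,\log\tfrac{1}{\delta} \;+\; M\sqrt{\tfrac{\log(1/\delta)}{n}}
\]
with probability at least $1-\delta$, where $\gamma$ is the uniform stability of the algorithm producing $A_S$ and the second term is the $\Theta(1/\sqrt{n})$ sampling error that the paper removes under the Bernstein condition. For $\lambda$-strongly convex, $L$-Lipschitz losses, the standard uniform stability level of any empirical risk minimizer is
\[
\gamma \;=\; O\!\left(\frac{L^2}{\lambda n}\right),
\]
so dropping the $\sqrt{n}$ term yields an excess risk bound consistent with the rate claimed in the abstract:
\[
R(A_S) - \inf_{w} R(w) \;\lesssim\; \frac{L^2 \,\log n \,\log(1/\delta)}{\lambda n}.
\]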