Optimal Excess Risk Bounds for Empirical Risk Minimization on p-norm Linear Regression

Abstract

We study the performance of empirical risk minimization on the p-norm linear regression problem for p ∈ (1, ∞). We show that, in the realizable case, under no moment assumptions, and up to a distribution-dependent constant, O(d) samples are enough to exactly recover the target. Otherwise, for p ∈ [2, ∞), and under weak moment assumptions on the target and the covariates, we prove a high-probability excess risk bound on the empirical risk minimizer whose leading term matches, up to a constant that depends only on p, the asymptotically exact rate. We extend this result to the case p ∈ (1, 2) under mild assumptions that guarantee the existence of the Hessian of the risk at its minimizer.
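
The sketch below is not from the paper; it is a minimal illustration of the setup the abstract describes: the empirical risk minimizer for the p-norm loss minimizes (1/n) Σ_i |y_i − ⟨w, x_i⟩|^p over w. The synthetic data, the function name erm_p_norm, and the use of scipy.optimize.minimize are assumptions made purely for illustration.

```python
# Minimal sketch of empirical risk minimization for p-norm linear regression.
# Everything here (data generation, solver choice) is illustrative, not the
# paper's method or experiments.
import numpy as np
from scipy.optimize import minimize


def erm_p_norm(X, y, p):
    """Return the empirical risk minimizer for the p-norm loss, p in (1, inf)."""

    def empirical_risk(w):
        # Empirical risk: (1/n) * sum_i |y_i - <w, x_i>|^p
        residuals = y - X @ w
        return np.mean(np.abs(residuals) ** p)

    # The objective is convex in w for p >= 1; warm-start at the least-squares solution.
    w0, *_ = np.linalg.lstsq(X, y, rcond=None)
    result = minimize(empirical_risk, w0, method="L-BFGS-B")
    return result.x


# Illustrative usage on synthetic realizable data (y = <w*, x> exactly),
# where ERM should recover w* up to optimization accuracy.
rng = np.random.default_rng(0)
n, d, p = 200, 5, 3.0
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star
w_hat = erm_p_norm(X, y, p)
print(np.max(np.abs(w_hat - w_star)))
```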
