
    Optimal robust mean and location estimation via convex programs with respect to any pseudo-norms

    We consider the problem of robust mean and location estimation with respect to any pseudo-norm of the form $x \in \mathbb{R}^d \mapsto \|x\|_S = \sup_{v \in S} \langle v, x \rangle$, where $S$ is any symmetric subset of $\mathbb{R}^d$. We show that the deviation-optimal minimax subgaussian rate for confidence $1-\delta$ is
    $$\max\left(\frac{\ell^*(\Sigma^{1/2}S)}{\sqrt{N}},\ \sup_{v \in S} \|\Sigma^{1/2}v\|_2 \sqrt{\frac{\log(1/\delta)}{N}}\right),$$
    where $\ell^*(\Sigma^{1/2}S)$ is the Gaussian mean width of $\Sigma^{1/2}S$ and $\Sigma$ is the covariance of the data (in the benchmark i.i.d. Gaussian case). This improves the entropic minimax lower bound from [Lugosi and Mendelson, 2019] and closes the gap, characterized by Sudakov's inequality, between the entropy and the Gaussian mean width for this problem. This shows that the right statistical complexity measure for the mean estimation problem is the Gaussian mean width. We also show that this rate can be achieved by a solution to a convex optimization problem in the adversarial and $L_2$ heavy-tailed setup, by considering a minimum of Fenchel-Legendre transforms constructed using the Median-of-Means principle. We finally show that this rate may also be achieved in situations where there is not even a first moment but a location parameter exists.
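
    The paper's estimator is a solution of a convex program built from Fenchel-Legendre transforms of Median-of-Means (MOM) criteria. As a rough illustration of the underlying MOM principle only (not the paper's convex-program estimator), here is a minimal coordinate-wise median-of-means sketch in Python; the function name, block count, and data setup are illustrative assumptions.

```python
import numpy as np

def median_of_means(X, n_blocks, seed=None):
    """Coordinate-wise median-of-means mean estimator (illustrative sketch).

    Split the N samples into n_blocks disjoint blocks, average within each
    block, then take the coordinate-wise median of the block means. This is
    robust to heavy tails and to a small number of adversarial samples.
    """
    rng = np.random.default_rng(seed)
    N = len(X)
    blocks = np.array_split(rng.permutation(N), n_blocks)
    block_means = np.array([X[b].mean(axis=0) for b in blocks])
    return np.median(block_means, axis=0)

# Heavy-tailed Student-t samples (true mean 0) with a few corrupted points.
rng = np.random.default_rng(0)
X = rng.standard_t(df=2.5, size=(1000, 5))
X[:10] += 100.0                             # 10 adversarially shifted samples
print(median_of_means(X, n_blocks=30))      # entries should be close to 0
```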

    ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels

    We study Empirical Risk Minimizers (ERM) and Regularized Empirical Risk Minimizers (RERM) for regression problems with convex and $L$-Lipschitz loss functions. We consider a setting where $|\mathcal{O}|$ malicious outliers contaminate the labels. In that case, under a local Bernstein condition, we show that the $L_2$-error rate is bounded by $r_N + AL|\mathcal{O}|/N$, where $N$ is the total number of observations, $r_N$ is the $L_2$-error rate in the non-contaminated setting, and $A$ is a parameter coming from the local Bernstein condition. When $r_N$ is minimax-rate-optimal in the non-contaminated setting, the rate $r_N + AL|\mathcal{O}|/N$ is also minimax-rate-optimal when $|\mathcal{O}|$ outliers contaminate the labels. The main results of the paper can be applied to many non-regularized and regularized procedures under weak assumptions on the noise. We present results for Huber's M-estimator (without penalization or regularized by the $\ell_1$-norm) and for general regularized learning problems in reproducing kernel Hilbert spaces when the noise can be heavy-tailed.
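
    As a small illustration of the non-penalized case, here is a sketch using scikit-learn's HuberRegressor on labels corrupted by a few malicious outliers; the data, contamination level, and epsilon value are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
N, d = 500, 3
X = rng.normal(size=(N, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=N)

# Malicious label contamination: |O| = 25 labels set to an arbitrary value.
y[:25] = 50.0

# Huber's M-estimator uses a convex, Lipschitz loss; alpha=0.0 disables
# regularization (the non-penalized case discussed in the abstract).
huber = HuberRegressor(epsilon=1.35, alpha=0.0).fit(X, y)
print(huber.coef_)  # should be close to w_true despite the corrupted labels
```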