
    A Convex Formulation for Mixed Regression with Two Components: Minimax Optimal Rates

    We consider the mixed regression problem with two components, under adversarial and stochastic noise. We give a convex optimization formulation that provably recovers the true solution, and provide upper bounds on the recovery errors for both arbitrary noise and stochastic noise settings. We also give matching minimax lower bounds (up to log factors), showing that under certain assumptions, our algorithm is information-theoretically optimal. Our results represent the first tractable algorithm guaranteeing successful recovery with tight bounds on recovery errors and sample complexity.
    Comment: Added results on minimax lower bounds, which match our upper bounds on recovery errors up to log factors. Appeared in the Conference on Learning Theory (COLT), 2014. (JMLR W&CP 35:560-604, 2014)
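
    For a sense of the setting, here is a minimal sketch of the two-component mixed linear regression model described in the abstract. The data generation follows the stated setup; the fitting step uses plain alternating least squares as an illustrative baseline, not the paper's convex formulation, and all names and constants below are illustrative assumptions.

```python
import numpy as np

# Two-component mixed linear regression: each response comes from one
# of two unknown regressors beta1 or beta2 (hidden label), plus noise.
# The alternating-minimization fit below is a simple baseline for
# illustration only; it is NOT the paper's convex program.

rng = np.random.default_rng(0)
n, d, sigma = 500, 10, 0.1

beta1 = rng.standard_normal(d)
beta2 = rng.standard_normal(d)
X = rng.standard_normal((n, d))
labels = rng.integers(0, 2, size=n)              # hidden mixture labels
B = np.where(labels[:, None] == 0, beta1, beta2)
y = np.einsum('ij,ij->i', X, B) + sigma * rng.standard_normal(n)

# Alternate: assign each sample to the regressor with the smaller
# residual, then refit both regressors by least squares.
b1, b2 = rng.standard_normal(d), rng.standard_normal(d)
for _ in range(50):
    assign = (y - X @ b1) ** 2 <= (y - X @ b2) ** 2
    if assign.sum() >= d and (~assign).sum() >= d:  # keep both fits well-posed
        b1, *_ = np.linalg.lstsq(X[assign], y[assign], rcond=None)
        b2, *_ = np.linalg.lstsq(X[~assign], y[~assign], rcond=None)

# Recovery error up to the inherent label swap of the two components.
err = min(np.linalg.norm(b1 - beta1) + np.linalg.norm(b2 - beta2),
          np.linalg.norm(b1 - beta2) + np.linalg.norm(b2 - beta1))
print(f"recovery error (up to label swap): {err:.3f}")
```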

    Minimax estimation of linear and quadratic functionals on sparsity classes

    For the Gaussian sequence model, we obtain non-asymptotic minimax rates of estimation of the linear, quadratic, and L2-norm functionals on classes of sparse vectors, and construct optimal estimators that attain these rates. The main object of interest is the class of s-sparse vectors, for which we also provide completely adaptive estimators (independent of s and of the noise variance) having only logarithmically slower rates than the minimax ones. Furthermore, we obtain the minimax rates on the Lq-balls where 0 < q < 2. This analysis shows that there are, in general, three zones in the rates of convergence, which we call the sparse zone, the dense zone and the degenerate zone, while a fourth zone appears for estimation of the quadratic functional. We show that, as opposed to estimation of the vector itself, the correct logarithmic terms in the optimal rates for the sparse zone scale as log(d/s^2) and not as log(d/s). For the sparse class, the rates of estimation of the linear functional and of the L2-norm have a simple elbow at s = sqrt(d) (the boundary between the sparse and the dense zones) and exhibit similar performance, whereas the estimation of the quadratic functional reveals more complex effects and is not possible on the basis of the sparsity condition on the vector alone. Finally, we apply our results on estimation of the L2-norm to the problem of testing against sparse alternatives. In particular, we obtain a non-asymptotic analog of the Ingster-Donoho-Jin theory, revealing some effects that were not captured by the previous asymptotic analysis.
    Comment: 32 pages
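
    As a rough illustration of the sparse-zone phenomenon mentioned above, here is a minimal sketch of a hard-threshold estimator of the linear functional L(theta) = sum_i theta_i in the Gaussian sequence model, with the threshold calibrated by the log(d/s^2) term highlighted in the abstract. The paper's exact estimator and constants may differ; everything below is an assumption-laden sketch.

```python
import numpy as np

# Gaussian sequence model: y_i = theta_i + sigma * xi_i with xi_i ~ N(0,1)
# and theta an s-sparse vector. We estimate L(theta) = sum_i theta_i by
# summing only coordinates that pass a hard threshold. The threshold uses
# the log(d/s^2) calibration from the sparse zone (requires s^2 < d);
# the constant 2 and the signal level are illustrative choices.

rng = np.random.default_rng(1)
d, s, sigma = 10_000, 20, 1.0

theta = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
theta[support] = 3.0 * sigma                     # signal on an s-sparse support
y = theta + sigma * rng.standard_normal(d)

t = sigma * np.sqrt(2.0 * np.log(d / s**2))      # sparse-zone threshold
L_hat = np.sum(y[np.abs(y) > t])                 # keep only large coordinates

print(f"true L(theta)  = {theta.sum():.2f}")
print(f"estimate L_hat = {L_hat:.2f}")
```

    Without thresholding, the naive estimator sum_i y_i carries noise of order sigma * sqrt(d), which dwarfs an s-sparse signal when s is small; the threshold discards the pure-noise coordinates at the price of the logarithmic factor above.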