2 research outputs found

    Minimax rates of estimation for high-dimensional linear regression over $\ell_q$-balls

    Consider the standard linear regression model $y = X \beta^* + w$, where $y \in \mathbb{R}^n$ is an observation vector, $X \in \mathbb{R}^{n \times p}$ is a design matrix, $\beta^* \in \mathbb{R}^p$ is the unknown regression vector, and $w \sim \mathcal{N}(0, \sigma^2 I)$ is additive Gaussian noise. This paper studies the minimax rates of convergence for estimation of $\beta^*$ in $\ell_r$-losses and in the $\ell_2$-prediction loss, assuming that $\beta^*$ belongs to an $\ell_q$-ball $\mathbb{B}_q(R_q)$ for some $q \in [0, 1]$. We show that under suitable regularity conditions on the design matrix $X$, the minimax error in $\ell_2$-loss and $\ell_2$-prediction loss scales as $R_q \big(\frac{\log p}{n}\big)^{1 - \frac{q}{2}}$. In addition, we provide lower bounds on minimax risks in $\ell_r$-norms, for all $r \in [1, +\infty]$, $r \neq q$. Our proofs of the lower bounds are information-theoretic in nature, based on Fano's inequality and results on the metric entropy of the balls $\mathbb{B}_q(R_q)$, whereas our proofs of the upper bounds are direct and constructive, involving direct analysis of least-squares over $\ell_q$-balls. For the special case $q = 0$, a comparison with $\ell_2$-risks achieved by computationally efficient $\ell_1$-relaxations reveals that although such methods can achieve the minimax rates up to constant factors, they require slightly stronger assumptions on the design matrix $X$ than algorithms involving least-squares over the $\ell_0$-ball.
    Comment: Presented in part at the Allerton Conference on Control, Communication and Computing, Monticello, IL, October 2009
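
    As a rough numerical illustration of the $q = 0$ case above (not from the paper; the Gaussian design, sparsity level $s$, and penalty constant are arbitrary choices), the following Python sketch simulates $y = X\beta^* + w$ with an $s$-sparse $\beta^*$ and compares the squared $\ell_2$-error of an $\ell_1$-penalized (Lasso) estimate against the $s \log p / n$ scaling of the minimax rate.

    ```python
    # Illustrative sketch (not from the paper): simulate y = X beta* + w with an
    # s-sparse beta* (the q = 0 case) and compare the squared l2 error of an
    # l1-penalized estimate to the s * log(p) / n minimax scaling (up to constants).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, s, sigma = 200, 1000, 10, 1.0        # sample size, dimension, sparsity, noise level

    X = rng.standard_normal((n, p))            # random Gaussian design
    beta_star = np.zeros(p)
    beta_star[:s] = 1.0                        # s-sparse regression vector
    y = X @ beta_star + sigma * rng.standard_normal(n)

    # Theory-guided penalty level, lambda ~ sigma * sqrt(log(p) / n); the constant 2 is ad hoc.
    lam = 2.0 * sigma * np.sqrt(np.log(p) / n)
    beta_hat = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(X, y).coef_

    err_sq = np.sum((beta_hat - beta_star) ** 2)
    rate = s * np.log(p) / n                   # minimax scaling for q = 0 (up to constants)
    print(f"squared l2 error: {err_sq:.3f},  s*log(p)/n: {rate:.3f}")
    ```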

    Error Bounds for Generalized Group Sparsity

    In high-dimensional statistical inference, sparsity regularization has shown advantages in consistency and convergence rates for coefficient estimation. We consider a generalized version of the Sparse-Group Lasso which captures both element-wise sparsity and group-wise sparsity simultaneously. We prove one universal theorem that yields consistency and convergence-rate results for different forms of double sparsity regularization. The universality of the results lies in a generalization of various convergence rates for single-regularization cases, such as the LASSO and the group LASSO, as well as double-regularization cases such as the sparse-group LASSO. Our analysis identifies a generalized norm, the $\epsilon$-norm, which provides a dual formulation for our double sparsity regularization.
    Comment: 23 pages, 2 figures. arXiv admin note: text overlap with arXiv:2006.0617
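
    For concreteness, here is a minimal Python sketch (not the paper's formulation; the grouping and penalty weights are hypothetical) of the double sparsity regularizer discussed above, combining an element-wise $\ell_1$ term with a group-wise $\ell_2$ term as in the Sparse-Group Lasso.

    ```python
    # Illustrative sketch (not the paper's estimator): the double sparsity penalty
    # lam1 * ||beta||_1 + lam2 * sum_g ||beta_g||_2, as in Sparse-Group Lasso.
    # The group structure and weights below are hypothetical toy choices.
    import numpy as np

    def sparse_group_penalty(beta, groups, lam1, lam2):
        """Element-wise l1 term plus group-wise l2 term over the given index groups."""
        l1_term = lam1 * np.sum(np.abs(beta))
        group_term = lam2 * sum(np.linalg.norm(beta[g]) for g in groups)
        return l1_term + group_term

    # Toy example: 6 coefficients split into two groups of 3.
    beta = np.array([0.5, 0.0, -1.0, 0.0, 0.0, 0.0])
    groups = [np.arange(0, 3), np.arange(3, 6)]
    print(sparse_group_penalty(beta, groups, lam1=0.1, lam2=0.5))
    ```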