Minimax rates of estimation for high-dimensional linear regression over $\ell_q$-balls
Consider the standard linear regression model $y = X \beta^* + w$, where $y \in \mathbb{R}^n$ is an observation vector, $X \in \mathbb{R}^{n \times p}$ is a design matrix, $\beta^* \in \mathbb{R}^p$ is the unknown regression vector, and $w \in \mathbb{R}^n$ is additive Gaussian noise. This paper studies the minimax rates of convergence for estimation of $\beta^*$ in $\ell_r$-losses and in the $\ell_2$-prediction loss, assuming that $\beta^*$ belongs to an $\ell_q$-ball $B_q(R_q)$ for some $q \in [0,1]$. We show that under suitable regularity conditions on the design matrix $X$, the minimax error in $\ell_2$-loss and $\ell_2$-prediction loss scales as $R_q \big(\frac{\log p}{n}\big)^{1-\frac{q}{2}}$. In addition, we provide lower bounds on minimax risks in $\ell_r$-norms, for all $r \in [1, +\infty]$, $r \neq q$. Our proofs of the lower bounds are information-theoretic in nature, based on Fano's inequality and results on the metric entropy of the balls $B_q(R_q)$, whereas our proofs of the upper bounds are direct and constructive, involving direct analysis of least-squares over $\ell_q$-balls. For the special case $q = 1$, a comparison with the $\ell_2$-risks achieved by computationally efficient $\ell_1$-relaxations reveals that although such methods can achieve the minimax rates up to constant factors, they require slightly stronger assumptions on the design matrix $X$ than algorithms involving least-squares over the $\ell_1$-ball.
Comment: Presented in part at the Allerton Conference on Control, Communication and Computing, Monticello, IL, October 200
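The claimed scaling $R_q (\log p / n)^{1 - q/2}$ can be evaluated numerically. The sketch below (with hypothetical problem sizes $n$ and $p$ chosen only for illustration, and the radius normalized to $R_q = 1$) shows how the rate interpolates between the hard-sparsity case $q = 0$ and the $\ell_1$-ball case $q = 1$:

```python
import math

def minimax_rate(q, n, p, R_q=1.0):
    """Minimax l2-estimation rate over an l_q-ball, up to constant
    factors: R_q * (log p / n) ** (1 - q/2), for q in [0, 1]."""
    return R_q * (math.log(p) / n) ** (1 - q / 2)

# Hypothetical problem sizes, for illustration only.
n, p = 500, 10_000
for q in (0.0, 0.5, 1.0):
    print(f"q = {q}: rate ~ {minimax_rate(q, n, p):.4f}")
```

Note that for $q = 0$ the exponent is $1$, recovering the familiar $\log p / n$ rate for exactly sparse vectors, while for $q = 1$ it is $1/2$, giving the slower $\sqrt{\log p / n}$ rate.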
Error Bounds for Generalized Group Sparsity
In high-dimensional statistical inference, sparsity regularizations have shown advantages in consistency and convergence rates for coefficient estimation. We consider a generalized version of the Sparse-Group Lasso which captures both element-wise sparsity and group-wise sparsity simultaneously. We state one universal theorem which is proved to obtain results on consistency and convergence rates for different forms of double sparsity regularization. The universality of the results lies in a generalization of various convergence rates for single-regularization cases such as the Lasso and group Lasso, as well as double-regularization cases such as the Sparse-Group Lasso. Our analysis identifies a generalized norm that provides a dual formulation for our double sparsity regularization.
Comment: 23 pages, 2 figures. arXiv admin note: text overlap with arXiv:2006.0617
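As a minimal sketch of the double sparsity regularizer discussed above: the standard Sparse-Group Lasso penalty combines an element-wise $\ell_1$ term with a per-group $\ell_2$ term, $\lambda_1 \|\beta\|_1 + \lambda_2 \sum_g \|\beta_g\|_2$. The function name, the example coefficients, and the group partition below are all hypothetical, chosen only to illustrate the penalty's form:

```python
import numpy as np

def sparse_group_penalty(beta, groups, lam1, lam2):
    """Double sparsity penalty of Sparse-Group Lasso type:
        lam1 * ||beta||_1  +  lam2 * sum_g ||beta_g||_2
    `groups` is a list of index arrays partitioning the coefficients."""
    l1_term = lam1 * np.sum(np.abs(beta))
    group_term = lam2 * sum(np.linalg.norm(beta[g]) for g in groups)
    return l1_term + group_term

# Illustrative coefficients split into two groups of two.
beta = np.array([1.0, -2.0, 0.0, 3.0])
groups = [np.array([0, 1]), np.array([2, 3])]
print(sparse_group_penalty(beta, groups, lam1=0.1, lam2=0.5))
```

The $\ell_1$ term drives individual coefficients to zero, while the unsquared group norms drive entire groups to zero, which is what lets a single analysis cover the Lasso ($\lambda_2 = 0$), the group Lasso ($\lambda_1 = 0$), and the combined case.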