Robust Estimation of High-Dimensional Mean Regression
Data subject to heavy-tailed errors are commonly encountered in various
scientific fields, especially in the modern era with its explosion of massive data.
To address this problem, procedures based on quantile regression and Least
Absolute Deviation (LAD) regression have been developed in recent years.
These methods essentially estimate the conditional median (or quantile)
function, which can differ substantially from the conditional mean function
when the error distribution is asymmetric or heteroscedastic. How can we
efficiently estimate the mean regression function in the ultra-high dimensional
setting when only the second moment exists? To solve this problem, we propose a
penalized Huber loss with a diverging parameter that reduces the bias introduced
by the traditional Huber loss. Such a penalized robust approximate quadratic
(RA-quadratic) loss will be called RA-Lasso. In the ultra-high dimensional
setting, where the dimensionality can grow exponentially with the sample size,
our results reveal that the RA-Lasso estimator is consistent and converges
at the same rate as the optimal rate under the light-tailed situation. We further
study the computational convergence of RA-Lasso and show that the composite
gradient descent algorithm indeed produces a solution that admits the same
optimal rate after sufficiently many iterations. As a byproduct, we also establish a
concentration inequality for estimating the population mean when only
the second moment exists. We compare RA-Lasso with other regularized robust
estimators based on quantile regression and LAD regression. Extensive
simulation studies demonstrate the satisfactory finite-sample performance of
RA-Lasso.
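The core idea of the abstract, an L1-penalized Huber loss minimized by composite gradient descent (a gradient step on the smooth Huber part followed by soft-thresholding for the penalty), can be sketched as below. This is an illustrative sketch only, not the authors' implementation; the tuning values (`lam`, `tau`, step size, iteration count) are placeholder assumptions, and in the paper the robustification parameter diverges with the sample size rather than staying fixed.

```python
import numpy as np

def huber_grad(r, tau):
    """Derivative of the Huber loss: r for |r| <= tau, tau*sign(r) beyond."""
    return np.clip(r, -tau, tau)

def soft_threshold(z, t):
    """Proximal operator of the L1 penalty (coordinate-wise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ra_lasso(X, y, lam, tau, n_iter=500):
    """Illustrative penalized Huber regression via composite gradient descent.

    lam: L1 penalty level; tau: Huber robustification parameter
    (diverging with n in the paper; held fixed here for simplicity).
    """
    n, p = X.shape
    # Step size ~ 1/L, where L = ||X||_2^2 / n bounds the Lipschitz
    # constant of the gradient of the smooth (Huber) part.
    step = n / (np.linalg.norm(X, 2) ** 2)
    beta = np.zeros(p)
    for _ in range(n_iter):
        residual = y - X @ beta
        grad = -X.T @ huber_grad(residual, tau) / n  # gradient of smooth part
        beta = soft_threshold(beta - step * grad, step * lam)  # prox step
    return beta
```

On heavy-tailed data (e.g. Student-t noise with just over two moments), this kind of estimator typically tracks the sparse mean-regression coefficients far better than unpenalized least squares would.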