Sparse Quantile Regression

Abstract

We consider both $\ell_0$-penalized and $\ell_0$-constrained quantile regression estimators. For the $\ell_0$-penalized estimator, we derive an exponential inequality on the tail probability of the excess quantile prediction risk and apply it to obtain non-asymptotic upper bounds on the mean-square parameter and regression function estimation errors. We also derive analogous results for the $\ell_0$-constrained estimator. The resulting rates of convergence are nearly minimax-optimal and match those of $\ell_1$-penalized estimators. Further, we characterize the expected Hamming loss of the $\ell_0$-penalized estimator. We implement the proposed procedure via mixed integer linear programming, as well as a more scalable first-order approximation algorithm. We illustrate the finite-sample performance of our approach in Monte Carlo experiments and its usefulness in a real data application concerning conformal prediction of infant birth weights (with $n \approx 10^3$ and up to $p > 10^3$). In sum, our $\ell_0$-based method produces a much sparser estimator than the $\ell_1$-penalized approach without compromising precision.

Comment: 45 pages, 3 figures, 2 tables
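To make the estimation problem concrete: an $\ell_0$-constrained quantile regression minimizes the quantile (check) loss subject to a bound on the number of nonzero coefficients. The sketch below is a minimal illustration of one generic first-order heuristic for this problem, namely projected subgradient descent with hard thresholding onto the set of $k$-sparse vectors; it is not the authors' algorithm, and all function names and parameter choices here are this example's own assumptions.

```python
import numpy as np

def check_loss(u, tau):
    # Quantile (check) loss: rho_tau(u) = u * (tau - 1{u < 0}).
    return np.mean(u * (tau - (u < 0)))

def hard_threshold(beta, k):
    # Project onto k-sparse vectors: keep the k largest-magnitude
    # coordinates and zero out the rest.
    out = np.zeros_like(beta)
    idx = np.argsort(np.abs(beta))[-k:]
    out[idx] = beta[idx]
    return out

def l0_quantile_regression(X, y, tau=0.5, k=5, lr=0.05, n_iter=3000):
    # Projected subgradient descent for the l0-constrained
    # quantile regression problem (illustrative sketch only).
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        u = y - X @ beta
        # Subgradient of the check loss with respect to beta.
        grad = -X.T @ (tau - (u < 0)) / n
        beta = hard_threshold(beta - lr * grad, k)
    return beta

# Synthetic demo: a 3-sparse signal in p = 50 dimensions.
rng = np.random.default_rng(1)
n, p = 500, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_normal(n)
beta_hat = l0_quantile_regression(X, y, tau=0.5, k=3)
```

In practice one would compare this heuristic against the exact mixed-integer solution on small instances; the hard-thresholding step is what enforces the $\ell_0$ constraint exactly at every iteration, in contrast to the soft shrinkage induced by an $\ell_1$ penalty.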
