We consider both ℓ0-penalized and ℓ0-constrained quantile
regression estimators. For the ℓ0-penalized estimator, we derive an
exponential inequality on the tail probability of excess quantile prediction
risk and apply it to obtain non-asymptotic upper bounds on the mean-square
parameter and regression function estimation errors. We also derive analogous
results for the ℓ0-constrained estimator. The resulting rates of
convergence are nearly minimax-optimal and the same as those for
ℓ1-penalized estimators. Further, we characterize the expected Hamming loss
for the ℓ0-penalized estimator. We implement the proposed procedure via
mixed integer linear programming and also a more scalable first-order
approximation algorithm. We illustrate the finite-sample performance of our
approach in Monte Carlo experiments and its usefulness in a real data
application concerning conformal prediction of infant birth weights (with
n ≈ 10^3 and up to p > 10^3). In sum, our ℓ0-based method
produces a much sparser estimator than the ℓ1-penalized approach
without compromising precision.

Comment: 45 pages, 3 figures, 2 tables
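For concreteness, the two estimators discussed above can be sketched as follows; the notation (check loss ρ_τ, tuning parameter λ, sparsity level s) is standard for this literature and assumed here, not quoted from the paper:

```latex
% Check (pinball) loss at quantile level \tau \in (0,1):
%   \rho_\tau(u) = u\,(\tau - \mathbf{1}\{u < 0\}).
%
% \ell_0-penalized quantile regression (penalty parameter \lambda > 0):
%   \hat{\beta}^{pen} \in \arg\min_{\beta \in \mathbb{R}^p}
%     \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\!\left(y_i - x_i^\top \beta\right)
%       + \lambda \, \|\beta\|_0,
%   where \|\beta\|_0 counts the nonzero coordinates of \beta.
%
% \ell_0-constrained quantile regression (sparsity level s):
%   \hat{\beta}^{con} \in \arg\min_{\|\beta\|_0 \le s}
%     \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\!\left(y_i - x_i^\top \beta\right).
```

Because ‖β‖0 is nonconvex, these problems are typically solved exactly via mixed integer programming or approximately via first-order methods, as the abstract indicates.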