This paper studies statistical aggregation procedures in the regression
setting. A motivating factor is the existence of many different methods of
estimation, leading to possibly competing estimators. We consider here three
different types of aggregation: model selection (MS) aggregation, convex (C)
aggregation and linear (L) aggregation. The objective of (MS) is to select the optimal single estimator from a given list; that of (C) is to select the optimal convex combination of the given estimators; and that of (L) is to select the optimal linear combination of the given estimators (these objectives are formalized in the sketch below). We are interested in
evaluating the rates of convergence of the excess risks of the estimators
obtained by these procedures. Our approach is motivated by recently published
minimax results [Nemirovski, A. (2000). Topics in non-parametric statistics.
Lectures on Probability Theory and Statistics (Saint-Flour, 1998). Lecture
Notes in Math. 1738 85--277. Springer, Berlin; Tsybakov, A. B. (2003). Optimal
rates of aggregation. Learning Theory and Kernel Machines. Lecture Notes in
Artificial Intelligence 2777 303--313. Springer, Heidelberg]. There exist
competing aggregation procedures achieving optimal convergence rates for each
of the (MS), (C) and (L) cases separately. Since these procedures are not
directly comparable with each other, we suggest an alternative solution. We
prove that all three optimal rates, as well as those for the newly introduced
(S) aggregation (subset selection), are nearly achieved via a single
``universal'' aggregation procedure. The procedure consists of mixing the
initial estimators with weights obtained by penalized least squares. Two
different penalties are considered: one is of BIC type, and the other is a data-dependent ℓ1-type penalty.
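As a formal sketch of the aggregation objectives (the notation $f_1,\dots,f_M$ for the initial estimators and $R(\cdot)$ for the risk is assumed here, following standard usage in the aggregation literature rather than being quoted from the paper), each problem targets the oracle

    \[
      \min_{\lambda \in \Lambda} R\Big(\sum_{j=1}^{M} \lambda_j f_j\Big),
      \qquad \text{with} \qquad
      \Lambda^{\mathrm{MS}} = \{e_1,\dots,e_M\}, \quad
      \Lambda^{\mathrm{C}} = \Big\{\lambda \in \mathbb{R}^M : \lambda_j \ge 0,\ \sum_{j=1}^{M} \lambda_j = 1\Big\}, \quad
      \Lambda^{\mathrm{L}} = \mathbb{R}^M,
    \]

where $e_j$ is the $j$th canonical basis vector, so that (MS) amounts to picking a single estimator from the list; (S) aggregation further restricts $\lambda$ to vectors with at most a given number of nonzero coordinates. The excess risk of an aggregate $\hat f$ is then $R(\hat f) - \min_{\lambda \in \Lambda} R\big(\sum_j \lambda_j f_j\big)$.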
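The following is a minimal numerical sketch of the ℓ1-penalized variant in Python. The synthetic data, the choice of base estimators, and the fixed penalty level alpha=0.05 are illustrative assumptions; in particular, the fixed Lasso penalty stands in for the paper's data-dependent penalty.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)

    # Synthetic regression data (illustrative only, not from the paper).
    n, d = 400, 5
    X = rng.normal(size=(n, d))
    y = X @ np.array([1.5, 0.0, -2.0, 0.0, 0.5]) + rng.normal(scale=0.5, size=n)

    # Split the sample: base estimators are built on the first half,
    # aggregation weights on the second, so the weights see fresh data.
    X1, y1, X2, y2 = X[:n // 2], y[:n // 2], X[n // 2:], y[n // 2:]

    # Competing base estimators: least-squares fits on different coordinate
    # subsets, standing in for an arbitrary list of initial estimators.
    subsets = [[0], [0, 2], [1, 3], [0, 2, 4], list(range(d))]

    def fit_ls(cols):
        beta, *_ = np.linalg.lstsq(X1[:, cols], y1, rcond=None)
        return lambda Z: Z[:, cols] @ beta

    estimators = [fit_ls(cols) for cols in subsets]

    # The aggregate is a linear combination of the base estimators'
    # predictions, with weights chosen by l1-penalized least squares.
    F = np.column_stack([f(X2) for f in estimators])
    lasso = Lasso(alpha=0.05, fit_intercept=False)  # alpha: assumed tuning value
    lasso.fit(F, y2)
    print("aggregation weights:", lasso.coef_)

The sample split mimics the usual aggregation setup, in which the initial estimators are treated as fixed and the mixing weights are fit on independent observations.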
Comment: Published at http://dx.doi.org/10.1214/009053606000001587 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).